January 4, 2015
I wrote earlier about why interfaces are important, and in this post I want to elaborate on their advantages for building reusable services. Service interfaces contain only the operation or method definitions and have no implementations. They can be used in a variety of ways:
- Package service interfaces into a separate artifact to make it easy for client teams to integrate with services without pulling in a bulky set of transitive dependencies
- Bind the interfaces to one or more transport / integration technologies via standard Dependency Injection (DI). For example, service interfaces can be integrated with a RESTful resource or an EMS listener.
- Service interfaces can be backed by stub and/or mock implementations for automated unit and regression testing.
- Service interfaces can be decorated with common cross-cutting concerns, keeping those concerns separate from the implementation. This is the strategy implemented via the Java Dynamic Proxy example.
- Service interfaces can be implemented as a proxy to a remote implementation. For example, the client invokes the functionality via the interface, but the runtime implementation makes a call to a server-side API. This is useful if your teams need the flexibility to swap local / remote implementations depending on performance or dependency-management requirements.
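As a small illustration of the decoration idea above, here is a sketch using `java.lang.reflect.Proxy`. The `QuoteService` interface and its stub are hypothetical names invented for this example, not part of any real API:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class LoggingProxyDemo {
    // Hypothetical service interface: operation definitions only, no implementation.
    public interface QuoteService {
        double quote(String symbol);
    }

    // Stub implementation, the kind used for automated unit/regression testing.
    public static class StubQuoteService implements QuoteService {
        public double quote(String symbol) { return 42.0; }
    }

    // Stands in for a real logging facility in this sketch.
    public static final List<String> LOG = new ArrayList<>();

    // Decorate any implementation with a logging cross-cutting concern
    // via a dynamic proxy, without touching the implementation itself.
    public static QuoteService withLogging(QuoteService target) {
        InvocationHandler handler = (proxy, method, args) -> {
            LOG.add("invoking " + method.getName());
            return method.invoke(target, args);
        };
        return (QuoteService) Proxy.newProxyInstance(
                QuoteService.class.getClassLoader(),
                new Class<?>[]{QuoteService.class},
                handler);
    }
}
```

The client only ever sees `QuoteService`, so the stub, the logging decorator, or a remote proxy can all be swapped in via DI.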
January 3, 2015
A lot of teams are building services for clients both internal and external to their organization. Typically, there is quite a bit of focus on succeeding in a functional sense – did we address the key requirements? Does the service cover the plethora of rules across markets and jurisdictions? And so on. Some of the more experienced teams consider the non-functional aspects as well – e.g. logging, auditing, exception handling, and metrics – and I talked about the value of service mediation for addressing these in an earlier post.
There is an expanded set of capabilities that is also necessary when addressing non-functional requirements – capabilities that become especially relevant as your service grows in popularity and usage. They fall under two categories: operational agility and fault tolerance. Here are a few candidate capabilities in both categories:
Operational Agility / Supportability:
- Ability to enable / disable both services and operations within a service
- Ability to provision additional service instances based on demand (elastic scaling)
- Maintenance APIs for managing resources (reset connection pool, individual connections, clear cached data, etc.)
- Ability to view service APIs that are breaching operational SLAs, as well as ones at risk of breaching them
- Model and detect out-of-band behavior with respect to resource consumption, transaction volumes, usage trends during a time period, etc.
Fault Tolerance:
- Failing fast when there is no point in executing an operation partially
- Ability to detect denial of service attacks from malicious clients
- Ability to gracefully handle unexpected usage spikes via load shedding, re-balancing, deferring non-critical tasks, etc.
- Detecting failures that impact individual operations as well as services as a whole
- Dealing with unavailable downstream dependencies
- Leveraging time outs when integrating with one or more dependencies
- Automatically recovering from component failures
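As a sketch of the timeout and fail-fast items above (the class name, fallback behavior, and thread-pool choice are illustrative assumptions, not a prescribed design):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutGuard {
    private final ExecutorService pool = Executors.newCachedThreadPool();

    // Run a downstream dependency call with a hard timeout; fail fast with a
    // fallback instead of letting callers block indefinitely on a slow dependency.
    public <T> T callWithTimeout(Callable<T> dependencyCall, long timeoutMillis, T fallback) {
        Future<T> future = pool.submit(dependencyCall);
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);   // stop wasting resources on the slow call
            return fallback;
        } catch (InterruptedException | ExecutionException e) {
            return fallback;       // downstream failure: degrade gracefully
        }
    }

    public void shutdown() { pool.shutdownNow(); }
}
```

A real implementation would also record the failure for the monitoring and SLA-breach detection capabilities listed above.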
In future posts, I will expand on each of these topics covering both design and implementation strategies. It is also important to point out that both these aspects are heavily interconnected and influence each other.
May 27, 2012
Working with clients who are consuming your services? Here is a mini-checklist of questions to ask:
- While executing request/reply on the service interface is there a timeout value set on the call?
- Is there code/logic to handle SOAP Faults /system exceptions when invoking the service?
- Is building the service header separated from building the payload? This will facilitate reuse across services that share common header parameters
- If there are certain error codes that the calling code can handle, is there logic for each of them?
- Is the physical end point information (URL string for HTTP, Queue connection and name for MQ/EMS) stored in an external configuration file?
- Is UTF-8 encoding used while sending XML requests to the service i.e. by making use of platform-specific UTF encoding objects?
- If using form-encoding are unsafe characters such as ‘&’, ‘+’, ‘@’ escaped using appropriate %xx (hexadecimal) values?
- While processing the service response is the logic for parsing/processing SOAP and service-specific headers decoupled from processing the business data elements?
- Is the entire request/reply operation – invocation and response handling logic – encapsulated into its own class or method call?
- While performing testing, is the appropriate testing environment URL/queue manager being used?
- Is a valid correlation id being used in the service request? This is essential for asynchronous request/reply over JMS (JMS header) or HTTP (callback handler)
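Two of the checklist items – externalizing the physical endpoint and escaping unsafe characters for form encoding – can be sketched as follows; the property key and sample values are hypothetical:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class ClientConfigDemo {
    // Physical endpoint details live in external configuration, not in code,
    // so switching between testing and production environments needs no rebuild.
    public static String loadEndpoint(InputStream config) throws IOException {
        Properties props = new Properties();
        props.load(config);
        return props.getProperty("service.endpoint.url");   // hypothetical key
    }

    // Escape unsafe characters ('&', '+', '@', ...) to %xx values, using UTF-8.
    public static String formEncode(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8);
    }
}
```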
January 17, 2012
When service capabilities get reused across applications and processes, high availability becomes imperative – key question: do you detect availability issues before your clients do? This is important for several reasons:
- Unlike standalone applications/processes, shared services impact several consumers. Not every consumer may be okay with your service being unavailable for an extended period of time. The same service might be in the critical path for some consumers and not for others
- For some service capabilities, running them in a partial mode might be acceptable – e.g. operating out of a cached copy of data rather than fetching it from a live database, or servicing only read-only operations during an unexpected outage, etc.
- Some consumers might have regulatory processes that are dependent on services being available – a service being unavailable might cause SLA breaches
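The partial-mode idea above – serving from a cached copy when the live source is down – might be sketched like this (class and method names are illustrative, not from any particular framework):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class CachedFallbackDemo {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Try the live source first; on failure, fall back to the last cached copy
    // so the service keeps running in a degraded, read-only mode.
    public Optional<String> lookup(String key, Function<String, String> liveSource) {
        try {
            String fresh = liveSource.apply(key);
            cache.put(key, fresh);     // refresh the cache on every success
            return Optional.of(fresh);
        } catch (RuntimeException outage) {
            return Optional.ofNullable(cache.get(key));  // stale data, or empty
        }
    }
}
```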
Finally, consumer trust is key for systematic reuse – if consumers perceive service availability as a limiting factor, it will be harder to convince them to use services, for both current and upcoming integrations.
December 27, 2011
New episode added to the Software Reuse Podcast Series on service governance covering design, implementation, testing, and provisioning and how they enable reuse.
December 24, 2011
Service driven systematic reuse takes conscious design decisions, governance, and disciplined execution – project after project. In order to sustain long running efforts such as service orientation, it is critical to track, report, and get buy-in from senior management in the organization. So what metrics are useful? Here are a few:
- Total number of service operations reused in a time period
- Total effort saved due to systematic reuse in a time period
- Number of new service consumers in a time period
- Number of new consumer integrations in a time period (this includes integrations from both new and existing consumers)
- Service integrations across transports/interface points (for instance, the same service operation could be accessed via SOAP over HTTP, SOAP over JMS, or REST)
What metrics do your teams track?
December 24, 2011
Pursuing service-based systematic reuse or business process development? Then these five practices will help your teams achieve an increased level of service reuse.
- Manage a common set of domain objects that are leveraged across service capabilities. This could be a library of objects (e.g. Plain Old Java Objects) or XML Schema definitions or both. Depending on the number of service consumers and the complexity of the domain, there will be a need to support multiple concurrent versions of these objects.
- Provide common utilities not only for service development but also for WSDL generation and for integration and performance testing, and ensure interoperability issues are addressed
- Ensure appropriate functional experts are driving the service's requirements and that common capabilities across business processes are identified early in the software development lifecycle
- Governance model guidelines are clearly documented and communicated – for example, there is a class of changes that can be made to a public interface such as a WSDL without impacting existing service clients, and there are others that do impact them.
- Performance testing needs to be done not only during development but also during service provisioning – i.e. when integrating a new service consumer. If your teams aren't careful, one high-volume consumer can overwhelm a service, impacting both new and existing consumers. Execute performance testing in an automated fashion every time you integrate a new client to reduce the risk of breaching required SLAs
What additional practices do your teams follow?
March 13, 2011
The Decoupled Contract pattern separates the service contract, or interface, from the physical implementation. The service interface is independent of implementation yet aligned with other services in the domain or enterprise service inventory. This is a follow-up to the podcast episode on building contract-first services.
Distributed development technologies such as .NET and Java allow easy creation of service contracts from existing or newly created components. Although this is relatively easy to accomplish, the service contract tends to get expressed using the underlying technology platform – one of the many anti-patterns to avoid. This inhibits interoperability and increases integration effort for service consumers. Additionally, service consumers are forced into a specific implementation, thereby considerably increasing technology coupling. Finally, this is problematic for the provider as well: if there is a need to upgrade, replace, or change the service implementation, there is a high probability that existing consumers will need to change their code.
The Decoupled Contract pattern solves the above problems by decoupling the service contract from the implementation. Instead of auto-generating service contract specifications such as a WSDL document from code, this pattern advocates the creation of contracts without taking the implementation environment/technology into consideration. Such a contract is free of physical data models instantiated in a backend data store and of proprietary tokens or attributes specific to a technology platform. Consequently, service consumers are bound to the decoupled contract and not to a particular implementation.
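A minimal sketch of the consumer-to-contract coupling this pattern produces; all names here are hypothetical, chosen only to illustrate the structure:

```java
public class DecoupledContractDemo {
    // The contract: technology-neutral, no implementation details leak through.
    public interface CustomerContract {
        String customerName(String id);
    }

    // One possible implementation; it could be replaced by a remote proxy or a
    // different backend without any change to consumers, which depend only on
    // the contract above.
    public static class InMemoryCustomerService implements CustomerContract {
        public String customerName(String id) { return "customer-" + id; }
    }

    // Consumer code is written purely against the contract.
    public static String greet(CustomerContract service, String id) {
        return "Hello, " + service.customerName(id);
    }
}
```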
The service and the consumers will still be coupled with any limitations associated with a service implementation (for instance, an implementation may not be able to realize all features of the contract or guarantee policy requirements etc.). It is also possible for a particular implementation technology to impose deficiencies across the service inventory in its entirety. But even given these impacts, the contract-first approach that this pattern facilitates is significant and foundational to service-orientation.
This pattern is beneficial to several design techniques that use Web Services or benefit from a decoupled service contract. Contract Centralization and Service Refactoring patterns are greatly enhanced by this pattern. Service Façade is often applied alongside this pattern as well to realize Concurrent Contracts.
March 5, 2011
The Compensating Service Transaction pattern helps consistently handle composition runtime exceptions while eliminating the need for locking resources.
Service compositions can generate various runtime exceptions as part of fulfilling service functionality. For example, imagine a service composition that invokes a decision service to validate request data, proceeds to update a customer entity via an entity service, and then sends a message on a queue for an external process to consume. Consider the three steps as part of a single unit of work – a service transaction that executes in sequence.
If runtime exceptions associated with the composition are unhandled, there is an increased likelihood of compromising data and business integrity. Conversely, if all the steps are executed as an atomic transaction, each service invocation will tie up backend resources (e.g. using various kinds of locks), hurting performance and scalability.
The Compensating Service Transaction pattern introduces additional compensation steps in order to handle runtime exceptions without locking system resources. These additional steps can be included as part of the composition logic or made available as separate undo service capabilities. Continuing the earlier example, a call to undo the customer update (essentially resetting the data back to its earlier state) can be made to place the underlying data in a valid state.
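The compensation mechanics described above can be sketched as follows – execute the steps in sequence, and on failure run the undo capabilities of the already-completed steps in reverse order. The `Step` interface is an illustrative assumption, not a standard API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class CompensatingTransaction {
    public interface Step {
        void execute();
        void compensate();   // the "undo" capability for this step
    }

    // Run steps in sequence; on failure, invoke the compensations of the
    // steps that already completed, in reverse order, then rethrow.
    public static void run(Step... steps) {
        Deque<Step> completed = new ArrayDeque<>();
        try {
            for (Step step : steps) {
                step.execute();
                completed.push(step);
            }
        } catch (RuntimeException failure) {
            while (!completed.isEmpty()) {
                completed.pop().compensate();
            }
            throw failure;
        }
    }
}
```

A production version would also alert on compensation failures, since (as noted below) the compensating steps themselves must execute successfully.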
The Compensating Service Transaction pattern provides several benefits:
- Eliminates the need for executing distributed service invocations within a transactional context.
- Provides a mechanism for services/resources that don’t support atomic transactions to participate in service compositions.
- Reduces load on underlying systems and databases by only invoking compensation routines when failure occurs. Instead of locking resources in case failures happen, the idea is to handle runtime exceptions when they actually occur.
- Allows reuse of compensation steps across service compositions. E.g. two business services that update a data entity can reuse the undo Update capability.
Compensating transactions tend to be more open-ended when compared to atomic transactions, and their actual effectiveness varies. The extent to which a service composition can apply this pattern depends directly on the undo capabilities provided by the various service capabilities being invoked; consequently, the pattern cannot be leveraged if those undo capabilities don't exist. Likewise, it is necessary to ensure that the compensating steps themselves execute successfully. Error alerts/notifications may need to be sent in case compensations fail and manual intervention is needed.
February 23, 2011
The Utility Abstraction pattern enables non-business-centric logic to be separated, reused, and governed independently of other services.
The Utility Abstraction pattern encapsulates cross-cutting functionality shared across service capabilities into a separate component. For instance, cross-cutting functionality such as logging, notification, and auditing is required by several service capabilities. Dedicated services can be built for these based on the utility service model, guided by enterprise architecture specifications.
This pattern provides several benefits:
- Reduces and potentially eliminates cross-cutting logic from individual services. This keeps the individual services domain-specific and lightweight.
- Reduces and potentially eliminates redundant cross-cutting logic that might be implemented across the service inventory. This will reduce development and testing costs while minimizing duplication.
- This pattern also enables the reuse of utility capabilities across both business processes and service capabilities.
- In addition to being the central component for a cross-cutting function, this abstraction facilitates changes to the implementation. For example, logging may use a file store initially and later switch to a database-driven solution. Likewise, with a utility abstraction component it is simpler to migrate to an alternate provider – for example, replacing an in-house implementation with a cheaper cloud provider.
Separating the utility component also has benefits from a non-functional standpoint. The utility function can be executed asynchronously (to save response time for a service) or additional instances can be supported to enable concurrent processing.
This pattern also makes it easier to perform additional functions surrounding the core cross-cutting function. Taking logging as an example again, if archiving/backup policies change there will be one piece of logic to update rather than touch individual services. It is important to note however that this pattern can increase the size, complexity, and performance demands of service compositions.
There are a variety of strategies to realize this pattern – via a service mediation layer in a service-bus-based architecture, or using a lightweight proxy that intercepts service methods.
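One way the asynchronous-execution point above might look in practice, with a hypothetical audit utility invoked fire-and-forget so the business operation's response time is unaffected (all names are illustrative):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

public class AuditUtilityDemo {
    // A utility service encapsulating one cross-cutting concern (auditing).
    public static class AuditService {
        public final List<String> entries = new CopyOnWriteArrayList<>();

        // Fire-and-forget: the caller does not pay the cost of the utility
        // function in its own response time.
        public CompletableFuture<Void> recordAsync(String event) {
            return CompletableFuture.runAsync(() -> entries.add(event));
        }
    }

    // A business service delegating its cross-cutting work to the utility,
    // keeping its own logic domain-specific and lightweight.
    public static String placeOrder(String orderId, AuditService audit) {
        audit.recordAsync("order-placed:" + orderId);
        return "accepted:" + orderId;
    }
}
```

Because archiving or backup policy changes now affect only `AuditService`, individual business services need no modification.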