Benefits of Decoupled Service Contracts

March 13, 2011

The Decoupled Contract pattern separates the service contract, or interface, from its physical implementation. The service interface is independent of the implementation yet aligned with other services in the domain or enterprise service inventory. This is a follow-up to the podcast episode on building contract-first services.

Distributed development technologies such as .NET and Java make it easy to generate service contracts from existing or newly created components. Although this is relatively easy to accomplish, the resulting contract tends to be expressed in terms of the underlying technology platform – one of the many anti-patterns to avoid. This inhibits interoperability and increases integration effort for service consumers. Additionally, service consumers are forced onto a specific implementation, which considerably increases technology coupling. Finally, this is problematic for the provider as well: if the service implementation needs to be upgraded, replaced, or otherwise changed, there is a high probability that existing consumers will need to change their code.

The Decoupled Contract pattern solves the above problems by decoupling the service contract from the implementation. Instead of auto-generating service contract specifications such as WSDL documents from implementation code, this pattern advocates creating contracts without taking the implementation environment or technology into consideration. Such a contract is free of physical data models instantiated in a backend data store and of proprietary tokens or attributes specific to a technology platform. Consequently, service consumers are bound to the decoupled contract, not to a particular implementation.
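
To illustrate, here is a minimal Java sketch – all names are hypothetical – in which the contract is defined purely in terms of canonical message types and the implementation stays hidden behind it:

```java
// A sketch only: CustomerService and its message types are hypothetical.
// The contract is expressed in canonical message types, free of backend
// data-model or platform-specific details. Each type would normally live
// in its own file.
public interface CustomerService {
    CustomerResponse getCustomer(CustomerRequest request);
}

// Canonical request/response messages shared across the service inventory.
class CustomerRequest {
    String customerId;
}

class CustomerResponse {
    String customerId;
    String status;
}

// The implementation binds to the contract; consumers only ever see
// CustomerService, so this class can be replaced without breaking them.
class CustomerServiceImpl implements CustomerService {
    @Override
    public CustomerResponse getCustomer(CustomerRequest request) {
        CustomerResponse response = new CustomerResponse();
        response.customerId = request.customerId;
        response.status = "ACTIVE"; // would be mapped from the backend store
        return response;
    }
}
```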

The service and its consumers will still be coupled to any limitations of a given service implementation (for instance, an implementation may not be able to realize all features of the contract or guarantee its policy requirements). It is also possible for a particular implementation technology to impose deficiencies across the service inventory in its entirety. Even given these impacts, the contract-first approach that this pattern facilitates is significant and foundational to service-orientation.

This pattern underpins several design techniques that use Web Services or otherwise rely on a decoupled service contract. The Contract Centralization and Service Refactoring patterns are greatly enhanced by it, and Service Façade is often applied alongside it to realize Concurrent Contracts.


Rationale for Compensating Service Transactions

March 5, 2011

The Compensating Service Transaction pattern helps handle composition runtime exceptions consistently while eliminating the need to lock resources.

Service compositions can generate various runtime exceptions while fulfilling service functionality. For example, imagine a service composition that invokes a decision service to validate request data, proceeds to update a customer entity via an entity service, and then sends a message on a queue for an external process to consume. Consider these three steps a single unit of work – a service transaction that executes in sequence.

If runtime exceptions associated with the composition go unhandled, there is an increased likelihood of compromising data and business integrity. Conversely, if all the steps are executed as an atomic transaction, each service invocation ties up backend resources (e.g. via various kinds of locks), hurting performance and scalability.

The Compensating Service Transaction pattern introduces additional compensation steps in order to handle runtime exceptions without locking system resources. These steps can be included as part of the composition logic or made available as separate undo service capabilities. Continuing the earlier example, a call to undo the customer update (essentially resetting the data to its earlier state) can be made to return the underlying data to a valid state.
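
To make this concrete, here is a minimal Java sketch of the composition above, using hypothetical service interfaces. If the queue publication fails after the customer update succeeded, the composition invokes the undo capability rather than holding locks across all three steps:

```java
// Hypothetical contracts for the three steps in the composition.
interface DecisionService {
    boolean isValid(CustomerUpdate update);
}

interface CustomerEntityService {
    CustomerState update(CustomerUpdate update); // returns the prior state
    void undoUpdate(CustomerState prior);        // compensation capability
}

interface MessagePublisher {
    void publish(CustomerUpdate update);
}

class CustomerUpdate { /* request fields */ }
class CustomerState  { /* snapshot of the prior entity values */ }

class UpdateCustomerComposition {
    private final DecisionService decisions;
    private final CustomerEntityService customers;
    private final MessagePublisher publisher;

    UpdateCustomerComposition(DecisionService d, CustomerEntityService c, MessagePublisher p) {
        this.decisions = d;
        this.customers = c;
        this.publisher = p;
    }

    void execute(CustomerUpdate update) {
        if (!decisions.isValid(update)) {
            throw new IllegalArgumentException("request failed validation");
        }
        CustomerState prior = customers.update(update);
        try {
            publisher.publish(update);
        } catch (RuntimeException e) {
            // Compensation step: reset the entity to its earlier, valid state.
            customers.undoUpdate(prior);
            throw e;
        }
    }
}
```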

The Compensating Service Transaction pattern provides several benefits:

  • Eliminates the need for executing distributed service invocations within a transactional context.
  • Provides a mechanism for services/resources that don’t support atomic transactions to participate in service compositions.
  • Reduces load on underlying systems and databases by only invoking compensation routines when failure occurs. Instead of locking resources in case failures happen, the idea is to handle runtime exceptions when they actually occur.
  • Allows reuse of compensation steps across service compositions. For example, two business services that update the same data entity can reuse the undo update capability.

Compensating transactions tend to be more open-ended than atomic transactions, and their actual effectiveness varies. The extent to which a service composition can apply this pattern depends directly on the undo capabilities offered by the various service capabilities being invoked; the pattern cannot be leveraged if compensating capabilities don’t exist. Likewise, it is necessary to ensure that the compensating steps themselves execute successfully. Error alerts/notifications may need to be sent, and manual intervention arranged, when compensations fail.
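
As one possible safeguard – a sketch, not a prescription – the compensation itself can be retried a bounded number of times before alerting operations; AlertService here is a hypothetical notification contract:

```java
// Hypothetical contract for raising operator alerts.
interface AlertService {
    void raise(String message, Exception cause);
}

class CompensationGuard {
    private static final int MAX_ATTEMPTS = 3;

    void compensate(Runnable undoStep, AlertService alerts) {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                undoStep.run();
                return; // compensation succeeded
            } catch (RuntimeException e) {
                if (attempt == MAX_ATTEMPTS) {
                    // Manual intervention is needed from here on.
                    alerts.raise("compensation failed after " + MAX_ATTEMPTS + " attempts", e);
                }
            }
        }
    }
}
```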


Abstract Utility Functions for Effective Service Reuse

February 23, 2011

The Utility Abstraction pattern enables non-business-centric logic to be separated, reused, and governed independently of other services.

The Utility Abstraction pattern encapsulates functionality that cuts across service capabilities into a separate component. For instance, cross-cutting functions such as logging, notification, and auditing are required by several service capabilities. These functions are offered as dedicated services based on the utility service model and guided by enterprise architecture specifications.
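
For illustration, the cross-cutting functions above could be captured behind dedicated contracts along these lines (a Java sketch with illustrative names):

```java
// Illustrative utility contracts; business services depend on these
// interfaces rather than embedding the cross-cutting logic themselves.
interface LoggingService {
    void log(String source, String level, String message);
}

interface NotificationService {
    void send(String recipient, String subject, String body);
}

interface AuditService {
    void record(String actor, String action, String entityId);
}
```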

This pattern provides several benefits: 

  • Reduces and potentially eliminates cross-cutting logic from individual services. This keeps the individual services domain-specific and lightweight.
  • Reduces and potentially eliminates redundant cross-cutting logic that might be implemented across the service inventory. This will reduce development and testing costs while minimizing duplication.
  • Enables the reuse of utility capabilities across both business processes and service capabilities.
  • In addition to being the central component for a cross-cutting function, this abstraction facilitates implementation changes. For example, logging may begin with a file store and later switch to a database-driven solution. Likewise, with a utility abstraction component it is simpler to migrate to an alternate provider – for instance, replacing an in-house implementation with a cheaper cloud provider.

Separating the utility component also has benefits from a non-functional standpoint. The utility function can be executed asynchronously (to save response time for the calling service), or additional instances can be deployed to enable concurrent processing.
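
As a sketch of the asynchronous option – reusing the hypothetical LoggingService contract from above – a thin wrapper can hand the call off to a worker thread so the caller returns immediately:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A sketch: wraps any LoggingService so that logging no longer adds to
// the calling service's response time.
class AsyncLoggingService implements LoggingService {
    private final LoggingService delegate;
    private final ExecutorService executor = Executors.newFixedThreadPool(2);

    AsyncLoggingService(LoggingService delegate) {
        this.delegate = delegate;
    }

    @Override
    public void log(String source, String level, String message) {
        // Hand off to a worker thread; the caller is not blocked.
        executor.submit(() -> delegate.log(source, level, message));
    }
}
```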

This pattern also makes it easier to perform additional functions surrounding the core cross-cutting function. Taking logging as an example again, if archiving/backup policies change, there is one piece of logic to update rather than many individual services to touch. It is important to note, however, that this pattern can increase the size, complexity, and performance demands of service compositions.

There are a variety of strategies for realizing this pattern – via a service mediation layer in a service-bus-based architecture, or via a lightweight proxy that intercepts service method calls.
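
The lightweight-proxy strategy can be sketched with a JDK dynamic proxy (names are illustrative): every call on a service interface is intercepted, logged via the utility capability, and then forwarded to the real implementation:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// A sketch: intercepts service method calls and delegates the cross-cutting
// logging concern to the (hypothetical) LoggingService utility contract.
class LoggingProxy implements InvocationHandler {
    private final Object target;
    private final LoggingService logger;

    private LoggingProxy(Object target, LoggingService logger) {
        this.target = target;
        this.logger = logger;
    }

    @SuppressWarnings("unchecked")
    static <T> T wrap(T target, Class<T> iface, LoggingService logger) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                new LoggingProxy(target, logger));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        logger.log(target.getClass().getSimpleName(), "INFO",
                "invoking " + method.getName());
        return method.invoke(target, args); // forward to the real service
    }
}
```

A consumer would then be handed LoggingProxy.wrap(implementation, ServiceInterface.class, logger) instead of the raw implementation, keeping the cross-cutting concern entirely out of the service code.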


Pursuing BPM? Avoid Reuse Inhibitors

December 20, 2010

Implementing business process re-engineering and/or automation involves several moving parts from a technology perspective – stateful business process data, interfaces for external systems/user interfaces to interact with process information, and rules that determine several aspects of process behavior.

Here are some practices to avoid/minimize when pursuing BPM projects:

  • Building monolithic process definitions – as the number of process definitions increases, there will be opportunities to refactor and simplify orchestration logic. As projects build on existing processes, there will be an increasing need for modular process definitions that can be leveraged as part of larger ones.
  • Placing business rules that validate domain objects inside process orchestration graphs. This tightly couples the rules to a specific process, hurting the ability to reuse domain-specific rules across process definitions. Certain classes of rules change often and need to be managed as separate artifacts (see the sketch after this list).
  • Invoking vendor-specific services directly from the process definition. Over time this results in coupling between a vendor implementation and multiple process definitions. This also applies to service capabilities hosted on legacy systems – hence the need for a service mediation layer.
  • Defining key domain objects within the scope of a particular process definition. This results in multiple definitions of the same domain objects and, over time, increases maintenance effort, data transformation, and redundancy across processes and service interfaces.
  • Integrating clients directly with the vendor-specific business process API. This results in multiple clients implementing the same boilerplate logic – a better practice is to service-enable these processes and consolidate entry points into the BPM layer.
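
To illustrate the rules point above, here is a sketch (with illustrative names) of a process step that delegates validation to a reusable rules contract instead of embedding the conditions in the orchestration graph:

```java
// Illustrative, reusable rules contract managed as a separate artifact.
interface OrderValidationRules {
    ValidationResult validate(Order order);
}

class ValidationResult {
    final boolean valid;
    final String reason;

    ValidationResult(boolean valid, String reason) {
        this.valid = valid;
        this.reason = reason;
    }
}

class Order { /* domain fields */ }

// A process step stays thin: the rule set can evolve and be reused across
// process definitions without touching the orchestration logic.
class ValidateOrderStep {
    private final OrderValidationRules rules;

    ValidateOrderStep(OrderValidationRules rules) {
        this.rules = rules;
    }

    boolean execute(Order order) {
        return rules.validate(order).valid;
    }
}
```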

This isn’t an exhaustive list – the intent is to highlight areas where tight coupling can occur in the business process layer.


Reusable Exception Handling for Services – Podcast Episode

November 13, 2010

New episode added to the Software Reuse Podcast Series on designing reusable exception handling for SOA efforts. It elaborates on using a simple set of components for handling exceptions, capturing relevant metadata, and determining notifications.
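
As a rough sketch of that idea – my own illustration, not the exact design from the episode – a handler can capture failure metadata and let a policy decide whether anyone is notified:

```java
import java.time.Instant;
import java.util.Map;

// Captures the relevant metadata about a service failure.
class ExceptionContext {
    final String serviceName;
    final Instant occurredAt;
    final Map<String, String> metadata;
    final Throwable cause;

    ExceptionContext(String serviceName, Map<String, String> metadata, Throwable cause) {
        this.serviceName = serviceName;
        this.occurredAt = Instant.now();
        this.metadata = metadata;
        this.cause = cause;
    }
}

// Decides whether a given failure warrants a notification.
interface NotificationPolicy {
    boolean shouldNotify(ExceptionContext context);
}

// A single reusable handler shared across services.
class ServiceExceptionHandler {
    private final NotificationPolicy policy;

    ServiceExceptionHandler(NotificationPolicy policy) {
        this.policy = policy;
    }

    void handle(ExceptionContext context) {
        // Persist or log the captured metadata here, then notify if needed.
        if (policy.shouldNotify(context)) {
            // Send an alert to the configured channel.
        }
    }
}
```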



Bringing Events, Processes, and Services Together

November 6, 2010

When working on implementing business process automation solutions, the team needs to bring together several related concepts: chief among them are events, process instances, and service capabilities. I like to think of them in the context of an overall enterprise architecture being aligned with individual solutions and projects. Bringing these concepts together holistically has several benefits:

  • Better alignment between BPM & SOA initiatives (wrote earlier about the risks of not doing so)
  • Consistent interfaces for event producers and consumers (e.g. clients who initiate changes to data or alter a process will be able to reuse a standardized set of messages)
  • Simpler solution architecture – aligning data models across event processing and services greatly reduces the need for data transformation, data mapping, and complex adapters.

Events can be internal or external and can impact business processes in several ways. An external event could initiate a business process (creating a new process instance) or alter an existing one (moving execution out of a wait state, or terminating an existing instance). These events may carry data, state, or a combination of the two that impacts a business process. The business process instance might take an alternate path or spawn a new instance of another process as appropriate.

Process instances themselves can be implemented by leveraging several service capabilities. These service capabilities need to share a consistent set of domain-relevant interfaces. The process instances can then be loosely coupled with services, enabling the service implementations to evolve over time. As state changes happen in a process instance, outbound events can be generated – publications or notifications that are of interest to potentially multiple consumers. Service capabilities can be leveraged not just for creating and changing data but also for encapsulating business rules behind decision steps and for publishing messages to downstream systems/processes.
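
As a simple sketch of the outbound-event idea (illustrative names), a process instance can publish a notification to any number of subscribed consumers when its state changes:

```java
import java.util.ArrayList;
import java.util.List;

// Notification published when a process instance changes state.
class ProcessEvent {
    final String processId;
    final String newState;

    ProcessEvent(String processId, String newState) {
        this.processId = processId;
        this.newState = newState;
    }
}

interface ProcessEventListener {
    void onStateChange(ProcessEvent event);
}

class ProcessInstance {
    private final String id;
    private final List<ProcessEventListener> listeners = new ArrayList<>();

    ProcessInstance(String id) {
        this.id = id;
    }

    void subscribe(ProcessEventListener listener) {
        listeners.add(listener);
    }

    void transitionTo(String state) {
        // ...update internal state, then publish to all interested consumers.
        ProcessEvent event = new ProcessEvent(id, state);
        for (ProcessEventListener listener : listeners) {
            listener.onStateChange(event);
        }
    }
}
```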


Capture Service Metrics

October 22, 2010

Metrics are very useful for understanding service usage, volume trends (growth/decline) that feed performance and capacity planning, the diversity of the user base, and more.

A service mediation layer can initialize and persist metrics, and the persisted data can then be mined to generate reports. What specific metrics can be captured? Here are a few attributes:

  • Incoming time and outgoing (or publication) time.
  • Transport – whether the service was invoked via HTTP, JMS, or some other transport.
  • Requesting host name, queue name, etc.
  • Error details, including the stack trace, if request processing resulted in an error.
  • Service/operation-specific metrics – if you don’t have demanding reporting requirements, these can be stored as a set of name/value pairs.
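
As a sketch of the kind of record a mediation layer could populate per request (names are illustrative), with operation-specific attributes kept in the name/value map:

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

// One metrics record per service request, populated by the mediation layer.
class ServiceMetrics {
    Instant incomingTime;
    Instant outgoingTime;
    String transport;      // e.g. "HTTP" or "JMS"
    String requester;      // requesting host name or queue name
    String errorDetails;   // stack trace when request processing fails
    private final Map<String, String> attributes = new HashMap<>();

    // Service/operation-specific metrics as name/value pairs.
    void addAttribute(String name, String value) {
        attributes.put(name, value);
    }
}
```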

In a follow-up post, I will elaborate on how specific metrics can be captured throughout service processing.

