Part 1 of a two-part article series on SOA adoption is up on the SOA Magazine.
New episode added to the Software Reuse Podcast Series on designing reuse-friendly XML schema definitions as part of SOA efforts. Elaborates on a core set of practices that will make your service contracts more maintainable and reusable.
There are a variety of components that can encapsulate business logic (anything from simple algorithms and calculations to complex orchestrations). These components can be invoked from business process orchestrations as well as stateless services. These reusable components could be business rules integrated as decision services, or legacy services wrapped behind a more decoupled interface. Business events such as a new account being opened or a new security being added to a portfolio could trigger a core piece of logic – e.g. get statement preferences – that can be used to fulfill both these needs.
For instance, in the diagram below, two business processes and a stateless service invoke a common component via a request dispatcher (or a router module).
If a piece of logic that is applicable across business processes or service capabilities is tightly coupled to one of them, you can refactor it to create a new reusable component. This is all the more reason why it is a good idea to go through a service mediation layer when leveraging legacy services from business processes. If you decide to reuse the legacy service in a new orchestration, it will be straightforward to plug in a new consumer.
Many teams that build service capabilities have to manage multiple versions – this is a problem for any shared asset really, be it a library, component, or service. Using extensible schema contracts (also referred to as Consumer-Driven Contracts), you can design service contracts that allow the provider to evolve and consumers to integrate in a flexible manner. In this post, I want to suggest five additional tips for managing web services:
1. Figure out how many versions your team will support concurrently. Too few will force all your consumers onto a single version, and too many will become a maintenance nightmare for you (the provider). In past implementations, I have maintained up to 3 versions while actively moving all service consumers towards one target version. A related approach is to offer multiple flavors of your service capability: one that returns the data most consumers want, a second that provides a minimal set of attributes, and a third that returns the full list of data items. This may or may not be possible in your case, but it is something to consider when designing contracts.
2. Figure out how you are going to support multiple versions as a service provider. You can use XML schema namespaces to indicate versions: http://some-company.com/services/CustomService_ver1_0.xsd, ver1_1.xsd, and so on. Consider creating a service adapter that can translate back and forth between a new service implementation and the existing one. This can potentially let you keep one server-side implementation of the functional logic and still serve your current and new consumers. This adapter component can perform the necessary data transformations, error code and error message translations, and massage the response with data attributes as appropriate.
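To make the adapter idea concrete, here is a minimal Java sketch. The request/response classes and field names (GetCustomerRequestV1, includeLevel, and so on) are hypothetical stand-ins for generated binding classes; in a real service they would come from your WSDL/XSD tooling.

```java
// Hypothetical binding classes for two contract versions.
class GetCustomerRequestV1 {
    String customerId;
}

class GetCustomerRequestV2 {
    String customerId;
    String includeLevel; // new in v2: how much data to return
}

class GetCustomerResponseV2 {
    String customerId;
    String name;
    String loyaltyTier; // new in v2; unknown to v1 consumers
}

class GetCustomerResponseV1 {
    String customerId;
    String name;
}

// Adapter that lets one v2 server-side implementation serve v1 consumers.
public class ServiceVersionAdapter {

    public static GetCustomerRequestV2 toV2(GetCustomerRequestV1 v1) {
        GetCustomerRequestV2 v2 = new GetCustomerRequestV2();
        v2.customerId = v1.customerId;
        v2.includeLevel = "BASIC"; // sensible default for legacy consumers
        return v2;
    }

    public static GetCustomerResponseV1 toV1(GetCustomerResponseV2 v2) {
        GetCustomerResponseV1 v1 = new GetCustomerResponseV1();
        v1.customerId = v2.customerId;
        v1.name = v2.name;
        // loyaltyTier is dropped: v1 consumers never asked for it
        return v1;
    }
}
```

The same component is a natural home for the error code translations and response massaging mentioned above.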
3. Communicate the change in the service capability and gauge existing consumers' appetite for absorbing the changes in the same release time frame in which you are targeting to drop your new version. If you coordinate the release, you can get them onto the new version when you go live. However, for mission-critical applications you will want to support both your current and new versions concurrently for a short period before switching the old one off.
4. When you design forward-compatible schemas, test the data binding against multiple platforms. For example, use WSDL2Java if you are using Apache Axis in Java, or wsdl.exe if you are in .NET, to generate the appropriate web service proxy and data binding classes. What I have done is implement JUnit and NUnit automated test cases that run every time there is a new WSDL or service contract (XSD) change. This will validate not only the service's functional logic but also the forward compatibility of existing clients. When you generate bindings, make sure you generate them with both the new schema/WSDL (your updated version) and the existing schema/WSDL files (the version currently used by production clients).
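As a small self-contained illustration (not tied to Axis or any generated bindings), the Java snippet below checks that a newer response survives validation against an older schema designed with an extension point (xs:any with lax processing). The schema and element names are invented for the example.

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;

public class ForwardCompatCheck {

    // v1 schema with an extension point: old clients tolerate elements added later.
    public static final String V1_XSD =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>" +
        "  <xs:element name='Customer'>" +
        "    <xs:complexType><xs:sequence>" +
        "      <xs:element name='name' type='xs:string'/>" +
        "      <xs:any minOccurs='0' maxOccurs='unbounded' processContents='lax'/>" +
        "    </xs:sequence></xs:complexType>" +
        "  </xs:element>" +
        "</xs:schema>";

    // A v2 response carrying an element the v1 contract never declared.
    public static final String V2_RESPONSE =
        "<Customer><name>Jane</name><loyaltyTier>gold</loyaltyTier></Customer>";

    /** Returns true if a v1 (old) client's schema accepts the given XML. */
    public static boolean v1ClientAccepts(String xml) {
        try {
            SchemaFactory sf =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = sf.newSchema(new StreamSource(new StringReader(V1_XSD)));
            schema.newValidator().validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false; // validation (or schema compilation) failure
        }
    }
}
```

A JUnit suite can wrap checks like this and run on every contract change, as described above.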
5. Establish lightweight service governance – it is critical to plan how many service flavors and versions you will support, and when they will get upgraded, deprecated, decommissioned, etc., and to communicate those decisions to your consumers. Identify checkpoints in your development process where contracts can be reviewed and service behavior can be discussed openly. A well-thought-out service orientation strategy benefits both the provider and the consumers in your organization.
What other tips/techniques have you used?
I wrote earlier about the idea of using consistent error codes for reusable assets. As a follow up, here is a document with a list of reusable return codes that can be used when building service capabilities as part of SOA initiatives. They are categorized into:
- request processing
- data processing
- dependency access
This isn’t an exhaustive list by any means, but it can get you started in terms of achieving consistency across services and projects. The service consumer can use the return code to understand the service provider’s response. This consistent, uniform categorization of return codes can help you reuse error handlers (e.g. handle a particular error the same way regardless of which service capability raised it). It will also help with production support and troubleshooting – a shorter learning curve for support staff and developers categorizing errors at runtime.
Note: The return code derived based on errors need not necessarily be the return code sent back to the service consumer. It is entirely possible that you return a friendly error to the consumer and a detailed error to production support.
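As an illustration, the three categories could be modeled as a small Java enum. The specific code values and names here are invented for the example, not a prescribed standard; the point is that a category prefix lets generic error handlers branch on category rather than on individual codes, and that the consumer-facing code can differ from the detailed internal one.

```java
// Hypothetical reusable return codes, grouped by the three categories above.
public enum ReturnCode {
    // request processing
    INVALID_REQUEST("REQ-001"),
    UNAUTHORIZED_CONSUMER("REQ-002"),
    // data processing
    DATA_NOT_FOUND("DAT-001"),
    DATA_VALIDATION_FAILED("DAT-002"),
    // dependency access
    DEPENDENCY_TIMEOUT("DEP-001"),
    DEPENDENCY_UNAVAILABLE("DEP-002");

    private final String code;

    ReturnCode(String code) { this.code = code; }

    public String code() { return code; }

    /** Category prefix – reusable error handlers can key off this. */
    public String category() { return code.substring(0, 3); }

    /** Friendly code for consumers; detailed code stays with production support. */
    public String consumerFacing() {
        return category().equals("DEP") ? "SVC-UNAVAILABLE" : code;
    }
}
```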
Are there additional ones to include in this list?
In part 1 of this series, the rationale for a data services product line was introduced. In this part, I want to identify the specific benefits of taking a service-oriented approach to building a data services product line:
Abstraction: Enterprise data entities are complex to assemble and deliver. Core data entities might be physically stored in several repositories, often requiring several dozen queries to assemble a customer or product. The enterprise data service needs to insulate the data service consumer from the physical implementation details of multiple sources and from source-specific data access logic and semantics. From a consumer perspective, the service is a capability that they can access and interact with as though it were a single record. SOA can provide this abstraction in the form of one or more web services.
Federation: If all the repositories and services for a core enterprise data entity reside in a single department or division, federation will be less attractive. But given mergers and acquisitions, as well as the need to access enterprise data from several touch points, federation is a real business need. The data service can federate across systems or repositories based on data category or other criteria. Alternatively, the data service can be part of a composition of services that are used to support a larger enterprise-wide requirement. Either way, SOA provides the necessary machinery to federate data.
Interoperability: Data services need to be interoperable across several computing platforms and technologies. This is a fundamental requirement because the enterprise has several different technologies that aren’t going to suddenly become homogeneous. SOA stacks with support for open web-based standards such as SOAP, HTTP, and XML are ideal for realizing this need.
Integration: Enterprise data services need to be integrated with external consuming applications via two primary message exchange patterns, namely on-demand (request/reply) and event-driven (publish/subscribe). The service logic needs to be implemented by integrating across multiple data repositories and applications. The integration glue layer has connectivity components, error handling considerations, data transformations, data cleansing/lookup rules, and various data source-specific considerations.
Reuse: Data services are meant to be reusable assets that get leveraged across the enterprise. Services can be reused across multiple physical transports, distribution channels, and business process orchestrations. The real benefit provided by SOA is the ability to reuse a service from a number of different computing platforms via open protocols. The ease of interoperability is a key enabler for service reuse.
Versioning: Multiple versions of services as a whole, as well as of fine-grained service capabilities, are essential for the product line. SOA provides mechanisms to host multiple service versions either as co-existing units of logic (e.g. using XML schema namespace-based versioning) or via an adapter layer that can support multiple service versions (the adapter can translate a prior-version request to the newer version and vice versa). This gives service consumers the flexibility to upgrade to newer versions of the service gracefully over time.
- It is good software design practice to bind to an interface as opposed to an implementation, so individual applications won’t be directly coupled to an external vendor solution.
- Provides the flexibility to augment the solution using multiple vendors. Related to the point above, you can utilize one vendor for a subset of capabilities and another for a different set.
- This API can be the ideal place to integrate your enterprise capabilities within the context of BPM solutions. Instead of making one-off or tactical modifications to a vendor solution, which could be both expensive and proprietary, you can augment missing capabilities using the abstraction API. For example, if the BPM solution doesn’t support authentication based on Active Directory, this abstraction API can provide that capability (most likely you already have this component in your enterprise). Additionally, this is also the place to integrate horizontal capabilities such as message routing, metrics, monitoring, and error handling. Do you want your BPM solutions to report exceptions differently than other applications? In the same vein, this API can integrate with services or libraries that encrypt/hide sensitive data attributes before returning process state to a calling application. This has the potential to reduce duplication of such logic across user interfaces that integrate with related business processes.
- Can potentially simplify complexities associated with a native API. If the native API needs a set of steps for starting process instances or getting task/work items for a particular user, this abstraction API can simplify those calls. This not only makes integration easier but reduces the opportunities for errors across development efforts. This API can, in essence, act as a façade.
- The API standardizes access to BPM capabilities and reduces the possibility of competing integration mechanisms across development efforts. If one team uses the native API as-is and another builds a new one on top – you have two ways of accessing the BPM engine. This problem gets worse as additional teams start to use BPM.
- This API could also make every business process support a core set of functions in addition to start/stop/suspend/resume calls. For instance, every business process can provide a getCurrentStatus() or reassignProcessInstance() operation that will make it easier to manage processes at runtime.
This API could be realized as a web service depending on the level of heterogeneity and performance requirements. This would essentially act as a service enabler for your business processes.
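Here is a sketch of what such an abstraction might look like in Java. The interface and the toy in-memory implementation below are hypothetical; a real version would delegate to the vendor's native API behind the façade.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Vendor-neutral facade over a BPM engine (hypothetical operations).
interface ProcessEngineFacade {
    String startProcess(String processName, Map<String, Object> inputs);
    String getCurrentStatus(String processInstanceId);
    void reassignProcessInstance(String processInstanceId, String newAssignee);
}

// Toy in-memory implementation standing in for a vendor adapter.
public class InMemoryProcessEngine implements ProcessEngineFacade {
    private final Map<String, String> status = new HashMap<>();
    private final Map<String, String> assignee = new HashMap<>();

    public String startProcess(String processName, Map<String, Object> inputs) {
        String id = UUID.randomUUID().toString();
        status.put(id, "RUNNING"); // real code: delegate to the native BPM API
        return id;
    }

    public String getCurrentStatus(String processInstanceId) {
        return status.getOrDefault(processInstanceId, "UNKNOWN");
    }

    public void reassignProcessInstance(String processInstanceId, String newAssignee) {
        assignee.put(processInstanceId, newAssignee);
    }

    String assigneeOf(String processInstanceId) {
        return assignee.get(processInstanceId);
    }
}
```

Because callers bind only to ProcessEngineFacade, swapping the vendor implementation (or splitting capabilities across two vendors) does not ripple into consuming applications.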
The above list isn’t exhaustive – are there additional ones to add to this list?
Service capabilities can be reused mainly from the user interface tier, the business services tier, batch processes, and real-time processes and could be consumed from a plethora of platforms. These capabilities could be accessed via message exchange patterns – request/reply (tight or relaxed SLA) and publish/subscribe. All of these patterns drive your SOA and systematic reuse efforts. Some capabilities might always be available only via a single exchange mechanism but you will increasingly be offering similar capabilities across these three patterns.
Note: the illustrations below depict a data source behind the service capability. This isn’t a requirement, but if you are exposing core data as entity services or business services that access underlying data, you will need one or more data sources for the service to be functional.
#1 Request/Reply (tight SLA – typically synchronous)
This is the most common pattern when executing on-demand data services. It is typical of interactive applications that send requests and block on a response. When doing synchronous request/reply via JMS, a temporary or physical response queue could be created. Regardless of the transport used, the idea is to get a response very quickly.
#2 Request/Reply (relaxed SLA – typically asynchronous)
This pattern is used when executing long running service capabilities. The consumer sends a request and does not block on a reply. When the response is sent from the data service, the consumer can use a callback mechanism (a message listener) to process the data. This pattern also typically uses correlation identifiers to relate request and response messages. When using JMS, a physical queue is used to obtain the response messages. A queue receiver drains the message from the queue and proceeds with processing.
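The correlation-identifier mechanics can be sketched without a real JMS broker. In the toy Java example below, pending requests are keyed by correlation ID, and onMessage plays the role of the queue receiver's message listener. In actual JMS you would set JMSCorrelationID on the message and register a MessageListener on the response queue; the class and method names here are illustrative.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Correlates asynchronous responses back to their originating requests by ID.
public class RequestCorrelator {

    private final Map<String, CompletableFuture<String>> pending =
        new ConcurrentHashMap<>();

    /** Send a request without blocking; the caller holds a future for the reply. */
    public CompletableFuture<String> send(String correlationId, String requestBody) {
        CompletableFuture<String> reply = new CompletableFuture<>();
        pending.put(correlationId, reply);
        // real system: producer.send(message with JMSCorrelationID = correlationId)
        return reply;
    }

    /** Invoked by the message listener when a response arrives on the reply queue. */
    public void onMessage(String correlationId, String responseBody) {
        CompletableFuture<String> reply = pending.remove(correlationId);
        if (reply != null) {
            reply.complete(responseBody); // callback fires; consumer proceeds
        }
        // else: late or unknown response – log and discard
    }
}
```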
#3 Publish/Subscribe (event driven)
This pattern is used by publication services that execute based on a business event or a data operation event. The service will publish standardized messages that align with your business-specific or domain-specific data model. This is very useful when multiple consumers need to get notified upon updates/changes to core data. Using this model, new consumers can be added via configuration in a message broker as opposed to writing code for each integration. The service will publish to a destination (i.e. a Topic) and subscribers (consumer applications or processes) will each get the appropriate publication.
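The fan-out behavior can likewise be sketched in-memory in Java: one publish call, and every subscriber receives its own copy. A real deployment would use a JMS Topic or the message broker's subscription configuration instead of this toy class.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy topic: every subscriber receives each published message.
public class Topic {

    private final List<Consumer<String>> subscribers = new ArrayList<>();

    /** Real brokers add subscribers via configuration, not code. */
    public void subscribe(Consumer<String> subscriber) {
        subscribers.add(subscriber);
    }

    /** Fan out one standardized message to all registered subscribers. */
    public void publish(String message) {
        for (Consumer<String> subscriber : subscribers) {
            subscriber.accept(message); // each consumer gets the publication
        }
    }
}
```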
Tip #19 – Design Loosely Coupled Service Contracts
Loosely coupled service capabilities help achieve agility in business processes and technology solutions. Why? Because they make fewer assumptions about implementation technologies and consumption characteristics. Designing contracts or interfaces is a critical activity when building service capabilities. The interface needs to avoid technology-specific and implementation-environment-specific constructs. This includes operating system variables, vendor-specific variables, database layout/structure, etc. The idea is to not tie yourself too closely to a particular implementation. Ask yourself: what is the impact to the service capability if the underlying database is moved to a different vendor? What if we skip the database completely and load data from a file cache? You shouldn’t have to move mountains to accommodate such changes if the interface is not tied to a single implementation. This concept must sound familiar :-) Indeed, it is the same idea that has been espoused in object-oriented design: separate interface from implementation. A service capability may utilize a legacy provider, a packaged application, or an externally hosted solution (a cloud, maybe). It is essential that the service capability is carefully designed when integrating with these dependencies. If there are attributes internal to a legacy system, ensure that they are not specified on the service contract.
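In code terms, the database-versus-file-cache question is the classic interface/implementation split. The Java sketch below is purely illustrative (the CustomerStore name and both implementations are made up): the interface carries no hint of where the data lives, so swapping the backing store does not ripple into consumers.

```java
import java.util.HashMap;
import java.util.Map;

// Contract: no vendor, schema, or storage detail leaks through.
interface CustomerStore {
    String findName(String customerId);
}

// One implementation might wrap a relational database...
class DatabaseCustomerStore implements CustomerStore {
    public String findName(String customerId) {
        // real code: JDBC/ORM query against the vendor database
        return "Jane (from DB)";
    }
}

// ...another might read from a file-based cache. Consumers cannot tell.
public class FileCacheCustomerStore implements CustomerStore {
    private final Map<String, String> cache = new HashMap<>();

    public FileCacheCustomerStore() {
        cache.put("C42", "Jane (from cache)"); // loaded from files in real code
    }

    public String findName(String customerId) {
        return cache.get(customerId);
    }
}
```

A consumer written against CustomerStore keeps working unchanged whichever implementation is wired in, which is exactly the property a loosely coupled service contract should preserve.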