5 Reasons for Building Tactical Services

October 7, 2010

Tactical services introduce higher-than-necessary coupling between service providers and consumers, and have brittle contracts that force service implementation changes onto consumers. They do not reuse standard schemas and datatypes, increasing data transformation and integration costs, and they often tightly couple service business logic with transport-specific logic.

In short, tactical services inhibit reusability, increase maintenance costs and reduce the overall effectiveness of service oriented architecture (SOA) efforts.

So, why do teams end up with tactical services? Here are five reasons:

  1. Lack of time to design proper service interfaces – service contracts are rushed to clients, exposing needless internal details, introducing redundant business object definitions, and providing inconsistent behavior.
  2. Lack of a conceptual data model – without a conceptual data model capturing key domain concepts and their relationships, it is natural that multiple service operations start to define concepts in their own unique ways.
  3. Insufficient coordination between teams building service capabilities within the domain – when teams don’t talk to each other, many opportunities to reuse schemas, service semantics, behavior, and utilities are lost. As the number of teams increases, so does the need for service governance and alignment across projects.
  4. Lack of a coherent strategy tying together business process management, business events, and messaging within the context of service development – business processes could be service enabled, and standard business schemas can be used to notify interested consumers. However, without an overall strategy, teams will look at these independently, increasing implementation costs and missing opportunities for greater alignment.
  5. Insufficient technical leadership – when confronting multiple projects that occur either within a short time window or back to back, it is critical to demonstrate leadership. Why? There needs to be a strong voice evangelizing the use of business-facing services, loosely coupled interfaces, and mediation of service requests.
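To make reason #2 concrete, here is a minimal sketch (in Python, with hypothetical names) of what a shared conceptual data model buys you: one Customer definition that every service operation in the domain reuses, instead of each operation inventing its own ad-hoc shape.

```python
from dataclasses import dataclass

# Hypothetical shared domain model: a single Customer definition that
# every operation in the domain reuses.
@dataclass(frozen=True)
class Customer:
    customer_id: str
    name: str

def get_customer_profile(customer_id: str) -> Customer:
    # Both operations return the same shared type...
    return Customer(customer_id=customer_id, name="ACME Corp")

def search_customers(name_prefix: str) -> list[Customer]:
    # ...so consumers integrate against one definition, not several
    # redundant, subtly different ones.
    return [c for c in [get_customer_profile("C-1")]
            if c.name.startswith(name_prefix)]
```

With a shared model in place, redundant business object definitions (reason #1) become much harder to introduce by accident.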

Designing Reuse-friendly Schemas for SOA – New Podcast Episode

July 11, 2010

A new episode has been added to the Software Reuse Podcast Series on designing reuse-friendly XML schema definitions as part of SOA efforts. It elaborates on a core set of practices that will make your service contracts more maintainable and reusable.

Like this post? Subscribe to RSS feed or get blog updates via email.

SOA Patterns and Practices

October 4, 2009

In an earlier post I listed a set of anti-patterns to avoid when pursuing service orientation within the context of building reusable capabilities. As promised, here is a set of patterns and practices that will increase the likelihood of success with reusable services.

Build with an enterprise mindset – Avoid building service capabilities that are project/initiative specific, and pursue reuse in a systematic manner. When designing and building services, do your best to decouple them from implementation-specific, distribution channel-specific, or consumer-specific logic. The goal should be to build reusable services unless there is a compelling reason not to. Similarly, actively refactor existing capabilities to be more reusable as needed.

Be transport agnostic – Support transports other than HTTP; specifically, reliable transports. Reliable transports decouple the sender and receiver from having to be available at the same time for data exchange. Reliable delivery, automatic retry, and a host of other features relevant to SOA are provided by messaging middleware out of the box.
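As a rough illustration of transport agnosticism – the names and payloads here are hypothetical, and the queue is a stand-in for real middleware such as MQ – business logic can depend on an abstract transport so that HTTP and a reliable queue become interchangeable:

```python
from abc import ABC, abstractmethod

# The service's business logic depends only on this abstraction,
# never on a concrete wire protocol.
class Transport(ABC):
    @abstractmethod
    def send(self, payload: str) -> None: ...

class HttpTransport(Transport):
    def send(self, payload: str) -> None:
        print(f"POST /orders {payload}")  # stand-in for a real HTTP call

class QueueTransport(Transport):
    def __init__(self) -> None:
        self.queue: list[str] = []  # stand-in for MQ/JMS middleware
    def send(self, payload: str) -> None:
        self.queue.append(payload)  # a real broker would persist and retry

def submit_order(order_id: str, transport: Transport) -> None:
    # Business logic builds the message; it never knows the transport.
    transport.send(f'{{"orderId": "{order_id}"}}')
```

Swapping `HttpTransport` for `QueueTransport` is then a wiring decision, not a code change in the service itself.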

Complement on-demand with event-driven offerings – Sooner or later you will need to support asynchronous messaging for both request/reply and publications. Asynchronous request/reply gives consumers reliability and flexibility in consuming data. Standard publications facilitate large-scale reuse of data services: when a new consumer needs an existing service, they can simply be subscribed to a standard publication.
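A minimal in-process sketch of that publication model (topic names and event shapes are hypothetical; a real deployment would sit on messaging middleware): onboarding a new consumer is a subscription, not a change to the publishing service.

```python
from collections import defaultdict
from typing import Callable

# Minimal pub/sub sketch: the publisher knows topics, not consumers.
class Publisher:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Every subscriber receives the same standard publication.
        for handler in self._subscribers[topic]:
            handler(event)

received: list[dict] = []
bus = Publisher()
bus.subscribe("customer.updated", received.append)   # existing consumer
bus.subscribe("customer.updated", lambda e: None)    # new consumer: configuration only
bus.publish("customer.updated", {"customer_id": "C-9"})
```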

Embrace platform heterogeneity – Platform homogeneity is an illusion; your services will be invoked from several platforms. .NET, Java/J2EE, mainframe systems, and Perl may all be in the mix. Use the WS-I profiles and their conformance tools to ensure interoperability.

Mediate service access – Introduce a service mediation layer that can provide protocol bridging and data transformation, enforce security policy, and capture metrics. Mediation is especially relevant when wrapping legacy capabilities and you don’t want consumers tightly coupled to systems slated for retirement.
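A toy sketch of that mediation idea – the backend, field names, and contract here are all hypothetical: the mediator translates the legacy system's vocabulary into the business contract and captures metrics, so consumers never couple to the system being retired.

```python
# Stand-in for a legacy system slated for retirement.
class LegacyBackend:
    def fetch(self, acct: str) -> dict:
        return {"ACCT_NO": acct, "BAL_AMT": "100.25"}  # legacy field names

class Mediator:
    def __init__(self, backend: LegacyBackend) -> None:
        self.backend = backend
        self.request_count = 0  # metrics capture

    def get_account(self, account_id: str) -> dict:
        self.request_count += 1
        raw = self.backend.fetch(account_id)
        # Data transformation: map legacy fields to the business contract.
        return {"accountId": raw["ACCT_NO"], "balance": float(raw["BAL_AMT"])}
```

When the legacy backend is replaced, only the mediator changes; the contract consumers see stays put.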

Achieve semantic & syntactic symphony – XML schemas, data types, and web service contracts should be aligned with your business domain entities, not with individual systems or packaged products. Achieve consistent naming and definitions across operations – this will not only make life easier for consumers but will also make your contracts simpler to maintain. Reuse business data types and data object definitions across operations.
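Here is the type-reuse point in miniature (hypothetical names, Python dataclasses standing in for XML schema types): one shared Money definition used by every operation, instead of each contract redefining amount-plus-currency its own way.

```python
from dataclasses import dataclass

# One business data type, defined once...
@dataclass(frozen=True)
class Money:
    amount: float
    currency: str

@dataclass(frozen=True)
class QuoteResponse:
    price: Money  # ...reused by this operation...

@dataclass(frozen=True)
class InvoiceResponse:
    total: Money  # ...and this one, so consumers map it exactly once.

quote = QuoteResponse(price=Money(9.99, "USD"))
invoice = InvoiceResponse(total=Money(9.99, "USD"))
```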

Validate service requests – Service capabilities need to validate incoming requests before interacting with operational data stores and transactional systems. If your validation logic is complex and spread across capabilities, strongly consider externalizing those decisions in a rules engine.
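A small sketch of that externalization, with rules expressed as data rather than scattered across capabilities (field names are hypothetical; a real system might hand this list to a rules engine):

```python
from typing import Callable

# A rule inspects a request and returns an error message, or None if it passes.
Rule = Callable[[dict], "str | None"]

rules: list[Rule] = [
    lambda req: "missing customerId" if "customerId" not in req else None,
    lambda req: "quantity must be positive" if req.get("quantity", 0) <= 0 else None,
]

def validate(request: dict) -> list[str]:
    # Run every rule before touching operational data stores.
    return [err for rule in rules if (err := rule(request)) is not None]
```

Because the rule set lives in one place, adding a rule means appending to the list, not editing every capability.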

Robust consumer integration – A high-quality consumer integration function will reduce pressure on the development team and help create and maintain service documentation. More importantly, you will build useful knowledge of the integration challenges and issues consumers face. This function is the bridge between consumers and service development teams, channeling feedback and enhancement requests. It can also report on access patterns, invalid messages/exceptions, consumer usage trends, and SLA violations/adherence. Finally, it provides a consistent experience for consumers during provisioning, integration, and in production.

This isn’t an exhaustive list but will put you on the path to building the right set of reusable capabilities for automating business processes and modernizing applications.


Slide 3

  • We need to build services that are project agnostic and foster reuse across initiatives. Reuse needs to be given ‘first class citizen’ status when designing and building data services. The goal should be to build reusable services – reuse needs to be in our DNA!
  • We need to support transports other than HTTP; specifically, reliable transports such as MQ, even if web services continue to be a popular metaphor.
      ◦ Reliable transports such as MQ are able to handle large payloads
      ◦ Queues decouple the sender and receiver from having to be available synchronously for data exchange
      ◦ Reliable delivery and retry are features provided by messaging middleware out of the box
  • In addition to on-demand services, we need to build event-driven services, supporting asynchronous messaging for both request/reply and publications.
      ◦ Asynchronous request/reply gives consumers reliability and flexibility in consuming data
      ◦ Standard publications facilitate large-scale reuse of data services; when a new consumer needs data from an existing service, they can be subscribed to a publication via configuration
  • Platform homogeneity is an illusion. Data services will be invoked from several platforms:
      ◦ .NET
      ◦ Java/J2EE
      ◦ JavaScript


Driving Systematic Reuse With MDM

September 17, 2009

I have been espousing the need to pursue systematic reuse in conjunction with other initiatives such as SOA, BPM, and object-oriented programming, in an agile manner. Master Data Management (MDM) aims to manage core enterprise data as a strategic asset for the organization. It impacts data quality, data governance, and data services, as well as the business processes that access/update core data assets. MDM can play an important role in your systematic reuse efforts as well. How? Let’s think about the intent behind MDM – the primary driver is to reduce costs and enable revenue generation using enterprise data assets such as customer data, account data, product data, etc. These goals require not only technology but also processes and governance.

You can use MDM to drive systematic reuse in the following ways:

  • Opportunistically create fine-grained and coarse-grained data services as dictated by your business needs. Your MDM data store will eventually evolve into the strategic data store for all business processes. But while you get there, you will have to incrementally and iteratively build out a service inventory. This service inventory will be reusable across multiple projects and initiatives while giving you the flexibility to change the underlying data structures and processing logic. More importantly, you will build service capabilities that you know at least one client will use.
  • While developing data services on top of your MDM solution, your information modelers and analysts can re-examine the domain and update data entities, relationships, and business rules. All this information will guide your canonical data models and can help in building object libraries and domain-specific language toolkits. Basically, you are reusing the analysis effort for both service and object capabilities. You can even use XML-object data binding tools to generate classes from XML schemas and vice versa. A likely outcome of such an exercise is also identifying refactorings to the existing codebase: your service capabilities and object models may not reflect the business domain accurately, and you can make those changes in conjunction with business deliverables.
  • Related to the point above, you can develop reusable decision services, including specific rule sets that serve not only MDM-based processing but also other problems in your domain. If the entities and rules are reused, you will go a long way toward reducing costs when building business processes.
  • In an earlier post I talked about the importance of easing integration for consumers. MDM will streamline data processing and improve data quality. But it also presents an opportunity to create easy-to-use integration toolkits for consumers to get the improved data. If you know marketing applications consume core data in a certain way, would it not make sense to make that consumption as easy as possible?
  • Integrate data access/update policies, data quality checks, and the use of specific data governance workflows into design and code reviews. As MDM practices mature in your organization, you will get smarter about how different applications, processes, and external partners need to interact with your MDM data store. In essence, you can mandate interaction with MDM data via standardized, managed interfaces. Over time this will surely drive reuse of data services as well as data governance workflows and business rules.
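To illustrate the binding point above, here is a tiny round trip (hypothetical Customer entity and element names; a schema/object binding tool would generate this kind of code for you) between the canonical object model and its XML representation:

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

# A canonical entity from the (hypothetical) domain model.
@dataclass
class Customer:
    customer_id: str
    name: str

def from_xml(xml_text: str) -> Customer:
    # Bind an XML document to the object model.
    root = ET.fromstring(xml_text)
    return Customer(root.findtext("customerId"), root.findtext("name"))

def to_xml(c: Customer) -> str:
    # Serialize the object model back to XML.
    root = ET.Element("customer")
    ET.SubElement(root, "customerId").text = c.customer_id
    ET.SubElement(root, "name").text = c.name
    return ET.tostring(root, encoding="unicode")

c = from_xml("<customer><customerId>C-7</customerId><name>ACME</name></customer>")
```

The same analysis that shaped the schema is reused, unchanged, by the object capabilities.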

This list isn’t exhaustive but my intent was to illustrate how MDM can help your systematic reuse efforts. The key message is basically – don’t pursue reuse in isolation with other initiatives.



10 Design Assumptions That Hurt Your SOA Efforts

August 24, 2009

Building service capabilities that are strategic for your enterprise is a key aspect of SOA and lays the foundation for agile business processes in your organization. Many teams are building service capabilities both as part of top-down SOA initiatives and in bottom-up efforts driven by technology teams pursuing the benefits of service orientation.

How do you ensure that the service capabilities are in line with business objectives?
Are there assumptions that hurt your SOA? I believe so! Here are 10 design assumptions that you need to watch out for:

  1. Service capabilities are always Web Services
  2. Services need to support only one data format (i.e., SOAP/XML)
  3. Services are implemented first and then contracts are extracted (aka the code-first approach)
  4. Service contracts don’t have to be reviewed or governed
  5. Service capabilities from vendor solutions can be consumed out of the box without mediation
  6. Services handle exceptions in an ad-hoc manner
  7. Services implement consumer specific logic
  8. Service interfaces are always non-validating
  9. Service capabilities are always accessed in an on-demand fashion. No need to support event-driven interactions.
  10. Service consumers will all migrate to new versions simultaneously. No need to support multiple versions.
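Assumption #10 is worth a concrete counter-sketch. Consumers migrate at their own pace, so a service can route each request to the contract version it declares (the version keys, fields, and split-name change here are hypothetical):

```python
# v1 contract: a single name field.
def handle_v1(req: dict) -> dict:
    return {"name": req["name"]}

# v2 contract: name split into first/last, for newer consumers.
def handle_v2(req: dict) -> dict:
    first, _, last = req["name"].partition(" ")
    return {"firstName": first, "lastName": last}

handlers = {"1.0": handle_v1, "2.0": handle_v2}

def dispatch(request: dict) -> dict:
    # Default to the oldest contract when no version is declared,
    # so existing consumers keep working untouched.
    return handlers[request.get("version", "1.0")](request)
```

Both versions stay live side by side until the last v1 consumer has migrated.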

Please add to this list any additional assumptions that inhibit the effectiveness of SOA efforts!


