SOA Anti-Patterns When Building Services

September 30, 2009

When building services and service capabilities for your enterprise, you have to ensure that you have the right set of practices to succeed as a service provider. Here is a set of anti-patterns to avoid when building enterprise service capabilities as part of your SOA initiatives. You can use this as a list of things to watch out for, and possibly course correct when/if you recognize them in your teams.

Project Focus Only – Service capabilities are being built for projects, and their behavior tends to be project-specific. This myopic view tends to introduce needless tight coupling into your services (making assumptions about client behavior, persistence of state, as well as how the request/response models are structured, etc.). Being too project focused also doesn’t give your firm the opportunity to evaluate existing services for overlapping/redundant functionality.

Ad-hoc Reuse – ad-hoc, minimal reuse occurs if you are lucky, while lots of redundant service capabilities get built. A more worrying sign: these capabilities don’t leverage the same underlying implementation environments, object libraries, etc., increasing development, maintenance, and testing costs.

Assuming all your services will be Web services – SOAP over HTTP is the default and only supported mechanism for services. Services only get consumed via on-demand request/reply. There is no support for asynchronous messaging or RESTful services. The idea isn’t that you must have all these flavors – the point is to have the flexibility to meet business needs and not be rigid about the packaging and transport choices.

Building services for a single platform – Near universal platform homogeneity. The assumption is that the majority (if not all) of consumers are invoking service capabilities from a single platform.

Semantic and syntactic cacophony – Service contracts are minimally aligned with your enterprise’s logical data model, resulting in inconsistent naming and definitions across services. Additionally, there are redundant data type and business entity definitions across services. This results in incompatible data bindings as well as increased maintenance costs. Not to forget, it is downright confusing for your consumers to keep writing custom code to parse responses from related service capabilities.

Not Validating Service Requests – non-validating services provide a legitimate entry point for invalid data to interact with your reference data and operational systems. If you have no data validation rules, how will you know you have bad data? You will often end up fixing corrupt data so it doesn’t cascade to batch jobs or downstream processes.
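
To make the last point concrete, here is a minimal sketch of boundary validation, assuming a hypothetical order request with `account_id`, `amount`, and `currency` fields (all names are illustrative, not from any real service contract):

```python
# Hypothetical sketch: validate an incoming request at the service boundary
# before it can touch reference data or operational systems.
# The field names and allowed currencies are illustrative assumptions.

def validate_order_request(request: dict) -> list:
    """Return a list of validation errors; an empty list means the request is valid."""
    errors = []
    if not request.get("account_id"):
        errors.append("account_id is required")
    amount = request.get("amount")
    if not isinstance(amount, (int, float)) or amount <= 0:
        errors.append("amount must be a positive number")
    if request.get("currency") not in {"USD", "EUR", "GBP"}:
        errors.append("currency must be one of USD, EUR, GBP")
    return errors

def handle_request(request: dict) -> dict:
    errors = validate_order_request(request)
    if errors:
        # Reject at the boundary -- bad data never reaches downstream systems.
        return {"status": "rejected", "errors": errors}
    return {"status": "accepted"}
```

The key design point is that rejection happens before any downstream call, so batch jobs and operational stores never see the bad data.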

Minimal or no integration assistance – integration efforts tend to happen on a per-project and/or per-service-capability basis. This results in increased pressure on development teams to help current and prospective consumers evaluate services. When inadequate documentation is combined with ad-hoc knowledge sharing across projects, it is a lose-lose situation for both the service provider and consumers. If you aren’t learning and capturing technical/process/data-related integration issues, it is sure to be a disruptive experience for every consumer that uses your capability.

Do you have additional ones to add to this list? In the next post, I will explain how to turn these around so you can build strategic services for your organization.

Like this post? Subscribe to RSS feed or get blog updates via email.


Relationship Between BPM & SOA

September 28, 2009

What is the need for pursuing both SOA & BPM? Isn’t SOA enough? Here is a post on BPM and its relationship with SOA.

BPM stands for Business Process Management and is a discipline that aims to model, realize, manage, and even simulate business processes that involve both human and system tasks. These system steps might be automated via batch processes, on-demand services, or asynchronous processes. One key aspect of BPM is the focus on business metrics: improving revenue, saving costs, improving customer experience, and increasing productivity are what it aspires to achieve. Whether or not you explicitly model and manage business processes, they exist in your organization. BPM provides the technology infrastructure to better capture, streamline, and manage the realizations of business processes.

Typically, a visual tool is used to capture process flows that are decoupled from technology constraints and limitations. This rough technology-agnostic business flow can then be translated into an execution flow that takes into account technology choices, scalability, user preferences, and integration with front-end and back-end services. BPM process flows can invoke data services and business services and perform human workflow tasks – delegation, escalation, routing – all within a stateful container.

Where does SOA fit into this picture?

Services ideally should be devoid of process-specific couplings – meaning they should be able to execute a series of steps based on a set of inputs and provide a set of outputs. Service capabilities need to be built for reuse across multiple business processes. BPM processes should avoid directly invoking service providers in legacy systems, packaged applications, etc. – although technically feasible, it isn’t a wise move for the long term.

Why? Because without a mediation layer you lose significant benefits:

  1. abstraction (the legacy system might go away – are you going to force all your consumers to be impacted?)
  2. alignment with enterprise data models (minimize data transformations across calls, reuse schema types both at the business entity level and the individual types)
  3. insulation from database changes – structure, data typing, etc.
  4. freedom to switch service providers (example: moving to a cloud based infrastructure for saving costs)
  5. scalability (independently scale the service layer by deploying redundant listeners or physical nodes, etc.)
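
The abstraction and provider-switching benefits above can be sketched in a few lines. This is a hedged illustration, not a real API: the provider classes, method names, and field names are all hypothetical, standing in for a legacy system and a cloud replacement behind one canonical contract.

```python
# Illustrative sketch of a mediation layer: consumers call a stable facade,
# never a provider directly, so the provider can be swapped (legacy -> cloud)
# without impacting consumers. All classes and fields here are hypothetical.

class LegacyCustomerSystem:
    def FETCH_CUST(self, cust_no):  # awkward legacy-style contract
        return {"CUST_NO": cust_no, "CUST_NM": "ACME CORP"}

class CloudCustomerService:
    def get_customer(self, customer_id):
        return {"id": customer_id, "name": "Acme Corp"}

class CustomerService:
    """Mediation layer: one canonical contract in front of any provider."""
    def __init__(self, provider):
        self._provider = provider

    def get_customer(self, customer_id: str) -> dict:
        if isinstance(self._provider, LegacyCustomerSystem):
            raw = self._provider.FETCH_CUST(customer_id)
            # translate the legacy shape into the canonical model
            return {"id": raw["CUST_NO"], "name": raw["CUST_NM"].title()}
        return self._provider.get_customer(customer_id)
```

Consumers see the same response shape from `CustomerService(LegacyCustomerSystem())` and `CustomerService(CloudCustomerService())`, which is exactly the freedom-to-switch benefit listed above.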

SOA can manage the messiness of enterprise integration (protocol bridging, data translations, transaction management). Most SOA capabilities tend to be stateless, while BPM processes persist process state for several business processes. SOA can focus on building strategic services while BPM leverages them as much as possible to build new business capabilities. In other words, SOA and BPM can feed off each other – it is a symbiotic relationship wherein there is a need for both reusable service capabilities and well-managed business processes.


Building Reusable Assets With Agile Practices

September 25, 2009

When you start building reusable assets there is considerable awkwardness in trying to align your reuse strategy with iteration goals. The real challenge is when you are not sure about refactoring existing assets. You will discover hidden couplings to implementation technology or platform, undocumented assumptions about how something will work, and all kinds of duplication sprinkled across your codebase. Soon, you will find yourself asking questions such as – What can we reuse? Didn’t we just solve this same problem? Is this reusable as-is, or does it need to be refactored? It will get easier to align your assets to a product line with time and practice. Product lines tend to grow, and your understanding of the business domain expands. Your ability to spot common needs and variations in those common needs also improves over time. You will deliver on your immediate goals and still be building towards the systematic reuse strategy.

You are doing right if

  • You whiteboard a design as a team and in a few hours identify new and existing reusable assets
  • Your design identifies gaps in an existing asset that needs refactoring so the asset can be reused
  • You add items to your refactoring list as and when you identify a gap in an existing asset
  • The team collaborates on aligning your systematic reuse strategy with your iteration goals.
  • You are able to recognize variations in your domain and apply that to your reusable asset design
  • You are decoupling connectivity components from business logic components
  • A family of message types are defined and used for integration with external systems
  • Design patterns are being leveraged to support essential variations in your domain
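
The last sign above – design patterns supporting essential variations – can be sketched with a simple Strategy. The domain (customer-tier pricing) and the discount figure are assumptions for illustration only:

```python
# Illustrative sketch (hypothetical domain): the Strategy pattern isolating a
# known point of variation -- here, how a price is computed per customer tier.

from abc import ABC, abstractmethod

class PricingStrategy(ABC):
    @abstractmethod
    def price(self, base: float) -> float: ...

class RetailPricing(PricingStrategy):
    def price(self, base: float) -> float:
        return base

class PreferredPricing(PricingStrategy):
    def price(self, base: float) -> float:
        return base * 0.9   # assumed 10% discount for preferred customers

class Quote:
    """Core quoting logic stays stable; the variation is injected."""
    def __init__(self, strategy: PricingStrategy):
        self._strategy = strategy

    def total(self, base: float, quantity: int) -> float:
        return self._strategy.price(base) * quantity
```

The pattern earns its keep only because the tier-based variation actually exists in the domain – which is the warning below about patterns without domain alignment.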

Warning signs

  • You are spending week after week in design and architecture. There are no signs of working code.
  • New design patterns and technologies are being introduced without reason (showcasing architectural complexity or technical brilliance don’t count!)
  • You don’t organize existing assets in any consistent manner, forcing your team to recall capabilities from memory every time they want to evaluate existing code for reuse.
  • Nothing in the business domain tends to vary and you have a tough time finding common patterns across user stories
  • The codebase is sprinkled with several design patterns that increase complexity without any domain alignment
  • You create only CRUD type interfaces assuming that will address all product line variations
  • Every asset in your codebase raises an ad-hoc set of error codes
  • You are modeling all the complexity in your domain and trying to cram in every choice instead of meeting iteration goals


Refactoring To Reuse #3

September 25, 2009

#3 Separate Formatting from Core Domain Entities

You may have a core set of classes or services that represent the domain. These classes need to relate to each other cohesively. In the same vein, they should avoid getting too bulky with functionality that isn’t aligned with the business domain. What could that functionality be? Classic examples include data access and remote host connectivity. Also common is formatting logic that is specific to a business channel or view. I talked about decoupling connectivity earlier, so in this post I will expand on formatting logic; this is a continuation of the earlier post on formatting. Just like any other aspect of your design, you can make formatting as complex as you need. The key thing is to encapsulate formatting into its own layer and not let it pollute your core domain entities. This layer might vary by locale (internationalization), distribution/marketing channel (retail branch, online web, kiosk, etc.), file format (HTML, PDF, XML), and device or medium (print, web, mobile devices). Even if you have a simple formatting requirement, it often makes sense to isolate the logic away from code that does complex calculations and/or executes business decisions. At the very least, you can create a FormatUtility class and move formatting code there. It would be better to identify the formatting logic and define an interface for it. You can implement the interface for a specific need and evolve it over time.

Let’s say you have a class that executes business decisions and also contains formatting logic to write data into a text file as comma-separated values (CSV). Refactor the file creation out of this class. Look closer at what exactly the formatting is doing. Is it changing values from numeric to text? Is it changing a non-compliant value to a standard one? Is it doing locale-specific formatting? This code may not just be writing to a text file but transforming data and then writing to a file. You want to reuse the transformation logic as well – if you end up writing an XML document instead of a text file, you will need this piece of work again. Next, the file creation logic itself can be reused. It may not need to know which particular process or function is invoking it. This will be useful if all your files have standard header records or place specific processing instructions in them for consuming applications. If there aren’t any, don’t refactor this yet. Finally, the text file format may vary – today it is CSV and tomorrow you might need fixed width. If you have a definite need, you can have the file writer take additional configuration information (e.g. field name and width for each row).
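
The refactoring described above can be sketched minimally. The record fields and status codes are assumptions for illustration; the point is that transformation (domain values to display values) is separated from writing (CSV today, XML later), so each piece can be reused independently:

```python
# Hypothetical sketch of the refactoring: transformation and file writing
# pulled out of the domain class into two independently reusable functions.
# Field names and status codes are illustrative assumptions.

def transform_record(record: dict) -> dict:
    """Reusable transformation: numeric -> text, normalize non-compliant values."""
    return {
        "id": str(record["id"]),
        "status": {"A": "ACTIVE", "I": "INACTIVE"}.get(record["status"], "UNKNOWN"),
    }

def to_csv(records: list) -> str:
    """Reusable writer: knows only about rows, not about the business process."""
    header = "id,status"
    rows = [",".join(r.values()) for r in records]
    return "\n".join([header] + rows)

def export(records: list) -> str:
    """The domain-facing entry point now just composes the two pieces."""
    return to_csv([transform_record(r) for r in records])
```

If an XML output is needed later, `transform_record` is reused as-is and only a `to_xml` writer is added – the transformation work is not repeated.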


Building Data Services Product Line Using SOA

September 23, 2009

The current issue of The Architecture Journal (a Microsoft publication) featured my 5-minute video presentation on building reusable data services as a product line. This is part of the complimentary videos section of the journal’s current issue on Service Oriented Architecture (SOA). [View video]


Software Reuse Quick Tip #18

September 19, 2009

Tip #18 – Separate Request Origination From Processing

Often times you have a core piece of processing logic that weaves together a variety of components and services in order to fulfill use cases. The processing logic might be a sequential procedure, a stateful business process, or a stateless service capability. You typically implement complex processing within the context of projects that have a defined manner in which the processing logic will be invoked. It is useful to decouple request origination from processing. The idea is that you want to reuse the processing logic in a variety of scenarios and don’t want to depend on the specific way requests are sent for processing from a particular application. For example, let’s say you implement an order processing service that looks up the order details, executes business rules to validate order information, runs inventory and credit checks, and notifies a shipping module. This might be built for an application that sends requests using a web-based form. You can isolate the logic that constructs order processing requests and makes assumptions about the execution environment, user type, etc. That way the order processing modules can be used via asynchronous messaging, business-to-business integrations, composite services, etc. Additionally, if you want to invoke processing via bulk upload using an Excel file or text extracts, you can construct the request accordingly and submit it to the same processing modules. If your processing logic is too tightly bound to a web-based form as the input, your reuse potential is diminished. If it is, do plan to refactor the tightly coupled logic.
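
The tip above can be sketched as one processing module fed by multiple origination adapters. The order fields, the web-form shape, and the CSV layout are all hypothetical, chosen only to illustrate the separation:

```python
# Hedged sketch: one processing module, multiple request origins.
# Field names and channel formats are illustrative assumptions.

def process_order(order: dict) -> str:
    """Core processing -- knows nothing about where the request came from."""
    if not order.get("items"):
        return "rejected: no items"
    return f"accepted: {len(order['items'])} item(s) for {order['customer']}"

# Origination adapters translate each input channel into the same request shape.
def from_web_form(form: dict) -> dict:
    return {"customer": form["cust_name"], "items": form["cart"]}

def from_csv_row(row: str) -> dict:
    customer, *items = row.split(",")
    return {"customer": customer, "items": items}
```

A web form and a bulk text extract now reach the identical `process_order` module; adding a messaging channel later means writing one more small adapter, not touching the processing logic.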


Driving Systematic Reuse With MDM

September 17, 2009

I have been espousing the need for pursuing systematic reuse in conjunction with other initiatives such as SOA, BPM, and object oriented programming, in an agile manner. Master Data Management (MDM) aims to manage core enterprise data as a strategic asset for the organization. It impacts data quality, data governance, and data services, as well as business processes that access/update core data assets. MDM can play an important role in your systematic reuse efforts as well. How? Let’s think about the intent behind MDM – the primary driver is to reduce costs and enable revenue generation using enterprise data assets such as customer data, account data, product data, etc. These goals require not only technology but also processes and governance.

You can use MDM to drive systematic reuse in the following ways:

  • Opportunistically create fine-grained and coarse-grained data services as dictated by your business needs. Your MDM data store will eventually evolve into the strategic data store for all business processes. But while you get there, you will have to incrementally and iteratively build out a service inventory. This service inventory will be reusable for multiple projects and initiatives while giving you the flexibility to change the underlying data structure and processing logic. More importantly, you will build service capabilities that you know at least one client will use.
  • While developing data services built on top of your MDM solution, your information modelers and analysts can re-examine the domain and update data entities, relationships, and business rules. All this information will guide your canonical data models plus can help in building object libraries and domain specific language toolkits. Basically, you are reusing the analysis efforts for service and object capabilities. You can even use XML-object data binding tools to generate classes from XML schemas and vice versa. A more likely outcome of such an exercise is also identifying refactorings to the existing codebase. Your service capabilities and object models may not reflect the business domain accurately and you can make those changes in conjunction with business deliverables.
  • Related to the point above, you can develop reusable decision services, including specific rule sets, that not only fulfill MDM-based processing but also address other problems in your domain. If the entities and rules are getting reused, you will go a long way in reducing costs when building business processes.
  • In an earlier post I talked about the importance of easing integration for consumers. MDM will streamline data processing and improve data quality. But it also presents an opportunity for you to create easy to use integration toolkits for consumers to get the improved data. If you know marketing applications consume core data in a certain way, would it not make sense to make consumption as easy as possible?
  • Integrate data access/update policies, data quality checks, as well as use of specific data governance workflows into design and code reviews. As MDM practices mature in your organization, you will get smarter about how different applications, processes, and external partners need to interact with your MDM data store. In essence, you can mandate interaction with MDM data via standardized, managed interfaces. Over time, this will surely drive reuse of data services as well as data governance workflows and business rules.

This list isn’t exhaustive, but my intent was to illustrate how MDM can help your systematic reuse efforts. The key message is basically – don’t pursue reuse in isolation from other initiatives.


Speaking at the IT Architecture Regional Conference

September 17, 2009

I am speaking about Transitioning from Waterfall to Agile at the upcoming IT Architecture Regional Conference in New York City. Want to learn more about the event? Do check out the topics, and I encourage you to register here and experience the event firsthand!


The AssetMap – Useful Resource

September 12, 2009

Often times, developers and technical leads don’t know what reusable assets are already available. In the same vein, they may not be aware of what is being planned – either refactorings of an existing asset or an entirely new one in keeping with your overall vision. Recognition is much easier than recall, and one tool that I have successfully used in the past is the AssetMap. It is a concise artifact made up of two pages – one page with technical assets and another with business ones.

Where do you use this artifact? Fill it in, customize it per your team’s needs, and distribute it to your teams. Make sure every developer, technical lead, and designer has it. Print it out and give it to your new hires as well! More importantly, ensure that they are aware of the purpose behind the artifact and use it to implement user stories/business requirements. You can also place it as a document on the Intranet with links to individual asset details. The intent is for the AssetMap to be the summary artifact, not a document containing every bit of detail about every asset.

Here are the typical business assets that you can place in the AssetMap:

  • Data Assets (core data entities such as customer, account, product, etc. as well as reference data such as currencies/country codes/zip codes etc.)
  • Business Assets (business orchestration services, task services, etc.)
  • Decision Assets (including rulesets, eligibility criteria, policies)
  • Legacy Assets (capabilities in legacy systems that are reusable or have reuse potential)
  • Business Workflows (approval chains, human/system workflow that can be reused in more complex flows)

and some candidate technical assets:

  • authentication/authorization
  • localization
  • legacy integration (e.g. service adapters when invoking CICS modules)
  • reliable messaging
  • event processing
  • testing utilities
  • data transformation
  • routing
  • auditing
  • monitoring
  • notification (to send email messages for example)
  • logging
  • error handling
  • data binding

The above list isn’t exhaustive and is meant to be a starting point for your teams. Here is a candidate AssetMap template that I have used in the past – feel free to customize it to suit your needs. For instance, you could add notations to indicate asset type (whether it is a service, library, component, UML diagram etc.) and readiness (deployed, being developed, needs bug fixes, etc.).


Five Ways To Save Money With Systematic Reuse

September 11, 2009

There are several benefits when an organization pursues systematic reuse. Chief among them? Saving valuable money, of course! So here are five ways to save money with systematic reuse. Before you get carried away, do remember that systematic reuse is not like winning a lottery ticket but more like a carefully nurtured long-term investment!

  1. Save time developing, integrating, and testing core domain components that get reused across projects. You did separate them from project-specific code, didn’t you? 🙂
  2. Reuse service capabilities across business processes and when building composite service capabilities
  3. Minimize point-to-point integrations when exchanging data among systems. Pursue event-driven publish/subscribe and have interested parties subscribe to a standard set of messages. An ideal place to publish these standardized messages? Business process orchestrations.
  4. Reuse legacy system capabilities as long as you are leveraging them via a mediation layer.
  5. Pursue refactorings and align capabilities towards reuse within the context of user stories. You have to do that work anyway to get the story to be functional. Why not save development time for subsequent iterations?

