Capture Service Metrics

October 22, 2010

Metrics are very useful for understanding service usage, volume trends (growth or decline), and the diversity of your user base – all of which feed into performance tuning and capacity planning.

A service mediation layer can capture and persist metrics, and that data can later be mined to generate reports. What specific metrics can be captured? Here are a few attributes:

incoming time, outgoing time (or publication time), transport (whether the service was invoked via HTTP, JMS, or some other transport), and requesting host name/queue name. Additionally, if request processing resulted in an error, the error details – including the stack trace – can be captured. Finally, service/operation-specific metrics can be captured; if you don’t have demanding reporting requirements, these attributes can be stored as a set of name/value pairs.
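As an illustration of the idea above – the class and field names here are assumptions for the sketch, not from any particular mediation product – a mediation layer could capture these attributes as a simple record with a name/value bag for the service-specific pieces:

```python
import time
import traceback


class ServiceMetrics:
    """Illustrative metrics record a mediation layer might capture per request."""

    def __init__(self, transport, requester):
        self.transport = transport        # e.g. "HTTP" or "JMS"
        self.requester = requester        # requesting host name or queue name
        self.incoming_time = time.time()  # when the request arrived
        self.outgoing_time = None         # when the response was published
        self.error = None                 # error details, if processing failed
        self.extra = {}                   # service/operation-specific name/value pairs

    def complete(self):
        self.outgoing_time = time.time()

    def record_error(self, exc):
        self.error = "".join(
            traceback.format_exception_only(type(exc), exc)
        ).strip()


# Hypothetical usage inside a mediation layer:
metrics = ServiceMetrics(transport="HTTP", requester="app-server-01")
try:
    metrics.extra["operation"] = "getCustomer"  # an operation-specific attribute
except Exception as exc:
    metrics.record_error(exc)
finally:
    metrics.complete()

elapsed = metrics.outgoing_time - metrics.incoming_time
```

The name/value `extra` bag keeps the record’s shape stable even though individual services capture different operation-specific attributes.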

In a follow-up post, I will elaborate on how specific metrics can be captured throughout service processing.

5 Signs Indicating Need for Service Governance

August 21, 2010

The word ‘governance’ seems to conjure up all sorts of negative images for IT folks – needless bureaucracy seems to top the list. However, without lightweight governance, SOA and systematic reuse efforts will fail to achieve their full potential. Can you spot signs that indicate the need for governance? I believe so, and here are five:

  1. Every project seems to reinvent business abstractions that are fundamental to your problem domain. There is no sharing or consistency of information models – this becomes painfully evident when projects run back to back and your teams keep hitting overlapping data modeling, data definition, and data validation issues.
  2. Directly linked to the above – service definitions do not reuse schemas; i.e., each service has a unique schema definition for a customer or product (or some other key domain object), and your service consumers are forced to deal with multiple conflicting schemas.
  3. Legacy capabilities are leveraged as-is without mediation – increasing coupling between consumers and legacy systems. The telltale sign here is arcane naming and needless legacy data attributes sprinkled all over service interfaces.
  4. Services seem to have inconsistent runtime characteristics – new service capabilities are added without regard to performance requirements, and the issues tend to manifest in production, where users or entire processes/applications are impacted by erratic service behavior.
  5. Business processes bypass a service layer and directly access underlying data stores – if you have seen a business process invoking several stored procedures, doing FTP transfers, and publishing to multiple messaging destinations – all from a monolithic process definition – that is a clear sign that governance is non-existent.

These are only a few signs, but they are key indicators that services are being built in a tactical fashion. In a follow-up post, I will expand on how governance can be applied appropriately to address these issues.

Systematic Reuse & The Art of Less

July 10, 2010

I came across these 8 key lessons on the Art of Less from Sumi-e, the Japanese ink-and-wash painting technique. When I read them, I couldn’t help but think about their application to reusable software. Here are some initial thoughts (Sumi-e lessons in bold):

1. More can be expressed with less – One effective way to add value with reusable assets is to enable developers to be more productive. With less code, can they implement business functionality faster? Can they reuse their knowledge of one asset when using another?
2. Never use more (color) when less will do: only support known variations. Needless flexibility not only increases development time, it also adds complexity and results in speculative design.
3. Omit useless details to expose the essence: what are the key interfaces that developers need to know about? If there are obscure configuration options or edge use-cases that most developers don’t care about, do they have to be communicated in an all-or-nothing fashion? When in doubt, enable developers to peel away at your asset on an as-needed basis.
4. Careful use of light-dark is important for creating clarity and contrast: There is a reason why your API was created and why it abstracts away certain details while exposing others. It is important to communicate the intent behind your design keeping in mind the developer’s need to get productive quickly.
5. Use color with a clear purpose and informed intention. Reusable assets should be part of a whole. Think constantly about how one asset relates to another and how a combination of them can open up new automation opportunities. The key is to break things down into domain-relevant concepts that are part of a larger story. The colors do paint a picture, don’t they?
6. Clear contrast, visual suggestion, and subtlety can exist harmoniously in one composition. You don’t want every reusable asset to support completely different idioms – can you make errors impossible? Can you provide actionable, informative warning messages for the developer? Think about suggesting good practices to developers while they are inside the development environment.

7. In all things: balance, clarity, harmony, simplicity. As far as possible, reusable assets should be simple to set up and use. Strive to provide flexibility, but follow the 80/20 rule – what do most developers want from an asset? Support those objectives and make them very easy to achieve.

8. What looks easy is hard (but worth it). Reusable assets should make hard things possible – and making them easy should be a goal to aspire to, without hiding too much information in the process. The goal is to avoid repetitive code and configuration across multiple projects. It is extremely hard, but well worth it!

What do you think? Do these principles resonate with you as a software professional?

Refactor Reusable Assets With Domain Understanding

April 18, 2010

There is always an element of additional complexity when building reusable assets – whether because of externalized configuration, additional interfaces/classes to provide flexibility, or additional layers of abstraction (to name a few). There is also the increased cost of design, testing, and integration with reusable assets. The question isn’t whether reusable assets are more or less complex – a more pragmatic question would be:

is the additional complexity absolutely essential and worth the cost?

This is a straightforward question to answer if you have visibility into what is coming up in the near future. If this isn’t the case – which is “most of the time” – the question becomes a trickier one. One way to address this issue is to introduce domain-relevant abstractions.

Instead of guessing what should and shouldn’t be an abstraction, work with domain experts and uncover true domain-relevant concepts. For example: let’s say you decide to build a reusable library for providing customer data, and that it is used to populate a financial statement showing a customer’s net worth (i.e. financial assets). Initially your software might have used the checking account balance to report this value. Why? Because that was the only kind of account used to calculate it. In the initial version the team might have thought ‘net worth’ was a data attribute – a number derived from the checking account balance. Over time, as domain understanding deepens, ‘net worth’ might come to encompass not just a single account’s balance but a whole host of other things (e.g. accounts where the customer is a joint account holder – not only checking accounts but also savings, brokerage, etc.). ‘Net worth’ might still be a single number, but it would have evolved to mean something different from a single account’s balance. Add in business rules that have to be applied based on a plethora of account and client characteristics, and this quickly evolves into something more complex.

At the time of the initial release your team might not have this knowledge – and that is perfectly okay! The key is to evolve the codebase along with your increased understanding of the domain. Should net worth be its own class or interface in the initial version? If you don’t have even a preliminary understanding, why introduce needless complexity? Wouldn’t it be better to refactor to reuse rather than build for reuse with an imperfect understanding?
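To make the refactoring concrete – all the names here are hypothetical – the first version below treats net worth as a number derived from one balance, while the refactored version models it as a domain concept computed over every account the customer holds:

```python
# Version 1: 'net worth' is just a derived number from the checking account.
def net_worth_v1(checking_balance):
    return checking_balance


# Refactored with deeper domain understanding: 'net worth' spans account types.
class Account:
    def __init__(self, kind, balance):
        self.kind = kind      # "checking", "savings", "brokerage", ...
        self.balance = balance


def net_worth_v2(accounts):
    """Sum balances across every account the customer holds (or co-holds).

    Business rules based on account and client characteristics would hook in
    here as the domain understanding deepens further.
    """
    return sum(a.balance for a in accounts)


accounts = [
    Account("checking", 1200.0),
    Account("savings", 3000.0),
    Account("brokerage", 5800.0),
]
```

The point is not the arithmetic but the shift: the concept earns its own abstraction only once the domain shows it deserves one.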

Checklist for Testing Service Capabilities in your SOA

April 17, 2010

Download Checklist

Here is a checklist for ensuring that your service capabilities are unit tested effectively. These questions can come in handy when validating test coverage or when doing code reviews with fellow team members. I have used this checklist extensively in my SOA development efforts, and it has helped improve the quality of the services. The checklist covers:

  • functional testing
  • error handling
  • data validation/formatting
  • performance testing
  • data binding/transport interfaces

Feel free to add/customize this checklist based on your team’s unique needs. I hope you find this resource useful!
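As a sketch of how a few of the checklist categories (functional testing, error handling, data validation) might show up in unit tests – the `lookup_customer` capability and its behavior are invented purely for illustration:

```python
import unittest


# A stand-in service capability, invented for illustration.
def lookup_customer(customer_id):
    if not isinstance(customer_id, str) or not customer_id.strip():
        raise ValueError("customer_id must be a non-empty string")
    customers = {"C100": {"id": "C100", "name": "Acme Corp"}}
    if customer_id not in customers:
        raise KeyError(customer_id)
    return customers[customer_id]


class LookupCustomerTests(unittest.TestCase):
    # functional testing: the happy path returns the expected record
    def test_known_customer(self):
        self.assertEqual(lookup_customer("C100")["name"], "Acme Corp")

    # error handling: a missing customer surfaces a clear error
    def test_unknown_customer(self):
        with self.assertRaises(KeyError):
            lookup_customer("C999")

    # data validation: malformed input is rejected before any lookup happens
    def test_invalid_input(self):
        with self.assertRaises(ValueError):
            lookup_customer("   ")
```

Each test maps back to a checklist item, which makes gaps easy to spot during code reviews.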

Building Assets Iteratively – An Example

April 13, 2010

Building reusable assets iteratively helps your team in several ways: it reduces technical and business risk, reduces time to market, and increases the odds of real-life usage across applications. In this post, I want to walk through an example of iteratively building a reusable asset. Our task was to create a suite of services providing core data – such as customer and product data – to various internal applications. To support these services in production, several non-functional capabilities were required. These capabilities needed to be reusable across services – i.e., we didn’t want to develop something that would only work with customer data and not product data. Note: this effort predated mature SOA governance tools – so if you are thinking “why did they build this?”, it is because those tools didn’t exist at the time 🙂

Iteration 1: Simple logging to log requests, responses, errors

Iteration 2: Configurable logging – ability to change logging levels without restarting the service container

Iteration 3: Ability to enable/disable service capabilities via a service interface – a web service end point that would turn on/off services (we had business reasons to support this capability)

Iteration 4: Toggle service capabilities via a web interface – integrated the functionality from Iteration 3 to perform the toggle via a browser-based front-end application.

Iteration 5: Get statistics about a service capability – usage metrics, distribution of error codes, etc. were made available for every service capability.

Iteration 6: Enable/disable HTTP endpoint – to enable or shut off access to an HTTP port listening for service requests.

Iteration 7: Enable/disable JMS endpoint – to enable or shut off access to a JMS queue listening for service requests.

Iteration 8: Toggle transport endpoints via a web interface – integrated the functionality from iterations 6 and 7 with our web-based console application.

Iteration 9: Get usage statistics filtered by consumer, date and time, and various other fields.

Iteration 10: Integrate usage statistics querying with web based console application.
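The early iterations above can be sketched roughly as follows – a minimal, assumed implementation of iterations 1 and 3 (request/response logging plus a runtime enable/disable toggle); none of these names come from the actual system:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("service")


class CapabilityRegistry:
    """Iterations 1 and 3 in miniature: log every request, and allow
    capabilities to be toggled at runtime without a container restart."""

    def __init__(self):
        self._handlers = {}
        self._enabled = {}

    def register(self, name, handler):
        self._handlers[name] = handler
        self._enabled[name] = True

    def set_enabled(self, name, enabled):
        # what a toggle endpoint or web console (iterations 3-4) would call
        self._enabled[name] = enabled

    def invoke(self, name, request):
        log.info("request: capability=%s payload=%r", name, request)   # iteration 1
        if not self._enabled.get(name, False):
            raise RuntimeError("capability %s is disabled" % name)
        response = self._handlers[name](request)
        log.info("response: capability=%s payload=%r", name, response)
        return response


registry = CapabilityRegistry()
registry.register("getProduct", lambda req: {"sku": req, "price": 9.99})

ok = registry.invoke("getProduct", "SKU-1")   # succeeds while enabled
registry.set_enabled("getProduct", False)     # toggled off, e.g. from the console
```

Later iterations (statistics, transport toggles) would layer onto the same registry rather than being bolted onto individual services.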

These iterations were executed alongside business functionality. The interesting aspect – and something all agile methods emphasize – is that we learned from real-world service usage and troubleshooting in production. We didn’t have to dream up requirements; as we gained deeper knowledge of how the services behaved in production, the needs emerged naturally. Coupled with reviews of logs and production statistics and interviews with service consumers, we were able to prioritize the supportability tools that were needed.

Share Software Assets With Care

April 12, 2010

An often-ignored aspect of systematic software reuse is the need for integration maturity – with respect to both existing and new consumers of shared reusable assets. With all the excitement about the benefits of sharing software assets across projects, it is easy to forget the ramifications of reuse. Remember:

  • Thoroughly and carefully evaluate changes to interface contracts before making them. Will the interface change break existing consumers? Will they need to regression test or change code? If the answer to any of these questions is “yes”, you need to coordinate the changes across your consumers. Depending on the complexity of the change, consider a formal governance process as well.
  • Do not refactor reusable assets without a comprehensive suite of automated tests. When you are both the provider and consumer of an asset, you can fix bugs and broken builds without anyone outside your team becoming aware of them. Even in this case you absolutely must create automated tests – I am not suggesting you don’t need tests if you don’t reuse :-). Automated tests become imperative once multiple consumers are in the mix – do you want completely independently managed applications, processes, and services to break or become buggy because of your “minor” refactoring? Of course not – that would not only generate bad press for your team but also make it more expensive to fix, test, and verify multiple consumers.
  • Ensure you make design assumptions and known issues explicit. When your consumer integrates with a reusable asset and walks away thinking “this thing can solve all my problems” – that is a sure warning sign. As a provider, make your assumptions and limitations explicit. Get a wiki, or at the very least maintain a document that you can share. If you limit the number of concurrent requests/connections or have restrictions based on the distinct IP addresses calling your service, communicate that to your consumer. This is especially important for critical revenue-generating business processes in your firm. Can you afford to adversely impact order processing, account opening, or other critical processes?
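To make the first bullet concrete – the payloads below are invented for illustration – an additive, optional field is the kind of contract change that usually doesn’t break existing consumers, while a rename is the kind that demands coordinated change:

```python
# An existing consumer reads only the fields it knows about.
def consumer_reads(response):
    return response["id"], response["name"]


v1_response = {"id": "C100", "name": "Acme Corp"}

# Additive change: a new *optional* field is backward compatible -
# old consumers simply ignore it.
v2_response = {"id": "C100", "name": "Acme Corp", "region": "EMEA"}

# Breaking change: renaming (or removing) a field forces every consumer
# to change code and retest - this is where coordination and governance
# processes earn their keep.
v3_response = {"customerId": "C100", "name": "Acme Corp"}
```

Asking “does the old consumer code still run unchanged against the new payload?” is a quick, mechanical proxy for the regression-test question above.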

These integration-related challenges are super-critical to your success with reuse. Additionally, there is the need for robust documentation, useful and consistent error-handling semantics, and the ability to scale with increased volume.

SOA Patterns and Practices

October 4, 2009

In an earlier post I listed a set of anti-patterns when pursuing service orientation within the context of building reusable capabilities. As promised, here are a set of patterns and practices that will increase the likelihood of success with reusable services.

Build with an enterprise mindset – Avoid building service capabilities that are project/initiative specific; pursue reuse in a systematic manner. When designing and building services, try your best to decouple them from implementation-specific, distribution-channel-specific, or consumer-specific logic. The goal should be to build reusable services unless you cannot. Similarly, actively refactor existing capabilities to be more reusable as necessary.

Be transport agnostic – Support transports other than HTTP; specifically, reliable transports. Reliable transports decouple the sender and receiver from having to be available at the same time for data exchange. Reliable delivery, automatic retry, and a host of other features relevant to SOA are provided by messaging middleware out of the box.

Complement on-demand with event-driven offerings – Asynchronous messaging, for both request/reply and publications, will need to be supported sooner or later. Asynchronous request/reply gives consumers reliability and flexibility in consuming data. Standard publications facilitate large-scale reuse of data services: when a new consumer needs an existing service, they can simply be subscribed to a standard publication.
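The subscription model above can be sketched as follows – a toy in-process publication, with invented names, standing in for a real messaging topic:

```python
class Publication:
    """A standard publication: onboarding a new consumer is just a
    subscription, not new point-to-point service development."""

    def __init__(self, topic):
        self.topic = topic
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, event):
        # deliver the event to every subscribed consumer
        for callback in self._subscribers:
            callback(event)


prices = Publication("product.price.updated")
received = []
prices.subscribe(received.append)  # the new consumer signs up
prices.publish({"sku": "SKU-1", "price": 9.99})
```

In a real deployment the topic and delivery guarantees would come from messaging middleware; the point here is that the publisher is untouched when consumers are added.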

Embrace platform heterogeneity – Platform homogeneity is an illusion. Your services will be invoked from several platforms: .NET, Java/J2EE, mainframe systems, and Perl may all be in the mix. Use WS-I profiles and compliance tools to ensure interoperability.

Mediate service access – Introduce a service mediation layer that can provide protocol bridging and data transformation, enforce security policy, and capture metrics. Mediation is especially relevant when wrapping legacy capabilities, where you don’t want consumers tightly coupled to systems slated for retirement.

Achieve semantic & syntactic symphony – XML schemas, data types, and web service contracts should be aligned with your business domain entities, not with individual systems or packaged products. Achieve consistent naming and definitions across operations – this not only makes integration easier for consumers but also encourages reuse of business data types and data object definitions across operations.

Validate service requests – Service capabilities need to validate incoming requests before interacting with operational data stores and transactional systems. If your validation logic is complex and spread across capabilities, strongly consider externalizing those decisions in a rules engine.
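A minimal sketch of validating requests at the service boundary before any data store is touched – the field names and rules here are assumptions for illustration:

```python
def validate_request(request):
    """Reject malformed requests before any operational data store is touched.

    Returns a list of error messages; an empty list means the request is valid.
    """
    errors = []
    if not request.get("customer_id"):
        errors.append("customer_id is required")
    if request.get("account_type") not in {"checking", "savings", "brokerage"}:
        errors.append("account_type must be one of checking/savings/brokerage")
    return errors


def handle_request(request):
    errors = validate_request(request)
    if errors:
        # Fail fast with actionable messages; nothing downstream is invoked.
        return {"status": "rejected", "errors": errors}
    # ... interact with data stores / transactional systems here ...
    return {"status": "accepted"}
```

If rules like these multiply and vary by account or client characteristics, that is the cue to externalize them into a rules engine rather than duplicating them across capabilities.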

Robust consumer integration – A high-quality consumer integration function will reduce pressure on the development team and help create and maintain service documentation. More importantly, it will build useful knowledge of consumers’ integration challenges and issues. This function will be the bridge between consumers and service development teams, channeling feedback and enhancement requests. It can also report on access patterns, invalid messages/exceptions, consumer usage trends, and SLA violations/adherence. Finally, it provides consumers a consistent experience during provisioning, integration, and in production.

This isn’t an exhaustive list but will put you on the path to building the right set of reusable capabilities for automating business processes and modernizing applications.

