Governance Enables Service Reuse – New Podcast Episode

December 27, 2011

A new episode has been added to the Software Reuse Podcast Series on service governance, covering design, implementation, testing, and provisioning and how they enable reuse.



Systematic Reuse Mythbuster #1 – Reusable Doesn’t Mean Perfection

December 26, 2011

One common criticism of systematic software reuse is the myth that it implies perfection: creating a reusable asset automatically conjures up visions of a perfect design, something done once and done right. Many developers and managers confuse reusability with design purity. However, reusability is a quality attribute like maintainability, scalability, or availability in a software solution. It isn’t necessary or advisable to pursue a generic design approach, or what one believes is highly reusable, without the right context.

The key is to go back to the basics of good design: identify what varies and encapsulate it.
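
To make that concrete, here is a minimal Java sketch of the principle, assuming a hypothetical pricing example: the rule that varies is hidden behind a small interface, and the reusable calculation logic depends only on that abstraction. The names are illustrative, not prescriptive.

    // Hypothetical example: pricing rules are the part that varies, so they are
    // encapsulated behind an interface instead of being baked into the calculator.
    public interface PricingStrategy {
        double priceFor(double baseAmount);
    }

    // One concrete variation; tiered, promotional, or regional rules can be added
    // later without touching the code that uses the abstraction.
    class FlatDiscountPricing implements PricingStrategy {
        private final double discountRate;

        FlatDiscountPricing(double discountRate) {
            this.discountRate = discountRate;
        }

        @Override
        public double priceFor(double baseAmount) {
            return baseAmount * (1.0 - discountRate);
        }
    }

    // The stable, reusable part: it never changes when a new pricing rule appears.
    class InvoiceCalculator {
        private final PricingStrategy pricing;

        InvoiceCalculator(PricingStrategy pricing) {
            this.pricing = pricing;
        }

        double totalFor(double baseAmount) {
            return pricing.priceFor(baseAmount);
        }
    }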

The myth that you can somehow create a masterpiece that is infinitely reusable and should never be touched is just that: a myth, divorced from reality. Reusable doesn’t imply:

  • that you invest in a big up-front design effort
  • that you account for everything that could vary in the design; the critical factor is to understand the domain well enough, and deeply enough, to identify the subset of variability that truly matters

In the same vein, reusability strives to keep distinct concerns separate. Ask repeatedly:

  • Are there multiple integration points accessing the core domain logic?
  • Is there a requirement to support more than one client, and if so, how will multiple clients use the same interface?
  • What interfaces do your consumers need? Is there a need to support more than one?
  • What are the common input parameters, and which parameters vary across the consumer base?

These are the key questions that lead a designer to anticipate where reuse is likely to happen. Finally, it is important that we don’t build for unknown future needs; the asset should solve a particular problem, solve it well, solve it for more than one or two consumers, and only then show potential to be used beyond its original intent. At each step there are design decisions made and discarded, continuous refactoring, and refinements to the domain model, if not re-definition altogether.
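
As a rough sketch of how answers to those questions might shape a design, consider the hypothetical Java example below: the common inputs live in one request object, the core capability exposes no transport-specific types, and each integration point is a thin adapter over the same logic. All names here are assumptions for illustration.

    // Hypothetical sketch: one core capability exposed through multiple integration points.
    public final class AccountLookupRequest {
        private final String accountId;      // common across every consumer
        private final String correlationId;  // varies by consumer; optional

        public AccountLookupRequest(String accountId, String correlationId) {
            this.accountId = accountId;
            this.correlationId = correlationId;
        }

        public String getAccountId()     { return accountId; }
        public String getCorrelationId() { return correlationId; }
    }

    // The reusable core: no SOAP, JMS, or HTTP types leak into this interface.
    interface AccountLookupService {
        String lookupStatus(AccountLookupRequest request);
    }

    // Thin adapters per integration point delegate to the same core logic, so a new
    // transport or a new consumer does not mean re-implementing the capability.
    class SoapAccountLookupEndpoint {
        private final AccountLookupService core;

        SoapAccountLookupEndpoint(AccountLookupService core) { this.core = core; }

        String handle(String accountId) {
            return core.lookupStatus(new AccountLookupRequest(accountId, null));
        }
    }

    class RestAccountLookupController {
        private final AccountLookupService core;

        RestAccountLookupController(AccountLookupService core) { this.core = core; }

        String get(String accountId, String correlationId) {
            return core.lookupStatus(new AccountLookupRequest(accountId, correlationId));
        }
    }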

Don’t set out trying to reach the end state from day one; doing so adds needless complexity and significant schedule risk.


Track Service Reuse Metrics

December 24, 2011

Service-driven systematic reuse takes conscious design decisions, governance, and disciplined execution, project after project. To sustain long-running efforts such as service orientation, it is critical to track and report progress and to get buy-in from senior management in the organization. So what metrics are useful? Here are a few:

  • Total number of service operations reused in a time period
  • Total effort saved due to systematic reuse in a time period
  • Number of new service consumers in a time period
  • Number of new consumer integrations in a time period (this includes integrations from both new and existing consumers)
  • Service integrations across transports/interface points (for instance, the service operation could be accessed as SOAP over HTTP, SOAP over JMS, REST, etc.)
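
As an illustration only, a simple tally of these metrics might look like the Java sketch below. In practice the numbers usually come from a service registry or governance tooling; the class, its fields, and the assumed hours-saved-per-reuse figure are hypothetical.

    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical sketch of a reuse-metrics tally for a reporting period.
    public class ServiceReuseMetrics {
        private final AtomicInteger operationsReused = new AtomicInteger();
        private final AtomicInteger newConsumers     = new AtomicInteger();
        private final AtomicInteger newIntegrations  = new AtomicInteger();
        private final double hoursSavedPerReuse; // assumed average, e.g. from past effort estimates

        public ServiceReuseMetrics(double hoursSavedPerReuse) {
            this.hoursSavedPerReuse = hoursSavedPerReuse;
        }

        public void recordOperationReuse() { operationsReused.incrementAndGet(); }
        public void recordNewConsumer()    { newConsumers.incrementAndGet(); }
        public void recordNewIntegration() { newIntegrations.incrementAndGet(); }

        // Effort saved in the period = reused operations x assumed hours saved per reuse.
        public double effortSavedHours() {
            return operationsReused.get() * hoursSavedPerReuse;
        }

        public String report() {
            return String.format(
                "reused operations=%d, new consumers=%d, new integrations=%d, effort saved=%.1f hours",
                operationsReused.get(), newConsumers.get(), newIntegrations.get(), effortSavedHours());
        }
    }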

What metrics do your teams track?


5 Service Governance Practices for Effective Reuse

December 24, 2011

Pursuing service-based systematic reuse or business process development? These five practices will help your teams achieve an increased level of service reuse.

  1. Manage a common set of domain objects that are leveraged across service capabilities. This could be a library of objects (e.g. Plain Old Java Objects), XML Schema definitions, or both. Depending on the number of service consumers and the complexity of the domain, there will be a need to support multiple concurrent versions of these objects.
  2. Provide common utilities not only for service development but also for WSDL generation, integration and performance testing, and ensure interoperability issues are addressed.
  3. Ensure the appropriate functional experts drive the service’s requirements and that capabilities common across business processes are identified early in the software development lifecycle.
  4. Document and communicate governance model guidelines clearly. For example, some classes of changes to a public interface such as a WSDL don’t impact existing service clients, while others do.
  5. Perform performance testing not only during development but also during service provisioning, i.e. when integrating a new service consumer. If your teams aren’t careful, one heavy-volume consumer can overwhelm a service, impacting both new and existing consumers. Execute performance testing in an automated fashion every time you integrate a new client to reduce the risk of breaching required SLAs (a minimal test sketch follows below).
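
For practice 5, the sketch below shows one way an automated latency check could run every time a new consumer is provisioned. It uses plain JUnit; the SLA threshold, the call volume, and the invokeServiceAs helper are assumptions for illustration, not part of any specific framework.

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    // Hypothetical provisioning-time performance check for a new service consumer.
    public class NewConsumerProvisioningPerfTest {

        private static final long SLA_MILLIS   = 500;  // assumed response-time SLA
        private static final int  SAMPLE_CALLS = 100;  // assumed representative call volume

        @Test
        public void newConsumerStaysWithinResponseTimeSla() {
            long worstCaseMillis = 0;
            for (int i = 0; i < SAMPLE_CALLS; i++) {
                long start = System.nanoTime();
                invokeServiceAs("new-consumer-id");  // placeholder for the real service call
                long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
                worstCaseMillis = Math.max(worstCaseMillis, elapsedMillis);
            }
            assertTrue("Worst-case latency " + worstCaseMillis + "ms exceeded the SLA",
                       worstCaseMillis <= SLA_MILLIS);
        }

        private void invokeServiceAs(String consumerId) {
            // Placeholder: invoke the service operation using the new consumer's profile/credentials.
        }
    }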

What additional practices do your teams follow?

