Use Incidents to Accelerate Platform Maturity

June 5, 2016

Production incidents are one of the best avenues to accelerate the maturity of a managed platform. While incidents are stressful to deal with, they provide clear and direct feedback on gaps in the platform. First, don’t indulge in blame games or waste time fretting that the incident happened. Second, if you step back from the heat, incidents are an excellent means to learn more about your assumptions and risks.

  • Did you assume that an external dependency would always be available? More specifically, did you assume that the dependency would respond within a certain latency threshold? (A sketch of making this assumption explicit follows this list.)
  • Was there manual effort involved in identifying the problem? If so, how much time did it take to get to the root cause? What was missing in your supportability tooling? Every manual task opens the door to additional risk, so examining these tasks is key. Think about how to get to the root cause faster:
    • Instrumentation about what was happening during the incident – were there pending transactions? Pending events to be processed? How “busy” was your process or service, and was that below or above expected thresholds?
    • Was there a particular poison / rogue message that triggered a chain reaction of sorts? Did your platform get overwhelmed by too many requests within a certain time window?
    • Did you get alerted? If so, was the alert about a symptom, or did it provide clues to the underlying root cause? Did it include enough diagnostic information for additional troubleshooting? Was there an opportunity to treat the issue as an intermittent failure – instead of alerting, could the platform have automatically healed itself?
    • Was the issue caused by an ill-behaved component or external dependency? If so, has this happened before (routine) or is it new behavior?
  • Think about defect prevention and proactive controls. There are a variety of strategies to achieve this: load shedding, deferring non-critical maintenance activities, monitoring trends for out-of-band behavior, and so on. Invest in automated controls that warn of threshold breaches: availability of individual services within the platform, unusual peaks / drops in requests, rogue clients that hog the file system or other critical platform resources, etc.
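To make the first bullet concrete, here is a minimal sketch in Java of turning an implicit latency assumption into an explicit, enforced budget. The PricingClient dependency, the class names, and the exception choice are all hypothetical; the point is that a breach becomes a recordable event you can count, alert on, or feed into a circuit breaker, rather than a silent assumption.

    import java.time.Duration;
    import java.util.concurrent.*;

    // Hypothetical external dependency: stands in for any remote service the platform calls.
    interface PricingClient {
        double quote(String symbol) throws Exception;
    }

    // Bounds every call with an explicit latency budget instead of assuming the
    // dependency "will respond quickly". A breach surfaces as an exception that
    // instrumentation can count and alert on.
    final class BoundedPricingClient {
        private final PricingClient delegate;
        private final ExecutorService pool = Executors.newCachedThreadPool();
        private final Duration maxLatency;

        BoundedPricingClient(PricingClient delegate, Duration maxLatency) {
            this.delegate = delegate;
            this.maxLatency = maxLatency;
        }

        double quote(String symbol) throws Exception {
            Future<Double> result = pool.submit(() -> delegate.quote(symbol));
            try {
                return result.get(maxLatency.toMillis(), TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                result.cancel(true);
                // The assumption was violated, and the platform now knows it.
                throw new IllegalStateException(
                        "Dependency exceeded latency budget of " + maxLatency, e);
            }
        }
    }

The same shape works for any remote dependency: wrap the call, bound it, and surface the breach to your instrumentation.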

This list isn’t exhaustive, but the key message is to use the incident as an opportunity to improve the managed platform holistically. Don’t settle for a band-aid that will simply postpone a repeat incident!


Proactively Manage Platform Adoption Risks

June 4, 2016

Creating a managed platform is a powerful strategy – the key is to help your clients and proactively manage adoption risks. Risks are everywhere, from losing control over infrastructure, release management, and upgrades to concerns about the learning curve and operational supportability. Here are a few strategies to manage adoption risks – these will not only help your clients but help the platform team as well:

  • Understand key technical drivers for platform adoption – what do your clients care about the most? Is it faster functional development? ease of deployment? rich tooling? testability? ability to dip into a rich developer ecosystem?
  • Provide an integrated console for provisioning, runtime management, and operational support. The key word here is integrated – a toolset that makes it easier for a team to provision a resource, deploy / activate it, elastically scale it, and troubleshoot problems is extremely important.
  • Empathize with your client’s adoption challenges: they are losing direct control and access in exchange for a host of powerful platform benefits. But they still need answers to questions like:
    • How rich and useful is the instrumentation (for transparency into transactions, events, or requests being handled, for errors / warnings during processing, and for historical metrics / trends)?
    • How do I get access to log messages? Are the logs linked to particular request ids or transaction references? What is the latency between actual processing and the log messages reflecting it?
    • Can I help myself if something goes wrong during production use? E.g. what if a process or execution takes longer than expected? What if it crashes mid-way? Is there support for automatic alerting? How easy or difficult is it to train my devops team members?
  • Provide automated controls to reduce risk when hosting untrusted code. Let’s face it – managed platforms take on a large amount of risk by hosting code that is largely outside their control. It is therefore critical to reduce defects and address risks via automated controls. You can check for unsupported API calls in your SDK, risky or unsafe libraries being packaged, etc. to address risks during provisioning (a minimal sketch follows this list). This is a vast topic and I will author a follow-up post on controls and why they are indispensable to creating stable managed platforms.
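As an illustration of a provisioning-time control, here is a small Java sketch that rejects client bundles packaging libraries the platform considers unsafe. The DependencyScanner class, the denylist entries, and the Dependency record are illustrative names rather than a real platform API; an actual control would more likely inspect the artifact’s manifest or bytecode than match coordinates by prefix.

    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    // A provisioning-time control: flag client bundles that package libraries
    // the platform considers unsafe, before the code reaches the managed runtime.
    final class DependencyScanner {
        private static final Set<String> DENYLIST = Set.of(
                "com.example.unsafe-io",     // hypothetical: raw file-system access
                "com.example.native-exec");  // hypothetical: spawns native processes

        record Dependency(String coordinates) {}

        List<Dependency> findViolations(List<Dependency> packaged) {
            return packaged.stream()
                    .filter(d -> DENYLIST.stream()
                            .anyMatch(bad -> d.coordinates().startsWith(bad)))
                    .collect(Collectors.toList());
        }
    }

Running such checks at provisioning time keeps the feedback loop short: the client learns about a violation immediately, not during an incident.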

5 Reasons Why Managed Platforms Fail

June 2, 2016

In an earlier post, I wrote why managed platforms can drive large scale reuse alongside other benefits. Given their importance, it is imperative to think about avoiding failure. Below are 5 reasons why platform efforts fail:

  1. Not providing a public API that speaks to your client’s domain, and instead exposing implementation details (see the sketch after this list). Remember: if you don’t manage this, whatever is shared will become the de facto public API. This in turn will tie your hands considerably when you need to refactor, improve, or refine the API.
  2. Ignoring developers and their experience when using the platform’s public APIs and tools.
  3. Not investing in automated tests that can certify platform functionality
  4. Making it difficult (or sometimes impossible) to customize behavior. Clients are forced to learn platform-specific terms and practices at the expense of their actual problem.
  5. Assuming your team has all the answers – this manifests as an inability to listen to what your clients are saying, not collaborating with them effectively, and not exploring opportunities to co-create / co-evolve platform solutions.
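To illustrate the first failure mode, here is a small hypothetical Java contrast between an API that leaks implementation details and one that speaks the client’s domain. The interface and record names, and the Kafka / Avro references, are invented for illustration.

    // Leaky: the signature exposes how the platform is built (the message broker,
    // the serialization format), so every client couples to those choices.
    interface OrderIntakeLeaky {
        void publishToKafkaTopic(String topicName, byte[] avroPayload);
    }

    // Domain-facing: the API speaks the client's language; how the platform
    // transports or stores the order stays behind the boundary and can be
    // refactored without breaking callers.
    interface OrderIntake {
        OrderConfirmation submit(Order order);
    }

    record Order(String customerId, String productId, int quantity) {}
    record OrderConfirmation(String orderId) {}

The leaky version may look convenient at first, but it becomes the de facto contract described above and freezes the implementation in place.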

The Write Once Reuse Many Times Myth

June 1, 2016

How many times have you heard someone say – “we want to implement this once so we can reuse it over and over again…” – or some variation of this theme? The underlying assumption here is that it is better to get to the right implementation of a component so the team doesn’t have to touch it again. Let’s make it perfect, is the reasoning.

I have rarely seen this work in practice. In fact, it is very difficult to create a single perfect software implementation. After all, your team’s understanding of the nuances and subtleties of your domain grows with time and experience. That experience is earned using a combination of trying out abstractions, continuously validating functional assumptions, and ensuring that your software implementation is providing the right hooks to model and accommodate variations.

Instead of trying for perfection, focus on continuous alignment between your domain and the software abstractions. Instead of trying to write once and reuse many times, focus instead on anticipating change and continuous validation of requirements and associated assumptions. Instead of pursuing the one right implementation, enable easy pluggability of behavior and back it up with a robust set of automated tests. This way, you can ensure your team’s domain understanding is reflected appropriately in the software implementation.
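As a minimal sketch of what pluggable behavior backed by automated tests can look like in Java (the CheckoutEngine and DiscountRule names are made up for illustration, and the test assumes JUnit 5 is available):

    import java.util.List;

    // A plug point: new behavior is supplied from outside, so the engine itself
    // never needs to be reopened as the team's domain understanding evolves.
    interface DiscountRule {
        double apply(double orderTotal);
    }

    final class CheckoutEngine {
        private final List<DiscountRule> rules;

        CheckoutEngine(List<DiscountRule> rules) {
            this.rules = rules;
        }

        double total(double orderTotal) {
            double total = orderTotal;
            for (DiscountRule rule : rules) {
                total = rule.apply(total);
            }
            return total;
        }
    }

    // The automated test pins down the contract a plugged-in rule must honor,
    // so a new implementation is validated without touching the engine.
    class CheckoutEngineTest {
        @org.junit.jupiter.api.Test
        void tenPercentRuleReducesTotal() {
            CheckoutEngine engine = new CheckoutEngine(List.of(total -> total * 0.9));
            org.junit.jupiter.api.Assertions.assertEquals(90.0, engine.total(100.0), 0.001);
        }
    }

Each new rule is a rewrite of behavior, not of the engine, and the tests are what let you keep rewriting safely.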

You won’t write once – especially if your team lacks the hard-won experience needed to create high-quality abstractions. Embrace the idea that you will write something multiple times – not because it is desirable, but because it is inevitable. Deliver value to your business iteratively and deepen your understanding of both the problem and solution spaces. You will be pleasantly surprised with the results. Remember, pursuing reuse without continuous value is the wrong goal.

 


Reducing Friction to Drive Platform Adoption

May 31, 2016

I wrote earlier about easing integration when providing reusable software assets. In this post, I will elaborate on specific techniques for driving platform adoption and their rationale. Recognize that most developers who will evaluate your platform are trying to solve a specific business / functional problem. They want to quickly ascertain whether what you are providing is a good fit or not. Instead of convincing them, help them arrive at a decision. Fast. How exactly do you do that? Here are a few ideas:

  • Provide details on the kinds of use cases your platform is designed to address. Equally important is to be transparent about the use cases that you don’t support. Not now and never ever will.
  • Create developer accelerators – e.g. a Maven Project Archetype or a sample project to try out common functionality
  • Identify areas where developers can extend the platform functionality – how will they supply new behavior or override existing behavior? How will you make it possible to inject, easy to test, and safe to execute? There are lots of techniques you can use, but first you have to decide to what extent you want to allow this in the first place.
  • Make your platform available in “localhost” mode – i.e. conducive for use with the IDE toolset (see the sketch after this list). This is more challenging than it sounds – e.g. if your platform isn’t modular, making it work in local mode will be very difficult. The same is true if your platform relies on external services / connectivity / data stores, etc. that aren’t easily replaceable with in-memory / mock equivalents.
  • Allow developers to discover your platform via multiple learning paths. Some might want to explore using a series of Kata lessons that tackle increasingly complex use cases. Others might be looking for answers to a specific problem. You need a user guide, code kata, examples, and more importantly, you need to make them easy to access.
  • Identify which areas of the platform adoption curve are the most time-consuming and figure out how to reduce, if not eliminate, them entirely. For instance – does your platform require an elaborate onboarding process? Are there steps that can be deferred until production deployment?
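Here is a minimal Java sketch of the “localhost” mode idea: the platform codes against an abstraction, and an in-memory implementation is wired in when running inside the IDE. The ConfigStore name and its methods are hypothetical.

    import java.util.Map;
    import java.util.Optional;
    import java.util.concurrent.ConcurrentHashMap;

    // The platform depends on this abstraction rather than on a concrete remote store.
    interface ConfigStore {
        Optional<String> get(String key);
        void put(String key, String value);
    }

    // In production the binding would talk to the real service; in "localhost" mode
    // this in-memory equivalent is wired in so developers can run, debug, and test
    // the platform entirely inside their IDE with no external connectivity.
    final class InMemoryConfigStore implements ConfigStore {
        private final Map<String, String> data = new ConcurrentHashMap<>();

        @Override
        public Optional<String> get(String key) {
            return Optional.ofNullable(data.get(key));
        }

        @Override
        public void put(String key, String value) {
            data.put(key, value);
        }
    }

The unglamorous work is ensuring every external dependency in the platform sits behind such a seam; that is what actually makes local mode feasible.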

Driving Large Scale Reuse Using Managed Platforms

May 29, 2016

Managed platforms are a very effective and pragmatic way to drive systematic reuse across an organization. They can provide a number of benefits, ranging from a simplified developer experience and out-of-the-box productivity components and tooling to, most importantly, a whole host of non-functional concerns addressed in an integrated fashion. Good managed platforms exhibit some common traits. They:

  • Solve a specific problem really, really well
  • Are easy to sign up for, develop in, and use via a developer SDK
  • Provide a number of integrated components that address specific pain points
  • Are extensible and provide clear and safe injection points via a public, published, well maintained API
  • Free the developer from having to procure hardware or orchestrate deployment activities
  • Make it easy to report bugs, contribute fixes, and suggest enhancements

The best part of managed platforms? They can dramatically alter the productivity curve for your development teams. Every developer doesn’t have to worry about high availability, horizontal scaling, capacity management, public APIs, version management, backward compatibility, and ongoing care and feeding of core reusable components. The platform provides value and more specifically peace of mind via powerful abstractions.

This isn’t an exhaustive list but given the general push towards cloud based architectures, good platforms will give your teams much more than just reuse! I will elaborate on each of the above traits in follow up posts.


Building Reusable Assets from Existing Applications

May 28, 2016

Although starting from scratch is simpler when building reusable assets, the reality is that you are probably maintaining one or more legacy applications. Refactoring existing legacy assets has several benefits for the team. Here are a few:

  • The refactoring effort will make you more knowledgeable about what you already own
  • It will help you utilize these assets to make your systematic reuse strategy successful
  • It saves valuable time and effort on upfront domain analysis (assuming, of course, that what you own is relevant to your present domain)
  • It makes your legacy system less intimidating and more transparent
  • It provides the opportunity to iteratively make the legacy assets consistent with your new code

If you cannot readily identify which legacy module or process is reusable, you have two places to get help – your customer and your internal subject matter experts. Your customer can help you clarify the role of a legacy process. Likewise, your team probably has members who understand the legacy system and have deep knowledge of the domain. Both can guide your refactoring efforts.

The act of examining a legacy module or process also has several benefits. You can understand the asset’s place in the overall system, as well as the quality of its existing documentation and its usage patterns. Now you can form an informed opinion of the current state of what you have and how you want to change it. Before making any changes, though, it helps to consider the next few moves ahead of you. Ask a few questions:

  1. Is the capability only available in the legacy application and not in any other system that you actively maintain and develop?
  2. Is it available only to a particular user group, channel, or geography?
  3. Is the capability critical to your business sponsor or customer? If so, are they happy with the existing behavior?
  4. How is the capability consumed currently? Is it invoked as a service or via a batch process?
  5. How decoupled is the legacy capability from other modules in the application?

These questions will help you gain clarity on the role of the legacy asset and its place in the overall application, along with a high-level sense of the effort involved in refactoring it to suit your requirements.

