Design Review Checklist for Service Capabilities

November 9, 2009

Download Checklist

Here is a checklist for performing design reviews when building service capabilities as part of Service Oriented Architecture (SOA) initiatives. I have found this checklist to be very useful; it can serve as a key document in the service development lifecycle. The checklist contains close to 50 questions and covers the following areas:

  • Functionality
  • Design decomposition
  • Documentation
  • Coupling
  • Reuse
  • Consistency
  • Integration
  • Performance
  • Reliability
  • Deployment

Feel free to customize this checklist based on your needs.

Automated Tests are Foundational to Systematic Reuse

October 8, 2009

I gave a quick tip on automated tests earlier; in this post I want to elaborate on why automated tests are foundational for achieving systematic reuse. Automated tests help in a number of ways:

  • Confidence that leads to trust: tests give you high confidence in the functional aspect of reusable assets. If you advertise that the asset can do X, Y, and Z, tests back that up objectively. They don’t manipulate, inflate, or misrepresent the state of an asset. Without tests, you will get into lengthy debates about a reusable asset’s ability to fulfill user stories (either partially or completely). Tests will also ultimately build trust with your consumers. No one will want to listen to your pontifications on reuse unless you can demonstrate viability and stability. Tests communicate to your consumers that you have thought through the issues and have the code under control.
  • Bug fixes: tests help you fix bugs faster. When you have a reusable asset that is leveraged by several consumers, you want fast bug fixes, and you want bug fixes that don’t introduce new ones. A comprehensive set of automated tests helps with regression testing as well. Not sure whether your enhancement has unintended side effects? No problem: just run the tests and you will know.
  • Defect prevention: the success of reuse depends on high-quality software. Without tests it will be hard for your team to prevent defects from creeping into the code base. Worse: one or more consumers telling you that your advertised feature doesn’t actually work. Would you rather hear that from teams across your floor or from automated tests? 🙂 This ties into the previous point as well: the faster you fix bugs, the higher the confidence in your team’s ability to produce high-quality software.
  • Validate assumptions: tests help you validate design assumptions. Often, as you write tests, you will discover that a particular test scenario challenges an underlying design assumption: something you have taken for granted that may not be true, or may be only partially valid. This often occurs in your domain layer, and your team may or may not have that knowledge. Tests force you to confront these assumptions head-on. You may assume, for example, that one domain entity is always associated with another (see the first sketch after this list). Another nice side effect: this will help you spot refactoring opportunities. If a test is hard to write, ask yourself: is the domain model natural? Are you exposing too many details where things should be encapsulated? Are you repeating code that is perhaps a candidate for a new reusable asset?
  • Validate dependencies: tests also expose dependencies on internal and external libraries. Your code might compile and work in a developer workspace, but can you execute it from any machine? What about a different operating system? Your tests will start to break as you discover such issues. This isn’t a bad thing at all; you are much better off discovering these issues now rather than waiting for surprises on release day.
  • Highlight data quality problems: good test data is hard to come by in development. When you write high-quality tests, you are forced to get data that correlates across systems, and different tests might force you to use varying sets of data attributes. Regardless of the source of the data (database, file, mocked data, whatever), tests will expose data quality problems early. Data utilities that archive, synchronize, and massage data can themselves become reusable assets. Write the test, watch it fail, and source the right data to make it pass (the second sketch after this list illustrates the idea). Many projects wait till the end, execute ad-hoc manual tests that may or may not reflect production data, and then wonder why they have issues in production 🙂
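
As an illustration of the "validate assumptions" point, here is a minimal JUnit 4 sketch. The Order and Customer classes are hypothetical stand-ins for your own domain entities, defined inline only to keep the example self-contained; the point is that the test states the assumption (an Order always has a Customer) explicitly and fails the moment the model stops enforcing it.

    import static org.junit.Assert.*;

    import org.junit.Test;

    // Hypothetical domain entities; in a real asset these live in your domain layer.
    public class OrderAssumptionTest {

        static class Customer {
            private final String name;
            Customer(String name) { this.name = name; }
            String getName() { return name; }
        }

        static class Order {
            private final Customer customer;
            Order(Customer customer) {
                if (customer == null) {
                    throw new IllegalArgumentException("An Order must have a Customer");
                }
                this.customer = customer;
            }
            Customer getCustomer() { return customer; }
        }

        @Test
        public void everyOrderCarriesItsCustomer() {
            // The design assumption made explicit: no Order without a Customer.
            Order order = new Order(new Customer("ACME Corp"));
            assertNotNull(order.getCustomer());
        }

        @Test(expected = IllegalArgumentException.class)
        public void orphanOrdersAreRejected() {
            new Order(null); // if this is hard to enforce, revisit the domain model
        }
    }

If a test like this turns out to be awkward to write, that awkwardness is itself useful feedback about the model.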
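
For the data-quality point, a test can make cross-system correlation explicit. This second sketch uses made-up customer ids and in-memory "snapshots" purely for illustration; in practice the two sets would be loaded from the actual sources (database, file, or mock).

    import static org.junit.Assert.*;

    import org.junit.Test;

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class CustomerDataCorrelationTest {

        // Pretend snapshot from system A: customer ids referenced by orders.
        private Set<String> customerIdsReferencedByOrders() {
            return new HashSet<String>(Arrays.asList("C-100", "C-200"));
        }

        // Pretend snapshot from system B: the customer master.
        private Set<String> customerIdsInMaster() {
            return new HashSet<String>(Arrays.asList("C-100", "C-200"));
        }

        @Test
        public void everyCustomerReferencedByAnOrderExistsInTheMaster() {
            Set<String> unknown = customerIdsReferencedByOrders();
            unknown.removeAll(customerIdsInMaster());
            // Fails early, during development, if the test data does not correlate.
            assertTrue("Customer ids missing from master: " + unknown, unknown.isEmpty());
        }
    }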

Systematic reuse cannot succeed unless other teams in your department or organization share this confidence. Remember that many reuse initiatives fail due to lack of trust, so these tests are crucial for building it. Without automated tests you will have a tough time building high-quality software, let alone convincing others in your team or organization to reuse your assets. If you aren’t confident in asset quality, how can you convince your existing and potential consumers to leverage it?
