SAF – Architecture Evaluation – Evaluation in Code
In SAF - Architecture Evaluation (Introduction) I said there are two approaches to evaluating a software architecture. This post talks about the first approach – evaluating an architecture in code.
The first evaluation-by-code tool is the Proof of Concept (POC for short). Building a POC means writing a minimal amount of code that implements a focused area of the architecture, or of the architecture’s technology mapping. The aim of a POC is to help weigh alternatives (when you are contemplating which way to go), lower technical risk, or lower stakeholders’ anxiety over an architectural choice.
POCs map quite well onto XP’s spikes.
Let’s look at a few POC examples, drawn from my past projects.
Example 1: Validate the feasibility of an architectural direction
On one project we inherited an ugly application that incorporated its own proprietary CGI web server written in C++. The architectural decision was to keep this server as a black box and develop the project on a better, more scalable architecture (though we still needed to utilize functionality from the C++ server now and then). The challenge was being able to maintain the session and pass it from the rest of the application (JSP, Servlets, and J2EE) to the C++ server. A (successful) POC that tackled this issue allowed us to advance in the chosen architectural direction, reducing the risk significantly.
Example 2: Validate a technology mapping
On another project I worked on (when I was with Microsoft), we analyzed the project’s quality attributes and found a need for near-fault-tolerance (failover in 5 seconds or less). The architectural solution we decided on was an active server paired with a semi-active one (an online server, ready to take over, that constantly applies state from the active server)*. For the technology mapping we considered several options (e.g., fault-tolerant hardware). One option was using SQL Server 2005 database mirroring to keep the two servers synchronized (DB mirroring gives you failover of the database in about 5 seconds or less). I set up a small proof of concept to verify that this direction was viable. I was told that after I left Microsoft, further investigation of the issues found led to Microsoft’s decision to postpone mirroring for the time being.
Example 3: Comparing alternatives
We wanted to compare MSMQ against an existing distributed-object middleware, both in terms of performance and of usability (is it developer-friendly?). We crafted two POCs, one for each technology, which enabled us to compare the two approaches head-to-head.
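A POC comparison like this boils down to a small benchmark harness wrapped around two interchangeable implementations. Here is a minimal sketch (in Python, using two hypothetical in-process stand-ins instead of real MSMQ or middleware clients — the names and workloads are invented for illustration):

```python
import time
from collections import deque

class QueueTransport:
    """Stand-in for a message-queue client (e.g. MSMQ): fire-and-forget send."""
    def __init__(self):
        self.queue = deque()
    def send(self, msg):
        self.queue.append(msg)
    def receive(self):
        return self.queue.popleft()

class RpcTransport:
    """Stand-in for a distributed-object middleware: synchronous invocation."""
    def invoke(self, msg):
        return msg.upper()  # pretend the remote object did some work

def benchmark(label, fn, iterations=100_000):
    """Drive either transport with the same workload and time it."""
    start = time.perf_counter()
    for i in range(iterations):
        fn(f"message-{i}")
    elapsed = time.perf_counter() - start
    print(f"{label}: {iterations} ops in {elapsed:.3f}s")
    return elapsed

mq, rpc = QueueTransport(), RpcTransport()
benchmark("queue (send+receive)", lambda m: (mq.send(m), mq.receive()))
benchmark("rpc (invoke)", rpc.invoke)
```

The point of running both POCs against the same driver is that the comparison stays head-to-head: same workload, same measurement, only the transport changes. Usability, of course, you judge by how the two POCs felt to write.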
POCs help evaluate alternatives and lower risk in specific areas of the architecture (and of the design, for that matter). However, POCs will not give you a feel for how the overall architecture will play together – enter prototypes.
A prototype is basically a working, simplified model of the system. There are many characteristics that distinguish different types of prototypes (high-fidelity/low-fidelity, global/local, etc.) – let’s focus on two:
- Horizontal prototype – models wide aspects of a single layer, i.e. many features with little detail. The most common example of a horizontal prototype is a user-interface prototype, used to test the overall interaction with the system.
- Vertical prototype – implements some sub-system or a limited set of features across all layers/modules.
The vertical prototype is a useful way to evaluate, get a feel for, and understand how the different components that make up the architecture work in unison, without getting bogged down in all the fine details of the system’s functional requirements.
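To make “across all layers” concrete, here is a minimal sketch (in Python, with invented layer and feature names) of a vertical prototype: a single feature, “show a customer,” wired through the presentation, service, and data-access layers, with everything else stubbed out:

```python
# Data-access layer: a stub repository standing in for the real database.
class CustomerRepository:
    _rows = {1: {"id": 1, "name": "Ada"}}
    def find(self, customer_id):
        return self._rows.get(customer_id)

# Service layer: the business rule lives here, thin as it is.
class CustomerService:
    def __init__(self, repo):
        self.repo = repo
    def get_customer(self, customer_id):
        row = self.repo.find(customer_id)
        if row is None:
            raise KeyError(f"no customer {customer_id}")
        return row

# Presentation layer: renders whatever the service returns.
def render_customer_page(service, customer_id):
    customer = service.get_customer(customer_id)
    return f"<h1>{customer['name']}</h1>"

# One feature, every layer exercised end to end.
service = CustomerService(CustomerRepository())
print(render_customer_page(service, 1))
```

The value is in the wiring, not in the stubs: the slice forces every inter-layer contract to exist and be exercised, which is exactly what a horizontal prototype cannot tell you.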
Example: Using a prototype to evaluate an architecture alternative.
We were getting ready to embark on a rather large project (we did the prototype around the release of .NET 1.0, and the project is still going on…). We wanted to understand the capabilities and limitations of .NET. We chose a limited aspect of the system (the one we considered the most risky), picked some of the designated team leaders, and brought in an architect from Microsoft Consulting Services to help us build the “by the book” architecture.
We did a very extensive prototype – a total effort of 3–4 man-years, including all the preliminary work and the post-mortem analysis. We gained a lot of insight into what .NET could and could not give us out of the box; we understood the limitations of the components we integrated (e.g., ESRI’s limitations in displaying near-real-time moving objects); and we used the preliminary prototype (which was a performance hog) as a platform for running POCs for other architectural and technological directions. Additionally, once we solved the performance problems, we also used it as a demo for the client.
By the way, this experience also had some positive residual effects, like getting the team leaders up to speed on the (then) new technology, jelling the core team, etc.
Taking all the information gathered during the prototype, we were able to design a better, more robust architecture for the project itself (which the architect who came after I left the project managed to mangle – but that’s another story altogether :) ).
I’ve found that in most cases exploratory, or “throwaway,” prototypes are more useful, as they really let you get to the crux of the matter quickly – i.e., getting all the components connected the way the architecture dictates in order to test their interactions and usage. Again, the idea here is to focus on evaluating the architecture, not on the implementation details of the overall solution. Nevertheless, once the architecture is more mature, you may choose one of the prototypes and evolve it into the actual system (in effect turning it into an architectural skeleton).
Once you’ve decided on a candidate architecture (i.e., the architecture you want to use for the project), your first iteration or two (this might not literally be the first iteration, since you may have already done a couple of prototype iterations) should be focused on creating the architecture skeleton.
An architecture skeleton implements the minimal set (bare bones, so to speak) of the project’s functionality needed to connect all the pieces in a meaningful, integrated way (for example, it can include the implementation of a single thread through a use case or an important story). It is somewhat similar to a prototype, with two differences:
- It has to implement real functionality of the system (though that functionality is usually very thin).
- You don’t throw it away (hopefully, anyway).
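As a sketch (in Python, with hypothetical component names invented for illustration), an architecture skeleton differs from a prototype mainly in intent: each piece below is the bare-bones starting point of a real component, and the single end-to-end “place an order” flow is real functionality, however thin:

```python
# Each class is the bare-bones beginning of a real component, not a throwaway.
class MessageBus:
    """Minimal in-process bus; later backed by real messaging infrastructure."""
    def __init__(self):
        self.handlers = {}
    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)
    def publish(self, topic, payload):
        for handler in self.handlers.get(topic, []):
            handler(payload)

class OrderStore:
    """The real persistence interface, in-memory for now."""
    def __init__(self):
        self.orders = []
    def save(self, order):
        self.orders.append(order)

class OrderService:
    """One real, thin use case: accept an order."""
    def __init__(self, store, bus):
        self.store, self.bus = store, bus
    def place_order(self, item):
        order = {"item": item, "status": "accepted"}
        self.store.save(order)
        self.bus.publish("order.accepted", order)
        return order

# Wire every piece together and run the single thread through the use case.
bus, store = MessageBus(), OrderStore()
audit_log = []
bus.subscribe("order.accepted", audit_log.append)
OrderService(store, bus).place_order("book")
print(store.orders, audit_log)
```

Because every component and every connection already exists, later iterations only fatten the pieces; the integration itself was proven on day one.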
Most current methodologies (RUP, MSF for CMMI Process Improvement, XP, etc.) support the notion of an architectural skeleton (though not under this name). In RUP, for example, you would have the architectural skeleton up and running at the end of the Elaboration phase – a running architecture to which you can add functionality in the Construction phase.
It is important to implement a skeleton (vs. starting to implement the different components and trying to integrate them later), as it gives you a relatively early opportunity to actually test whether your architecture holds – and it is much better to find errors, especially architectural ones, as early as possible.
I demonstrated three “tools” that enable evaluation of architectural decisions in general and of the overall architecture in particular:
- POC – focused on a specific area
- Prototype – the overall architecture with “simulated” behavior
- Skeleton – a “barely running” implementation of the chosen architecture
The problem with these approaches, especially prototypes and skeletons, is that they require a relatively long time, as well as resources, to implement. We need some additional tools in our evaluation toolset that allow us to focus on the architecture alternatives most likely to match our needs.
I think there are such tools, and in the next post on architecture evaluation I will try to give my view on what they are and how to use them.
* Other options are active-active and active-passive (e.g. Windows clustering).