
SAF – Architecture Evaluation – Evaluation in Code

In the previous post, I mentioned there are two approaches to evaluating a software architecture. This post talks about the first approach – evaluating an architecture in code.

POCs

The first evaluation-by-code tool is the Proof of Concept (POC for short). Building a POC means writing minimal code that implements a focused area of the architecture or of the architecture's technology mapping. The aim of a POC is to help weigh alternatives (when you are contemplating which way to go), lower technical risks, or ease stakeholders' anxiety over an architectural choice.

POCs map quite well onto XP spikes.

Let's look at a few POC examples from my past projects.

Example 1: Validate the feasibility of an architectural direction

On one project, we inherited an ugly application that incorporated its own proprietary C++ CGI web server. The architectural decision was to keep this server as a black box and develop the project on a better, more scalable architecture (though we still needed to utilize functionality from the C++ server now and then). The challenge in making this happen was maintaining the session in the rest of the application (JSP, Servlets, and J2EE) and passing it on to the C++ server. A (successful) POC that tackled this issue allowed us to advance in the chosen architectural direction and reduced the risk significantly.
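To give a flavor of how small such a POC can stay, here is a minimal sketch of that kind of session-handoff experiment, assuming the legacy server exposes an endpoint that accepts a session token; the URL, the "sid" parameter, and the class names are invented for illustration and are not from the actual project.

    // Hypothetical POC sketch: forward the J2EE session ID to a legacy CGI endpoint
    // so the C++ side can correlate its requests with the Java-managed session.
    // LEGACY_URL and the "sid" parameter name are assumptions, not the project's real API.
    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    import javax.servlet.http.HttpServletRequest;

    public class LegacySessionBridge {

        private static final String LEGACY_URL = "http://legacy-host/cgi-bin/session";

        /** Registers the current J2EE session with the legacy C++ server. */
        public static int registerSession(HttpServletRequest request) throws IOException {
            String sessionId = request.getSession(true).getId();
            URL url = new URL(LEGACY_URL + "?sid=" + URLEncoder.encode(sessionId, "UTF-8"));

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);

            int status = conn.getResponseCode(); // 200 means the legacy side accepted the session
            conn.disconnect();
            return status;
        }
    }

The whole point of the POC was to prove that a round trip like this could carry the session reliably, not to build the production bridge.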

Example 2: Validate a technology mapping

On another project I worked on (when I was with Microsoft), we analyzed the project's quality attributes and found a need for near-fault tolerance (fail-over in 5 seconds or less). The architectural solution we decided on was an active server paired with a semi-active one (an online, ready-to-take-over server that constantly applied state from the active server).* For the technology mapping, we considered several options (e.g., fault-tolerant hardware). One of them was using SQL Server 2005 database mirroring to keep the two servers synchronized (database mirroring gives you a failover of the database in about 5 seconds or less). I set up a small POC to verify that this direction was viable. I was told that after I left Microsoft, further investigation of the issues led to Microsoft's decision to postpone Mirroring for the time being.
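A POC for a quality attribute like this is essentially a measurement rig. The sketch below shows the general shape such a probe might take, assuming a mirrored SQL Server pair reachable over JDBC with a failover partner configured; the host names, credentials, and Heartbeat table are placeholders, not the project's actual setup.

    // Hypothetical failover probe: keep writing heartbeat rows against a mirrored
    // SQL Server database and report how long writes are unavailable when the
    // principal is taken down. Hosts, credentials, and the table are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class MirroringFailoverProbe {

        // failoverPartner lets the driver redirect to the mirror after a failover.
        private static final String URL =
                "jdbc:sqlserver://principal-host:1433;databaseName=PocDb;"
              + "failoverPartner=mirror-host;user=poc;password=poc";

        public static void main(String[] args) throws InterruptedException {
            long lastSuccess = System.currentTimeMillis();
            while (true) {
                try (Connection con = DriverManager.getConnection(URL);
                     Statement st = con.createStatement()) {
                    st.executeUpdate("INSERT INTO Heartbeat(ts) VALUES (CURRENT_TIMESTAMP)");
                    long now = System.currentTimeMillis();
                    long gapMs = now - lastSuccess;
                    if (gapMs > 5000) {            // the quality attribute: fail over in 5 s or less
                        System.out.println("Write gap of " + gapMs + " ms - over budget");
                    }
                    lastSuccess = now;
                } catch (SQLException e) {
                    System.out.println("Write failed, retrying: " + e.getMessage());
                }
                Thread.sleep(1000);
            }
        }
    }

Killing the principal server while a probe like this runs tells you, in numbers, whether the failover budget holds.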

Example 3: Comparing alternatives

We wanted to compare MSMQ vs. an existing distributed object middleware, both in terms of performance and usability (how developer-friendly it is). We crafted two POCs, one for each technology, which enabled us to compare the two approaches head-to-head.
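For a head-to-head comparison, the useful trick is to give both POCs the same tiny interface and drive them through one shared measurement loop. The sketch below illustrates that structure; the transport implementations are stand-ins, not the real MSMQ or middleware calls.

    // Hypothetical comparison harness: both POCs expose the same "send a request,
    // wait for the reply" operation, so one timing loop can compare them fairly.
    // The two Supplier stand-ins represent the real middleware round trips.
    import java.util.function.Supplier;

    public class MiddlewareShootout {

        /** Times n round trips through the given transport and reports the average latency. */
        static void measure(String name, int n, Supplier<byte[]> roundTrip) {
            long start = System.nanoTime();
            for (int i = 0; i < n; i++) {
                roundTrip.get();                       // one request/reply through the middleware
            }
            double avgMicros = (System.nanoTime() - start) / 1_000.0 / n;
            System.out.printf("%s: %.1f us per round trip (%d iterations)%n", name, avgMicros, n);
        }

        public static void main(String[] args) {
            byte[] payload = new byte[256];
            // Placeholders for the queueing and distributed-object calls under test.
            measure("queueing",     10_000, () -> payload);
            measure("object-calls", 10_000, () -> payload);
        }
    }

The usability half of the comparison came simply from how painful each POC was to write.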

POCs help evaluate alternatives and lower risk in specific areas of the architecture (and design, for that matter). However, POCs will not give you a feel for how the overall architecture will play together – enter prototypes.

Prototypes

A prototype is basically a working, simplified model of the system. There are many characteristics that distinguish different types of prototypes (hi-fidelity/low-fidelity, global/local, etc.) – let's focus on two:

  • Horizontal prototype – models broad aspects of a single layer, i.e., many features with little detail. The most common example is a user interface prototype, used to test the overall interaction with the system.
  • Vertical prototype – implements a sub-system or a limited set of features across all layers/modules.

The vertical prototype is a useful way to evaluate and get a feel for how the different components that make up the architecture work in unison, without getting bogged down in all the fine details of the system's functional requirements.
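In code, a vertical prototype is one feature threaded through every layer, each implemented in the simplest way that still exercises the real seams. Here is a minimal sketch of that idea; the "order lookup" feature and all class names are illustrative, not taken from any of the projects described here.

    // Minimal vertical-slice sketch: one feature ("look up an order") wired through
    // presentation, business, and data layers with the thinnest possible implementation
    // at each level. Names are illustrative only.
    import java.util.HashMap;
    import java.util.Map;

    public class VerticalSliceDemo {

        // Data layer: in-memory stand-in for the real database access code.
        static class OrderRepository {
            private final Map<Integer, String> orders = new HashMap<>();
            OrderRepository() { orders.put(42, "2 x widget"); }
            String findById(int id) { return orders.get(id); }
        }

        // Business layer: thin, but it is the real seam the production service will use.
        static class OrderService {
            private final OrderRepository repository = new OrderRepository();
            String describeOrder(int id) {
                String order = repository.findById(id);
                return order == null ? "unknown order" : "Order " + id + ": " + order;
            }
        }

        // Presentation layer: a console "UI" is enough to prove the layers talk to each other.
        public static void main(String[] args) {
            System.out.println(new OrderService().describeOrder(42));
        }
    }

The value is in the wiring between the layers, not in the (deliberately trivial) logic inside each one.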

Example: Using a prototype to evaluate an architecture alternative

We were getting ready to embark on a rather large project (we built the prototype around the release of .NET 1.0, and the project is still going on…). We wanted to understand the capabilities and limitations of .NET. We chose a limited aspect of the system (the one we considered the riskiest), picked some of the designated team leaders, and brought in an architect from Microsoft Consulting Services to help us build the "by the book" architecture.

We did a very extensive prototype, a total effort of 3-4 man-years including all the preliminary work and the post-mortem analysis. We gained a lot of insight into what .NET can and cannot give us out of the box. We understood the limitations of the components we integrated (e.g., ESRI's limitations in displaying near-real-time moving objects). We also used the preliminary prototype (a performance hog) as a platform for running POCs for other architectural and technological directions. Once we solved the performance problems, we also used it as a demo for the client.

By the way, this experience also had some positive residual effects: getting the team leaders up to speed on the (then) new technology, jelling the core team, etc.

Taking all the information gathered during the prototype, we were able to design a better, more robust architecture for the project itself (which the architect who came after I left the project managed to mangle – but that's another story altogether :) )

Throwaway prototypes

I've found that in most cases exploratory prototypes, or "throwaway prototypes," are more useful, as they really let you get to the crux of the matter quickly, i.e., getting all the components connected the way the architecture dictates so you can test their interactions and usage. Again, the idea here is to focus on evaluating the architecture, not on the implementation details of the overall solution. Nevertheless, once the architecture is more mature, you may choose one of the prototypes and evolve it into the actual system (sort of turning it into an architectural skeleton).

Architectural Skeletons

Once you've decided on a candidate architecture (i.e., the architecture you want to use for the project), your first iteration or two should focus on creating the architectural skeleton (these might not literally be the project's first iterations, as you may have already spent a couple of iterations on prototypes).

An architectural skeleton is about implementing the minimal set (bare bones, so to speak) of the project's functionality needed to connect all the pieces in a meaningful, integrated way (for example, it can include an implementation of a single thread through a use case or an important story). It is somewhat similar to a prototype, with two differences:

  • It has to implement the real functionality of the system (though the functionality is usually very thin)
  • You don’t throw it away (hopefully anyway)
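Because the skeleton is kept, it pays to put a cheap end-to-end check around the one thread it implements, so any later change that breaks the wiring shows up immediately. The sketch below illustrates such a smoke check, assuming the skeleton exposes its single use case over HTTP; the URL and response handling are placeholders, not a prescribed setup.

    // Hypothetical skeleton smoke check: exercise the one real end-to-end thread the
    // skeleton implements (here, an HTTP call to a single use case) and fail loudly
    // if the pieces no longer connect. The URL is a placeholder.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SkeletonSmokeCheck {

        public static void main(String[] args) throws Exception {
            URL url = new URL("http://localhost:8080/orders/42");   // the single use case the skeleton serves
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(3000);
            conn.setReadTimeout(3000);

            if (conn.getResponseCode() != 200) {
                throw new IllegalStateException(
                        "Skeleton is not serving the use case: HTTP " + conn.getResponseCode());
            }
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                System.out.println("Skeleton answered: " + in.readLine());
            }
        }
    }
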

Most current methodologies (RUP, MSF for CMMI Process Improvement, XP, etc.) support the notion of an architectural skeleton (though not under this name). In RUP, for example, you would have the architectural skeleton up and running at the end of the elaboration phase – a running architecture that you can expand and add functionality to in the construction phase.

It is important to implement a skeleton (versus starting to implement the different components and trying to integrate them later) because it gives you a relatively early opportunity to test whether your architecture holds, and it is much better to find errors, especially architectural ones, as early as possible.

Summary

I demonstrated three "tools" that enable the evaluation of architectural decisions in general and of the overall architecture in particular:

  • POC – focused on a specific area
  • Prototype – overall architecture with "simulated" behavior
  • Skeleton – "barely running" implementation of the chosen architecture

The problem with these approaches, especially prototypes and skeletons, is that they take a relatively long time and significant resources to implement. We need additional tools in our evaluation toolset that let us focus on the architecture alternatives most likely to match our needs.

The next installment looks at some on-paper methods for evaluating software architectures, which are less precise on the one hand but can rule out some alternatives quickly on the other.


* Other options are active-active and active-passive (e.g., Windows clustering).
