
SAF – Evaluation part II – the “Formal Methods”

In the previous post on architecture evaluation, I talked about evaluating a candidate architecture in code. This post is dedicated to evaluation on paper.

I remember one system I was working on. I was keen on making the architecture asynchronous and message-oriented (this was all circa 2001, by the way). However, I was new on the team, and my role (as the project’s architect) wasn’t well defined, so I had to make many compromises and get wide acceptance in order to get anything moving forward. We set up a team to try to come up with a suitable architecture; since each team member had his/her own experience, we came out of these meetings with more than 20 (!) different candidate architectures (actually, there were fewer architecture variations, but they were multiplied by the possible technology mappings). To decide which was the best option to follow, we tried to conduct a sort of QFD process, where several members were in charge of the weights and the rest were in charge of evaluating and scoring the different categories (per option). Like most “design by committee” efforts, this one also proved doomed from the start – the option everybody disliked got the highest score.

If you are wondering what happened – we scrapped this effort and started from scratch in a more sensible way (which included a detailed prototype). What’s important for the purpose of this post is that it got me thinking that there must be a better way to evaluate architectures. Well, a lot of research and several projects later, I think there are a few techniques that give much better results.

The first methodology I stumbled upon was ATAM (short for Architecture Tradeoff Analysis Method), developed by the SEI.

ATAM is a rather lengthy and formal method of evaluating architectures, and it requires a lot of preparation and commitment from the different stakeholders. You can get an overview of the process from the following (~130K) ATAM presentation I prepared a few years ago; while it’s probably not the best presentation for delivering to a crowd (I know better now :) ), it does provide a good overview of the nine ATAM steps.

ATAM is explained in more detail in “Evaluating Software Architectures”; the book also details two more evaluation methods: SAAM (which I’ll let you read about in the book) and ARID (Active Reviews for Intermediate Designs).

ARID, like ATAM, is a scenario-based technique, meaning that as part of the evaluation process you need to identify scenarios where the system’s quality attributes (see Quality attributes – Introduction) manifest themselves. The main idea in ARID is that for each (prioritized) scenario, the participants try to draft code that solves that scenario using the design under test. The results of the effort are then evaluated for ease of use, correctness, etc.
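To make the “draft code against the design under test” step a bit more concrete, here is a minimal sketch. Everything in it is assumed for illustration – the MessageBus interface, its publish/subscribe methods, and the order scenario are made up, not taken from ARID or from any real design:

```python
# Hypothetical interface exposed by the design under test -- the names here
# (MessageBus, publish, subscribe) are illustrative assumptions, not ARID artifacts.
class MessageBus:
    def publish(self, topic: str, payload: dict) -> None: ...
    def subscribe(self, topic: str, handler) -> None: ...


# Scenario draft a participant might write during the review:
# "submit an order and receive an asynchronous confirmation".
def submit_order_scenario(bus: MessageBus, order: dict) -> None:
    bus.subscribe("order.confirmed",
                  lambda msg: print("confirmed:", msg.get("order_id")))
    bus.publish("order.submitted", order)

# In an ARID session the point is not to run or ship this code, but to let the
# reviewers judge whether the proposed interfaces made the scenario easy and
# correct to express.
```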

There’s a good introductory whitepaper on ARID on SEI’s website.

Note that ARID is better suited to agile/iterative development than ATAM, since (as its name implies) it doesn’t require the architecture to be complete and finalized up front.

While I was working for Microsoft, I stumbled upon another evaluation method called LAAAM (which is now part of MSF 4.0 for CMMI Process Improvement). LAAAM, which stands for Lightweight Architecture Alternative Analysis Method, is also scenario-based and, like ARID, is a more agile alternative to ATAM.

In LAAAM you create a matrix with scenarios on one dimension and architectural approaches, decisions, or strategies on the other. Each cell is evaluated against three criteria (a minimal scoring sketch follows the list):

  • Fit – how well the approach addresses the scenario (including risk, alignment with the organization’s standards, etc.)
  • Development Cost
  • Operations Cost
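Here is the promised sketch of such a matrix. The scenarios, approaches, 1–5 scale, and criterion weights below are my own assumptions for the example – LAAAM itself doesn’t mandate any particular scale or weighting:

```python
# A minimal sketch of a LAAAM-style evaluation matrix (scenario names, approach
# names, the 1-5 scale, and the weights are assumed for illustration).

scenarios = ["Process 10K orders/hour", "Add a new payment provider in < 2 weeks"]
approaches = ["Message-oriented (async)", "Layered synchronous services"]

# Each cell holds the three LAAAM criteria: fit, development cost, operations cost.
# Fit is scored 1 (poor) to 5 (excellent); costs are scored 1 (high) to 5 (low),
# so a higher number is always better.
matrix = {
    ("Process 10K orders/hour", "Message-oriented (async)"):
        {"fit": 5, "dev_cost": 3, "ops_cost": 3},
    ("Process 10K orders/hour", "Layered synchronous services"):
        {"fit": 2, "dev_cost": 4, "ops_cost": 4},
    ("Add a new payment provider in < 2 weeks", "Message-oriented (async)"):
        {"fit": 4, "dev_cost": 3, "ops_cost": 3},
    ("Add a new payment provider in < 2 weeks", "Layered synchronous services"):
        {"fit": 3, "dev_cost": 4, "ops_cost": 3},
}

# Assumed criterion weights -- in practice the team agrees on these up front.
weights = {"fit": 0.5, "dev_cost": 0.3, "ops_cost": 0.2}

for approach in approaches:
    total = sum(
        weights[criterion] * matrix[(scenario, approach)][criterion]
        for scenario in scenarios
        for criterion in weights
    )
    print(f"{approach}: {total:.1f}")
```

The output is only a starting point for discussion – the value of the exercise is mostly in arguing about the cell scores, not in the final numbers.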

LAAAM was developed by Jeromy Carriere while he was working for Microsoft (he is now working for Fidelity Investments in Boston).

SAF works well with all of these techniques, since one of its basic steps is to identify the quality attributes and write down scenarios where these attributes manifest themselves in the system (see Utility Trees – Hatching quality attributes).
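As a small, hedged illustration of that step (the attributes, refinements, and scenarios below are invented for the example), quality-attribute scenarios can be captured in a utility-tree-like structure before they are fed into any of the evaluation methods above:

```python
# An assumed representation of utility-tree branches: quality attribute ->
# refinement -> concrete scenarios, each rated for business importance and
# architectural risk. All of the content here is made up for illustration.
utility_tree = {
    "Performance": {
        "Throughput": [
            {"scenario": "Handle 10K orders/hour at peak with < 2s latency",
             "importance": "High", "risk": "High"},
        ],
    },
    "Modifiability": {
        "New integrations": [
            {"scenario": "Add a new payment provider with changes confined to one module",
             "importance": "Medium", "risk": "Low"},
        ],
    },
}

# The highly ranked scenarios become the inputs (rows) of an ATAM, ARID,
# or LAAAM evaluation.
for attribute, refinements in utility_tree.items():
    for refinement, scenario_list in refinements.items():
        for item in scenario_list:
            print(f"{attribute} / {refinement} -> {item['scenario']}")
```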

To sum things up –

There are several ways to evaluate software architectures on paper – ATAM, ARID, LAAAM, and a few others I didn’t discuss here.

Scenario-based evaluations help verify that the quality attributes are addressed by the suggested architecture.

Paper-based evaluations can help reduce the number of options to a few (hopefully one or two) leading solutions, which can then be evaluated in code (as the previous post on this subject suggested).

Published in SAF