By: Juan Carlos (John Charles) Olamendy Turruellas
The foundation of any software system is its architecture. The architecture defines the structure of the system through its underlying components and their relationships, as well as the properties and behavior exposed to the external world. The software architecture is shaped by the architecture drivers: functional requirements, non-functional requirements and business constraints.
Because it’s very important to select a correct architecture (from a list of competing architectures) in order to run a successful project and deliver a robust software product, we (as architects) need to validate in advance that our architectural decisions are well founded in order to mitigate risks. The earlier we find a problem in the design phase, the better off we are (it costs less to fix the error); however, architecture evaluation can be carried out at many points during the development process. You evaluate the software architecture (the architectural decisions) before the project begins the construction phase. You also evaluate the architecture of legacy systems before modifying, porting, upgrading or integrating them with other systems. And finally, you evaluate software systems you are acquiring in order to understand their underlying architecture and its impact on the organization.
Today, we have several methods (applied to dozens of architectures of all levels of complexity in a wide variety of domains) to evaluate a software architecture in a relatively inexpensive way:
- ATAM: Architecture Tradeoff Analysis Method
- SAAM: Software Architecture Analysis Method
- ARID: Active Reviews for Intermediate Designs
These methods have in common that they are questioning techniques: they use scenarios and quality attribute evaluation as a way of asking probing questions about how the hypothetical architecture responds to those scenarios. Other questioning techniques include checklists and questionnaires.
The architecture evaluation produces an evaluation report checking that the selected architecture is “suitable” for the software system and providing a list of risky architectural decisions to mitigate with further analysis and design, prototyping, etc. Now let’s explain the concept of “suitability” with some examples. When evaluating competing hypothetical architectures, we first identify the most important goals and then highlight the weaknesses and strengths of each candidate. After the decision-making process, we have a selected architecture. Sometimes the selected architecture is “suitable” for some goals and problematic for others; in this case, we need to prioritize the business goals and include the weak and strong points of the architecture in the report. Sometimes we select a suitable architecture, and sometimes none of the candidates is selected, so we have to improve the most acceptable candidate architecture or design a new one. It’s worth noting that this tradeoff is inherent to the design process.
Now let’s start talking about the Architecture Tradeoff Analysis Method (ATAM), one of the most widely used architecture evaluation methods. ATAM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University. According to SEI, the purpose of ATAM is to assess the consequences of architectural decisions in light of quality attribute requirements and business goals. In plain English, that means discovering risks where a quality attribute of interest is affected by architectural decisions (a trade-off between the quality attributes), so that we can reason about the structure of the system and its underlying rationale.
One important concept for any architecture evaluation method is the quality attribute. In short, functional requirements specify what the software has to do, and non-functional requirements (quality attributes) specify how well it should be done. Functionality and quality attributes are orthogonal. In complex systems, quality attributes can never be achieved in isolation: achieving one quality affects other qualities (sometimes negatively and sometimes positively), so architectural decisions are really a trade-off between the quality attributes in order to support the business goals.
We can group the quality attributes into three main categories:
- End-user perspective: performance, availability, usability and security
- Technical perspective: modifiability, portability, reusability, testability, interoperability
- Business community perspective: time to market, cost and benefit, projected lifetime, project budget
In order to evaluate a software architecture using quality attributes, we need to characterize them properly using quality attribute scenarios. A scenario is a short statement describing an interaction of one of the stakeholders with the system. A quality attribute scenario is thus a way to make a quality attribute concrete.
A quality attribute scenario is composed of six parts:
- Stimulus: The event arriving at the system that must be considered
- Source of stimulus: The entity that generates the stimulus (a human, a computer system, etc.)
- Environment: The conditions under which the stimulus occurs (e.g., normal operation, overload)
- Artifact: The part of the system that is stimulated (the whole system, a subsystem, a component)
- Response: The activity undertaken after the arrival of the stimulus
- Response measure: The response must be measurable so that the requirement can be tested
The following figure shows graphically the relationship between the six parts of quality attribute scenarios.
Let’s understand the previous concepts with an example: the availability quality attribute. Availability is concerned with system failure. A system failure occurs when the system no longer provides a service. One way to measure availability is the probability that the system is operational when it is needed, for example 99.9% availability. A quality attribute scenario describing such a requirement is: a message arrives at the system for functionality XXX under normal conditions, and the response is available with a probability of 98%.
Another example concerns the performance quality attribute. Performance is concerned with response timing. An example of a quality attribute scenario is: a message from an external system arrives at the system to execute functionality YYY under normal conditions, and a response is provided within 8 seconds to be considered acceptable.
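The six parts of a quality attribute scenario can be captured as a simple data structure. The sketch below is illustrative only: the field values are paraphrased from the two example scenarios above, and the class name and field breakdown are assumptions, not part of the ATAM specification.

```python
from dataclasses import dataclass


@dataclass
class QualityAttributeScenario:
    """One concrete, testable quality attribute scenario (six parts)."""
    attribute: str         # the quality attribute being characterized
    source: str            # source of stimulus (human, external system, ...)
    stimulus: str          # the event arriving at the system
    artifact: str          # the part of the system that is stimulated
    environment: str       # conditions under which the stimulus occurs
    response: str          # activity undertaken after the stimulus arrives
    response_measure: str  # how the response is measured and tested


# The availability scenario from the text
availability = QualityAttributeScenario(
    attribute="availability",
    source="external client",
    stimulus="a message arrives for functionality XXX",
    artifact="system",
    environment="normal operation",
    response="the response is available",
    response_measure="probability of 98%",
)

# The performance scenario from the text
performance = QualityAttributeScenario(
    attribute="performance",
    source="external system",
    stimulus="a message arrives to execute functionality YYY",
    artifact="system",
    environment="normal operation",
    response="a response is provided",
    response_measure="within 8 seconds",
)
```

Writing scenarios down in such a uniform shape makes it obvious when one of the six parts (most often the response measure) is missing and the scenario is therefore not testable.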
Now that we have understood the concepts related to quality attributes, let’s continue with the ATAM method. In order to execute the ATAM process correctly, we need three groups cooperating with each other:
- Evaluation team. A group of experienced architects (three to five people)
- Project decision makers. People with the authority to make changes in the project (project manager, customer, manager)
- Architecture stakeholders. People with an interest in a good architecture so they can do their jobs correctly (developers, testers, integrators, maintainers, performance engineers and users)
The output of the ATAM method must include at least the following artifacts:
- The documentation of the selected architecture. The key artifact to specify the architecture is the SAD (Software Architecture Document)
- An evaluation report that recaps the ATAM method, captures the scenario analysis (quality requirements captured in the form of scenarios), explains the candidate architectures and the underlying rationale of the architectural decision process used to select the right architecture, and summarizes all the work done. We need to specify the architectural decisions in terms of the quality requirements; that is, for each quality scenario, we have to specify the strategies to achieve it. We also need to specify the sensitivity and tradeoff points, that is, decisions that have an important effect on one or more quality attributes. For example, the decision to secure a piece of functionality affects the performance of the system, so there is a tradeoff between security and performance. And finally, we need to specify the architectural risks, that is, undesirable effects of the decisions we make on quality attributes. After identifying the risks, we are able to develop the corresponding mitigation plan.
Now we’re going to talk about the steps to successfully execute the ATAM method. The ATAM method consists of four phases:
- Phase 1 – Partnership and preparation. The evaluation team, the customer and the key project decision makers meet to understand the ATAM method (objectives, inputs, outputs, stakeholders, etc.), the business drivers and the possible architecture approaches to be evaluated. They agree on the final report to be delivered, the formalities (statement of work, non-disclosure agreement), the documentation of the system architecture, the evaluation expectations, etc. This phase may last a few weeks.
- Phase 2 – Initial evaluation, and Phase 3 – Complete evaluation. These are the evaluation phases proper. By this point, the evaluation team has studied the possible architecture approaches and has good insight into the business drivers, the business and system objectives, the constraints and the most important quality attributes. The customer provides information related to the business and quality scenarios. With this information in hand, the evaluation team can assess the architecture approaches and produce the evaluation report. The two phases consist of nine steps; they may take 3 or 4 days of work with an elapsed time of 2 to 3 weeks.
- Phase 4 – Follow-up. In this phase, the evaluation team writes the final report. The key decision makers, along with the chief architect, agree to stop, to change, or to re-evaluate other possible architecture approaches. It is also an evaluation improvement period, so that future evaluations can be executed more efficiently. This phase may last one week.
Now let’s talk about the evaluation phases (Phase 2 and Phase 3). These phases consist of nine steps: steps 1 through 6 are carried out in Phase 2 and steps 7 through 9 in Phase 3, as shown in the following list.
- Phase 2
- Present the ATAM method
- Present the business drivers
- Present the architecture
- Identify architectural approaches
- Generate quality attribute utility tree
- Analyze architectural approaches
- Phase 3
- Brainstorm and prioritize scenarios
- Analyze architectural approaches
- Present results
Now let’s explain the evaluation steps in detail.
Step 1. Present the ATAM method
The evaluation team presents an overview of the ATAM process such as the key steps, techniques (utility tree generation, architecture elicitation and analysis, scenario brainstorming) and output (architectural approaches, utility tree, scenarios, risks, sensitivity points).
Step 2. Present the business drivers
The project’s stakeholders and the evaluation team try to understand the context of the system and the primary business drivers motivating its development. A project decision maker presents the system from the business point of view, including the following information:
- Business goals and context
- Major stakeholders
- High-level functional requirements (described as use cases or user stories) that impact the system architecture
- The most important quality attributes (described as quality scenarios) that impact the system architecture
- Constraints such as technical, managerial, economic and political
Step 3. Present architecture
The lead architect gives a presentation describing the architectural approach used to meet the requirements and the constraints. In order to describe the architecture, it’s very useful to use the 4+1 architectural view model developed by Kruchten. This view model describes the logical view (modules, layers, relationships), the process view (processes, threads, pipelines, synchronization, data flow, events), the development view (code organization) and the physical/deployment view (CPU, storage, devices, network), tied together by use-case scenarios. We also need to describe the risks associated with meeting the architectural requirements. As a rule of thumb, the architect should present the views that were most important during the creation of the architecture.
Step 4. Identify architectural approaches
The evaluation team tries to identify what key architectural approaches are used for realizing the requirements and constraints. Possible architectural approaches are: client-server, multi-layer application, service-oriented architecture, component-based application, publish-subscribe, etc.
In this step, the evaluation team analyzes in depth the architecture presented in step 3, and thereby forms a good idea of the patterns and approaches the architect used in designing the system.
Step 5. Generate quality attribute utility tree
In this step, the evaluation team (along with the project decision makers) identifies, prioritizes and refines the most important quality attribute goals (expressed as quality scenarios) by building a utility tree. The output is a characterization and prioritization of the quality attribute requirements: a prioritized list of scenarios that tells the evaluation team where to probe the architecture approaches and discover risks. This gives us something tangible to assess the system architecture against; we take the scenarios one by one and evaluate how well the architecture responds to each of them.
A utility tree is a top-down approach for characterizing the quality attribute requirements: the most important quality goals become the high-level nodes (performance, security, availability, modifiability, maintainability), and the leaves of the tree are quality scenarios rated by importance (to the success of the system) and difficulty (the architect’s assessment).
The utility tree begins with utility as the root. Utility is an expression of the overall goodness of the system. The quality attributes identified in step 2 are the second-level nodes; the most common are security, performance, availability, modifiability, maintainability and usability. The third-level nodes comprise either further refinements of the quality attributes or quality attribute scenarios that are concrete enough for analysis and prioritization. The scenarios are the leaves of the tree, grouped by the quality attribute they express. A utility tree can contain several scenarios in its leaves, so we prioritize the leaves by assigning a value such as High, Medium or Low. After that, the scenarios are prioritized a second time by associating an ordered pair (importance, difficulty) with values High, Medium and Low. An example of a utility tree is shown in Figure 2.
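As a rough sketch of this structure (the attribute names, scenario texts and ratings below are illustrative, not taken from Figure 2), a utility tree with rated leaves can be represented as a mapping from quality attributes to scenarios, then flattened into the prioritized scenario list:

```python
# Utility tree below the root: quality attributes -> scenario leaves.
# Each leaf carries an (importance, difficulty) pair rated H, M or L.
RANK = {"H": 3, "M": 2, "L": 1}  # numeric order for sorting only

utility_tree = {
    "performance": [
        ("Response for functionality YYY within 8 seconds", ("H", "M")),
    ],
    "availability": [
        ("Functionality XXX available with probability 98%", ("H", "H")),
    ],
    "modifiability": [
        ("Add a new data source with limited code changes", ("M", "L")),
    ],
}


def prioritized_scenarios(tree):
    """Flatten the leaves and sort by (importance, difficulty), highest first."""
    leaves = [
        (scenario, importance, difficulty)
        for attribute, scenarios in tree.items()
        for scenario, (importance, difficulty) in scenarios
    ]
    return sorted(
        leaves,
        key=lambda leaf: (RANK[leaf[1]], RANK[leaf[2]]),
        reverse=True,
    )


for scenario, imp, diff in prioritized_scenarios(utility_tree):
    print(f"({imp},{diff}) {scenario}")
```

The (H,H) leaves surface first, which matches the intent of the method: the scenarios that matter most and are hardest to achieve are the ones the evaluation team probes first.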
Step 6 – Analyze Architectural Approaches
In this step, the evaluation team examines the highest-ranked scenarios one at a time in order to understand how the proposed architecture supports each one, and to identify and document the architectural decisions with their rationale, risks, non-risks, sensitivity points and tradeoffs.
You can record this information using the form shown in Figure 3, which captures the analysis of an architecture approach for one scenario.
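A record along these lines can stand in for the paper form. This is a hypothetical sketch: the field names and the example entries below are assumptions for illustration, not the actual contents of Figure 3.

```python
from dataclasses import dataclass, field


@dataclass
class ApproachAnalysis:
    """Analysis of one architecture approach against one scenario."""
    scenario: str
    architectural_decisions: list = field(default_factory=list)
    sensitivity_points: list = field(default_factory=list)
    tradeoff_points: list = field(default_factory=list)
    risks: list = field(default_factory=list)
    non_risks: list = field(default_factory=list)


# Hypothetical example for an availability scenario
analysis = ApproachAnalysis(
    scenario="Functionality XXX available with probability 98%",
    architectural_decisions=["active/passive replication of the server"],
    sensitivity_points=["failover time depends on the heartbeat interval"],
    tradeoff_points=["replication improves availability but adds latency"],
    risks=["failover has no planned test under peak load"],
    non_risks=["heartbeat traffic is negligible on the internal network"],
)
```

One filled-in record per analyzed scenario also gives the evaluation team the raw material for the risk list required in the final report.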
At this point, Phase 2 of the evaluation process has ended. The evaluation team documents the summaries over an elapsed time of one or two weeks; more scenarios can be analyzed and open questions resolved.
When the evaluation team and the project decision makers are ready to resume the evaluation process, the stakeholders are assembled and Phase 3 starts. The focus of this phase is to elicit the points of view of the various stakeholders in order to verify the results of Phase 2.
Step 7 – Brainstorm and prioritize scenarios
In this step, the stakeholders generate scenarios using a facilitated brainstorming process. Once the scenarios are generated, they must be prioritized. First, stakeholders are asked to merge scenarios that they think represent the same quality attribute requirement. After that, they vote for the scenarios they think are most important. Each stakeholder is given a number of votes equal to 30% of the number of scenarios, rounded up. For example, if twenty scenarios have been collected, each stakeholder is given six votes. Each stakeholder casts their votes publicly. Once the votes are tallied, the evaluation leader orders the scenarios by vote total and establishes a cutoff line; scenarios above the line are used in the next steps. For example, the team might consider only the top five scenarios.
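The vote allocation and the tallying above are mechanical enough to sketch in a few lines. The ballots and scenario names below are made up for illustration; only the 30%-rounded-up rule comes from the method as described.

```python
from collections import Counter
from math import ceil


def votes_per_stakeholder(num_scenarios: int) -> int:
    """Each stakeholder gets 30% of the scenario count, rounded up."""
    return ceil(0.3 * num_scenarios)


print(votes_per_stakeholder(20))  # twenty scenarios -> 6 votes each

# Tallying public votes and ranking scenarios (ballots are illustrative;
# a stakeholder may put more than one vote on the same scenario):
ballots = [
    ["S1", "S2", "S3"],  # stakeholder A's votes
    ["S2", "S3", "S2"],  # stakeholder B's votes
    ["S2", "S1", "S4"],  # stakeholder C's votes
]
tally = Counter(vote for ballot in ballots for vote in ballot)
ranked = [scenario for scenario, _ in tally.most_common()]

cutoff = 2  # the evaluation leader's cutoff line
print(ranked[:cutoff])  # only scenarios above the line go forward
```

Sorting by vote total and cutting at the line leaves the evaluation team with a short, stakeholder-endorsed list to compare against the utility tree.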
The prioritized list of brainstormed scenarios is compared with the list in the utility tree from step 5. If they agree, it indicates alignment between what the stakeholders want and the solution the architect is providing. If additional scenarios are discovered, it indicates some risks in the proposed architecture, so the new scenarios are added to the utility tree and the architecture is re-evaluated against them.
Step 8 – Analyze Architectural Approaches
In this step, the evaluation team guides the architect in analyzing the highest-ranked scenarios from step 7. The architect identifies how the architectural approaches are impacted by the scenarios generated in the previous step. Risks, non-risks, sensitivity points and tradeoffs continue to be identified, and architectural decisions are documented.
Step 9 – Present Results
And finally, the information generated by the ATAM process needs to be presented to the stakeholders. The evaluation team can write a report and present the ideas with slides. The presentation must cover the business context and drivers, the requirements and constraints, the documentation of the selected architecture, the set of prioritized scenarios, the utility tree, and the discovered risks, non-risks, sensitivity points and tradeoffs.
Now that you know this important analysis method to evaluate the hypothetical software architecture, you can apply it to your own business scenario.