Quality Attributes


  1. Quality Attributes • Often known as “–ilities” … • Reliability, Availability, Portability, Scalability, Performance (!) • … but much more than this • Part of a system’s non-functional requirements • “How” the system achieves its functional requirements

  2. Architecture and Quality Attributes • Achieving quality attributes must be considered throughout design, implementation, and deployment • Satisfactory results are a matter of getting ‘the big picture’ • Architecture is critical to the realization of many qualities of interest in a system • These qualities should be designed in and can be evaluated at the architectural level • Architecture, by itself, is unable to achieve qualities • It provides the foundation for achieving quality, but this foundation will be to no avail if attention is not paid to the details • They influence each other

  3. Classes of Quality Attributes • Qualities of the system • Availability, modifiability, performance, security, testability, usability, scalability … • Business qualities • Time to market • Cost and benefit • Projected lifetime of the system • Rollout schedule • Qualities of the architecture itself • Conceptual integrity • Correctness and completeness • Buildability

  4. Comments • Business qualities (such as time to market) are affected by the architecture • Interlinked: qualities of the architecture itself (e.g., conceptual integrity) indirectly affect other qualities, such as modifiability

  5. System Quality Attributes • Availability, modifiability, performance, security, testability, usability, scalability … • Warning: use operational definitions! • Architects are often told: • “My application must be fast/secure/scale” • Far too imprecise to be of any use at all • Quality attributes (QAs) must be made precise/measurable for a given system design, e.g. • “It must be possible to scale the deployment from an initial 100 geographically dispersed user desktops to 10,000 without an increase in effort/cost for installation and configuration.”

  6. System Quality Attribute Specification • OK, QAs must be concrete • But what about testable? • Test scalability by installing the system on 10K desktops? • Often careful analysis of a proposed solution is all that is possible • “It’s all talk until the code runs” • Can you do better?

  7. System Quality Attribute Problems • Definitions are not operational • Discussion often focuses on which quality a particular aspect belongs to • Is a system failure an aspect of availability, an aspect of security, or an aspect of usability? All three attribute communities would claim ownership of a system failure • Each attribute community has developed its own vocabulary: • performance community → “events” • security community → “attacks” • availability community → “failures” • usability community → “user input” • All of these may actually refer to the same occurrence • Solution: use quality attribute ‘scenarios’ and a unified language

  8. An Analysis Framework for Specifying Quality Attributes QUALITY ATTRIBUTE SCENARIOS [For each quality-attribute-specific requirement.] • Source of stimulus. This is some entity (a human, a computer system, or any other actuator) that generated the stimulus. • Stimulus. A condition that needs to be considered when it arrives at a system. • Environment. The stimulus occurs within certain conditions. The system may be in an overload condition or may be idle when the stimulus occurs. • Artifact. Some artifact is stimulated. This may be the whole system or some pieces of it. • Response. The activity undertaken after the arrival of the stimulus. • Response measure. When the response occurs, it should be measurable in some fashion so that the requirement can be tested.
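
The six-part framework above maps naturally onto a record type. Below is a minimal sketch in Python: the field names mirror the framework, and the example instance restates the scalability requirement from slide 5 (the concrete values are illustrative, not from a real specification).

```python
from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    """One quality-attribute requirement, captured as the six parts above."""
    source: str            # who or what generates the stimulus
    stimulus: str          # the condition arriving at the system
    environment: str       # system state when the stimulus occurs
    artifact: str          # the part of the system that is stimulated
    response: str          # the activity undertaken after arrival
    response_measure: str  # how the response is measured, so it is testable

# Illustrative instance (values invented for the example):
scale_out = QualityAttributeScenario(
    source="operations team",
    stimulus="grow from 100 to 10,000 geographically dispersed desktops",
    environment="normal operation",
    artifact="deployment and configuration infrastructure",
    response="new desktops are provisioned automatically",
    response_measure="no increase in per-desktop installation effort/cost",
)
```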

  9. Availability Scenario Example • System reaction to unanticipated message

  10. Modifiability Scenario Example • Specifying a modifiability QA requirement

  11. Concerns • How to generate meaningful scenarios in practice? • Completeness?

  12. Availability • Measured by the proportion of the required time the application is usable • E.g.: • 100% available during business hours • No more than 2 hours scheduled downtime per week • 24x7x52 (100% availability) • Availability = MTTF / (MTTF + MTTR) • Note: scheduled downtime is excluded • Period of loss of availability determined by: • Time to detect failure • Time to correct failure • Time to restart application • Distinguish between faults and failures
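
The availability formula above translates directly into code. A minimal sketch, with invented example figures:

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTTF / (MTTF + MTTR).
    Scheduled downtime is excluded, per the definition above."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Invented figures: a mean of 500 h between failures, and 2 h to detect,
# correct, and restart after each one, gives roughly 99.6% availability.
print(f"{availability(500, 2):.4%}")  # 99.6016%
```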

  13. Availability Scenarios • Source of stimulus. Internal or external indications of faults or failure, since the desired system response may be different • Stimulus. A fault of one of the following classes occurs: • omission: a component fails to respond to an input • crash: the component repeatedly suffers omission faults • timing: a component responds, but the response is early or late • response (byzantine): a component responds with an incorrect value • Artifact. Specifies the resource that is required to be available • Environment. The state of the system when the fault or failure occurs may also affect the desired system response • Response. Possible reactions to a system failure: • logging the failure, notifying selected users or other systems, switching to a degraded mode with either less capacity or less function, shutting down external systems, or becoming unavailable during repair • Response measure. Metric of success: e.g., availability percentage, time to repair, etc.

  14. Designing for Availability • Faults vs. Failures • Tactics • Fault detection • Fault recovery • Fault prevention

  15. Tactics for Availability • Fault detection • Ping/echo; • Heartbeat; • Exceptions • Fault recovery • Mostly redundancy based • [byzantine faults] Voting: multiple processes working in parallel. • [crash, timing] Active redundancy – hot restart • [crash] Passive redundancy (warm restart), spare. • Reintroduction: shadow operation, resynchronization, checkpoint/rollback • Fault prevention • Removal from service; Transactions
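
As a concrete illustration of the heartbeat tactic listed above, a minimal sketch: a monitor declares a component faulty when periodic heartbeats stop arriving. The timeout value is an assumption for the example.

```python
import time

class HeartbeatMonitor:
    """Declares a component failed if no heartbeat arrives within `timeout` seconds."""
    def __init__(self, timeout: float = 3.0):
        self.timeout = timeout
        self.last_beat = time.monotonic()

    def beat(self) -> None:
        """Called on behalf of the monitored component for each heartbeat message."""
        self.last_beat = time.monotonic()

    def is_alive(self) -> bool:
        """True while the most recent heartbeat is within the timeout window."""
        return (time.monotonic() - self.last_beat) < self.timeout

monitor = HeartbeatMonitor(timeout=3.0)
monitor.beat()             # component reports in
assert monitor.is_alive()  # still within the timeout window
```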


  17. Modifiability • Modifications to a software system during its lifetime are a fact of life • Ideal: modifiable systems that are easier to change/evolve • Modifiability should be assessed in the context of how a system is likely to change • No need to facilitate changes that are highly unlikely to occur • The impact of designing for modifiability is rarely easy to quantify • One strategy: minimize dependencies • Changes isolated to single components are likely to be less expensive than those that cause ripple effects across the architecture

  18. Modifiability • Source of stimulus. Who makes the change: e.g., a developer, a system administrator, or an end user • Stimulus. What changes? The addition of a function, the modification of an existing function, the deletion of a function, or a change to the qualities of the system • Artifact. Specifies what is to be changed: the functionality of a system, its platform, its user interface, its environment, or another system with which it interoperates • Environment. When the change can be made: design time, compile time, build time, initiation time, or runtime • Response. Constraints on the change, how to test and deploy it • Response measure. Quantitative measure of cost

  19. Modifiability Tactics • Goals: • Reduce the number of modules affected by a change → localize modifications • Limit modifications to these modules → prevent ripple effects • Control deployment time and cost → defer binding time

  20. Modifiability Tactics • Goals: • Reduce the number of modules affected by a change → localize modifications • Limit modifications to these modules → prevent ripple effects • Control deployment time and cost → defer binding time • Techniques: • Maintain semantic coherence • Coupling & cohesion metrics • Abstract common services • Anticipate expected changes • Generalize the module

  21. Modifiability Tactics • Goals: • Reduce the number of modules affected by a change → localize modifications • Limit modifications to these modules → prevent ripple effects • Control deployment time and cost → defer binding time • A taxonomy of dependencies between modules: • Syntax of data and service invocation • Semantics of data and service • Sequence of data and control • Identity of interfaces • Location of called service • QoS provided • Tactics: • Hide information • Maintain existing interfaces • Restrict communication paths • Use an intermediary (proxy; see the sketch below)
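
A minimal sketch of the intermediary tactic: callers depend on a stable interface, and the proxy hides the identity and location of the called service, so changes localize behind it. All class and method names here are invented for illustration.

```python
from abc import ABC, abstractmethod

class PaymentService(ABC):
    """Stable interface callers depend on; implementations may change freely."""
    @abstractmethod
    def charge(self, account: str, amount: float) -> bool: ...

class RemotePaymentService(PaymentService):
    def charge(self, account: str, amount: float) -> bool:
        # A real implementation would invoke the remote system here.
        return True

class PaymentProxy(PaymentService):
    """Intermediary: hides the location/identity of the real service and
    restricts communication paths to this single point."""
    def __init__(self, target: PaymentService):
        self._target = target

    def charge(self, account: str, amount: float) -> bool:
        # Cross-cutting changes (logging, retries, rerouting) localize here,
        # so they ripple no further than the proxy.
        return self._target.charge(account, amount)

service: PaymentService = PaymentProxy(RemotePaymentService())
service.charge("acct-42", 9.99)
```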

  22. Modifiability Tactics • Goals: • Reduce the number of modules affected by a change → localize modifications • Limit modifications to these modules → prevent ripple effects • Control deployment time and cost → defer binding time • Issues: • Reduce time to deploy • Allow non-programmers to make changes • Tactics (see the sketch below): • Runtime registration • Configuration files • Dynamically loaded code
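
A minimal sketch of deferring binding time with a configuration file plus dynamically loaded code: which module handles messages is decided at startup, not at compile time, so a non-programmer can change it by editing the file. The file name and key are invented for the example; the demo binds to the standard-library json module just to stay self-contained.

```python
import importlib
import json
from pathlib import Path

def load_handler(config_path: str):
    """Bind to an implementation at runtime instead of compile time:
    the module to use is named in a config file, not hard-coded."""
    config = json.loads(Path(config_path).read_text())
    # Swapping implementations is now an edit to the file, not a rebuild.
    return importlib.import_module(config["message_handler"])

# Self-contained demo: write a config that names a stdlib module.
Path("handlers.json").write_text('{"message_handler": "json"}')
handler = load_handler("handlers.json")
print(handler.dumps({"ok": True}))  # the dynamically chosen module in use
```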

  23. Performance • Many examples of poor performance in enterprise applications • Performance requirements: • Multiple metrics: throughput, response time, deadlines • Average (sustained) vs. peak • Guarantees? Often specified as a median and a 99th percentile

  24. Performance – Throughput • Measure of the amount of work an application must perform in unit time • E.g., transactions per second, messages per minute • Is the required throughput: • Average? • Peak? • Many systems have low average but high peak throughput requirements

  25. Throughput Example • Throughput of a message queuing system • Messages per second (mps) • Maximum sustainable throughput (MST) • Note: throughput changes as the number of receiving threads increases
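
How might sustained throughput be measured? A minimal sketch: drive the system for a fixed window and count completed messages. The workload function here is a trivial placeholder for the real receive-and-handle work.

```python
import time

def measure_throughput(process_message, duration_s: float = 5.0) -> float:
    """Messages per second (mps) sustained over a fixed measurement window."""
    count = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        process_message()  # placeholder for receiving and handling a message
        count += 1
    return count / duration_s

# Example with a trivial stand-in workload:
print(f"{measure_throughput(lambda: None, duration_s=1.0):.0f} mps")
```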

  26. Something to watch for … • Definitions: what is a • Transaction? • Message? • Request? • All are application/context-specific measures • “System must achieve 100 mps throughput” • BAD!! • “System must achieve 100 mps peak throughput for PaymentReceived messages” • GOOD!!!

  27. Performance Tactics • Contributors to response time • Resource consumption • Blocked time • Contention for resources • Availability of resources • Dependency on other computations

  28. Testability • Estimate: 40% of development cost goes to testing • Testability: assuming that the software has at least one fault, the probability that this will be detected in the next testing round • Need a system that is controllable and observable • Testing harness: control the internal state of components, pass inputs to the system, observe outputs
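
A minimal sketch of what “controllable and observable” means for a test harness: the component exposes seams through which the harness sets internal state, feeds inputs, and reads back both state and a record of behaviour. Names are illustrative.

```python
class Account:
    """Component under test: controllable (state can be set) and
    observable (state and outputs can be read back)."""
    def __init__(self, balance: float = 0.0):
        self.balance = balance    # internal state, settable by the harness
        self.log: list[str] = []  # observable record of behaviour

    def withdraw(self, amount: float) -> bool:
        ok = amount <= self.balance
        if ok:
            self.balance -= amount
        self.log.append(f"withdraw {amount}: {'ok' if ok else 'refused'}")
        return ok

# Harness: control internal state, pass input, observe output.
acct = Account(balance=10.0)             # control: known starting state
assert acct.withdraw(4.0) is True        # input
assert acct.balance == 6.0               # observe resulting state
assert acct.log == ["withdraw 4.0: ok"]  # observe recorded behaviour
```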

  29. Scalability • “How well a solution to some problem will work when the size of the problem increases.” • 4 common scalability issues in IT systems: • Request load • Connections • Data size • Deployments

  30. Scalability – The Reality • Adding more hardware should improve performance: • scalability must be achieved without modifications to the application architecture • The reality is often different! • Applications may exhibit a decrease in throughput and a subsequent exponential increase in response time: • increased load causes increased contention for resources such as CPU, network, and memory • each request consumes some additional resource (buffer space, locks, and so on) in the application, and eventually these are exhausted
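
The non-linear blow-up described above can be made concrete with a standard single-server queueing approximation, R = S / (1 − ρ): response time R grows without bound as utilization ρ approaches 1. The M/M/1 model is an illustrative assumption, not from the slides.

```python
def response_time(service_time_ms: float, utilization: float) -> float:
    """M/M/1 approximation: response time blows up as utilization -> 1."""
    assert 0 <= utilization < 1
    return service_time_ms / (1 - utilization)

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {rho:.0%}: {response_time(10, rho):6.0f} ms")
# 50% -> 20 ms, 90% -> 100 ms, 99% -> 1000 ms: near saturation, a small
# increase in load causes a disproportionate increase in response time.
```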

  31. Scalability – Connections • What happens if the number of simultaneous connections to an application increases? • If each connection consumes a resource? • Exceed the maximum number of connections? • ISP example: • Each user connection spawned a new process • Virtual memory on each server was exceeded at 2,000 users • Needed to support 100,000s of users • Tech crash …
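
One common guard against this failure mode is to cap concurrency with a bounded pool instead of spawning a process per user, shedding load once the cap is reached. A minimal sketch; the limit is an invented figure.

```python
import threading

MAX_CONNECTIONS = 2000  # invented cap, e.g. what memory can actually sustain
pool = threading.BoundedSemaphore(MAX_CONNECTIONS)

def handle_connection(serve) -> bool:
    """Admit a connection only if a slot is free, instead of letting each
    new user consume unbounded resources (the ISP failure mode above)."""
    if not pool.acquire(blocking=False):
        return False  # shed load gracefully rather than exhaust memory
    try:
        serve()
        return True
    finally:
        pool.release()

handle_connection(lambda: None)  # trivial demo workload
```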

  32. Scalability – Data Size • How does an application behave as the data it processes increases in size? • Chat application sees average message size double? • Database table size grows from 1 million to 20 million rows? • Image analysis algorithm processes images of 100MB instead of 1MB? • Can application/algorithms scale to handle increased data requirements?

  33. Scalability – Deployment • How does the effort to install/deploy an application increase as installation base grows? • Install new users? • Install new servers? • Solutions typically revolve around automatic download/installation • E.g. downloading applications from the Internet

  34. Design Trade-offs • QAs are rarely orthogonal • They interact and affect each other: • a highly secure system may be difficult to integrate • a highly available application may trade lower performance for greater availability • a high-performance application may be tied to a given platform, and hence not be easily portable • Architects must create solutions that make sensible design compromises • it is not possible to fully satisfy all competing requirements • Must satisfy all stakeholder needs • This is the difficult bit!

  35. Summary • QAs are part of an application’s non-functional requirements • There are many QAs • The architect must decide which are important for a given application • Understand the implications for the application • Understand competing requirements and trade-offs