Course Notes Set 5: Software Quality Assurance
Computer Science and Software Engineering, Auburn University
What is Software Quality?
Simplistically, quality is an attribute of software implying that the software meets its specification. This definition is too simple for ensuring quality in software systems:
- Software specifications are often incomplete or ambiguous
- Some quality attributes are difficult to specify
- Tension exists between some quality attributes, e.g., efficiency vs. reliability
Software Quality Attributes
Modularity, Complexity, Portability, Usability, Reusability, Efficiency, Learnability, Safety, Security, Reliability, Resilience, Robustness, Understandability, Testability, Adaptability
Software Quality
Conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.
- Software requirements are the foundation from which quality is measured; lack of conformance to requirements is lack of quality.
- Specified standards define a set of development criteria that guide the manner in which software is engineered; if the criteria are not met, lack of quality will almost surely result.
- There is a set of implicit requirements that often goes unmentioned; if software conforms to its explicit requirements but fails to meet its implicit requirements, its quality is suspect.
[Adapted from Pressman 4th Ed]
Software Quality Assurance
To ensure quality in a software product, an organization must take a three-pronged approach to quality management:
1. Organization-wide policies, procedures, and standards must be established.
2. Project-specific policies, procedures, and standards must be tailored from the organization-wide templates.
3. Quality must be controlled; that is, the organization must ensure that the appropriate procedures are followed for each project.
Standards (e.g., ISO, ANSI/IEEE) exist to help an organization draft an appropriate software quality assurance plan. External entities can be contracted to verify that an organization is standards-compliant.
A Software Quality Plan
[Figure: the ISO 9000 model — an organization quality plan is the basis from which individual quality plans for Project A, Project B, and Project C are derived. Adapted from Sommerville 5th Ed]
SQA Activities
- Applying technical methods: to help the analyst achieve a high-quality specification and a high-quality design
- Conducting formal technical reviews: a stylized meeting conducted by technical staff with the sole purpose of uncovering quality problems
- Testing software: a series of test case design methods that help ensure effective error detection
- Enforcing standards
- Controlling change: applied during software development and maintenance
- Measurement: track software quality and assess the ability of methodological and procedural changes to improve software quality (see the sketch below)
- Record keeping and reporting: provide procedures for the collection and dissemination of SQA information
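To make the measurement activity concrete, here is a minimal sketch that computes defect density (defects per KLOC), one common quality metric; the module names and counts are hypothetical.

```python
# Minimal sketch of an SQA measurement activity: tracking defect density.
# All module names and figures below are hypothetical.

def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC) -- a coarse quality metric."""
    return defects / (lines_of_code / 1000)

# Per-module data as it might be collected from reviews and testing.
modules = {
    "parser":    {"defects": 14, "loc": 4200},
    "scheduler": {"defects": 3,  "loc": 2500},
    "reporting": {"defects": 9,  "loc": 6100},
}

for name, m in modules.items():
    density = defect_density(m["defects"], m["loc"])
    print(f"{name:10s} {density:5.2f} defects/KLOC")
```

Tracking such a metric over successive releases is what lets an organization assess whether a process change actually improved quality.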
Advantages of SQA
- Software will have fewer latent defects, resulting in reduced effort and time spent during testing and maintenance
- Higher reliability will result in greater customer satisfaction
- Maintenance costs can be reduced
- The overall life-cycle cost of the software is reduced
Disadvantages of SQA
- It is difficult to institute in small organizations, where the resources needed to perform the necessary activities are not available
- It represents cultural change, and change is never easy
- It requires the expenditure of dollars that would not otherwise be explicitly budgeted to software engineering or QA
Quality Reviews
The fundamental method of validating the quality of a product or a process. Quality reviews:
- Are applied during and/or at the end of each life-cycle phase
- Point out needed improvements in the product of a single person or team
- Confirm those parts of a product in which improvement is either not desired or not needed
- Achieve technical work of more uniform, or at least more predictable, quality than can be achieved without reviews, making technical work more manageable
Quality reviews can have different intents:
- Review for defect removal
- Review for progress assessment
- Review for consistency and conformance
Quality Reviews (cont'd)
[Figure: relative cost of repairing a defect, by the phase in which it is caught — specification review during requirements analysis: 1x; code review during design and coding: 10x; test review during testing: 15-70x; defects that reach the customer and must be repaired during maintenance cost many times more. Adapted from Pressman 4th Ed]
Cost Impact of Software Defects
[Figure: each development step receives errors from previous steps and passes errors on to the next step. Adapted from Pressman 4th Ed]
Defect Amplification and Removal
[Figure: worked example of defect amplification — preliminary design generates 10 errors; in detailed design, 6 pass through unchanged and 4 are amplified, and with newly generated errors 37 are passed on; in code/unit testing, 10 pass through and 27 are amplified, giving 116 errors in total, of which 94 are passed to integration testing.]
Defect Amplification (cont’d)
[Figure, continued: integration testing receives 94 errors and passes on 47; validation testing receives 47 and passes on 24; system testing receives 24 and leaves 12 latent errors.]
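The progression 94 → 47 → 24 → 12 is consistent with each test phase removing about half of the errors it receives. The sketch below is a minimal model of the amplification-and-removal idea; the 50% detection efficiency is inferred from those figures, and the other parameters are placeholders for illustration.

```python
# Minimal sketch of a defect amplification-and-removal model.
# Each step passes some errors through, amplifies others, adds new ones,
# then removes a fraction according to its detection efficiency.
# The 50% efficiency below is inferred from the 94 -> 47 -> 24 -> 12 figures.

def step(errors_in: float, amplified_fraction: float, amplification: float,
         new_errors: float, detection_efficiency: float) -> float:
    passed = errors_in * (1 - amplified_fraction)
    amplified = errors_in * amplified_fraction * amplification
    total = passed + amplified + new_errors
    return total * (1 - detection_efficiency)  # errors escaping this step

errors = 94.0  # errors entering integration testing, from the figure
for phase in ("integration", "validation", "system"):
    # Assume test phases generate and amplify no errors of their own.
    errors = step(errors, 0.0, 1.0, 0.0, 0.5)
    print(f"after {phase} testing: ~{round(errors)} errors remain")
# -> 47, 24, and finally 12 latent errors
```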
Review Checklist for Systems Engineering
- Are major functions defined in a bounded and unambiguous fashion?
- Are interfaces between system elements defined?
- Are performance bounds established for the system as a whole and for each element?
- Are design constraints established for each element?
- Has the best alternative been selected?
- Is the solution technologically feasible?
- Has a mechanism for system validation and verification been established?
- Is there consistency among all system elements?
[Adapted from Behforooz and Hudson]
Review Checklist for Software Project Planning
- Is the software scope unambiguously defined and bounded?
- Is terminology clear?
- Are resources adequate for the scope?
- Are resources readily available?
- Are tasks properly defined and sequenced?
- Is the basis for cost estimation reasonable? Has the estimate been developed using two different sources?
- Have historical productivity and quality data been used?
- Have differences in estimates been reconciled?
- Are pre-established budgets and deadlines realistic?
- Is the schedule consistent?
Review Checklist for Software Requirements Analysis
- Is the information domain analysis complete, consistent, and accurate?
- Is problem partitioning complete?
- Are external and internal interfaces properly defined?
- Are all requirements traceable to the system level?
- Is prototyping conducted for the customer?
- Is performance achievable within the constraints imposed by other system elements?
- Are the requirements consistent with the schedule, resources, and budget?
- Are the validation criteria complete?
Review Checklist for Software Design (Preliminary Design Review)
- Are software requirements reflected in the software architecture?
- Is effective modularity achieved? Are modules functionally independent?
- Is the program architecture factored?
- Are interfaces defined for modules and external system elements?
- Is the data structure consistent with the software requirements?
- Has maintainability been considered?
Review Checklist for Software Design (Design Walkthrough)
- Does the algorithm accomplish the desired function?
- Is the algorithm logically correct?
- Is the interface consistent with the architectural design?
- Is the logical complexity reasonable?
- Have error handling and "antibugging" been specified?
- Are local data structures properly defined?
- Are structured programming constructs used throughout?
- Is the design detail amenable to the implementation language?
- Which operating-system or language-dependent features are used?
- Is compound or inverse logic used? (see the example below)
- Has maintainability been considered?
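To illustrate the "compound or inverse logic" item, here is a hypothetical before-and-after that a walkthrough might produce: the two functions are logically equivalent, but the inverted, compound condition in the first is much harder to verify by inspection.

```python
# Hypothetical example of the "compound or inverse logic" walkthrough item.
# Both functions are logically equivalent; a reviewer would flag the first.

def can_ship_unclear(order):
    # Inverse, compound logic: the double negation obscures the intent.
    return not (not order.paid or not order.in_stock)

def can_ship_clear(order):
    # Equivalent positive form (via De Morgan's laws): easy to verify.
    return order.paid and order.in_stock
```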
Review Checklist for Coding
- Is the design properly translated into code? (The results of the procedural design should be available at this review)
- Are there misspellings or typos?
- Has proper use been made of language conventions?
- Is there compliance with coding standards for language style, comments, and module prologues?
- Are incorrect or ambiguous comments present?
- Are typing and data declarations proper?
- Are physical constants correct?
- Have all items on the design walkthrough checklist been reapplied (as required)?
Review Checklist for Software Testing (Test Plan)
- Have the major test phases been properly identified and sequenced?
- Has traceability to the validation criteria/requirements been established as part of software requirements analysis?
- Are major functions demonstrated early?
- Is the test plan consistent with the overall project plan?
- Has a test schedule been explicitly defined?
- Are test resources and tools identified and available?
- Has a test recordkeeping mechanism been established?
- Have test drivers and stubs been identified, and has the work to develop them been scheduled? (a sketch follows this list)
- Has stress testing for the software been specified?
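As a sketch of the "test drivers and stubs" item: a driver exercises the unit under test with controlled inputs, while a stub stands in for a component the unit depends on that is not yet available. All names and values below are hypothetical.

```python
# Hypothetical sketch of a test driver and a test stub.

def apply_discount(price: float, lookup_rate) -> float:
    """Unit under test: depends on an external rate-lookup component."""
    return price * (1 - lookup_rate())

def stub_lookup_rate() -> float:
    # Stub: stands in for the real (unfinished) pricing service by
    # returning a fixed, predictable rate.
    return 0.25

def driver():
    # Driver: invokes the unit with controlled inputs and checks the results.
    assert apply_discount(100.0, stub_lookup_rate) == 75.0
    assert apply_discount(0.0, stub_lookup_rate) == 0.0
    print("all driver checks passed")

driver()
```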
Review Checklist for Software Testing (Test Procedure)
- Have both white-box and black-box tests been specified?
- Have all independent logic paths been tested?
- Have test cases been identified and listed with their expected results?
- Is error handling to be tested?
- Are boundary values to be tested? (see the sketch below)
- Are timing and performance to be tested?
- Has acceptable variation from expected results been specified?
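The sketch below illustrates two of these items: boundary-value test cases listed with expected results, plus a check of error handling at the invalid boundaries. The function being tested is hypothetical.

```python
# Hypothetical boundary-value test cases, listed with expected results,
# for a function that classifies an exam score (0-100) as pass/fail at 60.

def classify(score: int) -> str:
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

# (input, expected result) pairs concentrated at the boundaries 0, 60, 100.
cases = [(0, "fail"), (59, "fail"), (60, "pass"), (61, "pass"), (100, "pass")]
for score, expected in cases:
    assert classify(score) == expected, (score, expected)

# Error handling is exercised at the invalid boundaries -1 and 101.
for bad in (-1, 101):
    try:
        classify(bad)
        raise AssertionError(f"{bad} should have been rejected")
    except ValueError:
        pass

print("all boundary cases passed")
```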
Review Checklist for Maintenance
- Have the side effects associated with the change been considered?
- Has the request for change been documented, evaluated, and approved?
- Has the change, once made, been documented and reported to interested parties?
- Have appropriate FTRs been conducted?
- Has a final acceptance review been conducted to assure that all software has been properly updated, tested, and replaced?
Formal Technical Review (FTR)
A software quality assurance activity performed by software engineering practitioners. Its objectives:
- Uncover errors in function, logic, or implementation for any representation of the software
- Verify that the software under review meets its requirements
- Assure that the software has been represented according to predefined standards
- Achieve software that is developed in a uniform manner
- Make projects more manageable
FTR is actually a class of reviews:
- Walkthroughs
- Inspections
- Round-robin reviews
- Other small-group technical assessments of the software
The Review Meeting
Constraints:
- Between 3 and 5 people (typically) are involved
- Advance preparation should occur, but should require no more than 2 hours of work from each person
- The duration should be less than two hours
Components:
- Product: a component of software to be reviewed
- Producer: the individual who developed the product
- Review leader: appointed by the project leader; evaluates the product for readiness, generates copies of the product materials, and distributes them to 2 or 3 reviewers
- Reviewers: spend between 1 and 2 hours reviewing the product, making notes, and otherwise becoming familiar with the work
- Recorder: the individual who records (in writing) all important issues raised during the review
Review Reporting and Recordkeeping
Review Summary Report
- What was reviewed?
- Who reviewed it?
- What were the findings and conclusions?
Review Issues List
- Identifies the problem areas within the product
- Serves as an action-item checklist that guides the producer as corrections are made
Guidelines for FTR
- Review the product, not the producer
- Set an agenda and maintain it
- Limit debate and rebuttal
- Enunciate problem areas, but don't attempt to solve every problem that is noted
- Take written notes
- Limit the number of participants and insist upon advance preparation
- Develop a checklist for each product that is likely to be reviewed
- Allocate resources and time schedules for FTRs
- Conduct meaningful training for all reviewers
- Review your earlier reviews (if any)
Reviewer’s Preparation
- Be sure that you understand the context of the material
- Skim all product material to understand the location and format of the information
- Read the product material and annotate a hardcopy
- Pose your written comments as questions
- Avoid issues of style
- Inform the review leader if you cannot prepare
Results of the Review Meeting
All attendees of the FTR must make a decision:
- Accept the product without further modification
- Reject the product due to severe errors (and conduct another review after corrections have been made)
- Accept the product provisionally (minor corrections are needed, but no further reviews are required)
A sign-off is then completed, indicating participation in and concurrence with the review team's findings.
Software Reliability
The probability of failure-free operation for a specified time in a specified environment. This can mean very different things for different systems and different users. Informally, reliability is a measure of the users' perception of how well the software provides the services they need.
- It is not an objective measure
- It must be based on an operational profile (see the sketch below)
- It must account for the widely varying consequences of different errors
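As a sketch of what "based on an operational profile" means: reliability can be estimated by drawing test demands with the same frequency distribution that real use produces, so faults in heavily used features weigh more. The profile and failure probabilities below are entirely hypothetical.

```python
import random

# Hypothetical operational profile: how often users exercise each input class.
profile = {"search": 0.70, "browse": 0.25, "admin_report": 0.05}

# Hypothetical fault model: probability that one demand on the class fails.
failure_prob = {"search": 0.0, "browse": 0.001, "admin_report": 0.2}

def estimate_reliability(runs: int = 100_000) -> float:
    """Fraction of profile-weighted demands that complete without failure."""
    classes = list(profile)
    weights = [profile[c] for c in classes]
    failures = 0
    for _ in range(runs):
        c = random.choices(classes, weights)[0]
        if random.random() < failure_prob[c]:
            failures += 1
    return 1 - failures / runs

print(f"estimated reliability per demand: {estimate_reliability():.4f}")
# admin_report fails 20% of the time, but it is only 5% of the profile, so
# the estimate stays near 0.99 -- reliability reflects usage, not just faults.
```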
IO Mapping
[Figure: the software maps an input set to an output set; a subset of the inputs causes erroneous outputs. Adapted from Sommerville 5th Ed]
Software Faults and Failures
A failure corresponds to erroneous/unexpected runtime behavior observed by a user. A fault is a static software characteristic that can cause a failure to occur. The presence of a fault doesn't necessarily imply the occurrence of a failure.
[Figure: the input set contains the inputs of Users A, B, and C along with a subset of erroneous inputs; a user whose inputs never fall within the erroneous subset never observes a failure. Adapted from Sommerville 5th Ed]
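A minimal, hypothetical illustration of the distinction: the function below contains a fault, but only a user whose inputs reach it ever observes a failure.

```python
# Hypothetical illustration: a fault that becomes a failure only for
# certain inputs.

def average(values):
    # Fault: no guard against an empty list -- a static property of the code.
    return sum(values) / len(values)

print(average([2.0, 4.0, 6.0]))  # User A's inputs: no failure is observed.
# average([])                    # User B's input: the fault surfaces as a
#                                # failure (ZeroDivisionError) at runtime.
```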
Reliability Improvements
Software reliability improves when faults in the most frequently used portions of the software are removed. Removing X% of the faults does not necessarily yield an X% improvement in reliability: in a 1987 study by Mills et al., removing 60% of the faults produced only a 3% improvement in reliability (see the simulation sketch below). Removing the faults with the most serious consequences is the primary objective.
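A small, hypothetical simulation of why fault counts and reliability diverge: if most faults sit in rarely executed code, removing them barely changes the failure rate users experience. The numbers are invented to make the point and are not taken from the Mills study.

```python
# Hypothetical simulation: removing many faults != a proportional
# reliability gain. Each fault has a probability that typical use hits it;
# the failure rate per demand is the sum over the faults still present.

faults = {f"rare_{i}": 0.0001 for i in range(9)}  # 9 faults on rare paths
faults["hot_path"] = 0.05                         # 1 fault on a hot path

def failure_rate(live_faults) -> float:
    return sum(faults[f] for f in live_faults)

all_faults = set(faults)
before = failure_rate(all_faults)

# Remove 60% of the faults (6 of 10) -- but only rarely executed ones.
after = failure_rate(all_faults - {f"rare_{i}" for i in range(6)})

print(f"failure rate before: {before:.4f}, after: {after:.4f}")
improvement = (before - after) / before
print(f"60% of faults removed, yet only {improvement:.1%} fewer failures")
```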