|author(s):||Otto Vinter, Per-Michael Poulsen|
|title:||Experience-driven Software Process Improvement|
|organisation(s):||Brüel & Kjær A/S; Søren Lauesen, Copenhagen Business School (Denmark)|
Two improvement projects have been conducted according to this approach. The first experiment led to a step-change in the testing process for embedded software development. These experiences are relevant, however, for all companies developing software, not only for those developing software for embedded-processor controlled products. The present experiment aims at achieving a similar step-change in the requirements engineering process; only preliminary findings in this area can be reported at this time.
The improvement projects are funded as Process Improvement Experiments (PIE) by the Commission of the European Communities (CEC) under the ESSI programme: European System and Software Initiative. The goal of the ESSI programme is to promote improvements in the software development industry so as to achieve greater efficiency, higher quality, and greater economy.
Brüel & Kjær A/S is a leading manufacturer of high-precision measurement instruments. Brüel & Kjær develops high-precision electronic instruments for: Sound, Vibration, Condition Monitoring, and Gas Measurements. The company is headquartered in Denmark, but the majority of the products are sold through subsidiaries around the world. Most of the products are heavily based on embedded real-time software, but the number of PC applications is increasing rapidly. This also means that the number of 3rd party products that the software must coexist and cooperate with is increasing: MS-Windows, 3D graphical packages, communication packages, etc.
1. The Test Improvement Project
The first Process Improvement Experiment was aimed at improving the testing process. The title of the improvement experiment was: The Prevention of Errors through Experience-Driven Test Efforts (ESSI Project 10438 - PET). The project was partnered by another Danish company, Danfoss A/S, which conducted a similar experiment. The results presented here, however, are only those of Brüel & Kjær.
We categorised the bugs in the error logs from the previous projects in accordance with Boris Beizer's categorisation scheme (Software Testing Techniques, Second Edition, Van Nostrand Reinhold, New York, 1990) and uncovered some remarkable results that attracted immediate management attention.
We have found that bugs in embedded real-time software follow the same pattern as other types of software. We have found that the largest cause of reported bugs (36%) is directly related to requirements, or can be derived from problems with requirements. The second largest cause of bugs (22%) stems from lack of systematic unit testing. Consequently we established a new unit testing strategy based primarily on static and dynamic analysis. The metrics were planned to be based on McCabe's complexity measure, code size, and branch coverage.
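The categorisation and tallying described above can be sketched as a simple count over the error log. The category names and counts below are invented for illustration and are not the project's actual data:

```python
from collections import Counter

def category_shares(error_log):
    """Tally bug reports by top-level category and return each
    category's share of the total, in percent."""
    counts = Counter(report["category"] for report in error_log)
    total = sum(counts.values())
    return {cat: round(100 * n / total, 1) for cat, n in counts.items()}

# Illustrative log: 100 reports spread over three hypothetical
# Beizer-style top-level categories (the counts are invented).
log = (
    [{"category": "Requirements"}] * 36
    + [{"category": "Implementation"}] * 22
    + [{"category": "Structural"}] * 42
)
shares = category_shares(log)
print(shares)  # {'Requirements': 36.0, 'Implementation': 22.0, 'Structural': 42.0}
```

In practice each report would also carry Beizer's sub-category codes; the tally itself is the same.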
The baseline project selected for the experiment had just been issued as a trial-release, and based on our analysis the management at Brüel & Kjær were convinced that resources should be used for a more efficient retesting of the product by means of the methods and tools that had been selected. This would also demonstrate the usability of the methods and tools much more clearly, since no new development would take place to distort the effect.
After static and dynamic analysis had been performed on the existing (trial) version of the code, the code was corrected for serious static bugs, and more test cases were developed so that all modules achieved a branch coverage of 85%. An improved (production) version of the baseline product was then released in which all the bugs found during the experiment had been corrected, as well as bugs reported from outside the development group through actual use of the existing (trial) version in the same period.
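A minimal sketch of the 85% branch-coverage gate described above, assuming per-module coverage figures have already been produced by the dynamic-analysis tooling (the module names and figures below are invented):

```python
def modules_below_threshold(coverage, threshold=85.0):
    """Given a mapping of module name -> measured branch coverage
    in percent, return the modules that still need more test cases."""
    return sorted(name for name, pct in coverage.items() if pct < threshold)

# Invented coverage figures for illustration only.
measured = {"analyzer.c": 91.0, "driver.c": 78.5, "ui.c": 85.0}
print(modules_below_threshold(measured))  # ['driver.c']
```

A module at exactly 85.0% passes the gate; test cases are then added for the listed modules and the measurement repeated.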
The experiment demonstrated a 75% reduction in the number of bugs reported after release, and a 46% reduction in the hours of test effort per bug found. We also demonstrated that the number of bugs that can be found by static and dynamic analysis is quite large, even in code that has already been released.
This paper only gives the main results of the experiment. More information can be found in the Final Report of the ESSI-PET project (http://www.esi.es/ESSI/Reports/All/10438), which has been approved by the Commission.
2. The Requirements Engineering Improvement Project
The results of the test improvement project reported in the previous chapter led to the obvious next Process Improvement Experiment: improving the requirements engineering process. The title of this improvement experiment is: A Methodology for Preventing Requirements Issues from Becoming Defects (ESSI Project 21167 - PRIDE). Professor Søren Lauesen from the Copenhagen Business School was selected as subcontractor for this experiment because of his expertise in the requirements engineering field.
The project was again divided into two main phases: analysis of current practice, and execution of an experiment with an improved practice. At present the project is in the analysis phase, and therefore no experimental results can be reported. The baseline product for this experiment is an MS-Windows based system that connects to various front-end equipment which performs the real-time measurements. The front-end equipment and its software are not part of the experiment.
We again categorised the bugs in the error logs of the preceding project in accordance with Boris Beizer's categorisation scheme. We then had to decide which of the Beizer categories we would regard as requirements related. Beizer does have a top-level category called Requirements. However, bugs in many other categories can be regarded as being caused by requirements problems, e.g. requirement misunderstood, missing/changed features, external or OS interfaces/timing, and bugs related to 3rd party products.
The problem reports that belonged to these categories were selected. They turned out to comprise 51% of all problem reports, demonstrating that bugs related to requirements not only represent the prime bug cause, but also the majority of all bugs.
The bugs were also classified according to the interface where the error occurred, i.e. the user interface, graphics package, MS-Windows, documentation/help, release versions, domain concepts, front-end etc. Some problems related to two interfaces, for instance the user interface and the 3D graphical package. An occurrence was registered for each interface.
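The interface classification can be sketched as follows; note that a problem report touching two interfaces contributes one occurrence to each. The reports below are hypothetical:

```python
from collections import Counter

def interface_occurrences(reports):
    """Count one occurrence per interface that a problem report
    touches; a single report may thus be counted more than once."""
    counts = Counter()
    for report in reports:
        counts.update(report["interfaces"])
    return counts

# Hypothetical reports; the second one touches two interfaces.
reports = [
    {"id": 1, "interfaces": ["user interface"]},
    {"id": 2, "interfaces": ["user interface", "3D graphics"]},
    {"id": 3, "interfaces": ["MS-Windows"]},
]
print(interface_occurrences(reports)["user interface"])  # 2
```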
We have found that requirements-related errors are not what the literature leads one to expect. Usability errors dominate (60%). Problems with understanding and cooperating with 3rd party software packages, and circumventing their errors, are also very frequent (20%).
Usability errors also seem to be rather easy to correct, even when found late in the development process, e.g. in or after the integration phase. Problems with 3rd party products, however, are very costly to correct or circumvent. Although there was much ambiguity in the requirements specification of the product we analysed, this evidently caused very few errors. Most errors correspond to tacit requirements, i.e. requirements not specified at all.
Finally in our analysis, we tried to imagine what could have prevented each error. We started out with a list of known techniques and added to it when no technique seemed to be able to prevent the error in question. The result was a detailed list of requirement techniques grouped under the following major subjects: demand analysis, usability techniques (including prototypes), validation and testing of 3rd party software, improved specification techniques, and checking techniques (including formal inspections).
Our present activity is to assign, for each error report, an estimated hit-rate for each technique, but it is too early to report any trends. However, once the estimated effectiveness of each technique has been evaluated, we will be in a position to calculate the cost/benefit ratio of each, and then select the optimum set of techniques to be employed in our real-life experiment on a development project. The experiment is expected to be concluded mid-1997.
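The planned cost/benefit calculation can be sketched as follows. All names and numbers below are invented; the real hit-rates are the per-error-report estimates described above, and the costs would come from effort estimates for introducing each technique:

```python
def rank_by_cost_benefit(techniques, avg_error_cost=1.0):
    """Rank techniques by benefit-to-cost ratio. A technique's
    expected benefit is the sum of its estimated hit-rates over all
    error reports (the errors it would have prevented), weighted by
    the average cost of an error."""
    ranked = sorted(
        (sum(t["hit_rates"]) * avg_error_cost / t["cost"], name)
        for name, t in techniques.items()
    )
    return [name for ratio, name in reversed(ranked)]

# Invented hit-rates (one per error report) and technique costs.
techniques = {
    "usability prototyping": {"hit_rates": [0.9, 0.8, 0.0], "cost": 10},
    "formal inspections":    {"hit_rates": [0.3, 0.2, 0.4], "cost": 5},
}
print(rank_by_cost_benefit(techniques))  # ['formal inspections', 'usability prototyping']
```

The optimum set would then be chosen from the top of this ranking, subject to the project's budget.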
3. Concluding Remarks
Analysis of error reports is a cheap and effective way for companies that wish to get started on a software quality process improvement programme. It is not necessary to perform comprehensive measurements on development activities and wait until enough data has been collected.
Setting up a process improvement programme is now an experience-driven incremental task where measurements are only performed when experience shows that there is a real need (problem) for the data to make an informed decision on how to change part of the development process.
This experience-driven incremental approach to process improvement will guarantee constant management attention because of immediate results, and acceptance among developers since only important measurements need be collected by them. Consequently there will be no budget problems for the programme, and no objections to its implementation.
|ISPA Homepage - This information last updated October 1997|