author(s): | Khaled El Emam, Dennis R. Goldenson |
title: | Description and Evaluation of the SPICE Phase One Trials Assessments |
organisation(s): | Fraunhofer Institute for Experimental Software Engineering; Software Engineering Institute |
copyright: | ISCN Ltd. |
The objective of the SPICE (Software Process Improvement and Capability dEtermination) Project is to deliver an international standard for software process assessment. The project includes a series of empirical trials, scheduled to be completed in three broad phases. The first phase was completed in calendar year 1995. Its results were based on several sources of data, including a series of questionnaires completed by both assessors and assessees from 35 assessments conducted world-wide, project problem reports and change requests, and the actual rating profiles produced by the assessments. The focus of phase 1 was on evaluating the design decisions of the SPICE framework and the usability of version 1.0 of the core SPICE document set. The results from phase 1 were used to help identify shortcomings and inform decisions about the content of the document set prior to standardization.
In this paper we describe the assessments conducted during phase 1 of the SPICE trials and present an evaluation of the core document set based on the experiences from 35 assessments. This is based on the work done by the authors and reported in the phase 1 trials final report. The results indicate that the SPICE model and rating framework are in general sound and have been found to be useful and usable, but they also highlight some potential weaknesses.
2. Research Method
The SPICE trials are a collaborative effort amongst a substantial number of people around the world. Assessments using the SPICE documents were conducted in 1995. During each of these assessments a set of questionnaires was administered. For the purposes of this paper, we obtained responses from two groups of people:
(i) lead assessors who were in charge of the trials assessments, and
(ii) the sponsors of the assessments in the Organizational Unit (the organization or part of the organization that was being assessed).
These give us the assessors' and assessees' perspectives, respectively. In total, questionnaire data from 35 assessments were collected before the response deadline. Of these, 20 were conducted in Europe, 1 in Canada, and 14 in the Pacific Rim.
The objectives of our analysis of the questionnaire responses as presented in this paper are twofold: first, to describe what actually happened during the phase one assessments; second, to present evaluations of parts of the core SPICE document set. This paper presents the evaluation results for the documents described in Figure 1.
The results we present are the percentages of responses to various questions. To evaluate the documents, we identify the proportions of respondents who are supportive (as opposed to critical) of either the SPICE design decisions or the claim that the documents are usable. A supportive response is one that says something positive about SPICE and/or that will not require any changes to the draft SPICE documents (i.e., the ones that were used during the trials assessments).
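For illustration only, the sketch below shows one way such proportions could be tallied once each response has been coded as supportive or critical; it is not the tooling used in the trials, and the ratings, scale labels, and function name are hypothetical.

```python
# Illustrative sketch: computing the percentage of supportive responses
# to a single questionnaire item. Data values below are hypothetical.
from collections import Counter

def percent_supportive(responses, supportive_codes):
    """Return the percentage of responses classified as supportive."""
    counts = Counter(responses)
    supportive = sum(counts[code] for code in supportive_codes)
    return 100.0 * supportive / len(responses)

# Hypothetical ratings from 35 respondents on a four-point agreement scale.
ratings = (["strongly agree"] * 10 + ["agree"] * 18 +
           ["disagree"] * 5 + ["strongly disagree"] * 2)
print(f"{percent_supportive(ratings, {'strongly agree', 'agree'}):.0f}% supportive")
```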
3. Results
3.1. Description of the Assessments
During the assessments, the most commonly used type of assessment instrument was a paper-based checklist, followed by a computerized spreadsheet. Apart from the spreadsheet, it was rare for any other form of computerized instrument to be used. The instruments that were used were developed mostly by the lead assessors themselves; only 35 percent of the assessors used the exemplar instrument provided by the SPICE project. Furthermore, most of the information collected during the assessments came from interviews. No assessors used assessee self-reports (0%), and very few collected data prior to the on-site visit (12%).
3.2. General Evaluation
The assessment sponsors' overall perceptions of SPICE are generally quite positive. Almost all of them agreed that the benefits of their assessments were at least "on balance" worth the expense and time their organizations expended; almost 40 percent said their assessments were "more than worth the expense." That said, their support is not unqualified. Almost 80 percent of the assessment sponsors agree that awareness, "buy-in," and support for process improvement improved among their organizations' management as a result of their assessments. However, only 65 percent agree with a similar statement about their technical staff, and relatively few chose the "strongly agree" response option to either question about commitment to SPI resulting from the assessments.
Overall, the experienced assessors are somewhat more positive towards SPICE than are the assessees, but they too tend to qualify their responses. Almost all of the assessors say that the organizational unit personnel were satisfied with the results of their assessments; over 80 percent think that the assessments improved awareness of SPI issues among the engineers in the organizational units that were assessed. Perhaps most pertinent from a SPICE perspective, 85 percent of the experienced assessors characterize the SPICE approach as being at least somewhat better than "other assessment methods" with which they are familiar.
The assessment sponsors are generally quite satisfied with the accuracy and actionability of their assessment results. Over 90 percent of the sponsors report that their assessments provided valuable direction for process improvement in their organizations, characterized their organizations' strong points at least "reasonably well," and that their SPICE process profiles accurately described their organizations' major problems. Once again, though, the assessees do express some reservations. Over 20 percent of them say that the process profiles were only "generally accurate" within the scope of their assessments. Well over 30 percent of the assessment sponsors report that their process profiles inappropriately identified "problems"; a similar proportion say that their profiles failed to identify problems within the scope of their assessments.
We can compare some of the results obtained for SPICE with those obtained in another survey of users of the CMM. When asked how well the CMM assessment described the organization's major problems with the software process, 98% of respondents chose the "very accurately" or "generally accurately" response categories. This is comparable to the 91% obtained from the assessees in the SPICE trials. In addition, when asked how well the assessment characterized the organization's strong points, 92% of the respondents to the CMM survey chose the "very well" or "reasonably well" response categories. This percentage is comparable to the 93% obtained from the SPICE survey. Therefore, at least by these two criteria, the results from the phase 1 assessments are comparable to those obtained from previous surveys of process assessment models and methods.
4. Conclusions
As planned, phase 1 of the SPICE trials was completed in time for a critical decision point in the standardization process of the SPICE document suite: a ballot by the member national bodies on the documents. The results from phase 1 of the SPICE trials were used as input into this process; the phase 1 trials report was made available to all member bodies prior to the ballot deadline. We are aware of at least two national bodies that made explicit reference to the results of the trials in their comments.
The SPICE trials do show that it is possible to provide empirical evidence that can inform decision making for an evolving, prospective international standard. In the spirit of continuous improvement, the phase 1 trials identified a number of areas in need of improvement.