Appendix – Glossary of Common Terminology for Empirical Software Engineering

Printed in Reidar Conradi and Alf Inge Wang (Eds.): Empirical Methods and Studies in Software Engineering – Experiences from ESERNET, Springer Verlag LNCS 2765, ISBN 3-540-40672-7, Aug. 2003, 278 pages, pp. 274-278.

Norwegian University of Science and Technology (NTNU), NO-7491 Trondheim, Norway

ESERNET Glossary

Case study: a research technique where you identify key factors that may affect the outcome of an activity and then document the activity, its input constraints, resources and output [1].

Component-based Software Engineering (CBSE): building and managing large software systems by reusing previously developed, off-the-shelf software components, to better master the growing complexity and size of new software systems, resulting in better, cheaper, and earlier-delivered software systems.

COTS: Commercial off-the-shelf software.

Empirical method: statistical method used to validate a given hypothesis. Data is collected to verify or falsify the hypothesis [2].

Empirical (experimental) Software Engineering: emphasizes the actual study of software engineering using scientific principles for validation.

Empirical study: systematic, practical test of a hypothesis, e.g. by an experiment, case study, post-mortem analysis, or survey.

ESERNET: Experimental Software EngineeRing NETwork.

Experience: can be raw collected data (e.g. number of defects) or project summaries (post-mortems). Experience can also be processed and generalized into reusable process models, estimation models, checklists, risk models, etc., or realized as reusable software artifacts.

Experience base (EB): logically centralized archive where various experiences are stored for later reuse. An experience base can be realized as documents, web pages, a database, a spreadsheet, and/or a tool that includes rules, algorithms, models, and other resources.

Experience Factory (EF): logical and/or physical organization for continuous learning from experience, including an experience base for the storage and reuse of knowledge.

Experiment: an act or operation for the purpose of discovering something unknown or of testing a principle [3].

Experimentation: using the experimental engineering method, engineers build and test a system according to a hypothesis. Based upon the test results, they improve the solution until it requires no further improvement [2].

Goal Question Metric (GQM): method used to define measurement of software projects, processes, and products. GQM defines a measurement model on three levels: 1) Conceptual level (goal): a goal is defined for an object, for a variety of reasons, with respect to various models of quality, from various points of view, and relative to a particular environment. 2) Operational level (question): a set of questions is used to define models of the object of study, and then focuses on that object to characterize the assessment or achievement of a specific goal. 3) Quantitative level (metric): a set of metrics, based on the models, is associated with every question in order to answer it in a measurable way [4].
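The three GQM levels can be illustrated as a simple data structure; the goal, questions, and metric names below are hypothetical examples, not taken from any particular GQM plan:

```python
# Minimal sketch of a GQM model: one goal (conceptual level), several
# questions (operational level), and metrics attached to each question
# (quantitative level). All names here are illustrative.

gqm_model = {
    "goal": "Reduce the defect density of module X, from the developer's viewpoint",
    "questions": [
        {
            "question": "What is the current defect density?",
            "metrics": ["defects_found", "lines_of_code"],
        },
        {
            "question": "Is defect density improving across releases?",
            "metrics": ["defect_density_per_release"],
        },
    ],
}

# Each metric answers its question in a measurable way.
for q in gqm_model["questions"]:
    print(q["question"], "->", ", ".join(q["metrics"]))
```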

Hypothesis: proposition or set of propositions set forth as an explanation for the occurrence of some specified group of phenomena, either asserted merely as a provisional conjecture to guide investigation (working hypothesis) or accepted as highly probable in the light of established facts [3].

Knowledge (operational information): there is no consensus or generally accepted definition of knowledge, but the word can mean information, awareness, knowing, cognition, sapience, cognizance, science, experience, skill, insight, competence, know-how, practical ability, capability, learning, wisdom, certainty etc. [5].

Knowledge base: see experience base.

Knowledge management (KM): management of the organization towards the continuous renewal of the organizational knowledge base; this means e.g. creating supportive organizational structures, facilitating organizational members, and putting IT instruments that emphasize teamwork and knowledge diffusion (e.g. groupware) into place (Thomas Bertels).

Learning organization: organization skilled at creating, acquiring and transferring knowledge and at modifying its behavior to reflect new knowledge and insights [6].

Metric: collection of characteristics (attributes), their definitions, and a process for their collection, that together characterize a software product and/or software process, for a specific purpose and in a specific context. Typical software metrics include lines of code (LOC), defects per LOC, etc.
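The defect-density metric mentioned above can be sketched as a small computation; the function name and the example counts are illustrative assumptions:

```python
# Minimal sketch of a typical software metric: defect density,
# normalized per thousand lines of code (KLOC).

def defects_per_kloc(defects: int, lines_of_code: int) -> float:
    """Return defect density as defects per 1000 lines of code."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defects / lines_of_code * 1000

# Example: 42 defects found in a 12,000-line component.
density = defects_per_kloc(42, 12_000)
print(f"{density:.1f} defects/KLOC")  # → 3.5 defects/KLOC
```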

Object-oriented analysis (OOA): concerned with developing an object-oriented model of the application domain. The identified objects reflect entities and operations that are associated with the problem to be solved [7].

Object-oriented design (OOD): concerned with developing an object-oriented model of a software system to implement the identified requirements. The objects in an object-oriented design are related to the solution to the problem that is being solved [7].

Object-oriented programming (OOP): concerned with realizing a software design using an object-oriented programming language. An object-oriented programming language, such as Java, supports the direct implementation of objects and provides facilities to define object classes [7].

Plan-Do-Check-Act (PDCA): cycle of actions used to achieve continuous improvement. It is possible to use an external PDCA-cycle for the whole company or several internal PDCA-cycles for specific and project-related initiatives. PDCA is used in TQM.

Post-Mortem Analysis (PMA): method used to evaluate performance after an activity, such as a project, has finished. A PMA can be used to evaluate projects by asking questions such as: what went wrong, what did we do well, and what can we do better next time?

Qualitative research: concerned with studying objects in their natural setting. A qualitative researcher attempts to interpret a phenomenon in terms of the meanings people bring to it [8].

Quality Assurance (QA): establishment of a framework of organizational procedures and standards, which lead to high-quality software [7].

Quality Control: definition and enactment of processes, which ensure that the project quality procedures and standards are followed by the software development team [7].

Quality Improvement Paradigm (QIP): aimed at building descriptive models of software processes, products, and other forms of experience, experimenting with and analyzing these models, in order to build improvement-oriented, packaged, prescriptive models [9]. QIP uses EF and GQM.

Quality Management: involves defining procedures and standards, which should be used during software development and checking that these are followed by all engineers [7].

Quantitative research: concerned with quantifying a relationship or comparing two or more groups [10]. The aim is to identify a cause-effect relationship. Quantitative research is often conducted by setting up controlled experiments or by collecting data through case studies.

Reuse: further use or repeated use of an artifact. Typically, software artifacts are designed for use outside their original context to create new systems [11].

Software Engineering: engineering discipline which is concerned with all aspects of software production from the early stages of system specification through to maintaining the system after it has gone into use [7].

Software inspection: static method to verify and validate a software artifact manually [12]. Verification means checking whether the product is developed correctly, i.e. fulfills its specification. Validation means checking whether the “correct” product is developed, i.e. one that fulfills the customer’s needs.

Software measurement: concerned with deriving a (numeric) value for some attribute of a software product or software process [7].

Software process improvement (SPI): systematic and focused activity to improve the quality of the end software product and the related software development processes, by using an improvement plan within a software development organization.

Software quality: the totality of features and characteristics of a product or service that bears on its ability to satisfy stated or implied needs (ISO 8402-1986), or: that the developed product should meet its specification [13].

Survey: retrospective study of a situation to try to document relationships and outcomes. Thus, a survey is always done after an event has occurred [1].

Testing: process of executing a program (or part of a program) with the intention of finding errors [14].

Total Quality Management (TQM): Total organization using Quality principles for the Management of its processes [15].

Validation (of measured data values): process of checking whether entered data meets certain conditions or limitations [16].

References to above definitions

[1] Norman Fenton and Susan Lawrence Pfleeger, "Software Metrics: A Rigorous and Practical Approach", 2nd edition, International Thomson Computer Press, 1996.

[2] Victor R. Basili, "The experimental paradigm in software engineering". In H. D. Rombach, V. R. Basili, and R. W. Selby (Eds.), "Experimental Software Engineering Issues: A critical assessment and future directions", p. 3-12. Lecture Notes in Computer Science Nr. 706, Springer Verlag, September 1992.

[3] Webster, "Webster’s Encyclopaedic Unabridged Dictionary of the English Language", Gramercy books 1989.

[4] The Software Engineering Laboratory – SEL, about the Goal-Question-Metric (GQM) method.

[5] Karl Erik Sveiby, "The New Organizational Wealth – Managing and measuring knowledge-based assets", Berrett-Koehler Publishers, April 1997, 275 pages, ISBN 1-576-75014-0 (hardcover).

[6] David A. Garvin, "Building a Learning Organization", Harvard Business Review, July-August 1993.

[7] Ian Sommerville, "Software Engineering", 6th Edition, Addison-Wesley, 2001.

[8] Norman K. Denzin and Yvonna S. Lincoln, "Handbook of Qualitative Research", Sage Publications, London, UK, 1994.

[9] Experimental Software Engineering Group – ESEG, University of Maryland – College Park, about the Quality Improvement Paradigm (QIP), accessed Sept. 2003.

[10] John W. Creswell, "Research Design, Qualitative and Quantitative Approaches", Sage Publications, 1994.

[11] Ivar Jacobson, Martin Griss, and Patrik Jonsson, "Software Reuse – Architecture, Process and Organization for Business Success", ACM Press book, 1997.

[12] Robert Ebenau and Susan Strauss, "Software Inspection Process", McGraw-Hill, 1994, ISBN 0-070-62166-7.

[13] Philip B. Crosby, "Quality is Free: The Art of Making Quality Certain", McGraw-Hill, 1979.

[14] Glenford J. Myers, "Software Reliability, Principles and Practices", New York, John Wiley, September 1976, 360 pages, hardcover, ISBN 0-471-62765-8.

[15] University of Michigan, "Total Quality Management ENG / MFG401", accessed Sept. 2003.

[16] Microsoft Inc., "MSDN – Welcome to the MSDN library", accessed Sept. 2003.

Relevant web sites

You can also find related terms and definitions on the following web sites:

* ESERNET competence center

* Software Engineering Institute (SEI) at Carnegie Mellon University (CMU) in Pittsburgh, USA.

* NASA Software Engineering Laboratory (NASA-SEL) at Greenbelt (outside Washington), USA

* Fraunhofer Center – Maryland (FC-MD) at College Park (outside Washington), USA

* Fraunhofer IESE in Kaiserslautern, Germany

Last updated by Alf Inge Wang ca. 1 July 2003; adapted for .html on 15 Sept. 2003 by Reidar Conradi.