Software engineering mini glossary

Reidar Conradi, IDI, NTNU – August 2003:
This is a small glossary of common terms in software engineering.
See IEEE's SWEBOK project and other standard references (e.g. from telematics).
See also ESERNET glossary from Springer LNCS 2765, pp. 274-278.

Software engineering in general

Software engineering – subfield of computer science/informatics:
1. Technologies (e.g. concepts, modeling apparatus, languages, techniques, methods, processes, paradigms), that support software development and maintenance (see below).
2. The application of such technologies in software projects.
From Ian Sommerville's textbook, p.4: "software engineering is concerned with the theories, methods and tools which are needed to develop (the) software (for these computers)" – parentheses inserted by RC.

Software: Computer-executable models ("programs"), including any other models/descriptions (i.e. artifacts like requirements) needed to make such executable models.

Software development: Making (the first release of) a software product according to user requirements, often organized as a project.

Software maintenance: Further development of a software product after its first release, also usually organized as a project.
2/3 of total software costs may fall on software maintenance. We distinguish between perfective (new or revised requirements), adaptive (new technologies/platform), corrective (fixing faults) and preventive maintenance (internal reorganization) – e.g. with relative distribution 50%, 25%, 20%, and 5%.

Project: Software development or maintenance activities in a software-producing organization, in order to deliver a new or revised software product as a release to a paying customer.

Software artifact: Any piece of software (i.e. models/descriptions) developed and used during software development and maintenance. Examples are requirements specifications, architecture and design models, source and executable code (programs), configuration directives, test data, test scripts, process models, project plans, various documentation etc. etc.

Product: Final subset of software artifacts (often with executable code) from a software project, developed for and delivered to a customer. A product may be issued in several releases, cf. maintenance.
The delivered code often consists of three layers: application logic, reused components (see below), and platform commodities like OS, DBMS etc.

Service (Norw.: tjeneste) – especially in telecom: A combination of hardware and software products/systems that offers a set of features to a user, e.g. for making a phone call, showing a web page, or editing a document. Often the same as a product.

System: Total set of software (and possibly hardware) artifacts, delivered by the actual project as a product or service.

User (Norw.: bruker): Human or social entity (person or organization) that may request and use a service, e.g. by executing a product.

Customer (Norw.: kunde): Human or social entity (person or organization) that pays for development, maintenance, or use of a product/service. The customer and user may, or may not, be the same.

Software architecture: A description of the high-level design of a system, i.e. its main parts and their relations and interactions – and the field of making and analyzing such architectures.
The Unified Modeling Language (UML) can partially describe such an architecture.

Software process

Software process: A set of partially ordered activities that produces a software product from certain requirements, usually in a project context. It is often formalized as a process model. The model describes activities with ordering and compositional relations, artifacts being produced or consumed by such activities, human work roles, what tools/techniques to use, and possibly what measurements to apply.

Lifecycle models: Overall process models for software development, like the waterfall, prototyping, spiral, or incremental model. Typical phases are requirements engineering, requirements analysis, design, coding, testing, delivery, and maintenance.
Ex. RUP (Rational Unified Process), which emphasizes incremental development.

Software process improvement, SPI: Systematic improvement of the work process(es) used in a software-producing organization, based on organizational goals and backed by empirical studies.

Total Quality Management, TQM: method developed by Deming and Juran in the 1950s, particularly for the manufacturing industry, to continuously improve organizational work processes to better serve the customers. It promotes the Plan-Do-Check-Act cycle, which encompasses measurements, statistical process control, and organizational learning.

Capability Maturity Model, CMM: developed by SEI in Pittsburgh, based on ideas from TQM. Five maturity levels are introduced to assess a software organization and thus guide improvement.

Experience Factory, EF: a separate organizational entity to manage experience data inside a software organization, in order to improve its processes. It maintains the organization's Experience Base. Part of QIP.

Goal-Question-Metric model, GQM: pragmatic method to make lean and relevant (focused) metrics based on explicit improvement goals. Part of QIP.

Quality Improvement Paradigm, QIP: a downscaled variant of TQM for software organizations, adapting the Plan-Do-Check-Act cycle and employing GQM and EF to drive improvements.

Software quality

Originally compiled by Reidar Conradi, NTNU, 21 June 1989 – revised 9.1.2003 and 25.8.2003.
See also Norwegian translations of CCITT, IEC, IEEE and EWICS standards.
For QA, both the ISO 9000 series and ISO 8402 (Quality – Vocabulary) are now Norwegian standards.
PS: Fredrik Lindeberg is a member of EWICS TC/7.

Some definitions of security and vulnerability are taken from Kåre Willoch (ed.): Et sårbart samfunn – Utfordringer for sikkerhets- og beredskapsarbeidet i samfunnet, NOU 2000:24, July 2000.

Quality: The totality of features and characteristics of a product/service that bears upon its (RC: the last four words should rather be "thus defining the latter's") ability to satisfy stated or implied needs (ISO 8402-1986).

Fault, defect, bug (Norw.: passiv feil): Potential "flaw" in a hardware/software system, which may later be activated (see error below).
Ex. physical hardware defect, missing variable initialization in a program.
Remedy: Fault prevention and fault detection.

Error (Norw.: aktiv feil): Execution of a "passive fault", leading to erroneous (vs. requirements) behaviour / system state. Not always externally observable.
Remedy: As under fault above, increased robustness (see below).
NB: This is the IEEE-definition, while the ISO-definition uses "error" about the human activity to introduce a fault ("committing an error").

Failure (Norw.: synlig feil, svikt): Fault execution (i.e. error) that results in observable and erroneous (vs. requirements) external behaviour.
Remedy: as under error above.

Fault prevention (Norw.: forebygging av feil): Systematic avoidance of software faults, e.g. by better methods, languages, tools, training etc.

Fault detection (Norw.: oppdaging av feil): To find faults in order to correct these, using techniques like inspections and testing.

Reliability (Norw.: pålitelighet): Probability of failure-free behaviour (vs. stated requirements), in a specific context (executing environment and usage profile) and time period.
Often measured as Mean Time To Failure (one year?), failure rate (10^-9 per second is an extreme value), or fault density (below 0.1 faults per KLOC is extreme).
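As a sketch, these reliability measures can be computed directly from observed failure and fault counts; the values below are illustrative, not taken from the text:

```python
def mttf(operating_hours: float, failures: int) -> float:
    """Mean Time To Failure: operating time divided by observed failures."""
    return operating_hours / failures

def failure_rate(failures: int, operating_seconds: float) -> float:
    """Observed failures per second of operation."""
    return failures / operating_seconds

def fault_density(faults: int, lines_of_code: int) -> float:
    """Faults per thousand lines of code (KLOC)."""
    return faults / (lines_of_code / 1000)

# One failure during a year of continuous operation:
print(mttf(365 * 24, 1))                 # 8760.0 hours, i.e. one year
print(failure_rate(1, 365 * 24 * 3600))  # ~3.17e-08 per second
print(fault_density(5, 100_000))         # 0.05 faults/KLOC, "extreme" by the text
```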

Availability (Norw.: tilgjengelighet): The extent to which a specified service is ready when demanded (thus reliability means that it continues to be available).
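The glossary gives no formula, but availability is commonly quantified as the uptime fraction MTTF / (MTTF + MTTR); a sketch with illustrative numbers:

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Uptime fraction: Mean Time To Failure over the total cycle time,
    where MTTR is the Mean Time To Repair."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A service failing once per 1000 hours, with a mean repair time of 1 hour:
print(round(availability(1000, 1), 4))  # 0.999, i.e. "three nines"
```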

Security (Norw.: sikkerhet i betydningen sikring): Protection against unauthorized access (e.g. read / write / search) of data / information.
Remedy: Encryption and strict access control, e.g. by passwords and physical barriers.

Safety (Norw: sikkerhet i betydningen trygghet): Protection against dangerous events, i.e. events with possible serious consequences for humans, environment, business, and/or society.
NB: this definition deals only with possible "dangers" to the surroundings – whether termed "failures" or not.
Remedy: as for errors, Hazop or FMEA techniques.

Risk (Norw.: risiko): Probability_of_safety-event * cost_of_safety-event.
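The formula above is a plain expected-loss product; a minimal sketch with made-up numbers:

```python
def risk(probability: float, cost: float) -> float:
    """Expected loss: probability of the safety event times its cost."""
    return probability * cost

# An event with 0.1% probability per year and a cost of 2,000,000:
print(risk(0.001, 2_000_000))  # 2000.0 expected loss per year
```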

Robustness (Norw.: robusthet): Ability to limit the consequences of an active error or failure, in order to resume (partial) service.
Remedy: duplication, repair, containment etc.

Dependability (Norw.: tillitsvekkende, pålitelig – but cf. reliability): Degree of trustworthiness of a service, expressed as the totality of four quality properties – reliability, availability, security, and safety (possibly also maintainability). NB: these four properties may influence each other.
See Jean-Claude Laprie's work for complete definitions here.

Performance (Norw.: ytelse): The speed or volume offered by a service, e.g. delay/transmission time for data communication, storage capacity in a database, image resolution on a screen, or sound quality over a telephone line.

Quality of Service, QoS (Norw.: tjenestekvalitet): The combined dependability and performance of a service.

Quality Assurance, QA (Norw.: kvalitetssikring): All planned and systematic efforts needed to gain sufficient confidence that a product or a service will satisfy stated quality requirements (e.g. degree of safety/reliability). – Or:
Control of product and process throughout software development, so as to increase the probability of – and ideally ensure – fulfilling the requirements specification.

Verification: Showing that a delivered product/system complies with its requirements, i.e. stated user needs. Often done in several successive steps.
Remedy: testing (dynamic), inspections (static), formal verification techniques (mostly static).

Validation: Showing that a delivered product/system satisfies the user's real or future needs.
Remedy: as for verification, plus extensive user testing and try-outs.

Testing: Controlled execution of program code (at different levels: unit, module, subsystem, system, acceptance etc.) to check that actual execution with given inputs produces the expected outputs (results).
Will need separate test data and possibly special test scripts/tools, and repeated regression testing.
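A minimal sketch of unit testing in this sense – the unit under test, word_count, is a made-up example, and the input/expected-output pairs act as the test data:

```python
def word_count(text: str) -> int:
    """Unit under test: number of whitespace-separated words."""
    return len(text.split())

def test_word_count():
    # Controlled execution: given inputs checked against expected outputs.
    assert word_count("software engineering mini glossary") == 4
    assert word_count("") == 0
    assert word_count("  spaced   out  ") == 2

test_word_count()
print("all unit tests passed")
```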

Inspections: Systematic reading of software artifacts (especially for design and code) in order to discover faults early, i.e. in a cost-effective way.
Formalized by Michael Fagan at IBM in the 1970s.

Software measurement

See Norman Fenton: "Software Measurement: A Necessary Scientific Basis", IEEE Transactions on Software Engineering, 20(3):199-205, March 1994.

Measurement (by Fenton): The process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules.

Measures (by RC): The actual numbers or symbols being assigned to such attributes – i.e. "data".

Metrics, simple def. (by RC): A schema to describe the actual measures, by defining relevant attributes for some planned investigation. Each attribute will have a corresponding entity, a value domain (with a scale), and a process (rules) by which measures later will be assigned and possibly analyzed.
Examples of attributes are lines of code, development effort for an artifact, height of a person etc.
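A metrics schema in this sense can be sketched as a record type; the Metric class and its fields are illustrative, not a standard notation:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    attribute: str     # e.g. "lines of code"
    entity: str        # e.g. "source file" (the entity carrying the attribute)
    value_domain: str  # e.g. "non-negative integer"
    scale: str         # e.g. "ratio"
    rule: str          # the process by which measures are assigned

loc = Metric("lines of code", "source file",
             "non-negative integer", "ratio",
             "count non-blank, non-comment lines")
measure = 1200  # a measure: the actual number assigned to the attribute
print(loc.attribute, "=", measure)
```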

Metrics, grand def. (RC): The field of measurement.

Software reuse

Software reuse: Software development that includes systematic activities for creation and later incorporation ("reuse") of common, domain-specific artifacts. Reuse may have profound technological, practical, economic, and legal obstacles – but the benefits may be huge. It mostly concerns program artifacts in the form of components, see below. Standard use of platform components – i.e. commodities like OS, DBMS, Internet netware, GUI etc. – is normally not called reuse.

Development for reuse: Software development that systematically develops generalized software artifacts for possible, later reuse. It involves activities like generalization, documentation/classification, certification, storage and advertisement of such artifacts.

Development with reuse: Software development that systematically makes use of pre-made, reusable artifacts. It involves activities like searching, evaluation, retrieval, adaptation (preferably "as-is"), and integration of such artifacts.

Component-based software engineering

Component-based software engineering, CBSE: Software development with reuse, and with emphasis on reusing components developed outside the actual project.

Component: A separable piece of program code (source or executable) that will be integrated into a software system (consisting of components plus application), excluding platform software (commodities like OS, DBMS etc.).

External component: Component developed outside the organization that (re)uses it. We refer to OSS components and COTS components, see below.

OSS (Open Source Software) component: External component whose source code is available ("white box") and can be acquired either free of charge or for a nominal fee, possibly with an obligation to report back any changes made.
Ex. Emacs, the Java development toolkit, Linux Beowulf software.

COTS (Commercial-Off-The-Shelf) component: External component as executable software being sold, leased, or licensed to the general public; offered by a vendor trying to profit from it; supported and evolved by the vendor, and used by the customers without source code access or modification ("black box").
Ex. libraries to support VR, CORBA middleware.

Glueware: Code to solve possible mismatch between components, or between components and application/platform.

Addware: Code to add functionality that is required, but not provided by the components.
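A small sketch of both terms, with hypothetical component names: glueware adapts a mismatched interface, while addware supplies a required feature the component lacks:

```python
class ReusedSorter:
    """External component with its own calling convention."""
    def run(self, items, descending):
        return sorted(items, reverse=descending)

class SorterGlue:
    """Glueware: maps the interface the application expects
    onto the reused component's interface."""
    def __init__(self, component: ReusedSorter):
        self._component = component

    def sort_ascending(self, items):
        return self._component.run(items, descending=False)

    # Addware: required functionality the component does not provide.
    def median(self, items):
        s = self.sort_ascending(items)
        return s[len(s) // 2]

glue = SorterGlue(ReusedSorter())
print(glue.sort_ascending([3, 1, 2]))  # [1, 2, 3]
print(glue.median([3, 1, 2]))          # 2
```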

Last changed 20 Feb. 2004, Reidar Conradi: added link to NoU 2000:24.