Auditability of Software

Why is auditability of software important, and how can you measure it?

Introduction

Software demands our trust. However, very few people are able to check whether a given piece of software deserves that trust. Let’s not make this task harder than it should be.

Many aspects of modern society and governance are managed by software we need to trust. In healthcare and telecommunications, we entrust personal data to software and rely on it to be adequately protected against unauthorized access and manipulation. Software is essential for the safety of many systems, e.g. in transport, defense, manufacturing, or the medical sector, where malfunctioning would put people’s lives at risk. And even when lives are not directly at risk, as in the financial sector, unreliable software still poses a risk to the functioning of our society.

How can this trust be built? A manual source code audit is an important step in building trust in a codebase, and reviews are usually executed before applications are put into production. The goal of a review is usually to find errors or to ensure that the software complies with certain rules. But transparency is also a goal in itself. As software becomes ever more important for the functioning of society, transparency of the code and algorithms becomes more important. Public reviews, in addition to conventional expert audits, are becoming a common phenomenon. In a public review, anyone can check for themselves what exactly the software does and how it handles our data. Recent examples are Covid tracing apps and electronic voting systems.

Sadly, real-life code is often not written to be reviewed. As if the task of the review were not hard enough, the code presented for review is often unnecessarily complex, written in an unreadable style, badly documented, or simply incomplete. That not only makes analysis by experts hard but also undermines the transparency that the review process was meant to provide. Fewer people can understand the code, and it takes more effort for those who can. One likely cause of badly auditable codebases is that software development projects usually have no quality requirements focused on auditability. It doesn’t help that there is no standard for when a codebase is good enough to be reviewed.

To improve this situation, we propose a model for assessing the auditability of software, which we present in this whitepaper. We aim to provide software owners, auditing parties, and regulators with a standard way of assessing whether a codebase is fit for review, such that reviewers can actually do their job. It is designed to be broadly applicable, independent of technology and domain, and to allow for objectivity, reproducibility, and comparability.

(The auditability model was developed by Software Improvement Group (SIG) and sieber&partners in 2019 and builds on the Dutch guideline for transferability of software (NPR-5325). It is also partly based on the assessment model for ISO/IEC 25010 software product maintainability.)

Challenge

Software is used today to support or digitize highly sensitive and critical processes with equally sensitive data. Only if the code is auditable can it be checked what the software does, how it handles and protects specific data, and how it is ensured that no manipulation by third parties can take place.

These requirements arise in a wide variety of areas: security-related control systems, the handling of particularly sensitive or anonymized data, electronic health records, and so on.

However, mere access to the source code is not sufficient for a meaningful audit. If the code is written in a complex manner, is poorly documented, or is perhaps incomplete, it can be difficult or impossible to audit. Moreover, the circle of those who can still understand what happens in the code can be unnecessarily limited by high complexity or the use of uncommon technologies. Trust in software requires a thorough review. To enable good reviews, software must be auditable.

Auditability means that a person with suitable prior knowledge can understand within a reasonable time what exactly happens in the code. The code must be readable, understandable, and well documented. If you want to represent these properties in a model that makes a statement about auditability, this model must meet certain requirements:

  • Technology independence (programming language)

  • Domain independence (functionality)

  • Consistency and comparability (applicability)

The developed model meets these requirements and enables wide applicability to all software with corresponding comparability. The model answers the question of how well the code is readable, understandable, and publishable, guided by two considerations:

  • Will the auditor's work be unnecessarily difficult?

  • Are there unnecessary restrictions for the group of possible auditors?

Auditability Assessment Model

The model defines three aspects that form the basis for good auditability:

  1. Product Quality

  2. Documentation

  3. Publishability

Illustration 1: Main Aspects of the Auditability Model

The model assumes a code audit in the form of a static analysis, executed manually and in part automatically. The auditor is expected to have experience in reading and writing code; in-depth domain knowledge is not required. Because the model is technology independent, an audit can in principle be performed on software written in any programming language.

  1. Product Quality

    The core of the aspect "product quality" is the question of maintainability. Why is maintainability an aspect of auditability? An auditor needs to understand what the software does, how functions are implemented, and how certain pieces of code are linked to others. These are also tasks that a developer faces in the normal maintenance of software. When code needs to be maintained, i.e. when changes are made to it, the developer needs to quickly understand what that piece of code does. They are therefore faced with the same requirements as an auditor.

    In our assessments we use the SIG/TÜViT model for "Trusted Product Maintainability", based on the ISO/IEC 25010 standard for software quality. The resulting rating is based on a maintainability benchmark of several thousand applications and has proven itself on the market for years.

    Some aspects not covered by the maintainability model are also included, for example the choice of technology. Is a technology stack or platform used that is widespread and well known? Or is an exotic platform chosen, with which a potential auditor must first familiarize themselves in order to understand what exactly happens in the code?

  2. Documentation

    Documentation is an essential means of understanding a codebase. Having to extract that information through reverse engineering or interviews instead makes the audit process much more difficult. Test automation scripts, functional documentation, and installation and operation instructions also contribute to understanding how the software works. The evaluation here focuses on completeness, quality, and up-to-dateness.

  3. Publishability

    When previously unpublished software is made public, certain areas require special attention. Otherwise the software cannot be fully reviewed, or only by a select group of authorized reviewers. Under the aspect of publishability, we examine to what extent the conditions for publication and a comprehensive examination of the code are met at the code level. For example, does the code still contain confidential information that could be relevant under data protection law, such as names of private individuals, passwords, etc.? It must also be ensured that licenses and IP rights are respected. The completeness of the code is another important aspect, as it is often important that all code of an application can be reviewed.
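
A minimal sketch of what an automated publishability check of this kind might look like (the patterns, file extension, and function name below are illustrative assumptions, not part of the assessment model):

```python
import re
from pathlib import Path

# Hypothetical patterns for information that should not appear in published
# code; a real publishability assessment would use far more thorough detectors.
SUSPICIOUS_PATTERNS = {
    "hard-coded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    "private key header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def scan_for_confidential_info(root: Path) -> list[tuple[Path, int, str]]:
    """Flag lines that may contain confidential information before publication."""
    findings = []
    for path in root.rglob("*.py"):  # restrict to the source files of interest
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in SUSPICIOUS_PATTERNS.items():
                if pattern.search(line):
                    findings.append((path, lineno, label))
    return findings

if __name__ == "__main__":
    for path, lineno, label in scan_for_confidential_info(Path(".")):
        print(f"{path}:{lineno}: possible {label}")
```

Such a scan can only support, not replace, the expert judgment on licenses, IP rights, and completeness described above.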

Assessment Framework

Aspects and their properties

In order to be able to apply the model in practice without binding it too strictly to a specific evaluator, the conceptual layer of the model is separated from the evaluation method. The conceptual model describes "what" needs to be measured and what the relative importance of the aspects and sub-aspects is; the evaluation method describes "how" this is measured in detail. This allows for technology-specific adaptations, or fitting in specific evaluation tools, without sacrificing reproducibility and comparability of the results. As an example: the conceptual model prescribes the assessment of naming conventions; how naming conventions are checked is part of the evaluator's method.
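
As a sketch of how this separation might be realized in code (a hypothetical illustration; the class and function names are our own, not part of the model), the conceptual model prescribes that naming conventions are assessed, while each evaluator plugs in its own technology-specific measurement:

```python
import re
from abc import ABC, abstractmethod

class EvaluationMethod(ABC):
    """The 'how': each evaluator supplies its own technology-specific checks."""

    @abstractmethod
    def rate_naming_conventions(self, source: str) -> int:
        """Return a rating from 1 (worst) to 5 (best)."""

class SnakeCaseMethod(EvaluationMethod):
    """A toy evaluator: rate how consistently functions use snake_case names."""

    def rate_naming_conventions(self, source: str) -> int:
        names = re.findall(r"def\s+(\w+)", source)
        if not names:
            return 5  # nothing to object to
        conforming = sum(bool(re.fullmatch(r"[a-z_][a-z0-9_]*", n)) for n in names)
        return 1 + round(4 * conforming / len(names))  # map the share to 1..5

# The 'what': the conceptual model prescribes that naming conventions are
# assessed; it does not care which plugged-in method performs the measurement.
def assess_naming_conventions(method: EvaluationMethod, source: str) -> int:
    return method.rate_naming_conventions(source)
```

Because only the method behind the interface changes, results produced with different evaluators or tools remain reproducible and comparable at the level of the conceptual model.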

Illustration 2: Model and Evaluation

 

The three main aspects of auditability are divided into sub-aspects, each with different properties. At the bottom of the model are the properties: this is the level where individual ratings are given and then aggregated. The properties are either measured using automated analysis or assessed by experts.

Illustration 3: Aspects in detail

Rating

The individual properties are rated and aggregated with different weightings to form an overall rating. The scale is structured as follows:

Illustration 4: Rating

The auditability of a piece of software is expressed on a scale of 1-5 (with one decimal place). There is no prescription as to which level of auditability is acceptable for a given situation. But as a general guideline, it is recommended that publication only take place if the following criteria are met:

  • An overall rating of 3.0 or better

  • No property has a rating of 1

How the specific rating is handled is up to the publishing body to decide and can be very domain- and context-specific.
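
To make the aggregation and this guideline concrete, here is a minimal sketch (the property names and weights are invented for illustration; the actual properties and weightings are defined by the model):

```python
# Hypothetical property ratings (1-5) and weights; the real model defines
# its own properties and weightings per aspect and sub-aspect.
ratings = {"volume": 4, "duplication": 3, "documentation_quality": 4, "completeness": 5}
weights = {"volume": 0.2, "duplication": 0.3, "documentation_quality": 0.3, "completeness": 0.2}

def overall_rating(ratings: dict[str, int], weights: dict[str, float]) -> float:
    """Aggregate property ratings into an overall 1-5 score with one decimal place."""
    total = sum(ratings[p] * weights[p] for p in ratings)
    return round(total / sum(weights.values()), 1)

def publication_recommended(ratings: dict[str, int], weights: dict[str, float]) -> bool:
    """General guideline: overall rating of 3.0 or better and no property rated 1."""
    return overall_rating(ratings, weights) >= 3.0 and min(ratings.values()) > 1

print(overall_rating(ratings, weights))           # 3.9 for the ratings above
print(publication_recommended(ratings, weights))  # True
```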

Application in practice

When and how is this model applied in practice? At first glance, there are two areas of application:

  • Verification of software for correct functionality or compliance with regulations

  • Publication of software in the sense of confidence building

In both cases, the owners of the software (e.g. public authorities) are responsible for ensuring that their software can be easily audited by third parties (individuals or the public). The responsible parties should hence strive to ensure that the software is developed in such a way that it is easy to understand what it does and how it handles sensitive data.

Typical areas of application are those where a malfunction causes major consequential damage and code reviews are therefore carried out as peer reviews: in medical technology, for example in the control of radiation equipment, or in the military in the use of automated weapon systems. Here the auditability of safe functionality (safety) is in the foreground. Another area of application is software used for aircraft control.

The other area concerns software that handles particularly sensitive data or maps processes subject to a high level of confidentiality, for example software for electronic patient records (e-health) or electronic voting systems. Here, the users of this software must have a high level of trust in it. This can only be achieved if as many experts as possible have access to the code and if it can be audited.

Software is increasingly penetrating critical areas of our personal and public lives. For this reason, the issue of auditability will play an increasingly important role.

We believe that a review of auditability should be an integral part of any review process. If it can be expected that software will be audited during its lifetime, then it is advisable to include auditability as a quality attribute from the beginning of the development process. If it is only considered afterwards, extensive rework and corrections are often necessary, and the associated costs can be prohibitive.

Software can generate trust if it is written with good auditability in mind from the outset. This ensures that third parties can understand from the code what the software does. Its correct functioning can then be confirmed by people with the appropriate knowledge, and, in extreme cases, there is no need to trust a single developer or a single institution.

Background

The Software Quality Services are provided by sieber&partners in cooperation with the Software Improvement Group B.V. and the German TÜV Informationstechnik (TÜViT, part of TÜV Nord).

Table 1: Cooperations/Standards for Software Quality

The SIG database currently contains over 5,000 systems with over 36 billion lines of code in over 285 programming languages and dialects, and it is growing daily. Many of these 5,000+ systems are remeasured continuously over their lifetime.

Table 2: SIG Benchmark and Capabilities

 

Do you have questions about auditability of software?