Evaluation matrix

This section describes different scenarios that can apply to software applications, depending on the KPI thresholds and criticality levels defined. Each scenario is defined by a set of indicators and their values or ranges, and is equivalent to the Quality Gate concept proposed by SonarQube.

The same KPIs are checked in all Quality Gates; what actually varies between Quality Gate types is the threshold defined for each KPI.

  • Blocker issues
  • Duplicated lines (%)
  • New critical issues since last version
  • Public documented API (%)
  • Technical debt
  • Technical debt ratio
  • Test coverage

You can read more about these indicators in the Quality Gate section.

Quality Gate Definitions

Each Quality Gate defines a set of code metrics that must be achieved for the Quality Gate to be considered passed.

The following figure depicts the Quality Gates defined. The first column contains the name of each scenario (Quality Gate) and the remaining columns show the metrics and thresholds for each scenario.

Evaluation matrix 1

Quality Gate Details

Low

The Low Quality Gate is the least restrictive quality gate. The metrics that must be fulfilled to get the Passed status are:

  • Blocker issues: 0 occurrences. Any value greater than 0 means an error (Not Passed status).
  • Duplicated lines (%): warning if greater than 20% and error if greater than 50%.
  • New critical issues since last version: warning if greater than 2 occurrences and error if greater than 20 occurrences.
  • Public documented API (%): warning if less than 40% and error if less than 25%.
  • Technical debt: warning if greater than 30 days and error if greater than 60 days.
  • Technical debt ratio: warning if greater than 20% and error if greater than 40%.
  • Test coverage: warning if less than 50% and error if less than 10%.

These metrics only ensure a minimal level of code quality: no blocker issues, to reduce the vulnerabilities associated with bad coding practices, plus minimal test coverage and documentation.
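The pass/warning/error logic above can be sketched in a few lines of Python. This is an illustrative model, not SonarQube code: the dictionary keys, the `(warning, error, direction)` threshold tuples and the `evaluate` function are all assumptions made for this example, populated with the Low Quality Gate thresholds listed above.

```python
# Illustrative sketch (not SonarQube's API): evaluate a metrics snapshot
# against the Low Quality Gate thresholds listed above.
# For each KPI: (warning threshold, error threshold, direction), where
# direction "max" means the value must stay below the thresholds and
# "min" means it must stay above them.
LOW_GATE = {
    "blocker_issues":       (0, 0, "max"),
    "duplicated_lines_pct": (20, 50, "max"),
    "new_critical_issues":  (2, 20, "max"),
    "documented_api_pct":   (40, 25, "min"),
    "technical_debt_days":  (30, 60, "max"),
    "debt_ratio_pct":       (20, 40, "max"),
    "test_coverage_pct":    (50, 10, "min"),
}

def evaluate(metrics, gate):
    """Return the overall status: 'Passed', 'Warning' or 'Error'."""
    status = "Passed"
    for kpi, (warn, err, direction) in gate.items():
        value = metrics[kpi]
        if direction == "max":
            level = "Error" if value > err else "Warning" if value > warn else "Passed"
        else:
            level = "Error" if value < err else "Warning" if value < warn else "Passed"
        if level == "Error":
            return "Error"          # any single error fails the gate
        if level == "Warning":
            status = "Warning"      # warnings accumulate but do not fail
    return status

# Example snapshot of a small project's metrics (made-up values).
snapshot = {
    "blocker_issues": 0, "duplicated_lines_pct": 12.0,
    "new_critical_issues": 1, "documented_api_pct": 55.0,
    "technical_debt_days": 8, "debt_ratio_pct": 6.0,
    "test_coverage_pct": 64.0,
}
print(evaluate(snapshot, LOW_GATE))  # Passed
```

Swapping in the Standard or Hard thresholds only requires a different threshold dictionary; the evaluation logic is the same, which mirrors the point made earlier that only the thresholds vary between gates.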

Standard

Most software applications will fit the Standard Quality Gate. The metrics that must be met in order to achieve a Passed status are:

  • Blocker issues: 0 occurrences. Any value greater than 0 means an error (Not Passed status).
  • Duplicated lines (%): warning if greater than 10% and error if greater than 25%.
  • New critical issues since last version: warning if greater than 1 occurrence and error if greater than 10 occurrences.
  • Public documented API (%): warning if less than 80% and error if less than 50%.
  • Technical debt: warning if greater than 10 days and error if greater than 30 days.
  • Technical debt ratio: warning if greater than 10% and error if greater than 20%.
  • Test coverage: warning if less than 70% and error if less than 50%.

These metrics are considered sufficient to ensure that the application is production ready in terms of code quality.

Hard

This Quality Gate is the most restrictive. The metrics that comprise this scenario are:

  • Blocker issues: 0 occurrences. Any value greater than 0 means an error (Not Passed status).
  • Duplicated lines (%): warning if greater than 5% and error if greater than 15%.
  • New critical issues since last version: error if greater than 0 occurrences.
  • Public documented API (%): warning if less than 80% and error if less than 60%.
  • Technical debt: warning if greater than 10 days and error if greater than 10 days.
  • Technical debt ratio: warning if greater than 10% and error if greater than 20%.
  • Test coverage: warning if less than 90% and error if less than 70%.

The main change compared to the Standard Quality Gate is the Test coverage threshold. The Hard Quality Gate focuses on the need for unit tests, to make sure that a large percentage of the code is tested.

Application Criticality

There are several types of applications in terms of criticality:

  • Mission critical applications: applications that are essential to business operation or to an organization.

  • Standard applications: most applications in an organization fit this classification. These applications can tolerate some downtime and do not require strict bug-fixing periods.

  • Non-critical applications: proofs of concept, temporary workarounds, deprecated applications, etc.

Evaluation matrix 2

The term Downtime refers to periods when a system is unavailable. When an application's downtime is lower than 1% (about 15 minutes per day), it can be considered a Mission Critical application. When an application's downtime is between 1% and 50% (up to 12 hours per day), it can be considered a Standard application. Downtime greater than 50% means that the application's status has minimal impact on the business or operation.

However, sometimes the downtime metric cannot be used to classify the application. In those cases, two other factors can be used: the Number of Defects per Line of Code ratio (NoDpL) and the Maximum Time to Deliver. The standard NoDpL ratio in the industry is usually between 15/1000 and 50/1000 (defects per 1000 lines of code). When the ratio is below 15/1000, the application can be considered a Mission Critical application. When the ratio is above 50/1000, users are more fault tolerant and the application can be considered a Non-critical application.

The Maximum Time to Deliver factor classifies criticality based on the maximum time users can wait for detected bugs to be fixed, that is, the maximum time needed to fix the bug, create a new release and deploy it. If the maximum time to deliver must be lower than one workday (depending on the bug priority), the application can be classified as a Mission Critical application.

If the maximum time to deliver is between 1 and 2 weekdays, the application is a Standard application. Higher values correspond to Non-critical applications.
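The three classification factors described above (downtime, NoDpL and maximum time to deliver) can be combined into a simple decision rule. The sketch below is a hypothetical classifier written for this document; the function name, parameter names and the rule of checking downtime first are assumptions, not a standard API.

```python
# Hypothetical classifier for application criticality, using the three
# factors described above. Thresholds are taken directly from the text:
# downtime 1% / 50%, NoDpL 15 / 50 defects per 1000 lines, and a maximum
# time to deliver of 1 / 2 weekdays.
def classify(downtime_pct=None, nodpl_per_kloc=None, max_days_to_deliver=None):
    """Return 'Mission critical', 'Standard' or 'Non-critical'."""
    if downtime_pct is not None:
        if downtime_pct < 1:
            return "Mission critical"
        return "Standard" if downtime_pct <= 50 else "Non-critical"
    if nodpl_per_kloc is not None:
        if nodpl_per_kloc < 15:
            return "Mission critical"
        return "Standard" if nodpl_per_kloc <= 50 else "Non-critical"
    if max_days_to_deliver is not None:
        if max_days_to_deliver < 1:
            return "Mission critical"
        return "Standard" if max_days_to_deliver <= 2 else "Non-critical"
    raise ValueError("at least one classification factor is required")

print(classify(downtime_pct=0.5))       # Mission critical
print(classify(nodpl_per_kloc=30))      # Standard
print(classify(max_days_to_deliver=5))  # Non-critical
```

As in the text, the downtime metric is preferred when it is available, and the other two factors are fallbacks for when it is not.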

Software Evaluation

There are three main situations when evaluating the quality of an application:

  1. The Project Management team knows the criticality values of the application.
  2. The Project Management team cannot classify the application because the information about certain factors is not known yet.
  3. Regardless of the classification model, the Project Management team wants to know the state of the software quality.

The first scenario is expected to be the most frequent. When the project is deployed on the Sonar web portal, the Quality Team will select the appropriate Quality Gate:

  • Mission critical --> Hard Quality Gate
  • Standard applications --> Standard Quality Gate
  • Non-critical --> Low Quality Gate
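The criticality-to-gate mapping above is small enough to express as a lookup table. This is an illustrative sketch; the dictionary and function names are inventions for this example, and in SonarQube itself the gate would be assigned to the project through the web portal rather than in code.

```python
# Sketch of the criticality-to-gate mapping listed above (names are
# illustrative, not part of SonarQube's API).
GATE_FOR_CRITICALITY = {
    "Mission critical": "Hard",
    "Standard": "Standard",
    "Non-critical": "Low",
}

def select_quality_gate(criticality):
    """Return the Quality Gate name for a given application criticality."""
    return GATE_FOR_CRITICALITY[criticality]

print(select_quality_gate("Mission critical"))  # Hard
```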

The other two scenarios are handled the same way: the Quality Team will configure three subprojects in Sonar corresponding to the three Quality Gates (Hard, Standard and Low). The application will be evaluated three times (once for each scenario) and, once the analysis is completed, the Project Management team will know exactly which quality policies the application is able to meet.