Metric definitions

Complexity

Complexity (complexity): Complexity refers to cyclomatic complexity, a quantitative metric that counts the number of paths through the code. Whenever the control flow of a function splits, the complexity counter is incremented by one. Each function has a minimum complexity of 1. The calculation varies slightly by language because keywords and control-flow constructs differ between languages.

Cognitive Complexity (cognitive_complexity): A measure of how hard it is to understand the code’s control flow. See the Cognitive Complexity white paper for a complete description of the mathematical model applied to compute this measure.
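To illustrate how the cyclomatic complexity counter works, here is a small hypothetical function (not from any real codebase) annotated with the increments described above; the exact keywords counted vary by language analyzer:

```python
def categorize(value, flags):                 # +1: every function starts at 1
    if value < 0:                             # +1: 'if' splits the control flow
        return "negative"
    for flag in flags:                        # +1: loop splits the control flow
        if flag == "strict" and value == 0:   # +1 for 'if', +1 for 'and'
            return "rejected"
    return "accepted"
# Cyclomatic complexity of categorize: 5
```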

Duplications

Duplicated blocks (duplicated_blocks): The number of duplicated blocks of lines.

Duplicated files (duplicated_files): The number of files involved in duplications.

Duplicated lines (duplicated_lines): The number of lines involved in duplications.

Duplicated lines (%) (duplicated_lines_density): duplicated_lines / (lines of code) * 100
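The density formula above can be sketched as a small helper (an illustrative function, not a SonarQube API):

```python
def duplicated_lines_density(duplicated_lines, lines_of_code):
    """duplicated_lines / (lines of code) * 100"""
    return duplicated_lines * 100 / lines_of_code

# 120 duplicated lines in a 1000-line project -> 12.0 %
print(duplicated_lines_density(120, 1000))
```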

Issues

New issues (new_violations): The number of issues raised for the first time on new code.

New xxx issues (new_xxx_violations): The number of issues of the specified severity raised for the first time on new code, where xxx is one of: blocker, critical, major, minor, info.

Issues (violations): The total count of issues in all states.

xxx issues (xxx_violations): The total count of issues of the specified severity, where xxx is one of: blocker, critical, major, minor, info.

False positive issues (false_positive_issues): The total count of issues marked false positive.

Open issues (open_issues): The total count of issues in the Open state.

Confirmed issues (confirmed_issues): The total count of issues in the Confirmed state.

Reopened issues (reopened_issues): The total count of issues in the Reopened state.

Maintainability

Code smells (code_smells): The total count of code smell issues.

New code smells (new_code_smells): The total count of Code Smell issues raised for the first time on New Code.

Maintainability rating (sqale_rating): (Formerly the SQALE rating.) The rating given to your project relative to the value of your Technical debt ratio. The default Maintainability rating grid is:

A=0-0.05, B=0.06-0.1, C=0.11-0.20, D=0.21-0.5, E=0.51-1

The Maintainability rating scale can be alternately stated by saying that if the outstanding remediation cost is:

  • <=5% of the time already invested in the application, the rating is A
  • between 6% and 10%, the rating is B
  • between 11% and 20%, the rating is C
  • between 21% and 50%, the rating is D
  • anything over 50% is an E
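The default grid above maps directly to a lookup over the debt ratio. A minimal sketch, assuming the ratio is expressed as a fraction of development cost (0.0–1.0), with a hypothetical function name:

```python
def maintainability_rating(debt_ratio):
    """Map a technical debt ratio (fraction, 0.0-1.0) to the default
    A-E grid: A=0-0.05, B=0.06-0.1, C=0.11-0.20, D=0.21-0.5, E=0.51-1."""
    if debt_ratio <= 0.05:
        return "A"
    if debt_ratio <= 0.10:
        return "B"
    if debt_ratio <= 0.20:
        return "C"
    if debt_ratio <= 0.50:
        return "D"
    return "E"
```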

Maintainability rating on new code (new_maintainability_rating): The rating given to the new code on your project relative to the value of your Technical debt ratio. See Maintainability rating above for the maintainability rating grid.

Technical debt (sqale_index): A measure of effort to fix all code smells. The measure is stored in minutes in the database. An 8-hour day is assumed when values are shown in days.

Technical debt on new code (new_technical_debt): a measure of effort required to fix all code smells raised for the first time on new code.

Technical debt ratio (sqale_debt_ratio): The ratio of the cost to fix the software to the cost to develop it. The Technical Debt Ratio formula is: Remediation cost / Development cost
Which can be restated as: Remediation cost / (Cost to develop 1 line of code * Number of lines of code)
The value of the cost to develop a line of code is 0.06 days.
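The restated formula can be sketched as follows; the function name is hypothetical, and the 0.06-days constant is the default stated above:

```python
COST_TO_DEVELOP_ONE_LINE_DAYS = 0.06  # default per the definition above

def technical_debt_ratio(remediation_cost_days, lines_of_code):
    """Remediation cost / (cost per line * number of lines), as a fraction."""
    development_cost_days = COST_TO_DEVELOP_ONE_LINE_DAYS * lines_of_code
    return remediation_cost_days / development_cost_days

# 30 days of remediation on 1000 lines: 30 / (0.06 * 1000) = 0.5
print(technical_debt_ratio(30, 1000))
```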

Technical debt ratio on new code (new_sqale_debt_ratio): The ratio between the cost to develop the code changed on new code and the cost of the issues linked to it.

Quality gates

Quality gate status (alert_status): The state of the quality gate associated with your project. Possible values are ERROR and OK. Note: the WARN value has been removed since SonarQube 7.6.

Quality gate details (quality_gate_details): For each condition of your quality gate, shows whether the condition is passing or failing.

Reliability

Bugs (bugs): The total number of bug issues.

New Bugs (new_bugs): The number of new bug issues.

Reliability Rating (reliability_rating)
A = 0 Bugs
B = at least 1 Minor Bug
C = at least 1 Major Bug
D = at least 1 Critical Bug
E = at least 1 Blocker Bug
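This worst-severity-wins scheme (which the Security rating also uses for vulnerabilities) can be sketched as a small helper; the function and names are illustrative, not a SonarQube API:

```python
SEVERITY_TO_RATING = {
    "blocker": "E",
    "critical": "D",
    "major": "C",
    "minor": "B",
}

def worst_severity_rating(issue_severities):
    """A when there are no issues; otherwise the grade of the worst severity."""
    for severity in ("blocker", "critical", "major", "minor"):
        if severity in issue_severities:
            return SEVERITY_TO_RATING[severity]
    return "A"
```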

Reliability remediation effort (reliability_remediation_effort): The effort to fix all bug issues. The measure is stored in minutes in the DB. An 8-hour day is assumed when values are shown in days.

Reliability remediation effort on new code (new_reliability_remediation_effort): The same as Reliability remediation effort but on the code changed on new code.

Security

Vulnerabilities (vulnerabilities): The number of vulnerability issues.

Vulnerabilities on new code (new_vulnerabilities): The number of new vulnerability issues.

Security Rating (security_rating)
A = 0 Vulnerabilities
B = at least 1 Minor Vulnerability
C = at least 1 Major Vulnerability
D = at least 1 Critical Vulnerability
E = at least 1 Blocker Vulnerability

New Security Rating (new_security_rating): The security rating given to new code (A to E). Calculated based on the number of vulnerabilities detected.  

Security remediation effort (security_remediation_effort): The effort to fix all vulnerability issues. The measure is stored in minutes in the DB. An 8-hour day is assumed when values are shown in days.

Security remediation effort on new code (new_security_remediation_effort): The same as Security remediation effort but on the code changed on New Code.

Security hotspots (security_hotspots): The number of Security Hotspots.

Security hotspots on new code (new_security_hotspots): The number of new Security Hotspots on New Code.

Security review rating (security_review_rating): The security review rating is a letter grade based on the percentage of Reviewed Security Hotspots. Note that security hotspots are considered reviewed if they are marked as Acknowledged, Fixed or Safe.

A = >= 80%
B = >= 70% and <80%
C = >= 50% and <70%
D = >= 30% and <50%
E = < 30%

Security review rating on new code (new_security_review_rating): The security review rating for new code.

Security hotspots reviewed (security_hotspots_reviewed): The percentage of reviewed security hotspots. Ratio formula: Number of Reviewed Hotspots x 100 / (To_Review Hotspots + Reviewed Hotspots)
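The reviewed-hotspots ratio and the rating grid above combine as follows; both function names are illustrative:

```python
def hotspots_reviewed_percent(reviewed, to_review):
    """Reviewed hotspots x 100 / (to-review hotspots + reviewed hotspots)."""
    return reviewed * 100 / (to_review + reviewed)

def security_review_rating(reviewed_percent):
    """A >= 80%, B >= 70%, C >= 50%, D >= 30%, E < 30%."""
    if reviewed_percent >= 80:
        return "A"
    if reviewed_percent >= 70:
        return "B"
    if reviewed_percent >= 50:
        return "C"
    if reviewed_percent >= 30:
        return "D"
    return "E"

# 8 reviewed, 2 still to review -> 80.0 % -> rating A
print(security_review_rating(hotspots_reviewed_percent(8, 2)))
```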

New Security Hotspots Reviewed (new_security_hotspots_reviewed): The percentage of reviewed security hotspots on new code.

Size

Classes (classes): The number of classes (including nested classes, interfaces, enums, and annotations).

Comment lines (comment_lines): The number of lines containing either comment or commented-out code.

Non-significant comment lines (empty comment lines, comment lines containing only special characters, etc.) do not increase the number of comment lines.

The following piece of code contains 9 comment lines:

/**                                            +0 => empty comment line
 *                                             +0 => empty comment line
 * This is my documentation                    +1 => significant comment
 * although I don't                            +1 => significant comment
 * have much                                   +1 => significant comment
 * to say                                      +1 => significant comment
 *                                             +0 => empty comment line
 ***************************                   +0 => non-significant comment
 *                                             +0 => empty comment line
 * blabla...                                   +1 => significant comment
 */                                            +0 => empty comment line

/**                                            +0 => empty comment line
 * public String foo() {                       +1 => commented-out code
 *   System.out.println(message);              +1 => commented-out code
 *   return message;                           +1 => commented-out code
 * }                                           +1 => commented-out code
 */                                            +0 => empty comment line

Comments (%) (comment_lines_density): The comment lines density = comment lines / (lines of code + comment lines) * 100

With such a formula:

  • 50% means that the number of lines of code equals the number of comment lines
  • 100% means that the file only contains comment lines
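The density formula and the two boundary cases above can be sketched as an illustrative helper:

```python
def comment_lines_density(comment_lines, lines_of_code):
    """comment lines / (lines of code + comment lines) * 100"""
    return comment_lines * 100 / (lines_of_code + comment_lines)

print(comment_lines_density(50, 50))  # as many comment lines as code -> 50.0
print(comment_lines_density(30, 0))   # only comment lines -> 100.0
```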

Directories (directories): The number of directories.

Files (files): The number of files.

Lines (lines): The number of physical lines (number of carriage returns).

Lines of code (ncloc): The number of physical lines that contain at least one character which is neither a whitespace nor a tabulation nor part of a comment.

Lines of code per language (ncloc_language_distribution): The non-commented lines of code distributed by language.

Functions (functions): The number of functions. Depending on the language, a function is defined as either a function, a method, or a paragraph.

Projects (projects): The number of projects in a Portfolio.

Statements (statements): The number of statements.

Tests

Condition coverage (branch_coverage): On each line of code containing boolean expressions, the condition coverage answers the following question: ‘Has each boolean expression been evaluated both to true and to false?’. This is the density of possible conditions in flow control structures that have been followed during unit test execution.

Condition coverage = (CT + CF) / (2*B)
where:

  • CT = conditions that have been evaluated to ‘true’ at least once
  • CF = conditions that have been evaluated to ‘false’ at least once
  • B = total number of conditions
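The formula above, as an illustrative helper returning a percentage:

```python
def condition_coverage(ct, cf, b):
    """(CT + CF) / (2 * B), as a percentage.
    ct: conditions evaluated to true at least once
    cf: conditions evaluated to false at least once
    b:  total number of conditions"""
    return (ct + cf) * 100 / (2 * b)

# 4 conditions; 3 seen true, 2 seen false -> 5/8 -> 62.5 %
print(condition_coverage(3, 2, 4))
```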

Condition coverage on new code (new_branch_coverage): This definition is identical to Condition coverage but is restricted to new/updated source code.

Condition coverage hits (branch_coverage_hits_data): A list of covered conditions.

Conditions by line (conditions_by_line): The number of conditions by line.

Covered conditions by line (covered_conditions_by_line): The number of covered conditions by line.

Coverage (coverage): A mix of Line coverage and Condition coverage. Its goal is to provide an even more accurate answer to the question ‘How much of the source code has been covered by the unit tests?’.

Coverage = (CT + CF + LC)/(2*B + EL)
where:

  • CT = conditions that have been evaluated to ‘true’ at least once
  • CF = conditions that have been evaluated to ‘false’ at least once
  • LC = covered lines = lines_to_cover – uncovered_lines
  • B = total number of conditions
  • EL = total number of executable lines (lines_to_cover)
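The combined formula, as an illustrative helper built from the raw metrics named above:

```python
def coverage(ct, cf, lines_to_cover, uncovered_lines, b):
    """(CT + CF + LC) / (2 * B + EL), as a percentage.
    LC = lines_to_cover - uncovered_lines (covered lines)
    EL = lines_to_cover (executable lines)
    B  = total number of conditions"""
    lc = lines_to_cover - uncovered_lines
    el = lines_to_cover
    return (ct + cf + lc) * 100 / (2 * b + el)

# Every condition hit both ways and every line covered -> 100.0 %
print(coverage(4, 4, 10, 0, 4))
```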

Coverage on new code (new_coverage): This definition is identical to Coverage but is restricted to new/updated source code.

Line coverage (line_coverage): On a given line of code, Line coverage simply answers the question ‘Has this line of code been executed during the execution of the unit tests?’. It is the density of covered lines by unit tests:

Line coverage = LC / EL
where:

  • LC = covered lines (lines_to_cover – uncovered_lines)
  • EL = total number of executable lines (lines_to_cover)

Line coverage on new code (new_line_coverage): This definition is identical to Line coverage but restricted to new/updated source code.

Line coverage hits (coverage_line_hits_data): A list of covered lines.

Lines to cover (lines_to_cover): The number of lines of code that could be covered by unit tests (for example, blank lines or comment-only lines are not counted as lines to cover).

Lines to cover on new code (new_lines_to_cover): This definition is identical to Lines to cover but restricted to new/updated source code.

Skipped unit tests (skipped_tests): The number of skipped unit tests.

Uncovered conditions (uncovered_conditions): The number of conditions that are not covered by unit tests.

Uncovered conditions on new code (new_uncovered_conditions): This definition is identical to Uncovered conditions but restricted to new/updated source code.

Uncovered lines (uncovered_lines): The number of lines of code that are not covered by unit tests.

Uncovered lines on new code (new_uncovered_lines): This definition is identical to Uncovered lines but restricted to new/updated source code.

Unit tests (tests): The number of unit tests.

Unit tests duration (test_execution_time): The time required to execute all the unit tests.

Unit test errors (test_errors): The number of unit tests that have failed.

Unit test failures (test_failures): The number of unit tests that have failed with an unexpected exception.

Unit test success density (%) (test_success_density): Test success density = (Unit tests – (Unit test errors + Unit test failures)) / (Unit tests) * 100