Documentation Metrics


All metrics used in the quality model are described below, with useful information and references. They are classified according to their source. Note that several other metrics may be retrieved but are not used in the quality model.
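Each metric entry below carries a Scale line listing the thresholds used to map a raw value onto a 1–5 score. As a minimal sketch of how such an ascending scale is applied (the function name and signature are illustrative, not the tool's actual API; descending scales, where lower raw values are better, simply list their thresholds in decreasing order and reverse the comparison):

```python
# Illustrative sketch: map a raw metric value onto a 1-5 score using the
# ascending thresholds from a "Scale" line. Not the tool's actual API.
def score(value, thresholds):
    """thresholds: ascending cut-offs, e.g. [10, 15, 20, 30] for COMMENT_LINES_DENSITY."""
    s = 1
    for t in thresholds:
        if value >= t:
            s += 1
    return s

# A comment rate of 18% falls between the 15 and 20 cut-offs, so it scores 3.
print(score(18, [10, 15, 20, 30]))  # 3
```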



  • Average number of attributes ( ATTRS )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Average number of attributes defined in a class. This can be compared to the total number of operands (TOPD) considered at the class level, and is representative of the data complexity of the class (not including the complexity of methods).

  • Average number of public attributes ( ATTRS_PUBLIC )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Average number of attributes defined in classes with a public modifier.

    The number of public attributes is meaningful for reusability: if there are many public attributes available, it may be more time-consuming to find the right one. Also, good object-oriented programming practice recommends using getters and setters in most cases.

  • Percentage of branches covered by tests ( BRANCH_COVERAGE )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 0 ≤ 2 < 15 ≤ 3 < 36.5 ≤ 4 < 73.3 ≤ 5

    The percentage of branches that are exercised by the tests. The SonarQube measure used for this purpose is branch_coverage.

    For each line of code containing a boolean expression, branch coverage answers the following question: has each possible branch (true and false outcome) been exercised during the execution of the unit tests?

  • Number of jobs ( CI_JOBS )

    Provided by: Hudson

    Used by:

    Scale: 1 < ≤ 2 < ≤ 3 < ≤ 4 < ≤ 5

    The total number of jobs defined on the Hudson engine.

  • Number of failed jobs last week ( CI_JOBS_FAILED_1W )

    Provided by: Hudson

    Used by: QM_ACTIVITY

    Scale: 1 < 10 ≤ 2 < 5 ≤ 3 < 3 ≤ 4 < 0 ≤ 5

    The number of jobs that failed during last week on the Hudson engine.

  • Number of green jobs ( CI_JOBS_GREEN )

    Provided by: Hudson

    Used by:

    Scale: 1 < 0 ≤ 2 < 3 ≤ 3 < 5 ≤ 4 < 10 ≤ 5

    The number of green (successful) jobs on the Hudson engine.

    Green (or blue) jobs in Hudson define successful builds.

  • Ratio of green jobs ( CI_JOBS_GREEN_RATIO )

    Provided by: Hudson

    Used by: QM_REL_ENG

    Scale: 1 < 25 ≤ 2 < 50 ≤ 3 < 75 ≤ 4 < 95 ≤ 5

    The number of green (successful) jobs on the Hudson engine, divided by the total number of jobs.

    Green (or blue) jobs in Hudson define successful builds.
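As a sketch of the ratio computation, assuming the raw green and total job counts are available (function name is illustrative):

```python
# Illustrative sketch: ratio of green (successful) jobs, as a percentage.
def green_ratio(green_jobs, total_jobs):
    if total_jobs == 0:
        return 0.0  # no jobs defined: avoid division by zero
    return 100.0 * green_jobs / total_jobs

print(green_ratio(19, 20))  # 95.0, which reaches the top of the scale (>= 95)
```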

  • Number of red jobs ( CI_JOBS_RED )

    Provided by: Hudson

    Used by:

    Scale: 1 < 10 ≤ 2 < 5 ≤ 3 < 3 ≤ 4 < 0 ≤ 5

    The number of red (failed) jobs on the Hudson engine.

    Red jobs in Hudson define failed builds.

  • Number of yellow jobs ( CI_JOBS_YELLOW )

    Provided by: Hudson

    Used by:

    Scale: 1 < 10 ≤ 2 < 5 ≤ 3 < 3 ≤ 4 < 0 ≤ 5

    The number of yellow (unstable) jobs on the Hudson engine.

    Yellow jobs in Hudson define unstable builds. According to Hudson's documentation, a build is unstable if it was built successfully and one or more publishers report it unstable. For example if the JUnit publisher is configured and a test fails then the build will be marked unstable.

  • Number of classes ( CLASSES )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 100 ≤ 2 < 1000 ≤ 3 < 10000 ≤ 4 < 100000 ≤ 5

    The number of Java classes, including nested classes, interfaces, enums and annotations.

    The SonarQube measure used for this purpose is classes. This metric is often used as a rule-of-thumb indicator for the size of software, and to make other measures relative.

    See also SonarQube's definitions for size metrics: http://docs.sonarqube.org/display/SONAR/Metric+definitions#Metricdefinitions-Size.

  • Comment rate ( COMMENT_LINES_DENSITY )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 10 ≤ 2 < 15 ≤ 3 < 20 ≤ 4 < 30 ≤ 5

    The ratio of comment lines to the total number of lines (code plus comments).

    Density of comment lines = Comment lines / (Lines of code + Comment lines) * 100. With this formula, 50% means that the number of lines of code equals the number of comment lines, and 100% means that the file contains only comment lines. The SonarQube measure used for this purpose is comment_lines_density.

    See also SonarQube's page on comment lines metrics: http://docs.sonarqube.org/display/SONAR/Metrics+-+Comment+lines.
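The formula above can be sketched directly (function name is illustrative):

```python
# Illustrative sketch of SonarQube's comment density formula:
# comment_lines / (lines_of_code + comment_lines) * 100
def comment_lines_density(lines_of_code, comment_lines):
    total = lines_of_code + comment_lines
    if total == 0:
        return 0.0  # empty file: avoid division by zero
    return 100.0 * comment_lines / total

print(comment_lines_density(800, 200))  # 20.0
```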

  • Number of downloads on the web site ( DL_REPO_1M )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The number of downloads from the web site of the project (download page) during the last month. Example of a download page (for CDT): http://www.eclipse.org/downloads/ .

  • Number of downloads on update site ( DL_UPDATE_SITE_1M )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The number of downloads from the update site of the project during the last month. It is computed using successful and failed installs from the Marketplace.

  • Cloning density ( DUPLICATED_LINES_DENSITY )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 50 ≤ 2 < 40 ≤ 3 < 30 ≤ 4 < 10 ≤ 5

    The number of duplicated lines in the code, divided by the total number of lines, expressed as a percentage. The SonarQube metric used for this purpose is duplicated_lines_density.

    See also SonarQube's definition for duplications: http://docs.sonarqube.org/display/SONAR/Metric+definitions#Metricdefinitions-Duplications. There is a longer description of the code cloning algorithm here: http://docs.sonarqube.org/display/SONAR/Duplications.

  • Number of files ( FILES )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 100 ≤ 2 < 1000 ≤ 3 < 10000 ≤ 4 < 100000 ≤ 5

    The number of code files (e.g. with a .java extension for the Java language) in the source code hierarchy. The SonarQube measure used for this purpose is files. This metric is often used as a rule-of-thumb indicator for the size of software, and to make other measures relative.

  • Number of functions ( FUNCTIONS )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 100 ≤ 2 < 1000 ≤ 3 < 10000 ≤ 4 < 100000 ≤ 5

    The number of functions defined in the source files.

    The SonarQube measure used for this purpose is functions. This metric is often used as a rule-of-thumb indicator for the size of software, and to make other measures relative.

    See also SonarQube's definitions for size metrics: http://docs.sonarqube.org/display/SONAR/Metric+definitions#Metricdefinitions-Size. More details on the metric: http://docs.sonarqube.org/display/SONAR/Metrics+-+Methods.

  • Average Cyclomatic Complexity ( FUNCTION_COMPLEXITY )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 5.3 ≤ 2 < 3.5 ≤ 3 < 2.8 ≤ 4 < 2.3 ≤ 5

    The average number of execution paths found in functions.

    Basically, the counter is incremented every time the control flow of the function splits, with any function having at least a cyclomatic number of 1. The sum of cyclomatic numbers for all functions is then divided by the number of functions. In SonarQube the measure is function_complexity.

    The cyclomatic number is a measure borrowed from graph theory and was introduced to software engineering by McCabe in [McCabe1976]. It is defined as the number of linearly independent paths through the program. For good testability and maintainability, McCabe recommends that no program module (or method, in the case of Java) exceed a cyclomatic number of 10. It is primarily defined at the function level and is summed up for higher levels of artefacts.

    See also Wikipedia's entry on cyclomatic complexity: http://en.wikipedia.org/wiki/Cyclomatic_complexity.

    See also SonarQube's definition for code complexity: http://docs.sonarqube.org/display/SONAR/Metrics+-+Complexity. There is also a discussion about its meaning here: http://www.sonarqube.org/discussing-cyclomatic-complexity/

    See also the Maisqual wiki for more details on complexity measures: http://maisqual.squoring.com/wiki/index.php/Category:Complexity_Metrics.
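The counting rule above (one plus the number of control-flow splits, averaged over all functions) can be approximated for Python source with the standard ast module. This is a rough sketch, not the exact SonarQube implementation: for instance, a chained `and`/`or` expression is counted once per BoolOp node here, whereas tools may count each operator.

```python
import ast

# Node types that split the control flow (a rough approximation of decision points).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic(func_node):
    # McCabe's number: 1 + number of decision points in the function body.
    return 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(func_node))

source = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
tree = ast.parse(source)
funcs = [n for n in tree.body if isinstance(n, ast.FunctionDef)]
complexities = [cyclomatic(f) for f in funcs]
# classify() has two decision points (if, elif), so its cyclomatic number is 3.
print(sum(complexities) / len(complexities))  # 3.0
```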

  • IP log code coverage ( IP_LOG_COV )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Percentage of contributions covered by an appropriate IP log.

    The Eclipse Foundation has a very strict IP policy, so this measure should always be 100%.

  • Defect density ( ITS_BUGS_DENSITY )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 3.36826347305 ≤ 2 < 0.0755602867593 ≤ 3 < 0.0335380213652 ≤ 4 < 0.009367219332 ≤ 5

    Ratio of the total number of tickets in the issue tracking system to the size of the source code in KLOC, at the time of the data retrieval.

    The subset of tickets considered from the issue tracking system are those specified in the project documentation, for example by specifying a tracker or a project name appearing in ticket data. Source code is all source code found in the source code management repository.
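The ratio above is a straightforward division (function name is illustrative):

```python
# Illustrative sketch: defect density as tickets per thousand lines of code (KLOC).
def defect_density(ticket_count, lines_of_code):
    kloc = lines_of_code / 1000.0
    if kloc == 0:
        return 0.0  # no source code measured: avoid division by zero
    return ticket_count / kloc

# 150 tickets against 200,000 lines of code gives 0.75 tickets per KLOC,
# which the scale above rates between 2 and 3.
print(defect_density(150, 200_000))  # 0.75
```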

  • ITS issues changed lifetime ( ITS_CHANGED )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of updates on issues during the overall time range covered by the analysis.

    The subset of tickets considered from the issue tracking system are those specified in the project documentation, for example by specifying a tracker or a project name appearing in ticket data. Update means an operation on a ticket changing its state, or adding information. Opening and closing a ticket are considered as updates. Identities of updaters are the character strings found in the corresponding field in the ticket information. Time range is measured as a calendar month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, period is from Jan 3rd to Feb 2nd, both included).

  • ITS issues changers lifetime ( ITS_CHANGERS )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of distinct identities updating tickets during the overall time range covered by the analysis.

    The subset of tickets considered from the issue tracking system are those specified in the project documentation, for example by specifying a tracker or a project name appearing in ticket data. Update means an operation on a ticket changing its state, or adding information. Opening and closing a ticket are considered as updates. Identities of updaters are the character strings found in the corresponding field in the ticket information. Time range is measured as a calendar month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, period is from Jan 3rd to Feb 2nd, both included).

  • ITS issues closed lifetime ( ITS_CLOSED )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Number of issues closed during the overall time range covered by the analysis.

    The subset of tickets considered from the issue tracking system are those specified in the project documentation, for example by specifying a tracker or a project name appearing in ticket data. Closed means that the ticket was moved to a closed or resolved state in the issue tracking system. Identities of closers are the character strings found in the corresponding field in the ticket information. Time range is measured as a calendar month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, the period is from Jan 3rd to Feb 2nd, both included).

  • Number of issues closed during last month ( ITS_CLOSED_30 )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Number of issues closed during the last month in the issue tracking system.

    The subset of tickets considered from the issue tracking system are those specified in the project documentation, for example by specifying a tracker or a project name appearing in ticket data. Closed means that the ticket was moved to a closed or resolved state in the issue tracking system. Time range is measured as a calendar month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, the period is from Jan 3rd to Feb 2nd, both included).

  • ITS issue closers lifetime ( ITS_CLOSERS )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of distinct identities closing tickets during the overall time range covered by the analysis.

    The subset of tickets considered from the issue tracking system are those specified in the project documentation, for example by specifying a tracker or a project name appearing in ticket data. Update means an operation on a ticket changing its state, or adding information. Opening and closing a ticket are considered as updates. Identities of updaters are the character strings found in the corresponding field in the ticket information. Time range is measured as a calendar month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, period is from Jan 3rd to Feb 2nd, both included).

  • ITS closers ( ITS_CLOSERS_30 )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of distinct identities closing tickets during the last month in the issue tracking system.

    The subset of tickets considered from the issue tracking system are those specified in the project documentation, for example by specifying a tracker or a project name appearing in ticket data. Update means an operation on a ticket changing its state, or adding information. Opening and closing a ticket are considered as updates. Identities of updaters are the character strings found in the corresponding field in the ticket information. Time range is measured as a calendar month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, period is from Jan 3rd to Feb 2nd, both included).

  • ITS issues opened lifetime ( ITS_OPENED )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Number of issues opened during the overall time range covered by the analysis.

    The subset of tickets considered from the issue tracking system are those specified in the project documentation, for example by specifying a tracker or a project name appearing in ticket data. Opened means that the ticket information was submitted to the issue tracking system for the first time for that ticket. Identities of openers are the character strings found in the corresponding field in the ticket information. Time range is measured as a calendar month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, the period is from Jan 3rd to Feb 2nd, both included). Line counts are those calculated at the end of the period of analysis.

  • ITS issue openers lifetime ( ITS_OPENERS )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Number of distinct identities opening tickets during the overall time range covered by the analysis.

    The subset of tickets considered from the issue tracking system are those specified in the project documentation, for example by specifying a tracker or a project name appearing in ticket data. Opened means that the ticket information was submitted to the issue tracking system for the first time for that ticket. Identities of openers are the character strings found in the corresponding field in the ticket information. Time range is measured as a calendar month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, the period is from Jan 3rd to Feb 2nd, both included). Line counts are those calculated at the end of the period of analysis.

  • ITS updates ( ITS_UPDATES_3M )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 4 ≤ 2 < 13 ≤ 3 < 37 ≤ 4 < 596 ≤ 5

    Number of updates to tickets during the last three months, in the issue tracking system.

    The subset of tickets considered from the issue tracking system are those specified in the project documentation, for example by specifying a tracker or a project name appearing in ticket data. Update means an operation on a ticket changing its state, or adding information. Opening and closing a ticket are considered as updates. Time range is measured as a three calendar month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, the period is from Nov 3rd to Feb 2nd, both included).

  • JIRA authors ( JIRA_AUTHORS )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of different authors who created issues during the lifetime of the project.

    A high number of authors shows diversity and improves the bus factor of the project.

  • JIRA authors last month ( JIRA_AUTHORS_1M )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of authors who created issues during last month. If today is 2017-02-01 then the range is from 2017-01-01 to 2017-02-01.

  • JIRA authors last week ( JIRA_AUTHORS_1W )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of authors who created issues during last week. If today is Wed. 2017-02-01 then the range is from Wed. 2017-01-25 to Wed. 2017-02-01.

  • JIRA authors last year ( JIRA_AUTHORS_1Y )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of authors who created issues during last year. If today is 2017-02-01 then the range is from 2016-02-01 to 2017-02-01.

  • JIRA issues created last month ( JIRA_CREATED_1M )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of issues created during last month. If today is 2017-02-01 then the range is from 2017-01-01 to 2017-02-01.

  • JIRA issues created last week ( JIRA_CREATED_1W )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of issues created during last week. If today is Wed. 2017-02-01 then the range is from Wed. 2017-01-25 to Wed. 2017-02-01.

  • JIRA issues created last year ( JIRA_CREATED_1Y )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of issues created during last year. If today is 2017-02-01 then the range is from 2016-02-01 to 2017-02-01.

  • JIRA late issues ( JIRA_LATE )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of issues with a past due date.

    It is considered good practice to keep this number low: either fix the issue or update its due date.

  • JIRA open issues ( JIRA_OPEN )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Number of issues with a state 'open' at the time of analysis.

  • JIRA open issues (%) ( JIRA_OPEN_PERCENT )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Percentage of open issues compared to the overall number of issues registered in the system.

  • JIRA pending issues ( JIRA_OPEN_UNASSIGNED )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Number of issues in state open with no assignee (i.e. pending).

    It is considered good practice to keep this number low. In an active project, people either work on a bug (i.e. assign it) or triage it (move it to another state).

  • JIRA issues updated last month ( JIRA_UPDATED_1M )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of issues updated during last month. If today is 2017-02-01 then the range is from 2017-01-01 to 2017-02-01.

  • JIRA issues updated last week ( JIRA_UPDATED_1W )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of issues updated during last week. If today is Wed. 2017-02-01 then the range is from Wed. 2017-01-25 to Wed. 2017-02-01.

  • JIRA issues updated last year ( JIRA_UPDATED_1Y )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 2 ≤ 2 < 4 ≤ 3 < 9.75 ≤ 4 < 80 ≤ 5

    Number of issues updated during last year. If today is 2017-02-01 then the range is from 2016-02-01 to 2017-02-01.

  • JIRA total issues ( JIRA_VOL )

    Provided by: JiraIts

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Number of issues registered in the database, whatever their state is.

  • Number of jobs ( JOBS )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The total number of jobs defined on the Hudson engine.

  • Number of failed jobs ( JOBS_FAILED_1W )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The number of jobs that failed during last week on the Hudson engine.

  • Number of green jobs ( JOBS_GREEN )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The number of green (successful) jobs on the Hudson engine.

    Green (or blue) jobs in Hudson define successful builds.

  • Number of red jobs ( JOBS_RED )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The number of red (failed) jobs on the Hudson engine.

    Red jobs in Hudson define failed builds.

  • Number of yellow jobs ( JOBS_YELLOW )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The number of yellow (unstable) jobs on the Hudson engine.

    Yellow jobs in Hudson define unstable builds. According to Hudson's documentation, a build is unstable if it was built successfully and one or more publishers report it unstable. For example if the JUnit publisher is configured and a test fails then the build will be marked unstable.

  • License identification ( LIC_IDENT )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Is there a list of the different licences used in the product?

    This measure may use an external tool like Ninka (ninka.turingmachine.org) to identify the licence of components used in the project. Another way would be to identify a file named LICENCES in the root folder.

  • Percentage of lines of code covered by tests ( LINE_COVERAGE )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 0 ≤ 2 < 20 ≤ 3 < 48.89 ≤ 4 < 87.7 ≤ 5

    The percentage of source lines of code that is exercised by the tests. The SonarQube measure used for this purpose is line_coverage.

  • Depth of Inheritance Tree ( MDIT )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The depth of inheritance tree of a class is defined as the maximum length of the path from the considered class to the root of the class hierarchy tree, measured by the number of ancestor classes. In cases involving multiple inheritance, the MDIT is the maximum length from the node to the root of the tree [Chidamber1994].

    A deep inheritance tree makes the object-oriented architecture difficult to understand. Well-structured OO systems have a forest of classes rather than one large inheritance lattice. The deeper a class is within the hierarchy, the greater the number of methods it is likely to inherit, making it more complex to predict its behavior and, therefore, more fault-prone [Chidamber1994]. However, the deeper a class is in the hierarchy, the greater the potential reuse of inherited methods [Chidamber1994].

    See also SonarQube's page on the depth in tree: http://docs.sonarqube.org/display/SONAR/Metrics+-+Depth+in+Tree
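As a sketch, the depth of inheritance can be computed for Python classes by walking their base classes; with multiple inheritance, the longest path to the root is taken, as in the definition above (function name is illustrative, and Python's implicit root `object` is counted as an ancestor here):

```python
# Illustrative sketch: depth of inheritance tree, counting ancestor classes
# down to the implicit root `object`; with multiple bases, take the longest path.
def inheritance_depth(cls):
    if cls is object or not cls.__bases__:
        return 0
    return 1 + max(inheritance_depth(b) for b in cls.__bases__)

class A: pass
class B(A): pass
class C(B): pass

# C has three ancestors: B, A, and the implicit object.
print(inheritance_depth(C))  # 3
```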

  • Average number of methods ( METHS )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Average number of methods defined in a class. This can be compared to the total number of operators (TOPT) considered at the class level.

  • Average number of public methods ( METHS_PUBLIC )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Average number of methods defined in classes with a public modifier.

    The number of public methods is meaningful for reusability: if there are many public methods available, it may be more time-consuming to find the right one.

  • Number of favourites on the Marketplace ( MKT_FAV )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 10 ≤ 2 < 30 ≤ 3 < 70 ≤ 4 < 300 ≤ 5

    The number of favourites on the Eclipse Marketplace for the project.

    This measure uses the MarketPlace REST API to retrieve the number of registered users who marked this project as a favourite. If the project is not registered on the Marketplace, the metric should be zero.

  • Number of successful installs on the Marketplace ( MKT_INSTALL_SUCCESS_1M )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 100 ≤ 2 < 500 ≤ 3 < 1000 ≤ 4 < 10000 ≤ 5

    The number of successful installs for the project as reported on the Marketplace during the last month.

    This measure uses the list of successful installs available on the Marketplace for every registered project. If the project is not registered on the Marketplace, the metric should be zero.

  • ML authors ( MLS_SENDERS_30 )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1.5 ≤ 2 < 3 ≤ 3 < 7.5 ≤ 4 < 26 ≤ 5

    Number of distinct senders for messages dated during the last month, in developer mailing list archives.

    Developer mailing list is the list or lists considered as 'for developers' in the project documentation. The date used is the mailing list server date, as stamped in the message. Time range is measured as a one calendar month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, the period is from Jan 3rd to Feb 2nd, both included). Distinct senders are those with distinct email addresses. Email addresses used are the strings found in the 'From:' fields of messages.

    Note that developer communications are measured through the mailing list, and user communications are measured through the forums.

  • ML posts ( MLS_SENT_30 )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 2 ≤ 2 < 6 ≤ 3 < 22 ≤ 4 < 103 ≤ 5

    Number of messages dated during the last month, in developer mailing list archives.

    Developer mailing list is the list or lists considered as 'for developers' in the project documentation. The date used is the mailing list server date, as stamped in the message. Time range is measured as one calendar month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, period is from Jan 3rd to Feb 2nd, both included).

    Note that developer communications are measured through the mailing list, and user communications are measured through the forums.

  • ML sent responses ( MLS_SENT_RESPONSE )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 2 ≤ 2 < 6 ≤ 3 < 22 ≤ 4 < 103 ≤ 5

    Number of responses (i.e. replies in a thread) in developer mailing list archives.

    Threads are identified using 'In-Reply-To' message headers. Developer mailing list is the list or lists considered as 'for developers' in the project documentation.

    Note that developer communications are measured through the mailing list, and user communications are measured through the forums.

  • ML threads ( MLS_THREADS )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 2 ≤ 2 < 6 ≤ 3 < 22 ≤ 4 < 103 ≤ 5

    Number of threads (i.e. an email and its replies) in developer mailing list archives.

    Threads are identified using 'In-Reply-To' message headers. Developer mailing list is the list or lists considered as 'for developers' in the project documentation.

    Note that developer communications are measured through the mailing list, and user communications are measured through the forums.

  • Number of non-conformities for analysability ( NCC_ANA )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The total number of violations of rules that impact analysability in the source code.

    See also the page on practices for a list of checked rules and their associated impact on quality characteristics.

  • Average number of non-conformities for analysability ( NCC_ANA_IDX )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 0.5 ≤ 3 < 0.3 ≤ 4 < 0.1 ≤ 5

    The total number of violations of rules that impact analysability in the source code, divided by the number of thousands of lines of code.

    See also the page on practices for a list of checked rules and their associated impact on quality characteristics.

  • Number of non-conformities for changeability ( NCC_CHA )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The total number of violations of rules that impact changeability in the source code.

    See also the page on practices for a list of checked rules and their associated impact on quality characteristics.

  • Average number of non-conformities for changeability ( NCC_CHA_IDX )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 3 ≤ 2 < 1 ≤ 3 < 0.5 ≤ 4 < 0.3 ≤ 5

    The total number of violations of rules that impact changeability in the source code, divided by the number of thousands of lines of code.

    See also the page on practices for a list of checked rules and their associated impact on quality characteristics.

  • Number of non-conformities for reliability ( NCC_REL )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The total number of violations of rules that impact reliability in the source code.

    See also the page on practices for a list of checked rules and their associated impact on quality characteristics.

  • Average number of non-conformities for reliability ( NCC_REL_IDX )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 0.5 ≤ 3 < 0.3 ≤ 4 < 0.1 ≤ 5

    The total number of violations of rules that impact reliability in the source code, divided by the number of thousands of lines of code.

    See also the page on practices for a list of checked rules and their associated impact on quality characteristics.

  • Number of non-conformities for reusability ( NCC_REU )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The total number of violations of rules that impact reusability in the source code.

    See also the page on practices for a list of checked rules and their associated impact on quality characteristics.

  • Average number of non-conformities for reusability ( NCC_REU_IDX )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 3 ≤ 2 < 1 ≤ 3 < 0.5 ≤ 4 < 0.3 ≤ 5

    The total number of violations of rules that impact reusability in the source code, divided by the number of thousands of lines of code.

    See also the page on practices for a list of checked rules and their associated impact on quality characteristics.

  • Source Lines Of Code ( NCLOC )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 915828 ≤ 2 < 198290.25 ≤ 3 < 97961.5 ≤ 4 < 38776.25 ≤ 5

    Number of physical lines that contain at least one character which is neither whitespace, nor a tabulation, nor part of a comment. This is mapped to SonarQube's ncloc metric.

    See also SonarQube's definition for size metrics: http://docs.sonarqube.org/display/SONAR/Metric+definitions#Metricdefinitions-Size. More details can be seen here: http://docs.sonarqube.org/display/SONAR/Metrics+-+Lines+of+code

  • Maximum depth of nesting ( NEST )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The maximum depth of nesting is the deepest level of nested control structures (including conditions and loops) found in a function.

    Deep nesting harms the understandability of code and requires more test cases to exercise the different branches. Practitioners usually consider that a function with three or more nesting levels becomes significantly harder for the human mind to reason about.
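    As an illustration, the sketch below estimates NEST for Python source by walking its AST. Actual analysers count differently per language, so both the helper name and the set of counted nodes are assumptions:

```python
import ast

# Control-flow nodes that add one nesting level (a Python-specific choice).
NESTING_NODES = (ast.If, ast.For, ast.While, ast.With, ast.Try)

def max_nesting(source: str) -> int:
    """Rough NEST estimate for Python source (illustrative sketch only)."""
    def depth(node, current=0):
        deepest = current
        for child in ast.iter_child_nodes(node):
            bump = 1 if isinstance(child, NESTING_NODES) else 0
            deepest = max(deepest, depth(child, current + bump))
        return deepest
    return depth(ast.parse(source))

src = """
def find_pair(matrix, target):
    for row in matrix:
        for value in row:
            if value == target:
                return True
    return False
"""
print(max_nesting(src))  # 3 -> at the threshold practitioners warn about
```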

  • Number of milestones ( PLAN_MILESTONES_VOL )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 5 ≤ 3 < 10 ≤ 4 < 20 ≤ 5

    The number of milestones that occurred during the last five releases.

    Milestones are retrieved from the PMI file and are counted regardless of their target release. Milestones help assess the maturity of a release and improve the predictability of the project's output, in terms of quality and time.

  • Project is on time for next milestone ( PLAN_NEXT_MILESTONE )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    Is the project on time for the next milestone?

  • Reviews success rate ( PLAN_REVIEWS_SUCCESS_RATE )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 20 ≤ 2 < 40 ≤ 3 < 60 ≤ 4 < 80 ≤ 5

    What is the percentage of successful reviews over the past 5 releases?

    The reviews are retrieved from the project management infrastructure record, and are considered successful if the recorded entry equals success. If fewer than 5 releases are defined, the percentage is computed over those only.
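    A minimal sketch of this computation, assuming review results are available as a list of recorded entries ordered from oldest to newest (the helper name is hypothetical):

```python
def review_success_rate(review_results: list[str]) -> float:
    """Percentage of reviews recorded as 'success' over at most the five
    most recent releases (illustrative sketch, name is an assumption)."""
    recent = review_results[-5:]  # fewer than 5 releases: use what exists
    if not recent:
        return 0.0
    successes = sum(1 for r in recent if r == "success")
    return 100.0 * successes / len(recent)

print(review_success_rate(["success", "success", "failure", "success"]))  # 75.0
```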

  • Access information ( PMI_ACCESS_INFO )

    Provided by: EclipsePmi

    Used by: QM_SUPPORT

    Scale: 1 < 0 ≤ 2 < 1 ≤ 3 < 2 ≤ 4 < 3 ≤ 5

    Is the access info (downloads, update sites, etc.) correctly filled in the PMI records?

    The project management infrastructure file holds information about how to access binaries of the project. This test checks the number of access-related entries defined in the PMI: download_url, downloads, update_sites.

  • Doc information ( PMI_DOC_INFO )

    Provided by: EclipsePmi

    Used by: QM_SUPPORT

    Scale: 1 < 0 ≤ 2 < 2 ≤ 3 < 4 ≤ 4 < 6 ≤ 5

    Is the documentation info correctly filled in the PMI records?

    The project management infrastructure file holds information about various documentation and manuals. This test checks the number of doc-related entries defined in the PMI: build_doc, documentation, documentation_url, forums, gettingstarted_url, mailing_lists, website_url, wiki_url.

  • ITS information ( PMI_ITS_INFO )

    Provided by: EclipsePmi

    Used by: QM_SUPPORT

    Scale: 1 < 2 ≤ 2 < 3 ≤ 3 < 4 ≤ 4 < 5 ≤ 5

    Is the bugzilla info correctly filled in the PMI records?

    The project management infrastructure file holds information about one or more bugzilla instances. This test checks that at least one bugzilla instance is defined, with a product identifier, a create_url to enter a new issue, and a query_url to fetch all the issues for the project.

  • Number of releases ( PMI_REL_VOL )

    Provided by: EclipsePmi

    Used by: QM_REL_ENG

    Scale: 1 < 1 ≤ 2 < 3 ≤ 3 < 5 ≤ 4 < 10 ≤ 5

    The number of releases recorded in the PMI.

    Releases are retrieved from the PMI file. A substantial release history helps assess the maturity of the project and improves the predictability of its output, in terms of quality and time.

  • SCM information ( PMI_SCM_INFO )

    Provided by: EclipsePmi

    Used by: QM_SCM , QM_SUPPORT

    Scale: 1 < 0 ≤ 2 < 1 ≤ 3 < 1 ≤ 4 < 2 ≤ 5

    Is the source_repo info correctly filled in the PMI records?

    The project management infrastructure file holds information about one or more source repositories. This test checks that at least one source repository is defined, and accessible.

  • Public API ( PUBLIC_API )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 48347 ≤ 2 < 14018 ≤ 3 < 6497.5 ≤ 4 < 2420.75 ≤ 5

    Number of public Classes + number of public Functions + number of public Properties. The SonarQube measure used for this purpose is public_api.

    See also SonarQube's definition for size metrics: http://docs.sonarqube.org/display/SONAR/Metric+definitions#Metricdefinitions-Size. More details can be seen here: http://docs.sonarqube.org/display/SONAR/Metrics+-+Public+API

  • Public documented API ( PUBLIC_API_DOC )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The percentage of public API elements that are documented. Density of public documented API = (Public API - Public undocumented API) / Public API * 100. The SonarQube measure used for this purpose is public_documented_api_density.

    See also SonarQube's definition for size metrics: http://docs.sonarqube.org/display/SONAR/Metric+definitions#Metricdefinitions-Size.

  • Number of violated rules for analysability ( RKO_ANA )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The number of analysability rules that have at least one violation on the artefact.

  • Number of violated rules for changeability ( RKO_CHA )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The number of changeability rules that have at least one violation on the artefact.

  • Number of violated rules for reliability ( RKO_REL )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The number of reliability rules that have at least one violation on the artefact.

  • Number of violated rules for reusability ( RKO_REU )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The number of reusability rules that have at least one violation on the artefact.

  • Adherence to analysability rules ( ROKR_ANA )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 10 ≤ 2 < 30 ≤ 3 < 50 ≤ 4 < 75 ≤ 5

    Ratio of conformant analysability practices to the number of checked analysability practices.
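    The four adherence metrics (ROKR_ANA, ROKR_CHA, ROKR_REL, ROKR_REU) share the same computation. Assuming a rule is conformant when it has no violation, the ratio can be sketched as:

```python
def adherence(violated_rules: int, checked_rules: int) -> float:
    """Percentage of checked rules with no violation (assumed formula)."""
    if checked_rules == 0:
        return 0.0
    return 100.0 * (checked_rules - violated_rules) / checked_rules

# 12 of 80 checked analysability rules are violated somewhere:
print(adherence(12, 80))  # 85.0
```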

  • Adherence to changeability rules ( ROKR_CHA )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 10 ≤ 2 < 30 ≤ 3 < 50 ≤ 4 < 75 ≤ 5

    Ratio of conformant changeability practices to the number of checked changeability practices.

  • Adherence to reliability rules ( ROKR_REL )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 10 ≤ 2 < 30 ≤ 3 < 50 ≤ 4 < 75 ≤ 5

    Ratio of conformant reliability practices to the number of checked reliability practices.

  • Adherence to reusability rules ( ROKR_REU )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 10 ≤ 2 < 30 ≤ 3 < 50 ≤ 4 < 75 ≤ 5

    Ratio of conformant reusability practices to the number of checked reusability practices.

  • Number of rules for analysability ( RULES_ANA )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The total number of rules checked for analysability.

  • Number of rules for changeability ( RULES_CHA )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The total number of rules checked for changeability.

  • Number of rules for reliability ( RULES_REL )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The total number of rules checked for reliability.

  • Number of rules for reusability ( RULES_REU )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The total number of rules checked for reusability.

  • SCM authors ( SCM_AUTHORS )

    Provided by: Git

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 18 ≤ 5

    Total number of identities found as authors of commits in source code management repositories.

    Source code management repositories are those considered as such in the project documentation. Commits in all branches are considered. Date used for each commit is 'author date' (when there is a difference between author date and committer date). An identity is considered as author if it appears as such in the commit record (for systems logging several identities related to the commit, authoring identity will be considered).

  • SCM authors one month ( SCM_AUTHORS_1M )

    Provided by: Git

    Used by: QM_DIVERSITY , QM_SCM

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 18 ≤ 5

    Total number of identities found as authors of commits in source code management repositories dated during the last month.

    Source code management repositories are those considered as such in the project documentation. Date used for each commit is 'author date' (when there is a difference between author date and committer date). Time range is measured as a one month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, period is from Jan 3rd to Feb 2nd, both included).
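    The one-month window described above (e.g. retrieval on Feb 3rd giving Jan 3rd to Feb 2nd) can be sketched as follows; the helper name is hypothetical and day-of-month overflow (e.g. March 31st) is left unhandled for brevity:

```python
from datetime import date, timedelta

def one_month_window(retrieval_day: date) -> tuple[date, date]:
    """Window from one month before retrieval up to the day before it,
    both included. Simplification: assumes the same day-of-month exists
    in the previous month (a real implementation must handle overflow)."""
    end = retrieval_day - timedelta(days=1)
    if retrieval_day.month == 1:
        start = retrieval_day.replace(year=retrieval_day.year - 1, month=12)
    else:
        start = retrieval_day.replace(month=retrieval_day.month - 1)
    return start, end

print(one_month_window(date(2016, 2, 3)))  # Jan 3rd to Feb 2nd, both included
```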

  • SCM authors one week ( SCM_AUTHORS_1W )

    Provided by: Git

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 18 ≤ 5

    Total number of identities found as authors of commits in source code management repositories dated during the last week.

    Source code management repositories are those considered as such in the project documentation. Date used for each commit is 'author date' (when there is a difference between author date and committer date). Time range is measured as a one week period starting the day before the data retrieval.

  • SCM authors one year ( SCM_AUTHORS_1Y )

    Provided by: Git

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 18 ≤ 5

    Total number of identities found as authors of commits in source code management repositories dated during the last year.

    Source code management repositories are those considered as such in the project documentation. Date used for each commit is 'author date' (when there is a difference between author date and committer date). Time range is measured as a one year period starting the day before the data retrieval (example: if retrieval is on Feb 3rd 2016, period is from Feb 3rd 2015 to Feb 3rd 2016, both included).

  • SCM authors three months ( SCM_AUTHORS_30 )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 18 ≤ 5

    Total number of identities found as authors of commits in source code management repositories dated during the last three months.

    Source code management repositories are those considered as such in the project documentation. Commits in all branches are considered. Date used for each commit is 'author date' (when there is a difference between author date and committer date). An identity is considered as author if it appears as such in the commit record (for systems logging several identities related to the commit, the authoring identity will be considered). Time range is measured as a three-calendar-month period ending the day before the data retrieval (example: if retrieval is on Feb 3rd, period is from Nov 3rd to Feb 2nd, both included).

  • SCM Commits ( SCM_COMMITS )

    Provided by: Git

    Used by:

    Scale: 1 < 2 ≤ 2 < 5 ≤ 3 < 13 ≤ 4 < 121 ≤ 5

    Total number of commits in source code management repositories.

    Source code management repositories are those considered as such in the project documentation. Date used for each commit is 'author date' (when there is a difference between author date and committer date).

  • SCM Commits one month ( SCM_COMMITS_1M )

    Provided by: Git

    Used by: QM_ACTIVITY , QM_SCM

    Scale: 1 < 2 ≤ 2 < 5 ≤ 3 < 13 ≤ 4 < 121 ≤ 5

    Total number of commits in source code management repositories dated during the last month.

    Source code management repositories are those considered as such in the project documentation. Date used for each commit is 'author date' (when there is a difference between author date and committer date). Time range is measured as a one month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, period is from Jan 3rd to Feb 2nd, both included).

  • SCM Commits one week ( SCM_COMMITS_1W )

    Provided by: Git

    Used by:

    Scale: 1 < 2 ≤ 2 < 5 ≤ 3 < 13 ≤ 4 < 121 ≤ 5

    Total number of commits in source code management repositories dated during the last week.

    Source code management repositories are those considered as such in the project documentation. Date used for each commit is 'author date' (when there is a difference between author date and committer date). Time range is measured as a one week period starting the day before the data retrieval.

  • SCM Commits one year ( SCM_COMMITS_1Y )

    Provided by: Git

    Used by:

    Scale: 1 < 2 ≤ 2 < 5 ≤ 3 < 13 ≤ 4 < 121 ≤ 5

    Total number of commits in source code management repositories dated during the last year.

    Source code management repositories are those considered as such in the project documentation. Date used for each commit is 'author date' (when there is a difference between author date and committer date). Time range is measured as a one year period starting the day before the data retrieval (example: if retrieval is on Feb 3rd 2016, period is from Feb 3rd 2015 to Feb 3rd 2016, both included).

  • SCM Commits three months ( SCM_COMMITS_30 )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 2 ≤ 2 < 5 ≤ 3 < 13 ≤ 4 < 121 ≤ 5

    Total number of commits in source code management repositories dated during the last three months.

    Source code management repositories are those considered as such in the project documentation. Commits in all branches are considered. Date used for each commit is 'author date' (when there is a difference between author date and committer date). Time range is measured as a three-calendar-month period ending the day before the data retrieval (example: if retrieval is on Feb 3rd, period is from Nov 3rd to Feb 2nd, both included).

  • SCM committers ( SCM_COMMITTERS )

    Provided by: Git

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 18 ≤ 5

    Total number of identities found as committers of commits in source code management repositories.

    Source code management repositories are those considered as such in the project documentation. Commits in all branches are considered. Date used for each commit is 'committer date' (when there is a difference between author date and committer date). An identity is considered as committer if it appears as such in the commit record.

  • SCM committers one month ( SCM_COMMITTERS_1M )

    Provided by: Git

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 18 ≤ 5

    Total number of identities found as committers of commits in source code management repositories dated during the last month.

    Source code management repositories are those considered as such in the project documentation. Date used for each commit is 'committer date' (when there is a difference between author date and committer date). Time range is measured as a one month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, period is from Jan 3rd to Feb 2nd, both included).

  • SCM committers one week ( SCM_COMMITTERS_1W )

    Provided by: Git

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 18 ≤ 5

    Total number of identities found as committers of commits in source code management repositories dated during the last week.

    Source code management repositories are those considered as such in the project documentation. Date used for each commit is 'committer date' (when there is a difference between author date and committer date). Time range is measured as a one week period starting the day before the data retrieval.

  • SCM committers one year ( SCM_COMMITTERS_1Y )

    Provided by: Git

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 18 ≤ 5

    Total number of identities found as committers of commits in source code management repositories dated during the last year.

    Source code management repositories are those considered as such in the project documentation. Date used for each commit is 'committer date' (when there is a difference between author date and committer date). Time range is measured as a one year period starting the day before the data retrieval (example: if retrieval is on Feb 3rd 2016, period is from Feb 3rd 2015 to Feb 3rd 2016, both included).

  • SCM Files committed ( SCM_FILES )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 3 ≤ 2 < 19 ≤ 3 < 95.75 ≤ 4 < 2189 ≤ 5

    Total number of files touched by commits in source code management repositories dated during the last three months.

    Source code management repositories are those considered as such in the project documentation. Commits in all branches are considered. A file is 'touched' by a commit if its content or its path is modified by the commit. Date used for each commit is 'author date' (when there is a difference between author date and committer date). Time range is measured as a three-calendar-month period ending the day before the data retrieval (example: if retrieval is on Feb 3rd, period is from Nov 3rd to Feb 2nd, both included).

  • SCM Changed Lines ( SCM_MOD_LINES )

    Provided by: Git

    Used by:

    Scale: 1 < 1000 ≤ 2 < 5000 ≤ 3 < 50000 ≤ 4 < 500000 ≤ 5

    Total number of changed lines (added, removed, changed) in source code management repositories.

    Source code management repositories are those considered as such in the project documentation. Date used for each commit is 'author date' (when there is a difference between author date and committer date).

  • SCM Changed Lines one month ( SCM_MOD_LINES_1M )

    Provided by: Git

    Used by: QM_ACTIVITY

    Scale: 1 < 20 ≤ 2 < 50 ≤ 3 < 100 ≤ 4 < 500 ≤ 5

    Total number of changed lines (added, removed, changed) in source code management repositories dated during the last month.

    Source code management repositories are those considered as such in the project documentation. Date used for each commit is 'author date' (when there is a difference between author date and committer date). Time range is measured as a one month period starting the day before the data retrieval (example: if retrieval is on Feb 3rd, period is from Jan 3rd to Feb 2nd, both included).

  • SCM Changed Lines one week ( SCM_MOD_LINES_1W )

    Provided by: Git

    Used by:

    Scale: 1 < 10 ≤ 2 < 20 ≤ 3 < 50 ≤ 4 < 100 ≤ 5

    Total number of changed lines (added, removed, changed) in source code management repositories dated during the last week.

    Source code management repositories are those considered as such in the project documentation. Date used for each commit is 'author date' (when there is a difference between author date and committer date). Time range is measured as a one week period starting the day before the data retrieval.

  • SCM Changed Lines one year ( SCM_MOD_LINES_1Y )

    Provided by: Git

    Used by:

    Scale: 1 < 50 ≤ 2 < 100 ≤ 3 < 500 ≤ 4 < 1000 ≤ 5

    Total number of changed lines (added, removed, changed) in source code management repositories dated during the last year.

    Source code management repositories are those considered as such in the project documentation. Date used for each commit is 'author date' (when there is a difference between author date and committer date). Time range is measured as a one year period starting the day before the data retrieval (example: if retrieval is on Feb 3rd 2016, period is from Feb 3rd 2015 to Feb 3rd 2016, both included).

  • File stability index ( SCM_STABILITY_3M )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 3.79487179487 ≤ 2 < 1.4430952381 ≤ 3 < 1.14285714286 ≤ 4 < 1 ≤ 5

    Average number of commits touching each file in source code management repositories dated during the last three months.

    Source code management repositories are those considered as such in the project documentation. Commits in all branches are considered. A file is 'touched' by a commit if its content or its path is modified by the commit. Date used for each commit is 'author date' (when there is a difference between author date and committer date). Time range is measured as a three-calendar-month period ending the day before the data retrieval (example: if retrieval is on Feb 3rd, period is from Nov 3rd to Feb 2nd, both included).
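    Assuming each commit is reduced to the list of files it touched over the period, the stability index can be sketched as the total number of touches divided by the number of distinct files:

```python
from collections import Counter

def stability_index(commits: list[list[str]]) -> float:
    """Average number of commits touching each file over the period.
    Each commit is represented by the file paths it touched (sketch)."""
    touches = Counter(path for commit in commits for path in commit)
    if not touches:
        return 0.0
    return sum(touches.values()) / len(touches)

commits = [["a.c", "b.c"], ["a.c"], ["a.c", "c.h"]]
print(stability_index(commits))  # 5 touches over 3 distinct files
```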

  • Stack Overflow Answers (5Y) ( SO_ANSWERS_VOL_5Y )

    Provided by: StackOverflow

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 3 ≤ 4 < 4 ≤ 5

    The number of answers to questions related to the project's tag posted on Stack Overflow during the last 5 years.

    Having many answers posted about the project indicates a strong interest from the community, and a good support. The list of questions and their answers associated to the tag can be browsed on the Stack Overflow web site.

  • Stack Overflow Answer rate (5Y) ( SO_ANSWER_RATE_5Y )

    Provided by: StackOverflow

    Used by:

    Scale: 1 < 1 ≤ 2 < 2 ≤ 3 < 4 ≤ 4 < 5 ≤ 5

    The average number of answers per questions related to the project's tag on Stack Overflow during the last 5 years.

    Having many answers posted about the project indicates a strong interest from the community, and a good support. The list of questions and their answers associated to the tag can be browsed on the Stack Overflow web site.

  • Stack Overflow Askers (5Y) ( SO_ASKERS_5Y )

    Provided by: StackOverflow

    Used by:

    Scale: 1 < 1 ≤ 2 < 3 ≤ 3 < 5 ≤ 4 < 10 ≤ 5

    The number of distinct people asking questions related to the project's tag posted on Stack Overflow during the last 5 years.

    Having many people ask questions about the project indicates a strong interest from the community, and a good support. The list of questions and their answers associated to the tag can be browsed on the Stack Overflow web site.

  • Stack Overflow Questions (5Y) ( SO_QUESTIONS_VOL_5Y )

    Provided by: StackOverflow

    Used by:

    Scale: 1 < 5 ≤ 2 < 50 ≤ 3 < 400 ≤ 4 < 800 ≤ 5

    The number of questions related to the project's tag posted on Stack Overflow during the last 5 years.

    Having many questions posted about the project indicates a strong interest from the community. The list of questions associated to the tag can be browsed on the Stack Overflow web site.

  • Stack Overflow Views (5Y) ( SO_VIEWS_VOL_5Y )

    Provided by: StackOverflow

    Used by:

    Scale: 1 < 100 ≤ 2 < 1000 ≤ 3 < 2500 ≤ 4 < 5000 ≤ 5

    The total number of views for questions related to the project's tag on Stack Overflow during the last 5 years.

    Having many views on questions about the project indicates a strong interest from the community. The list of questions and their answers associated to the tag can be browsed on the Stack Overflow web site.

  • Stack Overflow Votes (5Y) ( SO_VOTES_VOL_5Y )

    Provided by: StackOverflow

    Used by:

    Scale: 1 < 10 ≤ 2 < 100 ≤ 3 < 250 ≤ 4 < 500 ≤ 5

    The total number of votes on questions related to the project's tag on Stack Overflow during the last 5 years.

    Having many votes on questions about the project indicates a strong interest from the community. The list of questions and their answers associated to the tag can be browsed on the Stack Overflow web site.

  • Number of comment lines ( SQ_COMMENT_LINES )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 500 ≤ 2 < 1000 ≤ 3 < 10000 ≤ 4 < 50000 ≤ 5

    Number of lines containing either comment or commented-out code.

    Non-significant comment lines (empty comment lines, comment lines containing only special characters, etc.) do not increase the number of comment lines.

    For Java, file headers are not counted as comment lines (as they usually define the license).

    For Cobol, lines containing the following instructions are counted both as comment lines and as lines of code: AUTHOR, INSTALLATION, DATE-COMPILED, DATE-WRITTEN, SECURITY.

  • Comment lines density ( SQ_COMR )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 10 ≤ 2 < 20 ≤ 3 < 30 ≤ 4 < 40 ≤ 5

    Density of comment lines = Comment lines / (Lines of code + Comment lines) * 100.

    With such a formula, 50% means that the number of lines of code equals the number of comment lines, and 100% means that the file contains only comment lines.
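    The density formula above, as a small sketch:

```python
def comment_density(comment_lines: int, lines_of_code: int) -> float:
    """Comment lines / (Lines of code + Comment lines) * 100."""
    total = lines_of_code + comment_lines
    if total == 0:
        return 0.0
    return 100.0 * comment_lines / total

print(comment_density(500, 500))  # 50.0 -> as many comment lines as code
print(comment_density(300, 0))    # 100.0 -> the file is all comments
```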

  • Commented code ( SQ_COM_CODE )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 40 ≤ 2 < 30 ≤ 3 < 20 ≤ 4 < 10 ≤ 5

    The number of commented-out lines of code.

    See more information about commented code on the SonarQube documentation site. There is a well-documented debate on Stack Overflow as well.

  • Test coverage ( SQ_COVERAGE )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 10 ≤ 2 < 20 ≤ 3 < 40 ≤ 4 < 50 ≤ 5

    Overall test coverage.

  • Branch coverage ( SQ_COVERAGE_BRANCH )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 10 ≤ 2 < 20 ≤ 3 < 40 ≤ 4 < 50 ≤ 5

    Branch test coverage.

  • Line coverage ( SQ_COVERAGE_LINE )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 10 ≤ 2 < 20 ≤ 3 < 40 ≤ 4 < 50 ≤ 5

    Line test coverage.

  • Total complexity ( SQ_CPX )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < ≤ 2 < ≤ 3 < ≤ 4 < ≤ 5

    It is the complexity calculated based on the number of paths through the code. Whenever the control flow of a function splits, the complexity counter gets incremented by one. Each function has a minimum complexity of 1. This calculation varies slightly by language because keywords and functionalities do.

    For more information on how complexity is counted for each language, see https://docs.sonarqube.org/display/SONAR/Metrics+-+Complexity.
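    As an illustration only (the exact counting rules vary by language and analyser version), the comments below mark where a complexity counter would be incremented for a small Python function:

```python
def classify(n):              # +1: function entry (minimum complexity of 1)
    if n < 0:                 # +1: control flow splits
        return "negative"
    for d in range(2, n):     # +1: loop
        if n % d == 0:        # +1: nested condition
            return "composite"
    return "prime-ish"
# Cyclomatic complexity of classify(): 1 + 3 decision points = 4

print(classify(-1), classify(9), classify(7))
```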

  • File complexity ( SQ_CPX_FILE_IDX )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 10 ≤ 2 < 20 ≤ 3 < 30 ≤ 4 < 40 ≤ 5

    Average complexity by file.

  • Duplicated lines (%) ( SQ_DUPLICATED_LINES_DENSITY )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 40 ≤ 2 < 30 ≤ 3 < 20 ≤ 4 < 10 ≤ 5

    Density of duplication = Duplicated lines / Lines * 100.

  • Number of files ( SQ_FILES )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 100 ≤ 2 < 500 ≤ 3 < 1000 ≤ 4 < 5000 ≤ 5

    The total number of files analysed.

  • Number of functions ( SQ_FUNCS )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 300 ≤ 2 < 500 ≤ 3 < 3000 ≤ 4 < 5000 ≤ 5

    Number of functions. Depending on the language, a function is a function, a method, or a paragraph.

    For Java, constructors are considered as methods and accessors are considered as methods if the sonar.squid.analyse.property.accessors property is set to false.

    For Cobol, it is the number of paragraphs.

  • Number of lines of code ( SQ_NCLOC )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 500 ≤ 2 < 1000 ≤ 3 < 10000 ≤ 4 < 50000 ≤ 5

    Number of physical lines that contain at least one character which is neither whitespace, nor a tabulation, nor part of a comment.

    For Cobol, generated lines of code and pre-processing instructions (SKIP1, SKIP2, SKIP3, COPY, EJECT, REPLACE) are not counted as lines of code.

  • Public API ( SQ_PUBLIC_API )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 500 ≤ 2 < 1000 ≤ 3 < 5000 ≤ 4 < 10000 ≤ 5

    Number of public Classes + number of public Functions + number of public Properties

  • Public documented API (%) ( SQ_PUBLIC_API_DOC_DENSITY )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 40 ≤ 2 < 30 ≤ 3 < 20 ≤ 4 < 10 ≤ 5

    Density of public documented API = (Public API - Public undocumented API) / Public API * 100

  • Technical debt ( SQ_SQALE_INDEX )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 5000 ≤ 2 < 1000 ≤ 3 < 500 ≤ 4 < 100 ≤ 5

    Effort to fix all maintainability issues. The measure is stored in minutes in the DB.

  • Maintainability rating ( SQ_SQALE_RATING )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 0.05 ≤ 2 < 0.1 ≤ 3 < 0.2 ≤ 4 < 0.5 ≤ 5

    Rating given to your project related to the value of your Technical Debt Ratio. The default Maintainability Rating grid is: A=0-0.05, B=0.06-0.1, C=0.11-0.20, D=0.21-0.5, E=0.51-1.

    The Maintainability Rating scale can alternately be stated in terms of the outstanding remediation cost as a percentage of the time already invested in the application: A is 5% or less, B between 6 and 10%, C between 11 and 20%, D between 21 and 50%, and anything over 50% is an E.
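    The default grid can be sketched as a simple mapping from the technical-debt ratio to a letter rating:

```python
def maintainability_rating(debt_ratio: float) -> str:
    """Map a technical-debt ratio (0..1) to the default A-E grid:
    A=0-0.05, B=0.06-0.1, C=0.11-0.20, D=0.21-0.5, E=0.51-1."""
    if debt_ratio <= 0.05:
        return "A"
    if debt_ratio <= 0.10:
        return "B"
    if debt_ratio <= 0.20:
        return "C"
    if debt_ratio <= 0.50:
        return "D"
    return "E"

print(maintainability_rating(0.08))  # B
```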

  • Number of statements ( SQ_STATEMENTS )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 500 ≤ 2 < 1000 ≤ 3 < 10000 ≤ 4 < 50000 ≤ 5

    Number of statements.

    For Java, it is the number of statements as defined in the Java Language Specification, but without block definitions. The statement counter is incremented by one each time one of the following keywords is encountered: if, else, while, do, for, switch, break, continue, return, throw, synchronized, catch, finally.

    The statement counter is not incremented by class, method, field, or annotation definitions, nor by package and import declarations.

    For Cobol, a statement is one of move, if, accept, add, alter, call, cancel, close, compute, continue, delete, display, divide, entry, evaluate, exitProgram, goback, goto, initialize, inspect, merge, multiply, open, perform, read, release, return, rewrite, search, set, sort, start, stop, string, subtract, unstring, write, exec, ibmXmlParse, ibmXmlGenerate, readyReset, mfCommit, mfRollback.

  • Number of blocker issues ( SQ_VIOLATIONS_BLOCKER )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 100 ≤ 2 < 50 ≤ 3 < 10 ≤ 4 < 1 ≤ 5

    The total number of issues (violations) found by SonarQube with a severity equal to BLOCKER.

  • Number of critical issues ( SQ_VIOLATIONS_CRITICAL )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 500 ≤ 2 < 100 ≤ 3 < 50 ≤ 4 < 10 ≤ 5

    The total number of issues (violations) found by SonarQube with a severity equal to CRITICAL.

  • Number of info issues ( SQ_VIOLATIONS_INFO )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 5000 ≤ 2 < 1000 ≤ 3 < 500 ≤ 4 < 100 ≤ 5

    The total number of issues (violations) found by SonarQube with a severity equal to INFO.

  • Number of major issues ( SQ_VIOLATIONS_MAJOR )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 500 ≤ 2 < 100 ≤ 3 < 50 ≤ 4 < 10 ≤ 5

    The total number of issues (violations) found by SonarQube with a severity equal to MAJOR.

  • Number of minor issues ( SQ_VIOLATIONS_MINOR )

    Provided by: SonarQube45

    Used by:

    Scale: 1 < 500 ≤ 2 < 100 ≤ 3 < 50 ≤ 4 < 10 ≤ 5

    The total number of issues (violations) found by SonarQube with a severity equal to MINOR.

  • Test success density ( TEST_SUCCESS_DENSITY )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 50 ≤ 2 < 65 ≤ 3 < 80 ≤ 4 < 95 ≤ 5

    The percentage of tests that passed during the last execution of the test plan. The SonarQube measure used for this purpose is test_success_density.

    Computation is as follows: Test success density = (Unit tests - (Unit test errors + Unit test failures)) / Unit tests * 100, where Unit test errors is the number of unit tests that failed with an unexpected exception, and Unit test failures is the number of unit tests that failed on an assertion.
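
    The formula above can be sketched directly; the function name is hypothetical and mirrors the SonarQube measure rather than any API of the tool.

    ```python
    def test_success_density(tests, errors, failures):
        """Percentage of unit tests that passed, per the formula above."""
        if tests == 0:
            return None  # undefined when no tests were executed
        return (tests - (errors + failures)) / tests * 100

    # 200 tests with 4 errors and 6 failures -> 95.0%
    print(test_success_density(200, 4, 6))
    ```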

  • Number of tests relative to the code size ( TST_VOL_IDX )

    Provided by: No provider defined!

    Used by:

    Scale: 1 < 2 ≤ 2 < 5 ≤ 3 < 7 ≤ 4 < 10 ≤ 5

    The total number of test cases for the product, divided by the number of thousands of SLOC.

    Metric is computed from SonarQube tests and ncloc metrics.
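
    The computation from the SonarQube tests and ncloc metrics can be sketched as follows; the function name is hypothetical.

    ```python
    def tst_vol_idx(tests, ncloc):
        """Number of test cases per thousand source lines of code (KSLOC)."""
        return tests / (ncloc / 1000.0)

    # 350 test cases over 50,000 SLOC -> 7.0 tests per KSLOC
    print(tst_vol_idx(350, 50000))
    ```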


Page generated by Alambic 3.3.2 on Mon Nov 20 01:03:58 2017.