Software Engineering – Measuring Code Quality

To succeed in its business initiatives, a software enterprise must be able to deliver new functionality predictably, consistently and with high quality. Software quality has many facets, and one of them is the quality of the code itself. Past a certain stage in a product's evolution, contributions to the code base can come from diverse organizations (e.g. when software maintenance is outsourced), different teams and, more generally, developers of varying skill levels. This diversity in skills and perspectives, though a risk to clean software, is often essential for economic and business reasons. The software engineering process must therefore address the risk of degrading code quality.

Understanding and Quantifying Code Quality

To handle the risk of degrading code quality, an organization must invest in automated tools for checking it. There are three primary aspects of measuring code quality:

  1. Static code analysis assesses software code without executing it, enabling early detection of vulnerabilities and deficiencies. This is generally accomplished by applying established coding rules (which can be customized), assessing complexity, and identifying known vulnerabilities and weaknesses.
  2. Beyond identifying deficiencies, certain tools can also estimate the effort needed to fix them. This measure is popularly called technical debt.
  3. With the popularity of agile development methodologies, software code and design undergo continuous change. To cope with this, automated tests (unit, integration and system) have become mandatory in software development. Continuously measuring the code coverage achieved by these tests helps assess the impact of newly added code on the code base.
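As a sketch of how the coverage aspect can be automated, a Maven-based Java project might run its tests with coverage and publish the results to an analysis server in one CI step. The server URL and the token variable below are placeholders, not values from this article:

```shell
# Run tests (recording coverage via a plugin such as JaCoCo) and
# publish the analysis to a SonarQube server in a single CI step.
# http://sonarqube.example.internal:9000 and $SONAR_TOKEN are placeholders.
mvn clean verify sonar:sonar \
  -Dsonar.host.url=http://sonarqube.example.internal:9000 \
  -Dsonar.login="$SONAR_TOKEN"
```

Run on every commit, this keeps the coverage trend current as the code base changes.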

There are many tools in the market that address the above aspects of code quality. A common and popular open source tool, available under the LGPLv3 license, is SonarQube. It supports plugins for popular programming languages such as Java, C# and Objective-C, and can be integrated with Continuous Integration tools.
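For projects not built with a supported build tool, the CI integration can instead invoke the standalone scanner. The project key, source directory and server address below are illustrative assumptions:

```shell
# Hypothetical CI step: analyze a project with the standalone sonar-scanner.
# The project key, source path, server URL and token are placeholders.
sonar-scanner \
  -Dsonar.projectKey=client-a-billing \
  -Dsonar.sources=src \
  -Dsonar.host.url=http://sonarqube.example.internal:9000 \
  -Dsonar.login="$SONAR_TOKEN"
```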

For enterprises in software services, a tool with the above characteristics becomes even more important. With multiple clients having different NDAs, it is important that access to the tool be controlled in adherence to the agreed NDAs.

A candidate implementation for multiple teams working for different clients or projects, all segregated by separate VLANs, is described below. The advantages of hosting the tool centrally are ease of maintenance, monitoring, backup and recovery. In this setup, each client gets a dedicated Docker deployment hosting one instance of SonarQube and its PostgreSQL database. The database stores the snapshots of the metrics and reports.
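A minimal sketch of the per-client setup, assuming separate application and database containers joined by a private Docker network (container names, credentials and the published port are placeholders):

```shell
# One SonarQube + PostgreSQL pair per client, isolated on a private
# Docker network. Names, password and host port are placeholders.
docker network create client-a-net

docker run -d --name client-a-db --network client-a-net \
  -e POSTGRES_USER=sonar \
  -e POSTGRES_PASSWORD=changeme \
  -e POSTGRES_DB=sonar \
  postgres:15

docker run -d --name client-a-sonarqube --network client-a-net \
  -e SONAR_JDBC_URL=jdbc:postgresql://client-a-db:5432/sonar \
  -e SONAR_JDBC_USERNAME=sonar \
  -e SONAR_JDBC_PASSWORD=changeme \
  -p 9001:9000 \
  sonarqube:lts
```

Because the database is reachable only on the client's private Docker network, each client's metrics and reports remain isolated from the others.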

The Docker container exposes the DB and Sonar web application TCP ports, and these ports are accessible only to the client's project team through their specific VLAN. A less preferred alternative is to rely on SonarQube's built-in users, groups, projects and dashboards for isolation. Though simpler, this approach shares a single Sonar database across clients, which carries a high risk and is not preferred for NDA reasons.


Measuring code quality mitigates many software quality risks. Setting up an automated tool that measures, stores and trends code quality across the organization helps enterprises cope with evolving software. For enterprises in software services that must adhere to the different NDAs of their clients, a candidate implementation is to isolate the SonarQube instances in Docker containers and use a system-level firewall to restrict access to their ports.

About the author:

An architect with expertise in designing and architecting products on open source systems, who has built distributed systems that handle billions of data points with terabyte-scale databases.
