Using automated code metrics to improve quality, documentation and maintainability

By Gavin Davies

We write code to solve problems. If code solves the problem, does it matter how that problem is solved?

The answer is yes, absolutely it does. Code is seldom inert: it has to be maintained, extended and reused, and it must be secure, perform well and be comprehensible. In his excellent book "Clean Code", Bob Martin suggests that the "cost of owning a mess" tends towards total unprofitability over time, as developers spend more and more time trying to understand poor-quality code.

Code quality is a slippery beast, difficult to define – as much art as science. Good code should be unsurprising, clear, and well tested. So how can we help to achieve this? And how can we measure whether what we’re producing is high quality in that respect?

Metric - system of measurement: a system of related measures that facilitates the quantification of some particular characteristic

– WordNet

Although many aspects of code are subjective, clever folks have defined sensible metrics that we can use to improve the overall quality of our code. The definition of "metric" we are using is based on the one in the boxout – basically we're saying "this is something we can measure about the code".

A metric may be a threshold ("70% unit test coverage") or an absolute ("all functions should have descriptive PHPDoc comments"). Subjective statements like "it should look good" are not something we can measure automatically. We’ll take a look at some of the metrics we use on Amaxus 4 to keep code quality high and ensure a long, healthy life for a product that is enjoyable to work with.
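To make the second kind of metric concrete, here is a small sketch of the sort of descriptive PHPDoc block such a rule asks for; the function and its parameters are invented purely for illustration:

    <?php

    /**
     * Calculates the gross total of a basket of items, including VAT.
     *
     * @param float[] $prices  Net prices of the items in the basket.
     * @param float   $vatRate VAT rate as a fraction, e.g. 0.20 for 20%.
     *
     * @return float Gross total, rounded to two decimal places.
     */
    function calculateBasketTotal(array $prices, $vatRate)
    {
        $net = array_sum($prices);

        return round($net * (1 + $vatRate), 2);
    }

A coding-standards sniff can check mechanically that a block like this exists and documents every parameter, even though it cannot judge how well it is written.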

 

Code metrics - What we use on the Amaxus 4 Web CMS

Figure 1: NetBeans code coverage analysis

We're constantly working to improve our quality and tighten our feedback loops, so that we feel a greater sense of connection between doing our work and seeing the results of it. Therefore, we run all our metrics as part of our CI (Continuous Integration) process on the Hudson server. This means developers get feedback within minutes of a commit – a short enough time that they are still mentally invested in the task. The most demanding metrics (code coverage) run on a nightly build, but developers can generate these on a microscale in an IDE such as NetBeans or PHPStorm, so this is not a problem. Coding-standards checks such as CodeSniffer can also be run from within the IDE.

In the binary world of software, engineers tend to prefer metrics that are pass/fail, so a sensible place to analyse the code against these metrics is the CI stage. The diagram below shows how this fits into our continuous integration process.

 

Figure 2: Amaxus 4 CI process
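
To give a flavour of what a pass/fail step can look like, here is a minimal, hypothetical quality-gate script in PHP – it is not our actual build configuration, and the commands and paths are illustrative. Each check must exit with code zero, otherwise the build is marked as failed:

    <?php
    // quality-gate.php - hypothetical sketch of a pass/fail CI step.
    $checks = array(
        'phpunit tests',             // run the unit test suite
        'phpcs --standard=PEAR src', // coding standard sniffs
    );

    foreach ($checks as $command) {
        passthru($command, $exitCode);   // stream the tool's output, capture its exit code
        if ($exitCode !== 0) {
            fwrite(STDERR, "Quality gate failed on: $command\n");
            exit(1);                     // a non-zero exit fails the CI build
        }
    }

    echo "All quality checks passed\n";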

 

The Amaxus Web CMS product uses a number of metrics, shown in table 1, and CI builds are considered failed if any of these metrics fall below acceptable values. We don't use every single metric going, but have a policy of adding them gradually, so the list has grown over time and will grow further as we keep improving the product's quality.

Table 1: Code metrics used on the Amaxus 4 build

PHP metrics

- All PHPUnit tests pass: makes sure the commit has not broken anything. Must be 100%.
- Unit test coverage: if code does not have good unit tests, it cannot safely be refactored, extended or trusted in any way. We aim for full coverage, but 70% or higher is considered acceptable. Unit tests let people work in shorter cycles, prove the code does what it's supposed to, and support refactoring.
- Functions must be less than n lines long: long functions are usually poorly factored, with too many variables and convoluted logic – typically a sign of poor craftsmanship that violates many design principles.
- No empty statements: sometimes code creeps in that doesn't actually do anything. Search and destroy.
- Cyclomatic complexity should be reasonable: detects overly complex methods so they can be refactored to be simpler. Simple code is easier to test and understand; there is no point in being "clever".
- Forbidden/deprecated functions should not be used: detects outdated or deprecated functions so they can be replaced with their supported equivalents.
- Nesting level should be reasonable: deeply nested code is an indicator of poorly thought-out logic. Refactorings such as guard clauses and well-written exception handling can clean it up (see the guard-clause sketch below the table).
- No multiple statements on one line: cramming several statements together might make a programmer feel clever, but it disguises their intent from whoever has to maintain the code.
- No unused function parameters: niladic or monadic functions (zero or one parameter) are preferred. If parameters are required, they should at least be used; otherwise remove them, as they are clutter.
- Class and function documentation provided: all functions should have documentation describing what they do, their parameters and their return values. This improves IDE support and helps internal and third-party developers alike.

SQL metrics

- Migrations are correct: checks that our database migrations can be run all the way up and all the way down without errors.

JavaScript metrics

- Unit tests through QUnit: lets people work in shorter cycles, proves the code does what it's supposed to, and supports refactoring.
- JSLint: similar to the PHP sniffs – static analysis that checks many different aspects of JavaScript code for correctness. This has caught many of our IE7 bugs before they went out. I customised JSLint to our requirements, and it can also be run as a commit hook.

Performance metrics

- Build time: builds are quick – about two minutes or so. If they take longer, performance has degraded and the changes need to be re-evaluated.

Build time is a reasonable indicator of performance, but only for the areas that have unit tests – it isn't a substitute for a full browser test. Occasionally, though, we do catch something timing out at the integration phase and can address it immediately.
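
As an illustration of the nesting metric mentioned in the table above, here is a small sketch of a guard-clause refactoring; the Article class and its methods are invented purely for the example:

    <?php

    // Minimal class, invented for the example.
    class Article
    {
        public $approved = false;
        public $published = false;

        public function isApproved()  { return $this->approved; }
        public function isPublished() { return $this->published; }
        public function publish()     { $this->published = true; }
    }

    // Deeply nested version: it works, but the interesting line is buried
    // three levels deep, and a nesting sniff will flag it as it grows.
    function publishArticleNested($article)
    {
        if ($article !== null) {
            if ($article->isApproved()) {
                if (!$article->isPublished()) {
                    $article->publish();
                }
            }
        }
    }

    // Guard-clause version: the same logic, but each precondition exits early,
    // keeping the nesting level flat and the intent obvious.
    function publishArticle($article)
    {
        if ($article === null) {
            return;
        }
        if (!$article->isApproved()) {
            return;
        }
        if ($article->isPublished()) {
            return;
        }

        $article->publish();
    }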

Should anything be awry, problems are immediately reported in Box UK's internal IRC #dev channel and via email. If nothing is amiss, the build doesn't nag. If a build fails because any one of these targets is not hit, fixing it is a priority, because if it is left, the problem tends to compound.

 

Beyond the build

The build is only the immediate story. Other useful metrics include the number of bugs open on your issue tracker, average support response times, and the number of overtime hours worked (less is better!). And, as this article underlines, profit is the bottom line. It's a good exercise to pull all of these metrics together in one place, so your CI metrics can be exported and displayed alongside your support statistics and internal bug tracking.
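
One way to do that is to read build results straight from the CI server. The sketch below assumes a Hudson instance exposing its standard remote JSON API; the host and job name are invented, so adapt them to your own setup:

    <?php
    // Hypothetical example: fetch the result of the last CI build from Hudson's
    // remote API so it can be displayed alongside other business metrics.
    $jobUrl = 'http://hudson.example.com/job/amaxus/lastBuild/api/json'; // invented URL

    $json = file_get_contents($jobUrl);
    if ($json === false) {
        die("Could not reach the CI server\n");
    }

    $build = json_decode($json, true);

    // "result" is SUCCESS, UNSTABLE or FAILURE once the build has finished.
    printf("Build #%d: %s\n", $build['number'], $build['result']);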

It doesn't end with the product itself, either: Dan Zambonini has suggested automated testing of documentation against metrics, which is an excellent way of ensuring your end users see a consistent style from you.

 

More tools

PHPUnit author Sebastian Bergmann lists some great tools for PHP Quality Assurance. Dependency analysis and detection of "copypasta" (copied-and-pasted duplication) are valuable additions to a metrics suite.

Although our primary languages are JavaScript, PHP and SQL, and we have focused on those tools in table 1, other Box UK products and services use Java, for which we use JUnit for unit testing and JavaDoc for documentation, again via Hudson. ClickDensity uses Microsoft's FxCop for .NET to analyse the source for performance, security and style issues, using the CruiseControl.NET CI server.

 

Introducing automated metrics to your project/organisation

If you have a software product that has been developed without metrics, adding a tool like CodeSniffer can be demoralising, as it spews thousands of errors to standard output. It is best to pick a single rule at a time and apply it to the product until you have a green light again. Chipping away is usually the best policy, rather than facing more errors than you can reasonably cope with.

You may encounter resistance from some developers who feel that this restricts their way of doing things, and who have no interest in writing clean code as it is conventionally understood. In truth, none of these metrics should impede problem solving in any way, and if they do, they should be abandoned. I would recommend encouraging such individuals to read some of the literature on the subject, such as Bob Martin's "Clean Code" mentioned above.

Everyone's opinion differs on the minutiae of coding standards, but professionalism dictates that you pick a standard (and it doesn’t really matter which) and stick to it for consistency in your product. Insisting upon being different is unprofessional – when you work on a product, you adopt the standards of that product.

Automated tools can never tell the whole story, but they are a useful way to bring everyone into working in the same way and catch problems before they get out of the door. A consistent, elegant codebase is maintainable and pleasant for its developers and third party clients alike.

Posted on 19th August 2010
