Teaser: Creating Ad Hoc Reports in Panopticode 0.2

In Metrics Must be Interpreted In Context I described one of my preferences when creating rules around metrics: namely, that one should not look at metrics in isolation, but within the context of other metrics. I described a rule that said unit test line coverage must be greater than 80% for all code with a cyclomatic complexity over 1. While you could enforce this rule in Panopticode 0.1 by creating a custom report, that is not ideal. Who wants to write Java code every time they create a new rule? This will get much easier in Panopticode 0.2.

Panopticode 0.2 has been re-architected to use RDF as its file format and internal data store. This enables you to write queries using SPARQL. Here is a query that finds violators of the rule mentioned above:

 PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
 PREFIX panopticode: <http://www.panopticode.org/ontologies/panopticode#>
 PREFIX java: <http://www.panopticode.org/ontologies/technology/java#>
 PREFIX emma: <http://www.panopticode.org/ontologies/supplement/emma/1#>
 PREFIX javancss: <http://www.panopticode.org/ontologies/supplement/javancss/1#>

 SELECT ?packageName ?filePath ?className ?methodSignature ?ccn ?lineCoveragePercent
 WHERE
 {
   ?package         rdf:type                       java:Package           .
   ?package         panopticode:name               ?packageName           .
   ?package         java:hasFile                   ?file                  .
   ?file            panopticode:filePath           ?filePath              .
   ?file            java:hasType                   ?class                 .
   ?class           panopticode:name               ?className             .
   ?class           java:hasExecutableMember       ?method                .
   ?method          java:methodSignature           ?methodSignature       .
   ?method          emma:hasLineCoverage           ?lineCoverage          .
   ?method          javancss:cyclomaticComplexity  ?ccn                   .
   ?lineCoverage    emma:coveredPercent            ?lineCoveragePercent   .

   FILTER (?ccn > 1) .
   FILTER (?lineCoveragePercent <= 80.0)
 }
 ORDER BY DESC(?ccn) ?lineCoveragePercent ?packageName ?filePath ?className ?methodSignature
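
For readers who want to try this sort of query before 0.2 ships its own reporting support, here is a minimal sketch using Apache Jena. It assumes the Panopticode RDF store is in a file Jena can read and that the query above has been saved to coverage-rule.sparql; both file names and the class name CoverageRuleReport are placeholders of my own, not part of Panopticode.

 import org.apache.jena.query.QueryExecution;
 import org.apache.jena.query.QueryExecutionFactory;
 import org.apache.jena.query.QuerySolution;
 import org.apache.jena.query.ResultSet;
 import org.apache.jena.rdf.model.Model;
 import org.apache.jena.riot.RDFDataMgr;

 import java.nio.file.Files;
 import java.nio.file.Paths;

 public class CoverageRuleReport {
     public static void main(String[] args) throws Exception {
         // Load the Panopticode RDF data store (file name is an assumption).
         Model model = RDFDataMgr.loadModel("panopticode.rdf");

         // The SPARQL query shown above, saved to a separate file.
         String query = new String(Files.readAllBytes(Paths.get("coverage-rule.sparql")));

         try (QueryExecution exec = QueryExecutionFactory.create(query, model)) {
             ResultSet results = exec.execSelect();
             while (results.hasNext()) {
                 QuerySolution row = results.next();
                 System.out.printf("%s %s ccn=%s coverage=%s%%%n",
                         row.getLiteral("className").getString(),
                         row.getLiteral("methodSignature").getString(),
                         row.getLiteral("ccn").getLexicalForm(),
                         row.getLiteral("lineCoveragePercent").getLexicalForm());
             }
         }
     }
 }

Swap the printf for a failing build step and the rule becomes enforceable rather than merely reportable.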

New Tool: Complexian

Marty Andrews has just released version 0.12.0 of Complexian. This version adds the ability to write its results to plain text or XML files.

Complexian is a tool for very quickly measuring NPATH and CCN of Java projects. It will be free for use until the first major release.
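
The two numbers measure different things: CCN grows by one for each decision point, while NPATH multiplies the number of acyclic paths through independent branches. The snippet below is my own illustration of the difference, not Complexian output.

 // CCN   = 1 + 2 decision points = 3
 // NPATH = 2 paths x 2 paths     = 4
 public String describe(int width, int height) {
     String result = "";
     if (width > 100) {           // first independent branch
         result += "wide ";
     }
     if (height > 100) {          // second independent branch
         result += "tall";
     }
     return result;
 }

With ten such independent branches the CCN would be 11, but the NPATH would be 1024, which is why NPATH is often the more alarming of the two.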

Support for executing Complexian is already in Panopticode’s Subversion repository and full support will be in release 0.2.


Panopticode 0.1 Is Released

Panopticode 0.1 is now available.

While there are still some rough edges, this release provides:

  • Treemaps of code coverage
  • Treemaps of complexity
  • Integration of metrics from Emma, JavaNCSS, and JDepend
  • Gathering of metrics from CheckStyle, Cobertura, Complexian, Simian, and Subversion

I would like to thank all of those who have helped make this release possible: Chris Turner, Peter Sestoft, Alok Singh, and Jeff Bay.


Metrics Must be Interpreted In Context

James Newkirk points out that the absolute measurement of code coverage is not a very interesting number by itself, but must be viewed as part of the overall trend of coverage. He also points out that it is acceptable for some categories of code to have lower, or even zero, test coverage.

These are examples of one of my fundamental rules of metrics.

Rule #1: Metrics Must Be Interpreted In Context

A few examples of context are:

  • Within a time series. A code coverage of 20% could be very good if the team is just beginning to add automated tests to a legacy system. On the other hand, a code coverage of 85% could be poor if the project was at 90% coverage at the last release.
  • By category of code. James uses the example of web service wrappers generated by Visual Studio and views within a Model-View-Presenter pattern. I agree with James that generated code does not need unit tests. However, you should be particularly thorough in testing any code generators you write.
  • Correlations with other metrics. If pressed to come up with an arbitrary threshold for a metric, I like to do it within the context of another metric. For example, I might say that the unit test line coverage must be greater than 80% for all methods with a cyclomatic complexity greater than one (see the sketch after this list). A key metric to correlate with is defects. Code that has proven to be buggy in production should get closer attention.
  • Compared to other projects in your organization. If every other software project at your company has 90% code coverage, you had better have a good reason for only having 70%.
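
To make the third example concrete, here is a hypothetical pair of methods. The accessor has a cyclomatic complexity of one and is exempt from the rule; the method with a branch is not.

 // CCN 1: no decision points, so the 80% coverage rule ignores it.
 public String getName() {
     return name;
 }

 // CCN 2: one decision point, so its line coverage must exceed 80%.
 public String displayName() {
     if (name == null) {
         return "<anonymous>";
     }
     return name;
 }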

There is a lot more to say on this subject. Stay tuned.
