Arthur Schwarz - 2012-11-07

Hi Tim;

I just began using your program under Cygwin. If you don't mind, I have a few comments (however, no 'bitter fare' or criticism, just some comments).

1. One of McCabe's protégés produced an extension of the McCabe Cyclomatic Complexity Metric.

    It's a NIST (not FIPS) publication which I don't have in front of me now, but if you're interested, I'll look it up. Anyhow, one measure of truly awful code is when there is a branch into a new scope. Graphically this can be seen by 'reducing' the Cyclomatic Complexity graph by removing all subgraphs that don't have crossing arcs. The details are given in the cited but unreferenced NIST pub.
2. McCabe's Cyclomatic Complexity has a known issue with, for example, switch statements. Because
    of the way the graph is constructed, each case statement within a switch (in C) is counted as '1', leading to the erroneous impression that a well-formed function with a switch statement has poor quality. Nothing can be done about this within the metric itself, but the article in 1) repairs this issue.
3. I don't see much practical use in many of the averages used (and please, I am not denigrating your project, only pointing out what I see). In particular:
    a. Comments / project have no merit. A project manager / Quality Assurance person / Tech Lead would probably look at comments per function per (physical) file (except for C#), comments per class excluding comments per function, and comments per file excluding the other comments. The aggregated average I don't think has any practical use.
    b. McCabe's Complexity Metric makes more sense on a per-function and average-per-function basis. The summation (?) of all metrics has no practical meaning, in the sense that quality is most often seen as the quality of a function (as given in the metric), or the quality of the file (as given as the average quality per function), or the quality of the project (as given in the average quality of the functions in the project).
4. A subordinated HTML file showing:
    1. Project, then
    2. classes, then
    3. functions.
    4. and classes / functions (where appropriate)
    might be worthwhile. You can see example layouts in, for example, Javadoc from Oracle or Doxygen on SourceForge. This would allow a means of navigating to the required data and would preserve the aggregated sums associated with classes and projects. Otherwise it becomes very messy when everything is presented at the same level.
5. Some references would be nice, along with some algorithmic details on the various things seen. For example:
    a. I assume that the Complexity Metric is a summed value.
    b. I assume that a red cell means "tsk, tsk - not good", a yellow cell means "you can do better", and a white cell means "you're doing OK", but I don't know what the ranges of values should be or whether my interpretation is correct.
    c. How are the lines counted? I (simply) do "> egrep '}|;' *.h *.cpp | wc" to get a figure of about 14,300 LOC while you get 18,184 LOC. How can I find out my counting error if I don't know how you count? Similar reasoning applies to other algorithms used but not described.
    d. Where appropriate it would be nice to see all the values in a column summed. Given the example in c) above, perhaps sum the LOC in the Procedural Matrix.

I think this is a great effort. I like its conception and execution, and it provides a much-needed normalized look at project data. Rather than come into this with empty hands, since this is in the public domain, maybe I can help a little if the aid is wanted.