Cyclomatic Complexity

From Wayne's Dusty Box of Words

In 1956, George Miller, a cognitive psychologist, proposed as a law of human cognition and information processing that humans can effectively process no more than seven, plus or minus two, units (or "chunks") of information at any given time [1]. That limit applied to short-term memory and to a number of other cognitive processes. After being supported by further research by Miller and others, it became known as Miller's Law. It informs many design aspects important to information systems and is the basis of the cyclomatic complexity metric.

Cyclomatic complexity is a code analysis metric first proposed by Thomas McCabe in 1976 [2]. The cyclomatic complexity of a section of source code is the number of linearly independent paths through it. The more paths, the higher the complexity, and the higher the chance that the programmer can't keep them all straight in their head, which leads to defects.

From a software engineering perspective, the goal is to keep the complexity of any given module of code to a minimum, and definitely below your "Miller" threshold of 7±2. Code with low complexity is easier to write, easier to read and document, easier to test, and easier for others to understand and maintain.

Some of the more rabid Test-Driven Development proponents preach that every code module should do exactly one thing, yielding a cyclomatic complexity score of 1. That's rarely practical in real life. However, I do coach my junior developers that each module should aim to do one coherent 'thing' and take no more than 2 or 3 steps towards getting that 'thing' done.
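To make the coaching advice concrete, here is a hypothetical before-and-after: a branchy dispatcher whose complexity grows with every new case, rewritten as a table lookup that stays near a complexity of 1 no matter how many operations are added. The function and operation names are illustrative, not from any particular codebase:

```python
# Branchy version: one decision point (and one path) per operation,
# so complexity climbs as cases are added.
def apply_op_branchy(op, a, b):
    if op == "add":
        return a + b
    elif op == "sub":
        return a - b
    elif op == "mul":
        return a * b
    elif op == "div":
        return a / b
    else:
        raise ValueError(f"unknown op: {op}")

# Table-driven rewrite: adding an operation means adding a table entry,
# not another branch, so the dispatcher's complexity stays flat.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "div": lambda a, b: a / b,
}

def apply_op(op, a, b):
    return OPS[op](a, b)  # unknown ops raise KeyError

print(apply_op("mul", 6, 7))  # 42
```

Both versions behave the same, but only one of them asks the reader to trace an ever-growing chain of branches.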

References

  1. Miller, George A. (1956). "The magical number seven, plus or minus two: Some limits on our capacity for processing information". Psychological Review 63(2): 81-97.
  2. McCabe, Thomas (1976). "A Complexity Measure". IEEE Transactions on Software Engineering: 308-320.