Cyclomatic complexity is a well-established software engineering metric that measures, in seemingly straightforward terms, how complex a program is, based on the number of branches within a module (function, procedure, subsystem, etc.) and how many modules it references. There are many definitions, and they essentially borrow from the same understanding, one that apparently has a good grasp on either what it means intuitively or the requisite math. Both approaches appear to be valid, but as usual, a more formal understanding contributes to a more carefully constructed cognitive model, enables us to move from the general to the specific without logic errors, and thus yields more value. You can get the intuitive version by simply saying "the number of paths you can take through a module without duplication" without seeming to confuse anyone; we all know what a "path" and a "module" are, right? Probably not exactly. The formal version takes a little work. Since there is a dearth of formal definitions of "linearly independent" alongside the definitions of cyclomatic complexity I've found, let's see if we can raise the level of dialog and infuse a bit more rigor into the art of measuring. I won't give equations, just narrative, unless sufficiently provoked.
A program, consisting of a set of statements and data, can be thought of as a graph: a mathematical construct that relates a "vertex" or "node" (an element in a set of objects, conceptual or physical, it doesn't matter) to other vertexes or nodes (nodes from now on, since I want to avoid the contention between "vertices" and "vertexes"). These relations are "edges." In graphs, edges can be "directed" or "undirected." We're interested in the directed edges between nodes, since programs have "flow," the CPU marching forward, pushed by the voltage given it. An edge exists between two nodes (statements in the program) if they are logically adjacent in the program, due to the progression of the CPU; this includes branching and function calls. The set of all edges is, effectively, the program. If we start at the starting node (public void Main()) and follow each directed edge until the end, the program completes (or should, since a program that doesn't have a terminating state is, in practical terms, an error).
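To make the graph view concrete, here is a minimal sketch (the function, node names, and adjacency-list layout are my own illustration, not from any standard): a tiny if/else modeled as a directed graph, where each statement is a node and each directed edge connects logically adjacent statements, including both arms of the branch.

```python
# The program being modeled:
#
# def f(x):
#     if x > 0:        # node A
#         y = x        # node B
#     else:
#         y = -x       # node C
#     return y         # node D
#
# Control-flow graph as an adjacency list: each key is a node,
# each value lists the nodes reachable by one directed edge.
cfg = {
    "A": ["B", "C"],  # the branch gives A two outgoing edges
    "B": ["D"],
    "C": ["D"],       # both arms rejoin at the return statement
    "D": [],          # terminating node: following edges ends here
}

nodes = len(cfg)
edges = sum(len(targets) for targets in cfg.values())
print(nodes, edges)  # 4 nodes, 4 edges
```

Starting at A and following directed edges always lands on D, the terminating node, which is exactly the "program completes" property described above.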
Another way to look at the set of all directed edges is as a set of vectors. A vector is a mathematical construct that relates certain quantities together in a structure, basically an ordered set. A physical force, for example, is commonly modeled as a vector because it has more than one component (amount, or magnitude, and direction), and thus can't be measured by a "scalar," or single, value. So each edge in our program graph is a vector quantity whose magnitude and direction give us the next statement, from any starting statement, in our program space.
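One way to make "an edge is a vector" concrete (this encoding is my own sketch, one standard choice among several, not something the article prescribes) is to give each node a coordinate and represent a directed edge u → v as a vector with -1 at u's coordinate and +1 at v's. Both the direction and the "magnitude" (one step of control flow) live in the structure:

```python
# Node set for the if/else example from earlier; the ordering fixes
# each node's coordinate in the vector space.
nodes = ["A", "B", "C", "D"]
index = {name: i for i, name in enumerate(nodes)}

def edge_vector(u, v):
    """Represent the directed edge u -> v as a vector over the nodes."""
    vec = [0] * len(nodes)
    vec[index[u]] = -1   # control flow leaves u
    vec[index[v]] = +1   # control flow enters v
    return vec

print(edge_vector("A", "B"))  # [-1, 1, 0, 0]
```

With edges written this way, "linearly independent," in the next paragraph, means exactly what it means in linear algebra: no edge vector in the chosen set can be built by adding and scaling the others.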
Now, take the set of all the vectors we can make in our program space, and keep only the ones that don't repeat edges in the graph. (Looking at it another way, a repeated edge would mean executing the same two statements in succession in the same program state, which really can't happen in a computer program anyway, barring some external influence on the instruction pointer register from debugging or an exploit; even in a loop, while the same statements are being executed, the loop counter is incrementing or a stream is being read and exhausted, so the program state is changing.) If the set of vectors we choose also covers all the nodes in our program graph, then we have a set of "linearly independent" edges, or paths, through our program. This is also known as a basis for the program space.
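A hedged aside: the size of that basis, the number of linearly independent paths, doesn't have to be found by enumeration. For a connected control-flow graph, McCabe's formula gives it directly as V(G) = E - N + 2, where E is the edge count and N the node count. Applied to the if/else sketch from earlier (the graph and labels are my illustration):

```python
# Control-flow graph of the if/else example: A branches to B or C,
# and both arms rejoin at D.
cfg = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

N = len(cfg)                                # number of nodes
E = sum(len(t) for t in cfg.values())       # number of directed edges
complexity = E - N + 2                      # McCabe's V(G)
print(complexity)  # 2: the basis is {A->B->D, A->C->D}
```

The two basis paths cover every node and every edge, and any other walk through the graph can be composed from them, which is exactly what "basis" promises.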
Once we have a basis, a set of linearly independent paths through the program, we can proceed to evaluate the complexity of the now well-determined system. The added formalism could potentially buy us a lot: reducing the problem to a set of linear equations could allow us to develop optimizers for both compile time and run time, or just help us root out our own inefficiencies. The bare term "linearly independent" doesn't allow for that immediate insight (Math 341 isn't a prerequisite for CompSci at most schools, though most developers are comfortable with matrix math); it just makes our work appear to have "truthiness" instead of trustworthiness. So if you talk about cyclomatic complexity, please explain "linearly independent" carefully, or make sure your references do.