I want to explore the notion of quantifying the amount of succinctness a programming language provides. That is, how much a high-level language reduces the complexity of solving a problem.
This idea of "simplification" is a factor of text-wise reduction (fewer characters needed to express a complex concept, à la Algorithmic Information Theory) and another, harder-to-quantify concept of maintainability. Fleshing out this latter concept, it clearly has to do with how easily programmers can establish consensus for the given task (i.e. given a particular implementation, would other programmers agree it is the best answer?).
I will define the Kolmogorov Quotient so that higher numbers for a given language denote a greater *reduction* in the complexity of solving the problem in that language.
Once the basic premise and the methodology above are agreed to, any specific implementation differs only by a rough constant. (That is, as long as the methodology is the same across all measurements, the numbers should be valid and comparable.)
But it could go something like this: pick a language "close to the machine", like C or assembly, and measure the number of bytes of machine code needed to implement a standard suite(*) of common, non-threaded programming tasks (base_language_count). Then code the exact same functionality in the language you want to measure (without using external libraries) and count the number of bytes of source code (test_language_count).
KQuotient = base_language_count / test_language_count.
This *should* always be greater than 1.0.
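To make the ratio concrete, here is a minimal sketch of the measurement. The file names and the per-task averaging are my own assumptions, not part of the definition above; all the formula itself requires is two byte counts per task.

```python
import os

def k_quotient(base_impl_path: str, test_impl_path: str) -> float:
    """Kolmogorov Quotient for a single task.

    base_impl_path: the compiled machine code of the baseline (e.g. C) version.
    test_impl_path: the source code of the same task in the language under test.
    Both paths are placeholders; the function just compares byte counts.
    """
    base_language_count = os.path.getsize(base_impl_path)
    test_language_count = os.path.getsize(test_impl_path)
    return base_language_count / test_language_count

def suite_k_quotient(task_pairs: list[tuple[str, str]]) -> float:
    """Average the per-task quotients over a whole suite of tasks."""
    quotients = [k_quotient(base, test) for base, test in task_pairs]
    return sum(quotients) / len(quotients)
```

A suite result below 1.0 would signal that the tested language is *less* succinct than the machine-level baseline for those tasks, which is why the claim above is hedged with "*should*".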
(*) "standard suite of common programming tasks...": I see two main categories:
- Data-processing suite limited to simple text I/O *(computation towards the machine)*
- GUI suite *(computation towards the user)*