Of all the definitions, I think the Merriam-Webster Kids definition of complexity says it best: “something difficult to understand or lacking simplicity”.
The best place to begin the journey of understanding the forgotten computer science foundations of software engineering practice is to explore the basics of computation and complexity in that context. Perhaps unsurprisingly, there are multiple competing definitions of complexity in science, and several within computer science alone.
In computational complexity theory1, complexity is assigned to classes of problems rather than to the formulation (algorithm) of their solution. This is useful when comparing the efficiency of algorithms, since a description of the problem fixes the optimal computational time and space required to solve it. Case in point: deciding which sorting algorithm will be optimal based on the structure of the input. Algorithmic analysis takes a similar view of complexity.
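As a rough illustration of choosing a sorting algorithm from the structure of the input (the helper names and the 10% threshold here are my own, not from any standard library):

```python
def insertion_sort(items):
    """O(n^2) in the worst case, but close to O(n) on nearly sorted input."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

def adjacent_inversions(items):
    """Cheap O(n) proxy for how 'unsorted' the input is."""
    return sum(1 for a, b in zip(items, items[1:]) if a > b)

def adaptive_sort(items):
    """Pick a strategy from the structure of the input, not just its size."""
    if adjacent_inversions(items) <= len(items) // 10:
        return insertion_sort(items)  # nearly sorted: insertion sort shines
    return sorted(items)              # general case: O(n log n) Timsort
```

The point is not the specific threshold, which is arbitrary here, but that the same problem description admits different optimal algorithms depending on the input's structure.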
Of greater interest to our exploration is a framing called “Kolmogorov complexity”2a/2b, which defines the complexity of a given sequence of values or behavior as the minimum size of the algorithmic expression (program) that emits it. This is usually applied to compression algorithms or mathematical series, but it can be applied in similar fashion to the idea of the minimum program size needed to process information in a specific way.
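A minimal sketch of the idea: two programs that emit the same 1000-character string, one whose size grows with the data and one whose size does not. The variable names are mine; this is an upper-bound illustration, not a way to compute Kolmogorov complexity (which is uncomputable in general).

```python
# The same 1000-character output, "emitted" by two different programs.
target = "ab" * 500

# A literal program: its size grows with the length of the data itself.
literal_program = f"print({target!r})"

# A generative program: its size stays constant however long the output gets.
generative_program = 'print("ab" * 500)'

# Kolmogorov complexity of `target` is bounded above by the size of the
# shortest program producing it -- here, the short generative one.
assert len(generative_program) < len(literal_program)
```

A highly patterned sequence like this one has low Kolmogorov complexity; a sequence with no shorter description than itself is, in this framing, maximally complex.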
Also interesting is descriptive complexity theory3, which describes complexity in terms of the logical resources required to define a problem relative to the class of all finite structures for an appropriate signature of the problem. That is a rather thick way of saying that complexity is measured by how much must be expressed as structures built from abstract objects of the concept domain. I am taking some liberty in translating this from precise mathematical terms, but our exploration is focused on computation from the perspective of writing software, so I am shifting terms to better match that goal.
Bringing this back to software engineering in the context of business-serving applications, we can simplify our scientifically based understanding of complexity into the following types:
- How complicated is the problem itself? Does it consist of a simple sequence of operations with limited rules, or is it a nuanced behavior involving rules applied across relationships between data?
- How much code must we generate to implement the algorithms required to solve the problem?
- How much structure, in schemas of objects/classes and/or relational data, must be created to support the algorithms and processes we must perform to solve the problem?
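To make the last two types concrete, here is a hypothetical contrast (the problem and all names are mine, chosen only for illustration) between two solutions to the same tiny task:

```python
from dataclasses import dataclass

# Minimal solution: little code, no schema to speak of.
def order_total(prices):
    return sum(prices)

# A heavier solution to the same problem: more code volume, and a schema
# of classes and relationships the algorithms must operate over.
@dataclass
class LineItem:
    description: str
    price: float

@dataclass
class Order:
    items: list

    def total(self):
        return sum(item.price for item in self.items)
```

Both compute the same result; the second simply carries more code and more structure for the same problem. Whether that extra structure is justified depends on what else the system must do with orders.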
The specific reason I opened this blog with the children’s definition of complexity is that I feel we must bring some additional reality to our definitions. This time I will rely on cognitive science as the basis for favoring the kid’s definition: there is a proven limit to the “working memory” we can apply to cognitive tasks4, and evidence that the more complex the concepts we hold in working memory, the fewer we can hold for consideration at once.
Based on our understanding of cognitive science, I would therefore add to our definitions of complexity:
- How hard is it to understand our own solution to the problem? How complicated and lacking in simplicity are the code and schemas/structures, and how much sheer volume must we comprehend in the complete system?
Now that we have established a scientific view of complexity, the next step is to explore how it applies to the practice of software engineering. In my next blog entries we will ask, and explore answers to, a few key questions:
- Are there more and less complex ways to solve the same problems?
- In software engineering for business, do we solve a problem and move on never to work with our code again?
- Why do these things matter?
Please feel free to comment or ask questions and I will add my responses to future blogs, or within the comments section.
1 – Dean, Walter, “Computational Complexity Theory”, The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/win2016/entries/computational-complexity/
2a – Kolmogorov, A. N., 1965, ‘Three Approaches to the Definition of the Concept “Quantity of Information”’. Problemy Peredachi Informatsii, 1: 3–11.
2b – Kolmogorov, A. N., and V. A. Uspensky, 1988, ‘Algorithms and Randomness’. SIAM Theory of Probability and Applications, 32: 389–412
3 – Immerman, Neil (1999). Descriptive Complexity. New York: Springer-Verlag. pp. 113–119. ISBN 0-387-98600-6.
4 – Cowan, N., 2010, ‘The Magical Mystery Four: How Is Working Memory Capacity Limited, and Why?’. Current Directions in Psychological Science, 19: 51–57. doi:10.1177/0963721409359277