7 Key Concepts You Need to Know From Herbert Simon’s Paper on the Architecture of Complexity

Introduction

In 1962, Herbert A. Simon, a distinguished figure in the fields of economics, psychology, and artificial intelligence, published a beautiful paper entitled “The Architecture of Complexity.” This seminal work has since become a cornerstone in studying complex systems, transcending disciplinary boundaries and impacting various research fields.

Complexity and complex systems abound in nature, and their characteristics are most prominently observed in living systems, from DNA molecules to single and multicellular organisms, animals, and social systems.

Simon’s paper thoroughly examines the nature of complexity. By exploring the interconnectedness of diverse disciplines, such as biology, physics, sociology, and computer science, Simon illuminates the inherent complexity present in systems at both the molecular and societal levels.

This article will break down Simon’s original paper into seven key concepts with commentary on each.

Concept 1: Why We Need Systems Theory

A number of proposals have been advanced in recent years for the development of “general systems theory”, which, abstracting from properties peculiar to physical, biological, or social systems, would be applicable to all of them. We might well feel that, while the goal is laudable, systems of such diverse kinds could hardly be expected to have any nontrivial properties in common. Metaphor and analogy can be helpful, or they can be misleading. All depends on whether the similarities the metaphor captures are significant or superficial.

— H. A. Simon, The Architecture of Complexity

Describing complex systems raises two fundamental issues:

  • The first issue is how reliable a reduced description can be when it captures a system’s macro rather than its micro properties. Some systems exhibit structural redundancies and can therefore be reduced without much loss of information, while in other cases, to paraphrase Murray Gell-Mann, the only reliable description of a complex system is the system itself. We return to this point later when discussing Nearly Decomposable Systems.
  • The second issue is whether a framework of sufficient generality exists that can rigorously describe aspects of complex systems. Simon believes this to be the case and refers to General Systems Theory (along with useful analogies and metaphors) as providing the foundational concepts for such a framework.

The significance of General Systems Theory lies in its ability to explain the phenomena of complex systems in sufficiently abstract and rigorous terms, even though those systems span fields and disciplines as diverse as physics, chemistry, biology, and the social sciences.

The study of complex systems spanned various fields, such as chemistry, biology, and sociology, and it became imperative at some point to find a generic framework capable of describing these systems. The framework was developed by pioneers in complexity theory like Ludwig von Bertalanffy and became known as General Systems Theory.

General Systems Theory (GST) is a conceptual framework and interdisciplinary field that originated in the mid-20th century. It was developed to address complex phenomena and problems involving interaction between system components or elements. Here are the key facts about General Systems Theory:

  • Origin and Pioneers: Austrian biologist Ludwig von Bertalanffy first introduced General Systems Theory in the 1930s. He aimed to create a unified framework for understanding systems across various disciplines.
  • Interdisciplinary Nature: GST is inherently interdisciplinary, encompassing fields such as biology, physics, psychology, sociology, economics, and engineering. It seeks to find common principles and concepts that can be applied across these diverse domains.
  • Systems and Components: GST views the world as systems composed of interconnected and interdependent components or elements. These elements can be physical entities, processes, or abstract concepts.
  • Hierarchy and Levels: Systems can exist at multiple hierarchical levels. For example, a biological system like an organism is composed of subsystems (organs), which are further composed of cells, and so on. GST often involves analyzing systems at different levels of abstraction.
  • Holism: A core principle of GST is holism, which emphasizes the importance of considering the entire system and the interactions between its components rather than focusing solely on individual parts. This holistic perspective is crucial for understanding the emergent properties of systems.
  • Feedback and Control: GST recognizes the significance of feedback mechanisms in systems. Feedback loops can either maintain stability or drive change within a system. Control theory deals with regulating system behaviour and is often related to GST.
  • Emergence: The appearance of new properties or behaviours in a system that cannot be explained solely by understanding its individual components. GST explores how these emergent properties arise from the interactions within a system.
  • Open and Closed Systems: GST distinguishes between open and closed systems. Open systems interact with their environment, exchanging matter, energy, or information, while closed systems do not have such interactions.
  • Applications: General Systems Theory has found applications in various fields, including biology (ecology and systems biology), management (systems thinking in organizations), and engineering (systems engineering and control theory).
  • Critiques: While GST offers valuable insights into complex systems, it has also faced criticism for its abstract and general nature. Some argue that it lacks the precision needed for practical problem-solving in specific domains.

General Systems Theory provides a systematic way to analyze and understand complex systems, emphasizing their interconnectedness and the emergence of properties at different levels of organization. It has influenced multiple disciplines and continues to be relevant in addressing complex problems across various fields.

Concept 2: What Is a Complex System?

Roughly, by a complex system, I mean one made up of a large number of parts that interact in a nonsimple way. In such systems, the whole is more than the sum of the parts, not in an ultimate, metaphysical sense, but in the important pragmatic sense that, given the properties of the parts and the laws of their interaction, it is not a trivial matter to infer the properties of the whole.

— H. A. Simon, The Architecture of Complexity

A complex system is composed of many parts interacting in a non-trivial way. Herbert Simon defines it as a “whole that is more than the sum of the parts”. A national economy is a perfect example of such a system.

In Complexity in Natural and Human Systems — Why and When We Should Care, we provided four definitions of complexity which, despite some overlap, presented precise notions of what a complex system looks like. Herbert Simon’s original definition is one of them and revolves around the following concepts:

  • Complexity arises in a system with many parts interacting in non-simple ways. For example, elementary particles (quarks) can form atomic nuclei through the strong force, nuclei interact with electrons via the electromagnetic force to form atoms, and atoms interact with other atoms via covalent, ionic, hydrogen, or van der Waals bonds to form molecules. Another example can be found in social groups: culture, shared history, hierarchical organisation, status, and power relations govern the interaction between two team members.
  • The macro properties of a complex system cannot be inferred from the micro properties of its parts. This concept is summed up by the phrase “the whole is more than the sum of the parts”: the system cannot be reduced to the sum of its parts without losing precision and predictive power. As we shall see later when discussing Nearly Decomposable Systems, a complex system can be satisfactorily understood by recursively decomposing it into smaller subsystems until some elementary level is reached. This elementary level is somewhat arbitrary and left to the discretion of the scientist, and that arbitrariness in choosing the boundaries turns out to be acceptable.

Concept 3: Why Hierarchical Complex Systems Are More Frequent in Nature

The lesson for biological evolution is quite clear and direct. The time required for the evolution of a complex form from simple elements depends critically on the numbers and distribution of potential intermediate stable forms.

— H. A. Simon, The Architecture of Complexity

In Dr. Simon’s paper, hierarchies play a pivotal role in the rise and evolution of complex systems:

  • A hierarchical system comprises multiple subsystems, each of which can be subdivided into its own subsystems. The number of subsystems belonging to a specific node is called its span.
  • The relationship between a top system and its child subsystems is not limited to the authoritarian hierarchy of the boss-subordinate type. In Simon’s view, any two interacting systems form a hierarchy of some sort defined by the intensity and nature of the interaction.
  • A hierarchical system can evolve far more quickly than its non-hierarchical counterpart. The speed of evolution depends heavily on the number and distribution of stable intermediate forms: a complex form assembled from already stable subassemblies is far more likely to survive interruptions than the same form built in one unbroken sequence from elementary parts. Consider how improbable it was for the first cell to emerge from random processes, compared with how routinely an existing cell replicates into two new ones (see the sketch below).
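Here is a minimal back-of-the-envelope sketch (in Python) of Simon’s watchmaker parable, which is also quoted in the Conclusion below: Tempus assembles his 1,000-part watches in one unbroken sequence, while Hora builds them from stable subassemblies of ten. The 1% interruption probability and the assumption that an interruption scraps any unfinished (unstable) assembly are illustrative choices in the spirit of the parable; the formula is the standard expected number of Bernoulli trials until k consecutive successes.

```python
def expected_additions(parts: int, p_interrupt: float) -> float:
    """Expected number of part-additions needed to finish one stable assembly
    of `parts` parts, when each addition is interrupted with probability
    `p_interrupt` and an interruption scraps the unfinished assembly."""
    q = 1.0 - p_interrupt
    return (1.0 - q ** parts) / (p_interrupt * q ** parts)

P = 0.01            # chance that an interruption ruins the work in progress
TOTAL_PARTS = 1000

# Tempus: all 1,000 parts added in one unstable sequence.
tempus = expected_additions(TOTAL_PARTS, P)

# Hora: stable subassemblies of 10, three levels deep
# (100 bottom-level + 10 mid-level + 1 top-level = 111 assemblies of 10 elements).
hora = 111 * expected_additions(10, P)

print(f"Tempus: ~{tempus:,.0f} additions, Hora: ~{hora:,.0f} additions")
print(f"Building from stable intermediate forms is ~{tempus / hora:,.0f}x faster")
```

The exact ratio depends on the assumed numbers, but it comes out in the thousands, which is the whole point of the parable.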

This last point, explaining why nature favours hierarchical systems, is crucial in an age where flat organisational structures are seen as the natural progression towards a more empowered and egalitarian way of running businesses. In a flat structure, the entire team is affected when the single manager is absent or overloaded. A hierarchical structure, by contrast, is easier to scale by breaking up and recomposing existing subsystems, whereas adding ever more direct reports to one line manager makes a flat structure increasingly fragile.

Another recent concept, that of self-organisation, has emerged with Agile: no formal hierarchies are imposed, and teams are encouraged to self-organise, preferably without hierarchies. Studies have shown that hierarchies will inevitably emerge in such cases, although for slightly different reasons than the one described above (speed of evolution). It seems that hierarchies are inevitable.

Concept 4: Problem-Solving Through Selective Trial and Error

Nature is the most capable problem solver. By producing life through billions of years of constant tinkering and refinement, nature has solved what appears to be an insoluble puzzle. Simon extends this analogy further to cover the problem-solving activities we normally engage in.

A considerable amount has been learned in the past five years about the nature of the mazes that represent common human problem-solving tasks, proving theorems, solving puzzles, playing chess, making investments, and balancing assembly lines, to mention a few. All that we have learned about these mazes points to the same conclusion: that human problem-solving, from the most blundering to the most insightful, involves nothing more than varying mixtures of trial and error and selectivity.

— H. A. Simon, The Architecture of Complexity

Simon’s theory of how people solve problems is based on a “mixture of trial and error and selectivity”. Confronted with problems we encounter for the first time, the following process occurs in our minds:

  • We scan our memory to look for similar problems. If we find an old problem that was successfully resolved, the same solution is attempted.
  • If the problem is new, we try to break it into more manageable chunks (logical decomposition into subsystems). This decomposition stops when we are comfortable with the current granularity level.
  • We apply a mixture of trial-and-error (preferably safe-to-fail) experiments to each subproblem identified. If a local solution at the subsystem level brings us closer to the global one, we adopt it and move on to the next subsystem.
  • We arrive at the final solution once all the subproblems have been solved. The final solution relies heavily on the stable intermediary solutions generated along the way (see the sketch after this list).
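The skeleton below is a minimal sketch of this loop in Python. The callbacks `decompose`, `candidates`, and `closer_to_goal`, as well as the `memory` dictionary, are hypothetical stand-ins for the domain knowledge the problem solver brings; Simon describes the process in prose, and this is just one way to phrase it as code.

```python
from typing import Callable, Dict, Hashable, List, Optional

def solve(problem: Hashable,
          decompose: Callable[[Hashable], List[Hashable]],
          candidates: Callable[[Hashable], List[str]],
          closer_to_goal: Callable[[Hashable, str], bool],
          memory: Dict[Hashable, List[str]]) -> Optional[List[str]]:
    """Selective trial and error over a recursively decomposed problem."""
    # 1. Scan memory for a problem we have already solved.
    if problem in memory:
        return memory[problem]

    # 2. Break a new problem into more manageable chunks.
    subproblems = decompose(problem)

    # 3. At the elementary level, run trial and error filtered by selectivity.
    if not subproblems:
        for attempt in candidates(problem):
            if closer_to_goal(problem, attempt):   # feedback-based heuristic
                memory[problem] = [attempt]        # keep the stable intermediate
                return [attempt]
        return None                                # no acceptable move found

    # 4. Assemble the final solution from the solved subproblems.
    solution: List[str] = []
    for sub in subproblems:
        partial = solve(sub, decompose, candidates, closer_to_goal, memory)
        if partial is None:
            return None
        solution.extend(partial)
    memory[problem] = solution
    return solution
```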

Concept 5: Selectivity and Feedback Loops

When we examine the sources from which the problem-solving system, or the evolving system, as the case may be, derives its selectivity, we discover that selectivity can always be equated with some kind of feedback of information from the environment.

— H. A. Simon, The Architecture of Complexity

Minimizing an objective function is a common approach to neural network supervised learning. The idea is very straightforward: an objective (or cost function) is constructed, and the neural network parameters are varied algorithmically until a specific configuration minimizes this cost function. It is possible that the algorithm will find a local minimum instead of the global one. The local minimum might be deemed “good enough” for all practical purposes.
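As a concrete, deliberately tiny illustration of this feedback-driven search, the sketch below runs plain gradient descent on a one-dimensional cost function with several local minima. The cost function, learning rate, and starting points are arbitrary choices made for illustration; they are not taken from Simon’s paper or from any real neural network.

```python
import math

def cost(w: float) -> float:
    """A toy cost function with several local minima."""
    return math.sin(3 * w) + 0.1 * w ** 2

def grad(w: float) -> float:
    """Derivative of the toy cost function."""
    return 3 * math.cos(3 * w) + 0.2 * w

def gradient_descent(w0: float, lr: float = 0.01, steps: int = 2000) -> float:
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)   # the gradient acts as feedback from the environment
    return w

# Different starting points settle into different minima: the solution found
# depends on the path taken, and a local minimum may be deemed "good enough".
for w0 in (-3.0, 0.0, 3.0):
    w = gradient_descent(w0)
    print(f"start {w0:+.1f} -> w = {w:+.3f}, cost = {cost(w):.3f}")
```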

This approach to problem-solving presents the following features:

  • The discovered solution is not unique but depends on the specific path followed by the person searching for the solution. This path depends on how the original problem was decomposed and the past experiences of the person involved.
  • The solution might be suboptimal but deemed good enough for all practical purposes. Consider, for example, the supervised learning of a neural net using the backpropagation algorithm: the learning algorithm continuously searches the space of possible solutions, jumping from one local minimum to the next until it exhausts its allocated search time.
  • As per Simon, the selection process (or heuristics) used to distinguish great solutions from mediocre ones is shaped by prior experience and environmental feedback. Complex adaptive systems navigate towards acceptable solutions by constantly seeking guidance from the environment in which they are embedded.

Concept 6: Nearly Decomposable Systems

(a) in a nearly decomposable system, the short-run behaviour of each of the component subsystems is approximately independent of the short-run behaviour of the other components; (b) in the long run, the behaviour of any one of the components depends in only an aggregate way on the behaviour of the other components.

— H. A. Simon, The Architecture of Complexity

The notion of near decomposability is at once intuitive, elusive, and powerful. It lies at the heart of our ability to describe complex systems without having to track every single one of their parts, which, in the case of a volume of gas for example, would be intractable. Near decomposability can be explained as follows:

  • We start by placing all the parts of a complex system in a square interaction matrix whose rows and columns are the parts, with each element measuring the strength of the interaction between the corresponding pair of parts.
  • In a non-trivial system, the parts will interact with different intensities, and groups of highly interacting parts can be distinguished from other groups where interaction levels are lower. The matrix can then be decomposed into sub-matrices where intra-group interactions are much stronger than inter-group ones.

Because of the weak links between the different subgroups, the short-term evolution of any one group will be approximately independent of the rest. In the long run, the behaviour of one group will depend on the others only in an aggregate way; the short-term, high-frequency changes cancel out. The sketch below makes this concrete with a toy interaction matrix.
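The interaction strengths in the sketch are invented numbers for six made-up parts, A to F, chosen so that {A, B, C} and {D, E, F} form two tightly coupled blocks with only weak links between them; the large ratio of average intra-block to inter-block strength is what makes this toy system nearly decomposable.

```python
import numpy as np

parts = ["A", "B", "C", "D", "E", "F"]

# Made-up interaction strengths: {A, B, C} and {D, E, F} interact strongly
# within their own block and only weakly across blocks.
M = np.array([
    [0.0, 9.0, 8.0, 0.1, 0.0, 0.1],   # A
    [9.0, 0.0, 7.0, 0.0, 0.1, 0.0],   # B
    [8.0, 7.0, 0.0, 0.1, 0.0, 0.0],   # C
    [0.1, 0.0, 0.1, 0.0, 9.0, 8.0],   # D
    [0.0, 0.1, 0.0, 9.0, 0.0, 7.0],   # E
    [0.1, 0.0, 0.0, 8.0, 7.0, 0.0],   # F
])

blocks = [[0, 1, 2], [3, 4, 5]]       # candidate subsystems

intra = np.mean([M[i, j] for b in blocks for i in b for j in b if i != j])
inter = np.mean([M[i, j] for i in blocks[0] for j in blocks[1]])

print(f"average intra-block interaction: {intra:.2f}")
print(f"average inter-block interaction: {inter:.2f}")
# A large intra/inter ratio means each block's short-run behaviour is roughly
# independent of the other block, which matters only in the aggregate.
```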

Consider the structure of a society as an example. Strong familial ties hold families firmly together, while appreciable but less strong ones hold tribes, communities, and neighbourhoods in looser but fairly distinct structures. On the nation-state level, similar bonds allow citizens to form identities and cultures and perceive themselves as one whole.

Concept 7: Hierarchies in Social Systems

In social as in physical systems, there are generally limits on the simultaneous interaction of large numbers of subsystems. In the social case, these limits are related to the fact that a human being is more nearly a serial than a parallel information-processing system. He can carry on only one conversation at a time, and although this does not limit the size of the audience to which mass communication can be addressed, it does limit the number of people simultaneously involved in most other forms of social interaction.

— H. A. Simon, The Architecture of Complexity

The necessity and inevitability of hierarchies in complex systems have been amply described in the previous sections. In the present one, we will focus a bit more on the specific examples of social hierarchies and why they emerge.

Recall that near decomposability is a property of the interaction matrix representing the strength of interactions between the different parts of a complex system: for the system to be nearly decomposable, distinct subsystems can be delineated with strong intra-group interactions and weak inter-group ones. In the toy matrix sketched earlier, for example, the group “A-B-C” represents such a subsystem, held together by strong internal connections.

Simon provides a compelling theory that is well established in the scientific community (see Sapiens – A Brief History of Humankind). The theory posits constraints on how many relationships an individual can build and maintain, and those limits in turn cap the size of the groups being formed. The type of relationship sought determines the limit.

For example:

  • Miller’s number, 7, places a limit on our information-processing capacity; hence the practice of placing teams of no more than about seven members under one manager.
  • 5 is roughly the maximum number of people one can trust deeply, which limits the size of an effective committee.
  • Dunbar’s number, 150, limits the number of individuals in a socially cohesive group (see the sketch below).
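One consequence of such span limits is that the depth of a hierarchy grows only logarithmically with its size, which echoes the earlier point that hierarchical structures are easier to scale. Below is a minimal back-of-the-envelope sketch assuming, purely for illustration, that each manager handles at most seven direct reports; the function and the headcounts are hypothetical, not taken from Simon’s paper.

```python
import math

def management_layers(headcount: int, span: int = 7) -> int:
    """Minimum number of management levels above the individual contributors,
    assuming each manager handles at most `span` direct reports."""
    layers = 0
    groups = headcount
    while groups > 1:
        groups = math.ceil(groups / span)   # one manager per group of `span`
        layers += 1
    return layers

for n in (7, 50, 350, 2_500, 150_000):
    print(f"{n:>7} people -> {management_layers(n)} management layers (span 7)")
```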

Conclusion

Philip assembled his Macedonian empire and gave it to his son, to be later combined with the Persian subassembly and others into Alexander’s greater system. On Alexander’s death, his empire did not crumble to dust but fragmented into some of the major subsystems that had composed it. The watchmaker argument implies that if one were Alexander, one should be born into a world where large, stable political systems already exist. Where this condition was not fulfilled, as on the Scythian and Indian frontiers, Alexander found empire building a slippery business.

— H. A. Simon, The Architecture of Complexity

The general reader might ask why we (software developers) need to care about complexity. The answer is twofold. First, “The Architecture of Complexity” is a masterful paper by a first-rate intellectual, Herbert A. Simon, and a classic for complexity enthusiasts. Second, complexity theory offers a set of concepts that are invaluable for understanding social systems, such as software teams.

The study of the very small (elementary particles, quantum systems, superstrings) and the very large (solar systems, galaxies, clusters and superclusters of galaxies, and the universe itself) relies heavily on deterministic, law-based frameworks of which Newtonian dynamics is the archetype. Everything in between, such as crystals, proteins, DNA, living organisms, and social groups of any size, along with many other physical, chemical, and biological systems, is complex, and its study through the Newtonian framework is likely to frustrate even the most patient.

Applying Newtonian dynamics, i.e. a framework where the future evolution of a system is deterministic and depends solely on its laws of dynamics and initial conditions, to complex systems such as a team of software developers is doomed to fail. Software managers and developers must therefore expand their knowledge in the areas of anthropology, philosophy, and complexity theory. This article, and the many others we have authored on these topics, is there to help both the author and the reader understand what makes complex systems tick and how to act within them.

This study of complexity, especially as it applies to the social sciences, is part of a broader effort towards achieving Operational Excellence in software development teams, helping developers succeed in creating better and more affordable software. We hope this article has been insightful, helping the reader make sense of the complex (and sometimes hostile) ecosystem in which they operate.
