Complexity in Natural and Human Systems — Why and When We Should Care

1. Introduction

The reader familiar with our ideas will promptly appreciate complexity theory’s role in shaping our understanding of human systems. But what is complexity, and why is it relevant to a discussion on software development?

As we address these questions, we will focus on fundamental concepts of complexity theory with examples of complex systems, from tiny living organisms to the entire universe and other systems in between.

These concepts will provide us with a fresh perspective that is especially useful for technically oriented staff trained in engineering. More precisely, it will help us understand why mechanistic and reductionist models frustrate our efforts to solve problems or improve processes within teams and social groups.

What will this article cover?

  • We start the article by distinguishing complicated from complex problems and systems.
  • We then define complex systems and how they impact organisations, teams, or any social group tasked with an objective.
  • To motivate the discussion further, we will examine four examples of complex systems and what concepts from complexity theory can help us better understand them.
  • Finally, we provide references and books for readers wishing to pursue this discussion further.

This article focuses on complex systems and the theory behind them. It describes the fundamental concepts of complexity with examples covering a broad range of domains. The application of complexity to business management in general, and to software development in particular, is described in two other articles (Operational Excellence and the Structure of Software Development and Delivery and Principles of Operational Excellence in Software Development). The interested reader is invited to go through them in that order.

2. Complex or Complicated?

We commonly use complex and complicated interchangeably in our daily discourse to refer to problems that are hard to solve or systems that cannot be easily understood. However, it is valuable to distinguish between the two domains (the complex and the complicated), as the tools that work in each are different.

By separating complex from complicated, we find ourselves equipped with two methods of managing problems instead of one. Each method applies best in one domain but not the other. Instead of universal tools that may work poorly outside their domain of applicability, we have a richer and more specialized toolset.

The table below shows the main differences between complicated and complex domains. For a fuller discussion, see Dr Dave Snowden’s online talks on the Cynefin framework.

| | Complicated systems | Complex systems |
| --- | --- | --- |
| Approach | Reduction and logical decomposition | Reduction without loss of prediction power is not possible |
| Nature of their structure | The whole equals the sum of its parts | The whole is more than the sum of its parts |
| Solution finding | Engineering solutions exist | Multiple viable and competing alternatives might exist |
| Management and control | Setting a target state and closing the gap | Nudging in the desired direction |
| Constraint effectiveness | Constraints are effective | Constraints are weak |
| Nature of causality | If A appears before B, then A leads to B (causality can be established) | A may lead to B today, but tomorrow might lead to C or D (causality cannot be established) |
| Predictability | Cannot produce new behaviour and are, therefore, predictable | Show recognizable behavioural patterns but are chaotic and hard to predict |
| Innovation | Innovation is not possible | Emergent properties lead to innovative behaviour |

A comparative analysis of complicated and complex systems.

Let’s take an example to illustrate the points. A computer is an intensely complicated piece of electronic equipment, but it is not complex. The same program with the same inputs always produces the same results. There is no stochasticity, non-linearity, or novel or emergent behaviour. The personal computer cannot self-organize, reprogram itself, or change the laws of physics, quantum mechanics, or data processing.

When modelling a computer, the complicated subsystems can always be abstracted away with simpler representations. Despite this reduction, the abstract model can faithfully replicate the behaviour of the actual physical system.

In contrast, we observe something very different in complex systems. A complex system can modify its internal structure and the interaction rules between its components in response to external stimuli. Attempting to understand and control a complex system, such as a social group, as though it was merely complicated can lead to much frustration.

3. What Is a Complex System?

3.1 Defining Complexity

While the previous section might have given the reader some idea of how complex systems behave, it is time to provide a more rigorous definition of complex systems. The issue, however, is that scientists and experts have not agreed on a single definition of complexity — Dr. Seth Lloyd provides a plausible explanation for this.

Complexity is not a property of a single physical state but rather a property of a process.

Seth Lloyd and Heinz Pagels. Complexity as thermodynamic depth, 1988.

In his view, complexity is like physics: it cannot be easily defined as a whole, but the physical attributes of a system can be defined and measured, sometimes with arbitrary precision. Therefore, in this section, instead of giving a unique definition of complex systems, we will quote some experts on what they believe complex systems are. The coming sections will focus on complex systems’ specific, tangible attributes.

In an excellent paper, conveniently titled What is a complex system?, the authors collected nine definitions of complex systems, of which we found two highly insightful. We also present a third definition from Wikipedia.

3.2 Complex Systems — A First Definition

A complex system is one whose evolution is very sensitive to initial conditions or to small perturbations, one in which the number of independent interacting components is large, or one in which there are multiple pathways by which the system can evolve. Analytical descriptions of such systems typically require nonlinear differential equations.

What is a Complex System? — J. Ladyman, J. Lambert, K. Wiesner

Definition 1 focuses on three important aspects of complex systems which will appear constantly in our discussions:

  • The sensitivity of their evolution to initial conditions
  • Their large number of independent, interacting components
  • The multiple pathways along which they might evolve

3.3 Complex Systems — A Second Definition

In recent years the scientific community has coined the rubric ’complex system’ to describe phenomena, structures, aggregates, organisms, or problems that share some common theme: (i) They are inherently complicated or intricate […]; (ii) they are rarely completely deterministic; (iii) mathematical models of the system are usually complex and involve non-linear, ill-posed, or chaotic behaviour; (iv) the systems are predisposed to unexpected outcomes (so-called emergent behaviour).

What is a Complex System? — J. Ladyman, J. Lambert, K. Wiesner

Definition 2 stresses two distinguishing aspects of complex systems, which will also feature consistently in our discussions:

  • Their unpredictability and chaotic behaviour
  • Their capacity for innovation

3.4 Complex Systems — A Third Definition

[…] something with many parts where those parts interact with each other in multiple ways, culminating in a higher order of emergence greater than the sum of its parts.

Wikipedia — Complexity

Definition 3 tells us that complex systems produce emergent properties that cannot be explained by examining their parts in isolation. A complex system is more than the sum of its parts.

3.5 Complex Systems — A Fourth Definition

The authors of “What is a complex system?” argue that none of non-linearity, feedback loops, spontaneous order, lack of central control, emergence, or hierarchical organisation is, on its own, a sufficient condition for complexity. So, how can we determine whether a system is complex or not?

To assess the degree of complexity in a system, Professor Lloyd suggests we try to answer the following questions about that system:

  • How difficult is it to describe?
  • How difficult is it to create?
  • How organized is its structure?

Many measures can provide us with quantitative and precise answers to those questions. In the next section, we will explore some of these measures.

4. How Can We Measure Complexity?

Yes, complexity can indeed be measured in a few different ways. Some measures are concerned with the description of the system, such as its amount of randomness or regularity. Others focus on how easy or difficult it is to create the system’s final state if one starts from its simplest form. Finally, a third family of measures attempts to quantify the degree of organization, such as the level of decomposition and hierarchy.

Let’s have a look at some of the measures in detail.

4.1 Descriptive Measures of Complexity

When assessing the characteristics of a complex system, it is essential to consider various measures that can provide insight into its structure and behaviour.

  • One such measure is information, which quantifies the knowledge or data contained within a system. Information can be expressed in bits, representing the fundamental unit of information in a binary system. In addition to information, another crucial measure is entropy. Entropy is a concept borrowed from thermodynamics but is widely applicable in information theory. It quantifies the level of disorder or randomness present in a system. In simple terms, high entropy indicates a high degree of randomness or unpredictability, while low entropy suggests a more ordered and predictable system.
  • Another measure is algorithmic complexity, also known as algorithmic information content. This concept was formulated independently by three renowned scientists, Ray Solomonoff, Andrey Kolmogorov, and Gregory Chaitin, in the mid-1960s. Algorithmic complexity is a mathematical approach used to quantify the amount of information present in a system. It seeks to determine the length of the shortest computer program that can generate a complete description of the system. In other words, it measures the level of complexity required to reproduce the system’s structure and behaviour.
  • A third measure is the concept of Lempel-Ziv complexity. Lempel-Ziv complexity is a measure of regularity in a system, similar to a compression algorithm that attempts to reduce the number of storage bits of a regularly occurring word in a string. Abraham Lempel and Jacob Ziv developed it in the 1970s, and it has found applications in various fields, including data compression, bioinformatics, and time series analysis. The idea behind Lempel-Ziv complexity is based on the observation that certain patterns or words repeat themselves in many sequences. By identifying and coding these repeated words, the description of the sequence can be reduced, resulting in a more concise representation. In other words, the Lempel-Ziv complexity of a sequence quantifies the amount of information needed to describe it. To calculate Lempel-Ziv complexity, the sequence is scanned from left to right, and each time a new word is encountered, it is added to a dictionary. If the current word is already in the dictionary, the algorithm extends it with the next symbol until it forms a new word. The complexity is then determined by the number of distinct words encountered during this process.
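To make these descriptive measures concrete, here is a minimal sketch (the function names are ours) of Shannon entropy and a simple LZ78-style phrase count for short strings:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy of a string, in bits per symbol."""
    n = len(s)
    counts = Counter(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def lz_complexity(s):
    """Number of distinct phrases in a simple LZ78-style left-to-right parse.
    Regular strings parse into few phrases; irregular ones into many."""
    phrases, current = set(), ""
    for ch in s:
        current += ch
        if current not in phrases:  # a new phrase: record it and start over
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)
```

For example, `shannon_entropy("0000")` is 0 bits (perfect order), while a string of many distinct symbols parses into far more Lempel-Ziv phrases than a constant one.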

Unfortunately, all three measures are inadequate representations of complexity, as they assign their highest values to completely random systems, which contradicts our intuitive notion of complexity. A complex system cannot reside at the extremes of order or randomness but somewhere in the middle, with sufficient order to produce patterned behaviour and enough flexibility to allow for adaptation.

4.2 Difficulty of Creation as a Measure of Complexity

What about the difficulty of creation? Is that a good measure of a system’s complexity? The measures below were proposed in different contexts to capture this attribute. Given a complex system, these methods estimate how easy or difficult it would be to recreate the system from its initial conditions.

  • Computational complexity
  • Information-based complexity
  • Logical depth
  • Thermodynamic depth
  • Cost
  • Crypticity

Logical depth appears to be the most insightful and subtle of the lot and will be the focus of our attention.

Logical depth is the shortest time a universal computer requires to compute an object from an incompressible program. In this context, an incompressible program corresponds to the simplest hypothesis we can put forward to explain the object’s creation. The running time of this program then provides a measure of the object’s complexity.

To motivate the discussion, we will use a quote from Charles Bennett’s paper on the subject:

  • Some mathematical and natural objects (a random sequence, a sequence of zeros, a perfect crystal, a gas) are intuitively trivial, while others (e.g. the human body, the digits of Pi) contain internal evidence of a nontrivial causal history.

Given a sequence of one million digits, logical depth can be understood as follows:

  • A program that simply embeds and prints the one million digits works for any sequence, but at roughly one million bytes, it is almost certainly not the shortest, as it leverages no regularities in the sequence.
  • Let’s assume the sequence is the first million digits of Pi. We can now write a small program (much shorter than one million bytes) that calculates and prints those digits.

The second program starts from a simple rule and recursively adds digits to the sequence until it reaches one million. Instead of storing one million bytes, we store a stunningly short formula for generating Pi. The time this short program needs to produce the digits is the sequence’s logical depth.
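The “short program for Pi” is not hypothetical. Gibbons’ unbounded spigot algorithm, sketched below, fits in a dozen lines yet can emit as many digits as we are willing to wait for; its running time, not its length, grows with the output, which is precisely what logical depth measures:

```python
def pi_digits(n):
    """First n decimal digits of Pi via Gibbons' unbounded spigot algorithm.
    A tiny (highly compressed) description whose run time grows with n."""
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    digits = []
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            digits.append(m)  # the next digit is settled; shift the state
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            # otherwise consume another term of the underlying series
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return "".join(map(str, digits))
```

Calling `pi_digits(10)` yields the familiar opening digits 3141592653; the program is a few hundred bytes, yet asking for a million digits merely costs more computation time.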

Charles Bennett explains why we should assume complex objects had humble origins. In his view, the fewer assumptions a creation hypothesis makes, the more plausible it is. He uses “ad-hocness” to describe redundant assumptions that a more concise hypothesis can replace. This idea agrees with the principle of Occam’s razor for theory construction and evaluation.

4.3 Degree of Organization

Measuring the degree of organization in a system is a key aspect of assessing its complexity. Complexity is a multifaceted concept that involves understanding a system’s intricate interactions, structures, and behaviours. Measuring organization within the system provides valuable insights into its complexity for several reasons:

  • Identification of Patterns: Complex systems often contain patterns or regularities that emerge from the interactions of their components. Measuring organization helps identify these patterns, such as fractal structures, self-similarity, or recurring motifs, contributing to the overall complexity.
  • Hierarchical Structure: Complex systems often exhibit hierarchical organization, where smaller components or subsystems interact to form larger, more complex structures. Measuring organization can reveal the presence and depth of these hierarchies.
  • Information Flow: The flow of information and feedback loops within a system is a crucial aspect of complexity. Measuring organization can help assess the information flow and connectivity between components, which is essential for understanding the system’s behaviour.

Various quantitative measures and metrics are used to assess the organization and complexity of systems, such as entropy, fractal dimension, network connectivity, and information theory measures like mutual information. These tools allow researchers to assign numerical values to aspects of the organization, providing a basis for comparison and analysis.

5. Examples of Complex Systems

5.1 Complexity in Nature

We normally associate complexity and dynamism with the living; the inanimate universe, by contrast, we expect to be static and eternal. The following examples show that this assumption does not always hold: the richness of evolution and behaviour can also be found in non-living systems.

Cellular automata, innovation in evolving systems, the quantum computational model of the universe, and human history are four examples that will allow us to understand how complex behaviour unfolds from seemingly simple (physical, biological, computational…) rules.

These examples show how simple rules governing the interaction of a sufficiently large number of agents can produce innovation, emergent behaviour, and unpredictability, all fundamental properties of complex systems.

5.2 Cellular Automata and the Origins of Life

Living organisms are the epitome of complexity. Each organism, from a tiny bacterium to a majestic whale, is a harmonious symphony of intricate systems and processes. To understand the characteristics and behaviour of these amazing creations, scientists often turn to computational models. One such model that has proven highly valuable is Cellular Automata (CA).

Cellular automata models involve a grid with living and dead cells, both of which can change their states (from dead to living and vice versa) according to simple rules applied over many iterations. After many iterations, a complex pattern might emerge if the rules are just right.

Cellular Automata, initially developed in the 1940s by Stanislaw Ulam and John von Neumann, provide a discrete computational framework for studying complex systems. The concept behind Cellular Automata is relatively straightforward. It involves a rectangular grid composed of individual cells, where each cell can exist in a certain state. In most cases, this state is represented by a binary alphabet, meaning a cell can be either on or off, alive or dead.

What makes Cellular Automata truly fascinating is how these cells interact and evolve over time according to predefined rules. In the standard model of Cellular Automata, these rules are time-independent and applied to the entire grid simultaneously at each iteration. This synchronous updating allows for the emergence of complex patterns and behaviours, making Cellular Automata an excellent tool for simulating and studying various phenomena.

One of the most well-known cellular automaton rules is Wolfram’s Rule 30, named after the influential scientist Stephen Wolfram. Dr. Wolfram extensively investigated the 256 rules associated with a neighbourhood of three cells. Of these, only a select few generated intriguing patterns that exhibit ongoing complexity, neither homogeneous, oscillating, nor purely chaotic.

Rule 110 of the Wolfram set, for example, is very intriguing; it generates intricate and seemingly random patterns akin to those found in natural phenomena, such as crystal growth or tree branches. It demonstrates how a few simple rules can give rise to complexity and emergent behaviour, mirroring the complexity we observe in our world.

One of the most remarkable aspects of Rule 110 is its ability to exhibit both ordered and chaotic behaviour. The rule starts with a simple row of cells, often called the “initial condition.” As generations evolve, increasingly intricate patterns begin to emerge, containing a mix of repetitive and irregular elements. The complexity increases with each subsequent generation, creating structures reminiscent of gliders, spaceships, and even collisions between patterns.

To appreciate the beauty and intricacy of CA, we recommend looking at Conway’s Game of Life. From random starting patterns, you will see how most evolutions will end in an empty grid with all the cells dying out. At the same time, a tiny minority will generate local structures that keep oscillating forever.

In 1994, Nathan Thomson invented a more exciting variation of Conway’s Game of Life, one that allows a specific pattern to replicate itself. Replication (such as when a bacterium splits) is a hallmark of living organisms.
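Conway's Game of Life is equally compact. The following sketch (our own minimal version, on an unbounded grid of live-cell coordinates) evolves the board by one generation and demonstrates the "blinker", the simplest oscillator:

```python
from itertools import product

def life_step(live):
    """One generation of Conway's Game of Life on an unbounded grid.
    `live` is a set of (x, y) cells; birth on 3 neighbours, survival on 2 or 3."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The "blinker": a row of three cells flips to a column of three, and back.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Seeding `life_step` with random patterns instead of the blinker reproduces the behaviour described above: most runs die out, while a few settle into oscillating or gliding structures.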

5.3 Explaining Innovation in Complex Systems

An explicit example is the feedback between genotype and phenotype within a cell: genes are “read out” to produce changes to the state of expressed proteins and RNAs, and these, in turn, can feed back to turn individual genes on and off.

Formal Definitions of Unbounded Evolution and Innovation Reveal Universal Mechanisms for Open-Ended Evolution in Dynamical Systems — Alyssa Adams, Hector Zenil, Paul C. W. Davies, and Sara I. Walker, 2017

In the previous section, we have seen patterns that evolved under constant or fixed rules. However, a model where the dynamics are not allowed to change cannot explain innovation. To introduce the capacity for innovation, the agents of a complex system must be able to modify their interaction rules dynamically.

Changes in the organism’s internal states or rules could lead to novel adaptive behaviours or patterns, while changes in the environment could provide new stimuli or challenges that necessitate innovative responses from the organism.

A 2017 paper by Alyssa Adams, Hector Zenil, Paul C. W. Davies, and Sara I. Walker titled Formal Definitions of Unbounded Evolution and Innovation Reveal Universal Mechanisms for Open-Ended Evolution in Dynamical Systems explains how this works:

  • The researchers recognized that a system capable of innovation must necessarily consist of two interconnected subsystems: the organism and the environment. The two systems are intricately coupled, meaning that changes in one impact the other and vice versa.
  • In their study, Adams et al. introduced a modified CA model where not only the states of the cells evolved but also the rules that governed their behaviour. By observing the dynamics of this novel model, the researchers discovered that the interplay between the organism and the environment played a crucial role in driving innovation.
  • Furthermore, the study found that having feedback loops between the organism and the environment amplifies the system’s potential for innovation. These feedback loops create a constant cycle of adaptation and selection, enabling the system to generate novel forms and behaviours continually.
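The following toy sketch (our own simplification, not the authors' actual model) captures the spirit of state-rule coupling: the environment evolves under a fixed elementary CA rule, while the organism's rule is re-read from the environment's state at every step, so the organism's dynamics themselves change over time:

```python
def eca_step(cells, rule):
    """One synchronous update of an elementary cellular automaton on a ring."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

def coupled_step(organism, environment):
    """Toy state-rule coupling: the environment evolves under fixed Rule 90,
    then the organism's *rule* is re-read from the environment's new state."""
    environment = eca_step(environment, 90)
    rule = int("".join(map(str, environment)), 2) % 256  # state becomes rule
    return eca_step(organism, rule), environment, rule

org = [0, 1, 1, 0, 1, 0, 0, 1]
env = [0, 0, 0, 1, 0, 0, 0, 0]
rules_seen = set()
for _ in range(10):
    org, env, rule = coupled_step(org, env)
    rules_seen.add(rule)  # the organism's dynamics change as the env evolves
```

Even in this crude form, the organism is no longer governed by a single fixed rule: `rules_seen` collects several distinct rules over ten steps, a small-scale analogue of the open-ended dynamics Adams et al. study.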

There are other mechanisms through which complex systems can innovate; one, abundant in nature, is the repurposing of existing features to deal with novel problems. Adapting existing features to solve new problems is a fundamental concept in evolutionary biology and is often called exaptation. Exaptation occurs when a trait or feature that evolved for one function is co-opted for a different one, often due to changing environmental conditions or selective pressures.

Stephen Jay Gould and Elisabeth Vrba introduced the term “exaptation” in the context of evolutionary biology. They argued that not all features or traits necessarily evolve for their eventual, current functions. Instead, some features may have evolved for a different purpose but were later repurposed or adapted to serve a new function. This concept challenges the notion that all traits result from direct adaptations to specific functions.

Exaptation is a significant departure from the traditional view of adaptation, in which traits are thought to evolve directly in response to selective pressures. Instead, it highlights the role of historical contingency and serendipity in the evolutionary process.

A classic example of exaptation is the evolution of feathers in birds. Feathers likely initially evolved in a non-avian dinosaur ancestor for insulation or display purposes. However, they were co-opted for flight over time, which is a completely different function. Feathers are a classic example of a trait that was adapted to serve a new and essential role in the evolution of birds.

Adapting existing features to solve new problems through exaptation is an important concept in evolutionary biology that underscores the complexity and non-linear nature of the evolutionary process.

5.4 The Quantum Computing Universe

One of the most beautiful and revealing passages I have ever read is Professor Seth Lloyd’s opening paragraphs from his book Programming the Universe — A Quantum Computer Scientist Takes on the Cosmos. It goes like this:

The universe is the biggest thing there is, and the bit is the smallest possible chunk of information. The universe is made of bits. Every molecule, atom, and elementary particle registers bits of information. Every interaction between those pieces of the universe processes that information by altering those bits. That is, the universe computes […] in an intrinsically quantum-mechanical fashion; its bits are quantum bits. The history of the universe is […] a huge and ongoing computation. […] What does the universe compute? It computes itself […], its own behaviour.

Programming the Universe — A Quantum Computer Scientist Takes On the Cosmos — Seth Lloyd

In Lloyd’s quantum computing universe, elementary particles in the universe register bits of information. Every interaction between particles is an elementary computation guided by the laws of physics. The universe is a gigantic computing machine. The complexity observed in stars, galaxies, black holes, and life are all emergent features of a large body of interacting particles. At the heart of physical rules are quantum mechanics and general relativity. Everything else (biology, chemistry, sociology, anthropology…) emerges from these “relatively simple” rules.

To support this model, we observe the following:

  • The Big Bang’s initial conditions were not complicated and could be summarised by extreme temperatures coupled with extreme densities governed by a unified form of the forces of nature.
  • As the universe’s temperature cooled down, the forces of nature took distinct forms. Small structures like atoms and molecules started to form, driven by weak, strong, and electromagnetic forces.
  • The cooling temperature changed the rules (through symmetry-breaking) and not just the state of the universe, similar to what we saw in the environment-organism model. (See The Greatest Story Ever Told So Far for a popularized version of the Standard Model of Particle Physics).
  • Today’s universe is filled with intricate structures of galaxies, stars, black holes, and many other objects of fascinating sizes and properties. The most evolved of the lot is probably life on Earth. In Lloyd’s view, this overwhelming complexity can be explained by the successive application of simple quantum mechanical laws to every “bit” in the universe over 13.7 billion years.
  • In his Conformal Cyclic Cosmology model (see Cycles of Time), Sir Roger Penrose argues that the universe will contain massless photons towards the end of its life (after some googol years), thus removing any signs of scale. The universe will be moving forward in time, but the notion of physical distances will disappear. Assuming the theory is true, this would be an example where the rules governing the universe will have changed under the influence of its previous states and the state of the universe itself.

While the similarities are striking, two important differences set the cellular automata model and the universe apart:

  • The original state of the universe was very homogenous. But without random quantum fluctuations, local regions of space would not have clumped together under the effect of gravity, and no large structures such as stars and galaxies would have emerged. The state of the universe today would have been very different had it not been nudged in a specific direction by those tiny quantum fluctuations during its very early stages.
  • While the cells in the cellular automata grid were either on or off, the quantum bits in the universe are simultaneously on and off. The CA cells and rules are classical in every sense, while the universe is quantum mechanical in every bit. Its bits are qubits, and its computations are quantum-mechanical.
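The difference can be sketched with a toy single-qubit simulation (our own illustration, using real amplitudes only): a Hadamard gate turns a definite 0 into an equal superposition of 0 and 1, the "simultaneously on and off" state the text describes:

```python
import math

def hadamard(state):
    """Apply a Hadamard gate to one qubit given as amplitudes [a0, a1]."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return [s * (a0 + a1), s * (a0 - a1)]

qubit = [1.0, 0.0]        # a classical-like bit: definitely 0
qubit = hadamard(qubit)    # equal superposition: "on and off at once"
p0, p1 = qubit[0] ** 2, qubit[1] ** 2  # Born rule for measurement outcomes
```

Measuring this qubit yields 0 or 1 with equal probability, whereas a classical CA cell always holds exactly one of the two values; applying Hadamard a second time restores the definite 0 state, something no classical coin flip can do.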

The idea that the universe is computing its behaviour, rules and states, physical laws and objects is mind-bending.

The simpler laws that the universe started with were the basis of a richer and more diverse set that appeared later, when the universe’s conditions changed (it became sparser and cooler). During the Big Bang, the state of the universe was simplicity itself, a stark contrast to the diversity and complexity we observe today.

The idea that the future evolution of the universe was fully encoded in the initial states and laws it began with, just as two cells combine and evolve into a fetus, is what makes complexity so fascinating.

5.5 History and Human Progress

We think of history and human progress as a slow-paced, incremental build-up of tiny achievements here and there with occasional scientific discoveries and momentous events. This view, however, seems to be incongruent with observation, as Dr. Nassim Taleb tries to explain in his best-seller The Black Swan.

History is a series of jumps triggered by Black Swans and Butterfly Effects rather than incremental changes.

In today’s modern world, progress is dominated by extremely rare events with huge impacts, which Nassim Taleb calls Black Swans. These events radically change some aspects of our lives such that life before them seems like a remote past. Because they are so rare, they are unpredictable. Nobody could have predicted the two World Wars, the internet, smartphones, or the 2008 financial crisis. Yet these events happened, and together with the technological innovations that followed, they irrevocably changed our lives.

We can think of today’s world as one big society due to globalization. It is an artificial structure that resembles living organisms, replicating, growing, adapting, changing, and self-organising. These complex properties make Black Swans more frequent, unpredictable, and severe (see Antifragile, also by Taleb).

Lions were exterminated by the Canaanites, Phoenicians, Romans, and later inhabitants of Mount Lebanon, leading to the proliferation of goats who crave tree roots, contributing to the deforestation of mountain areas, consequences that were hard to see ahead of time.

— Antifragile — Nassim Taleb

Understanding the intricate web of causality is daunting in a complex and interconnected world. The dynamics of this complexity often elude easy description, and predicting its evolution can seem like an insurmountable challenge. This difficulty manifests through the system’s high sensitivity to initial conditions, which is sometimes called the butterfly effect.

The butterfly effect, a term coined by mathematician and meteorologist Edward Lorenz, is the idea that even the tiniest actions or changes in one part of a system can have profound and far-reaching consequences elsewhere. Lorenz’s metaphor is that a seemingly inconsequential flap of a butterfly’s wings in one corner of the world could potentially contribute to the formation of a hurricane in another.
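The system Lorenz was studying can be simulated in a few lines. In the sketch below (a standard fourth-order Runge-Kutta integration with the parameter values from Lorenz's original model), two trajectories that start one hundred-millionth apart end up in completely different regions of the attractor:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One fourth-order Runge-Kutta step of the Lorenz system."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(add(state, k1, dt / 2))
    k3 = f(add(state, k2, dt / 2))
    k4 = f(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-8)  # a "butterfly flap" in the initial condition
for _ in range(5000):        # integrate to t = 50
    a, b = lorenz_step(a), lorenz_step(b)
separation = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```

The tiny initial perturbation is amplified exponentially, so after a modest simulated time the two trajectories are no longer usefully comparable, which is exactly why long-range weather forecasts fail.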

Illustrating the implications of the butterfly effect, Yuval Noah Harari explores the relationship between social welfare in capitalist countries and the rise of communism in the East. He postulates that the implementation of social welfare policies emerged not solely from moral or ethical considerations but also as a means to pacify the working class and prevent the spread of communist ideologies.

The story of lions in ancient Canaan is another example of what complexity experts call “unintended consequences”. A seemingly insignificant decision by the Canaanites (a butterfly’s wing flap) caused wide-scale deforestation in Mount Lebanon.

The Butterfly Effect and Black Swans provide a more compelling explanation of how history unfolds. This realisation has important consequences for intervention policies and decision-making, as it shows that intervening in complex systems will always have unforeseeable consequences. These consequences, which sometimes appear as Black Swans (extremely rare and impactful events), become more frequent as the system’s complexity rises.

6. Characteristics of Complex Systems

Complex systems are characterized by a set of defining features and properties that distinguish them from simple or linear systems. These characteristics make complex systems challenging to predict and understand fully. Here are some key characteristics of complex systems.

6.1 Emergence

Complex systems exhibit emergent behaviour, which means that the system’s global properties or patterns emerge from the interactions of its individual components. These emergent properties are often not predictable from the properties of the individual components alone.
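
A minimal way to see emergence in code is an elementary cellular automaton: each cell follows a purely local rule (look at yourself and your two neighbours), yet a global pattern appears that no individual cell encodes. The sketch below uses Wolfram's Rule 90 as an illustrative choice; starting from a single live cell, it grows a Sierpinski-triangle pattern.

```python
def step(cells, rule=90):
    """One update of an elementary cellular automaton.

    Each cell's next state depends only on itself and its two
    neighbours (wrapping at the edges); the rule number's bits
    encode the outcome for each of the 8 local neighbourhoods.
    """
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right
        out.append((rule >> pattern) & 1)
    return out

# Start from a single live cell and apply the local rule repeatedly:
# the triangular fractal that appears is the emergent, global property.
width, rows = 31, 16
cells = [0] * width
cells[width // 2] = 1
for _ in range(rows):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Nothing in the three-cell rule mentions triangles; the pattern exists only at the level of the whole grid over time, which is precisely what "emergent" means here.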

6.2 Non-linearity

Complex systems are typically non-linear, meaning that small changes in one part of the system can lead to disproportionately large and unpredictable effects in other parts. This non-linear behaviour can make understanding and controlling the system’s dynamics challenging.

6.3 Interconnectedness

Complex systems consist of many interconnected components or agents that interact with each other. These interactions can be direct and indirect, creating a web of relationships that influence the system’s behaviour.

6.4 Adaptation

Complex systems often exhibit adaptive behaviour, where they can adjust and evolve in response to changes in their environment or internal conditions. This adaptability helps them survive and thrive in dynamic situations.

6.5 Feedback Loops

Feedback loops are common in complex systems and can be positive (amplifying) and negative (dampening). Positive feedback loops can lead to rapid and sometimes chaotic changes, while negative feedback loops can help stabilize the system.

Negative feedback loops are a regulatory mechanism in which a change in a system triggers a response that opposes the change, thus stabilising the system’s original state. These loops are essential for maintaining homeostasis and preventing excessive deviations from the set point in biological, physical, and social systems.

Feedback loops and corrective mechanisms in systems.

Positive feedback loops are a regulatory mechanism in which a change in a system triggers a response that amplifies the change, leading to a further deviation from the system’s original state. These loops can create self-reinforcing cycles, often resulting in rapid and dramatic changes. Positive feedback loops are important in growth, development, and physiological responses but can also lead to unstable or chaotic behaviour in some systems.

For example, a shift in a software system’s design can delay the development timeline, and the resulting schedule pressure further constrains the development team, reinforcing the cycle.
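
The two kinds of loop can be sketched in a few lines. In this illustrative model (not drawn from any specific system), each step responds to the deviation from a setpoint: a negative gain opposes the deviation and stabilises the system, while a positive gain amplifies it into a runaway.

```python
def simulate(x0, setpoint, gain, steps=20):
    """Each step, respond to the current deviation from the setpoint.

    gain < 0: negative feedback (the response opposes the deviation);
    gain > 0: positive feedback (the response amplifies it).
    """
    x = x0
    for _ in range(steps):
        x = x + gain * (x - setpoint)
    return x

# Negative feedback: a thermostat-like loop pulls the state back to 10.
stable = simulate(x0=15.0, setpoint=10.0, gain=-0.5)

# Positive feedback: a much smaller initial deviation compounds each
# step and carries the state far away from the setpoint.
runaway = simulate(x0=10.1, setpoint=10.0, gain=0.5)
print(stable, runaway)
```

With a gain of -0.5 the deviation halves every step, so the system converges on the setpoint; with +0.5 it grows by half every step, which is the self-reinforcing cycle described above.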

6.6 Robustness and Fragility

Complex systems can be both robust and fragile. They can withstand certain disturbances or failures while being vulnerable to others. Understanding the factors contributing to robustness or fragility is essential for managing complex systems.

6.7 Self-organization

Complex systems can self-organize and spontaneously arrange themselves into structured patterns or states without external control. This self-organization is often a result of the interactions between system components.

6.8 Heterogeneity

Complex systems typically have diverse elements or agents with different properties, behaviours, or roles. This heterogeneity can lead to complex interactions and outcomes.

6.9 Equilibrium, Far-From-Equilibrium, and Strange Attractors

Complex systems can operate near critical points or phase transitions, where small perturbations can significantly change the system’s behaviour. This criticality is associated with power-law distributions and long-tailed events.

Complex systems can reach a state of equilibrium through self-organisation, where the interactions between components result in stable behaviour. For example, market supply and demand constantly reshape supplier-vendor relationships, driving the system to new equilibrium states.

However, strange attractors can sometimes emerge, leading to unpredictable behaviour, even in the short term. For example, a team’s attitude towards quality and responsiveness in software development might shift depending on the current SDLC stage. A typical example would be the (sometimes unexplainable) relaxed adherence to processes and best practices when fixing bugs versus implementing new features.

6.10 Path-dependence and Dispositionality

Complex systems often exhibit path-dependent behaviour, meaning that their past history and the sequence of events can profoundly influence their current state and future evolution.
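
The classic toy model of path dependence is the Pólya urn: draw a ball at random, then return it along with another ball of the same colour. Every run below follows identical rules from an identical starting state, yet each run's early, accidental draws lock in a different long-run colour share. A minimal sketch:

```python
import random

def polya_urn(draws=1000, seed=0):
    """Pólya urn: draw a ball, return it plus one more of the same colour.

    An early run of one colour raises that colour's draw probability,
    which produces more of it, and so on: history feeds on itself.
    """
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(draws):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

# Identical rules, identical starting state, different random histories:
# each run settles near a different long-run share of red balls.
shares = [round(polya_urn(seed=s), 2) for s in range(5)]
print(shares)
```

The model has no "correct" final share; the outcome each run converges to is determined by its own early sequence of events, which is what path dependence means.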

6.11 Multiple Scales

Complex systems can exhibit behaviours and patterns at multiple scales, from the microscopic interactions of individual components to macroscopic phenomena that span the entire system. Understanding these multi-scale dynamics is crucial for grasping the system’s complexity.

6.12 Unpredictability

Due to their non-linear and emergent nature, complex systems can be highly unpredictable. Small changes in initial conditions or parameters can lead to divergent outcomes, making long-term predictions difficult.

6.13 Nature of Causality

The relationships between components in complex systems are often non-linear, meaning that small changes in one component can result in radical changes in another.

Kaoru Ishikawa invented the fishbone diagram, which is commonly used for root-cause analysis. It assumes a linear relationship between causes and effects that meticulous searches and information gathering on events and system components can uncover. Such a direct method of viewing cause and effect is doomed to fail in complex systems.

Non-linearity and dense networks of interconnections can make predicting the system’s behaviour based on individual components impossible.

For example, changing team structures, tools, or processes to achieve a specific objective may create unanticipated side effects (amplified by positive feedback loops) that ultimately undermine that very objective.

6.14 Constraints and Boundaries

Software development projects are often subject to constraints and feedback loops that shape their behaviour.

For example, budget and timeline constraints can affect the choices made by the development team.

Systems are also defined by their boundaries, which is another way of thinking about constraints. For example, team boundaries are created by setting rigid team structures, roles, and objectives, which are subsequently reflected in the system’s architecture.

7. Why is it Important to Understand Complex Systems?

As software developers, we might think that complexity theory is more relevant to social scientists, chemists, or biologists, and hardly a topic of concern for us. That, however, underestimates the importance of the topic.

Below are three reasons why we believe complexity theory is remarkably relevant to professionals working in any organisation.

7.1 The Risk of Mistaking a Complex System for an Ordered One

One of the gravest mistakes managers can make is to mistake a complex system for an ordered one. How we “sense” the world determines how we “act” in it: solutions that work for ordered systems might fail cataclysmically in complex ones.

When a complex system is mistaken for an ordered one, efforts to control and manage it based on traditional approaches will likely fail.

For example, a common mistake in software development is assuming that a certain category of software project (megaprojects or those with lots of novel technology or business requirements) can be managed like a linear process with a clear beginning, middle, and end. This assumption often leads to a rigid Waterfall approach, ill-suited to handle software projects’ complex, adaptive nature.

On the other hand, treating all software development projects as complex and, therefore, requiring (costly) Agile approaches is equally likely to fail.

The key message here is that, just like in software design, context is essential, and solutions have a bounded domain of applicability.

7.2 Abandoning the Notion of Long-Term Prediction and Control

Another critical characteristic of complex systems is that they are inherently unpredictable in the long term. Efforts to predict the future or control the outcome of a complex system are generally futile.

In software development, for example, it is not possible to predict with certainty the exact outcome of a project or to control all aspects of the development process. This unpredictability is compounded by size, novelty, and uncertainty, as in megaprojects.

Instead, an Agile approach combined with a modular design creates a flexible and adaptive process that can evolve as the project progresses. By embracing uncertainty and unpredictability, software development teams can be more effective in delivering successful projects.

7.3 Understanding the Agency of Individuals

The behaviour of complex systems is shaped by the interactions of the individual components within them. While these components may have some level of agency, the system’s overall behaviour is not necessarily a result of their individual choices.

In software development, individual developers can influence local decisions and outcomes. However, the group behaviour and the project outcome will result from interactions between all the stakeholders.

Systems constructed in this manner are hierarchical in nature and are known as Nearly Decomposable Systems, a concept introduced by Herbert Simon.

Understanding how individuals can influence the behaviour of complex systems is crucial for developing effective strategies that are not centred on leaders alone and for avoiding long-term, centralized, command-and-control paradigms.

8. Approaches to Managing Complex Systems in Organizations and Teams

Complex systems in organisations and teams can be challenging to manage due to the inherent nature of their structure and behaviour. It is vital to consider four fundamental concepts of good management to overcome these challenges: 

  1. Systems Thinking
  2. Cybernetics
  3. The role of informal networks
  4. Dispositions in complex systems

8.1 General Systems Thinking

This approach to making sense of organisations is a radical shift from past paradigms that place the individual at the centre of the picture.

Systems thinking focuses on the system rather than individual components by considering the interactions and interdependencies between components.

8.2 Cybernetics

Cybernetics is a field of study that explores the science of control and communication in complex systems. The principles of cybernetics have been applied to various fields, including biology, psychology, engineering, and management.

When applied to organisations and teams, cybernetics provides a framework for understanding the role of feedback and regulation in the functioning of complex systems.

By focusing on the flow of information and control, cybernetics can help organisations and teams better understand the relationships between different system components and develop strategies for managing these relationships.
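
The cybernetic idea of control through communication can be sketched as a simple proportional controller (an illustrative model, with the gain and disturbance chosen arbitrarily): the loop repeatedly measures the gap between the system's state and its target, feeds that information back, and applies a correcting action.

```python
def regulate(target, disturbance, steps=50, gain=0.4):
    """A proportional feedback controller.

    Each cycle: observe the gap to the target (communication),
    act to reduce a fraction of it (control), then absorb whatever
    the environment does to the system (disturbance).
    """
    state = 0.0
    for t in range(steps):
        error = target - state       # communication: observe the gap
        state += gain * error        # control: act to close part of it
        state += disturbance(t)      # the environment pushes back
    return state

# Despite a persistent, alternating external disturbance, the feedback
# loop holds the system near its target instead of letting it drift.
final = regulate(target=100.0, disturbance=lambda t: 2.0 * (-1) ** t)
print(final)
```

The point is not the specific numbers but the structure: regulation comes from the continuous flow of error information around the loop, not from a one-off plan computed in advance.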

8.3 Informal Networks

Informal networks consist of individuals interacting outside formal structures or hierarchies to exchange information, ideas, and support.

These networks often form naturally within organisations and teams and can play a significant role in managing complex systems. For example, in software development teams, informal networks provide fallback mechanisms when formal processes break down or cannot respond adequately in challenging situations.

These networks facilitate the flow of information and ideas and provide a support system for individuals as they work to manage complex systems.

8.4 Dispositionality

Dispositionality refers to the propensity of a complex system to behave in a certain way based on its internal structure and external environment.

Complex systems move into states of lower energy, similar to how thermodynamic systems tend to move into states of higher entropy. The shift from one equilibrium state to another is probabilistic and depends on the energy gradient between the two states.

By understanding the dispositionality of the system, managers can work to amplify desired effects and dampen adverse ones. This approach contrasts command-and-control methods that identify a target state and attempt to close the gap.

9. The Role of Processes and Procedures

Processes and procedures play a crucial role in managing complex systems. However, their impact on the system can be positive or negative, and understanding their impact is essential to manage complex systems effectively.

9.1 What Roles Do Processes Play in Managing Complex Systems?

Processes provide instructions for people to follow to achieve specific outcomes. In complex systems, procedures and best practices can guide implementations, ensuring everyone is on the same page and working towards the same goals.

For example, in software development, processes such as the Software Development Lifecycle (SDLC) can ensure that projects are delivered on time, within budget, and to the expected quality standards.

9.2 How Mature Processes Can Serve as Knowledge Storage Devices

Processes can act as knowledge storage devices, capturing the expertise and experience of the team so that it is not lost when team members change. This characteristic can be vital in complex systems where knowledge is often decentralized and dispersed across the organization.

For example, in software development, a mature Agile process can provide a framework for the team to work within, allowing for flexibility and adaptability while still providing structure and guidance. By capturing the knowledge and experience of the team in the process, future teams can benefit from the hard-won lessons learned by previous groups, reducing the risk of repeating the same mistakes.

9.3 How Rigid Processes Accelerate Complex Systems Failure

However, overly rigid processes can sometimes lead to problems. Complex systems are characterized by change and unpredictability, and rigid processes can hinder the ability of the system to adapt.

Fragile systems have a propensity for sudden catastrophic failure. Rigid processes in a complex environment overconstrain the system, allowing risk to be buried until one small stimulus causes the entire structure to topple.

Applying best practices designed from past experiences to novel situations is, by definition, flawed and can lead to destructive behaviours, as it over-constrains systems that do not respond well to constraint.

The best example software developers are familiar with is following a rigid, sequential process such as the Waterfall model in projects where requirement volatility (and therefore uncertainty) is high, as the process does not allow for experimentation or short feedback loops.

Overconstrained systems are more prone to catastrophic failures (Black Swans) as risks are hidden, and force maintains order.

In addition, rigid processes can create a culture of conformity and stifle creativity and innovation. Complex systems thrive on diversity and creativity, and inflexible processes can limit the ability of the system to generate new solutions to problems.

10. Final Words

Complex systems are a fascinating aspect of our lives and our world. They encompass everything from software development projects to the structures and behaviours of human organizations.

Understanding complex systems is critical because they differ from ordered systems in ways that significantly affect how we design and manage interventions to produce desired outcomes.

The study of complex systems involves several basic concepts, such as the nature of causality, the role of constraints and feedback, and the emergence of novel behaviour. It also requires a systems thinking approach, a recognition of the boundaries and connectedness, and an appreciation of the importance of self-organization and knowledge management.

Challenges in managing complex systems in organizations and teams are significant. They include misperceptions about their nature, difficulties in predicting and controlling their behaviour, and the need for effective processes and procedures to support knowledge storage and decision-making.

Approaches such as cybernetics and systems thinking have been developed to help organizations better manage complex systems, but each has pros and cons.

The challenges of managing complex systems can be addressed in the software development industry by understanding Agile, Waterfall, and scientific management methods.

These methods offer different approaches to working with uncertain and unpredictable situations, with Agile providing a more flexible approach and Waterfall and scientific management offering more rigid structures.

The best approach for a given organization will depend on its specific context and the software products and technologies it adopts.

Operational excellence is critical to obtaining a competitive edge and increasing an organisation’s value proposition, as it allows organisations to manage their resources and achieve their goals effectively.

In conclusion, understanding the nature of complex systems, their challenges, and the approaches to managing them is essential for success in the software development industry and other complex systems.

Effective management of complex systems requires a balance of rigidity and flexibility, understanding processes and procedures, and focusing on operational excellence.

11. References
