Complexity and Complex Systems: From Life on Earth to the Universe

1. Overview

Dealing with complexity is integral to our lives, even if we do not realise it. 

An organisation can be modelled as a complex system at any scale, from megacorporations down to the smallest teams. The architecture of software solutions can be equally intricate, and megaprojects and large-scale implementations even more so.

Almost every situation, relationship, or interaction that truly captures our attention or interest turns out to be complex. Why?

Because complex systems are unpredictable, they are also exciting. Because they are innovative, they are captivating. Because their manifestations are unique, they are valuable.

But what is complexity, and what makes a system complex? This article will formalise the definition of complexity and look at some of the most exciting properties that make a system complex. We will also examine some quantitative measures of complexity.

To motivate the discussion, we will look at three examples of highly complex systems, together with corresponding models that help us unravel their dynamics: cellular automata as models of living organisms, the universe viewed as a quantum computer, and the course of history and human progress.

Studying complex systems can be very rewarding because our minds are ill-equipped to deal with complicated situations that require slow, rational reasoning. Awareness of these limitations, and of the contexts in which they matter, can be invaluable for making better decisions.

3. Three Models of Complex Systems

3.1 Cellular Automata

Living organisms are the epitome of complexity. To help scientists understand their characteristics and behaviour, a discrete computational model called Cellular Automata (CA), initially developed in the 1940s by Stanislaw Ulam and John von Neumann, is extensively used.

In a nutshell, Cellular Automata consist of a rectangular grid where the state of each cell belongs to a (typically binary) alphabet. A cell in the grid can be either on or off, living or dead.

The grid starts in an initial state and evolves in time according to specific rules. In the standard model of CA, these rules are time-independent and applied to the whole grid simultaneously at every turn.

The animation below shows how the cells are updated under Rule 30, one of the 256 elementary rules catalogued by Dr Stephen Wolfram, who thoroughly investigated the rules associated with a neighbourhood of three cells.

Of the 256 rules, only a select few generate intriguing patterns that are neither homogeneous, oscillating, nor purely chaotic, but instead display an intricate complexity that persists indefinitely. We will revisit this notion in the next section on the computing universe.
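To make these rules concrete, here is a minimal sketch of an elementary cellular automaton in Python (assuming NumPy is available): each cell's next state is looked up from the binary expansion of the rule number, indexed by its three-cell neighbourhood. The grid width, number of steps, and single-cell seed are arbitrary choices for illustration.

```python
import numpy as np

def step(cells: np.ndarray, rule: int) -> np.ndarray:
    """One synchronous update of an elementary CA under a Wolfram rule (0-255)."""
    left = np.roll(cells, 1)        # left neighbour (periodic boundary)
    right = np.roll(cells, -1)      # right neighbour
    neighbourhood = 4 * left + 2 * cells + right   # value 0-7 for each cell
    # Bit i of the rule number gives the next state for neighbourhood value i.
    rule_table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    return rule_table[neighbourhood]

# Evolve Rule 30 from a single live cell and print the space-time diagram.
width, steps = 63, 30
row = np.zeros(width, dtype=np.uint8)
row[width // 2] = 1
for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = step(row, 30)   # try rule=110 for the pattern discussed below
```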

Rule 110 is one of those select few, as shown in the diagram below.

Figure: a cellular automaton evolving under Rule 110.

The story of cellular automata under Rule 110 is fascinating for a simple reason: it proves that complex patterns can be generated from simple rules and simple initial conditions.

We recommend experimenting with Conway’s Game of Life to get a feel for how CA work. Starting from random patterns, you will see that the overwhelming majority of evolutions end in an empty grid with all the cells dying out, while a tiny minority generate local structures that keep oscillating forever.

Nathan Thompson invented a more exciting variation of Conway’s Game of Life in 1994, called HighLife, which allows a specific pattern to replicate itself. Replication (such as when a bacterium splits) is a hallmark of living organisms.

Figure: the HighLife replicator pattern.
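For readers who want to experiment, below is a minimal sketch of a Life-like cellular automaton step in Python (again assuming NumPy). Conway’s Game of Life is the rule B3/S23 (birth on three neighbours, survival on two or three); HighLife differs only in also allowing birth on six neighbours, which is what enables its replicator pattern. The grid size and number of steps are arbitrary.

```python
import numpy as np

def life_step(grid: np.ndarray, birth=(3,), survive=(2, 3)) -> np.ndarray:
    """One synchronous update of a Life-like CA on a toroidal (wrap-around) grid."""
    # Count the eight neighbours of every cell with periodic shifts.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    born = (grid == 0) & np.isin(neighbours, birth)
    survives = (grid == 1) & np.isin(neighbours, survive)
    return (born | survives).astype(np.uint8)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(64, 64)).astype(np.uint8)
for _ in range(100):
    grid = life_step(grid)                  # Conway's Game of Life (B3/S23)
    # grid = life_step(grid, birth=(3, 6))  # HighLife (B36/S23)
print("live cells after 100 steps:", int(grid.sum()))
```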

So far, we have seen patterns that evolved under constant or fixed rules. This model, where the dynamics are not allowed to change, cannot explain innovation.

In 2017, Alyssa Adams, Hector Zenil, Paul C. W. Davies, and Sara I. Walker published a beautiful paper titled Formal Definitions of Unbounded Evolution and Innovation Reveal Universal Mechanisms for Open-Ended Evolution in Dynamical Systems, in which a formal definition of innovation was proposed. The paper also described a CA-based model in which not only the states evolve but also the rules. Here is how it works.

A system capable of innovation must consist of two subsystems, which the authors label the organism and the environment. The two subsystems are coupled such that changes in one impact the other and vice versa. This phenomenon is not uncommon, as the authors explain:

An explicit example is the feedback between genotype and phenotype within a cell: genes are “read-out” to produce changes to the state of expressed proteins and RNAs, and these in turn can feed back to turn individual genes on and off.

Formal Definitions of Unbounded Evolution and Innovation Reveal Universal Mechanisms for Open-Ended Evolution in Dynamical Systems — Alyssa Adams, Hector Zenil, Paul C. W. Davies, and Sara I. Walker, 2017

Under such conditions, environmental subsystem patterns do not have to be intricate; seasonal changes will work just fine. In contrast, the organism will display complex patterns that, in isolation, it would not have been able to produce.

Environmental changes alter the organism’s rules, and the altered rules in turn produce rich and novel behaviour. In addition to this external influence, the rule updates can also depend on the organism’s own previous states, in which case the system is self-referential.

Such a system undergoes what the authors call Open-Ended Evolution (OEE): the organism is partially or fully driven by external environmental changes, and the result of this openness is a rich and innovative texture of behaviour.
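Below is a minimal, hypothetical sketch of that state-rule coupling in Python (with NumPy). The particular mapping from the environment’s state to the organism’s next rule is invented purely for illustration and is not the encoding used by Adams et al.; the point is only to show the organism’s rule changing as a function of both the environment and its own history.

```python
import numpy as np

def ca_step(cells: np.ndarray, rule: int) -> np.ndarray:
    """One update of an elementary CA under a Wolfram rule (0-255)."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    return table[4 * left + 2 * cells + right]

rng = np.random.default_rng(1)
environment = rng.integers(0, 2, size=16).astype(np.uint8)
organism = rng.integers(0, 2, size=16).astype(np.uint8)
env_rule, org_rule = 90, 110          # arbitrary starting rules

for t in range(64):
    # The environment evolves under a fixed, simple rule (the "seasons").
    environment = ca_step(environment, env_rule)
    # The organism's rule is re-derived at every step from the environment's
    # current state and its own previous rule (the self-referential part).
    org_rule = (org_rule + int(environment.sum())) % 256
    organism = ca_step(organism, org_rule)
    print(f"t={t:2d} rule={org_rule:3d}", "".join(str(c) for c in organism))
```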

A historical instance of this environment-organism strong coupling happened in the late Archean Eon when an oxygen-containing atmosphere began to develop. Scientists believe that around 2.4–2.0 billion years ago, photosynthesizing cyanobacteria produced the oxygen necessary for life as we know it to evolve. This game-changing development is known as the Great Oxygenation Event.

3.2 The Quantum Computing Universe

One of the most beautiful and revealing passages I have ever read is Professor Seth Lloyd’s opening paragraphs from his book Programming the Universe — A Quantum Computer Scientist Takes on the Cosmos. It goes like this:

The universe is the biggest thing there is, and the bit is the smallest possible chunk of information. The universe is made of bits. Every molecule, atom, and elementary particle registers bits of information. Every interaction between those pieces of the universe processes that information by altering those bits. That is, the universe computes […] in an intrinsically quantum-mechanical fashion; its bits are quantum bits. The history of the universe is […] a huge and ongoing computation. […] What does the universe compute? It computes itself […], its own behaviour.

Programming the Universe — A Quantum Computer Scientist Takes On the Cosmos — Seth Lloyd

The traditional mechanistic, Newtonian model of natural phenomena views the world as a machine with initial conditions and laws that govern its dynamics and dictate its future evolution.

Whilst the mechanistic view served its purpose very well, the quantum computational model seems more suitable for explaining the complexity and feature richness of the universe.

Striking parallels exist between the story of cellular automata and the quantum computing universe, albeit with some variations. Let’s look at the similarities first:

  • The Big Bang’s initial conditions were not complicated, just like the simple patterns that CA starts with: extreme temperatures coupled with extreme densities governed by a unified form of the forces of nature.
  • As the universe’s temperature cooled down, the forces of nature broke up and took distinct forms. Small structures like atoms and molecules started to form, driven by weak, strong, and electromagnetic forces. The cooling temperature changed the rules and not just the state of the universe, similar to what we saw in the environment-organism model. (See The Greatest Story Ever Told So Far for a popularized version of the Standard Model of Particle Physics).
  • Today’s universe is filled with intricate structures of galaxies, stars, black holes, and many other objects of fascinating sizes and properties. The most evolved of the lot is probably life on Earth. What made it possible was the successive application of simple quantum mechanical laws to every bit of the universe over 13.7 billion years.
  • In his Conformal Cyclic Cosmology (CCC) model, Professor Roger Penrose argues that after some googol years the universe will contain only massless particles, such as photons, thus removing any notion of scale. The universe will keep moving forward in time, but the notion of physical distance will disappear. If the theory is correct, this would be another example where the rules governing the universe change under the influence of the universe’s own state and history.

While the similarities are striking, two additional ingredients make the cosmic story somewhat more convoluted:

  1. The original state of the universe was very homogeneous. Without random quantum fluctuations, local regions of space would not have clumped together under the effect of gravity, and no large structures such as stars and galaxies would have emerged. The state of the universe today would have been very different had it not been nudged in a specific direction by tiny quantum fluctuations during its very early stages.
  2. While the cells in the cellular automata grid were either on or off, the bits in the universe are simultaneously on and off. The CA cells and rules are classical in every sense, while the universe is quantum mechanical in every bit. Its bits are qubits, and its computations are quantum-mechanical.

The idea that the universe is computing its own behaviour, its rules and states, its physical laws and objects, is mind-bending.

The simpler laws that the universe started with were the basis of the richer and more diverse set that appeared later, when the universe’s conditions changed (becoming sparser and cooler). During the Big Bang, the state of the universe was simplicity itself, a stark contrast to the diversity and complexity we observe today.

The idea that the future evolution of the universe was fully encoded in the initial states and laws it began with, just as two cells combine and develop into a fetus, is, in my view, what makes complexity so fascinating.

3.3 History and Human Progress

We tend to think of history and human progress as a slow-paced, incremental build-up of tiny achievements here and there with occasional scientific discoveries and momentous events. This might have been true until the Scientific Revolution some 500 years ago.

In the modern world, progress is dominated by extremely rare events with huge impacts, which Nassim Taleb calls Black Swans. These events radically change some aspects of our lives such that life before them seems like a remote past.

Because they are so rare, they are unpredictable. Nobody could have predicted the two World Wars, the internet, smartphones, or the 2008 financial crisis.

We can think of today’s world as one big society due to globalization. It is an artificial structure that resembles living organisms, replicating, growing, adapting, changing, and self-organising. These complex properties make Black Swans more frequent, unpredictable, and severe (ref. Antifragile by Nassim Taleb).

Lions are exterminated by the Canaanites, Phoenicians, Romans, and later inhabitants of Mount Lebanon, leading to the proliferation of goats who crave tree roots, contributing to the deforestation of mountain areas, consequences that were hard to see ahead of time.

— Antifragile — Nassim Taleb

In a complex world, mapping causes to effects is challenging, making it hard to describe its dynamics and predict its evolution. In a chaotic system, tiny local changes can have significant impacts somewhere else, and this dynamic is commonly referred to as the butterfly effect.

Stories of vicious cycles and butterfly effects have been related marvellously in Yuval Noah Harari’s book, Sapiens.

In this bestselling work, Harari argues, for example, that social welfare and the care of employees in Western capitalist countries did not necessarily arise from moral and ethical motives, but from the need to keep communist ideas among the working class at bay.

When discussing complex systems, Taleb distinguishes between three categories: fragile, robust, and antifragile. Fragile and robust systems are commonplace and are not interesting to us here. Antifragile systems, on the other hand, are those (like our skeletons or a social group with complex dynamics) that grow stronger after acute (but not destructive) short periods of stress followed by extended periods of recovery.

In his discussion, Taleb treats the abovementioned stressors as information conveyed to the different parts of a complex system.

Antifragile systems will grow weak without this crucial information about the environment where stressors originate. With enough stressors, they become stronger and can withstand more load.

Again, we see information as a distinct entity, conveyed around complex systems through unusual channels.

4. What Is Complexity

Now that we have covered the introductory examples of complexity and complex systems, we hope you have built an intuitive image of what complexity might look like.

Valuable as it may be, this intuitive image can be crude and needs formalisation and rigour.

From Wikipedia, as well as this excellent paper where the authors have collected nine different definitions, we quote the following:

Definition 1
In one characterization, a complex system is one whose evolution is very sensitive to initial conditions or to small perturbations, one in which the number of independent interacting components is large, or one in which there are multiple pathways by which the system can evolve. Analytical descriptions of such systems typically require nonlinear differential equations. A second characterization is more informal; the system is "complicated" by some subjective judgment and is not amenable to exact description, analytical or otherwise.

Definition 2
In recent years the scientific community has coined the rubric ‘complex system’ to describe phenomena, structures, aggregates, organisms, or problems that share some common theme: (i) They are inherently complicated or intricate […]; (ii) they are rarely completely deterministic; (iii) mathematical models of the system are usually complex and involve non-linear, ill-posed, or chaotic behaviour; (iv) the systems are predisposed to unexpected outcomes (so-called emergent behaviour).

What is a Complex System? — J. Ladyman, J. Lambert, K. Wiesner

Definition 3
[…] something with many parts where those parts interact with each other in multiple ways, culminating in a higher order of emergence greater than the sum of its parts.

Wikipedia — Complexity

Unfortunately, scientists have not yet agreed on a single definition of complexity that is domain-agnostic and consists of a closed list of necessary and sufficient conditions for a system to be complex.

In his book Programming the Universe, Seth Lloyd provides a straightforward and sensible explanation for this bizarre disagreement. In his words: “complexity is like physics”, and whilst you can’t measure physics, you can, on the other hand, measure physical quantities such as temperature, velocity, and mass.

Complexity is not a property of a single physical state but rather a property of a process.

Seth Lloyd and Heinz Pagels. Complexity as thermodynamic depth, 1988.

Therefore, complexity is not a unique quantifiable attribute; you can only measure specific features. You can then combine those measures to place a system’s complexity on a scale from zero to infinity.

To illustrate these ideas, consider the stochasticity of the system’s response to a particular input. If the response was fixed or completely random, we could safely assume the system has zero complexity (if there is no other interesting feature).

On the other hand, if the response showed some particularly interesting (and non-trivial) correlations with the input, would that be sufficient to consider the system complex? Not quite.

In effect, the authors of this paper argue that none of non-linearity, feedback loops, spontaneous order, lack of central control, emergence, or hierarchical organisation is, on its own, a sufficient condition for complexity.

So how can we determine whether a system is complex or not? Professor Lloyd suggests asking the following three questions:

  1. How difficult is it to describe?
  2. How difficult is it to create?
  3. How organized is its structure?

Many measures can provide us with quantitative and precise answers to those questions. In the next section, we will explore some of these measures.

5. Complex vs Complicated

Complex and complicated are very different terms.

A computer is an intensely complicated piece of electronic equipment, but it is not complex. The same program with the same inputs always produces the same results. There is no stochasticity, non-linearity, or novel behaviour. The personal computer cannot self-organize, reprogram itself, or change the laws of computer science.

When modelling a computer, the complicated subsystems can always be abstracted away with simpler representations. Despite this reduction, the abstract model can faithfully replicate the behaviour of the actual physical system.

Attempting to understand and control a complex system, such as a social group, as though it was merely complicated can lead to much frustration.

6. Measures of Complexity

Complexity can be measured in a few different ways. Some measures are concerned with the description of the system, such as the amount of randomness or regularity it contains.

Others focus more on how easy or difficult it is to create the system’s final state if one starts with the simplest form.

Finally, a third group of measures attempts to quantify the degree of organisation, such as the level of decomposition and hierarchy.

6.1 Descriptive Measures

The list below enumerates the measures that attempt to describe one feature of a complex system, such as the amount of information or regularity it contains.

  • Information (typically measured in bits) and entropy (the amount of disorder in a system) are tightly-coupled and commonly used concepts in information theory.
  • Algorithmic complexity or algorithmic information content was formulated independently by Solomonoff, Kolmogorov, and Chaitin in the mid-1960s. It measures the length of the shortest computer program that generates a system description.
  • Lempel-Ziv complexity measures the regularity in a system, much like a compression algorithm that reduces the number of bits needed to store frequently occurring words in a string.

Unfortunately, none of these three measures is, on its own, an adequate representation of complexity: they assign their highest scores to completely random systems, which clashes with our intuitive notion of complexity.
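As a rough illustration of why these descriptive measures reward randomness, the sketch below scores a constant string, a mildly structured string, and a random string using plain Python: Shannon entropy, zlib’s compressed size (a practical stand-in for algorithmic information content, which is uncomputable in general), and an LZ78-style phrase count standing in for Lempel-Ziv complexity. The random input comes out “most complex” on all three counts.

```python
import math, os, zlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte."""
    counts, n = Counter(data), len(data)
    if len(counts) <= 1:
        return 0.0          # a constant string carries no information
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def lz_phrase_count(data: bytes) -> int:
    """LZ78-style parse: count the distinct phrases seen left to right."""
    phrases, current = set(), b""
    for byte in data:
        current += bytes([byte])
        if current not in phrases:
            phrases.add(current)
            current = b""
    return len(phrases) + (1 if current else 0)

samples = {
    "constant":   b"a" * 2000,
    "structured": b"helloworld" * 200,
    "random":     os.urandom(2000),
}
for name, data in samples.items():
    print(f"{name:10s} entropy={shannon_entropy(data):5.2f} bits/byte  "
          f"compressed={len(zlib.compress(data)):5d} bytes  "
          f"LZ phrases={lz_phrase_count(data)}")
```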

6.2 Difficulty of Creation (Logical Depth)

Below are some of the measures that we can apply to determine how difficult it would be to create a given complex system:

  • Computational complexity
  • Time computational complexity
  • Space computational complexity
  • Information-based complexity
  • Logical depth
  • Thermodynamic depth
  • Cost
  • Crypticity

Logical depth is, in my view, the most insightful and subtle of the lot. Its premises are as follows:

  • Quoting from Charles Bennett’s paper on the subject: “Some mathematical and natural objects (a random sequence, a sequence of zeros, a perfect crystal, a gas) are intuitively trivial, while others (e.g. the human body, the digits of pi) contain internal evidence of a nontrivial causal history”. In other words, upon close inspection of particular objects, we will notice that they may have undergone a very long and slow evolution from simple and random initial conditions to highly ordered and self-organized structures.
  • Bennett also explains why we assume complex objects should have had humble origins. In his view, the fewer assumptions we make in a creation hypothesis, the more plausible it should be. He uses “ad-hocness” to describe redundant assumptions that a more concise hypothesis can replace. This idea agrees with the principle of Occam’s razor for theory construction and evaluation.
  • Logical depth is the shortest time a universal computer requires to compute the object from an incompressible program. In this context, an incompressible program corresponds to the simplest hypothesis we can put forward to explain the object’s creation, and its running time then provides us with a measure of the object’s complexity (a rough, illustrative proxy is sketched after this list).
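Logical depth itself cannot be computed exactly, but a rough, purely illustrative proxy is to compress an object (standing in for a near-shortest description) and time how long regenerating it takes. In the sketch below, both the trivial string and the random string come out shallow, in line with Bennett’s intuition that such objects are not deep; note, however, that off-the-shelf compressors cannot reveal the depth of objects like the digits of pi, so this is a back-of-the-envelope illustration rather than Bennett’s formal definition.

```python
import os, time, zlib

def decompression_time(data: bytes, repeats: int = 500) -> float:
    """Average seconds needed to regenerate `data` from its zlib-compressed form."""
    compressed = zlib.compress(data, level=9)   # stand-in for a short description
    start = time.perf_counter()
    for _ in range(repeats):
        zlib.decompress(compressed)
    return (time.perf_counter() - start) / repeats

trivial = b"0" * 200_000     # "a sequence of zeros": short description, quick to expand
noise = os.urandom(200_000)  # random noise: its shortest description is itself
print(f"trivial: {decompression_time(trivial):.6f} s, noise: {decompression_time(noise):.6f} s")
```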

6.3 Degree of Organization

To discuss the architecture of complexity, we will refer to a now obsolete but useful paradigm of the brain’s anatomy.

In the 1960s, physician and neuroscientist Paul D. MacLean proposed the triune brain model, which consists of the reptilian complex (or lizard brain), the paleomammalian complex (the limbic system), and the neocortex.

The main drawbacks of this model are that it viewed each subsystem as operating independently and as having been added sequentially in the course of evolution.

However old and obsolete, this paradigm retains one attractive and accurate feature relevant to our discussion: extending an existing system with additional capabilities is much easier than building it from scratch.

We encounter the same principle in software architecture where additional subsystems specialized in specific domains can be added to the solution to extend its functionality. With each new subsystem added, new interfaces must be set up to integrate the new member into the group.

Complex systems will always possess some hierarchical structure, allowing for a speedy and cost-efficient evolution.

Each step in evolution produces a stable object capable of subsisting independently. If the intermediate object is good enough, it will be naturally selected for the next stage, where additional layers will be built on top. From this architecture stems decentralized control.

7. Managing Complexity

We now turn our attention to issues of a more practical nature: how to manage and control complexity in real-world scenarios. Let’s look at a few examples.

  • Software project delivery — Waterfall methodologies focused heavily on design efforts to avoid any potential rework. However, detailed solution designs require well-defined business requirements for the entire project. This constraint proved challenging to meet, and projects often failed. Agile principles accepted changing requirements and adopted an iterative approach, building on successive stable software versions to remedy the problem. Software designed from scratch is much less complex than software built iteratively, as it has less logical depth; still, the additional complexity is justifiable in return for better delivery.
  • Solution architecture and software design — I once heard that architects were the guardians of complexity. As mentioned earlier, no engineering practice can be cost-efficient if it does not leverage prior gains instead of redesigning the whole system from the ground up. This compromise, however, creates technical debt, and without efficient management, it will spiral out of control. Technical debt can be tamed with diligent refactoring or redesigning code that “smells”.
  • Verification and validation of software products — how can you ensure adequate test coverage of a complex solution? Aside from the traditional software testing methods, there are tests designed to ensure that the system always responds in an orderly fashion, no matter the input. Penetration tests verify that the system boundaries are not open to cyberattacks. Another test consists of bombarding the system with random messages to check its ability to handle potentially large volumes of incorrectly formatted messages (a minimal sketch follows this list).
  • Via negativa — This principle was introduced in Nassim Taleb’s masterpiece Antifragile. Its principal tenet can be summarised as follows: when dealing with a complex system you do not fully understand, such as the global economy or local dictatorships, your best bet is not to intervene directly. Building democracy from scratch can never work; it must be constructed slowly and meticulously, layer upon layer, by people with skin in the game.
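As an illustration of the random-message test mentioned above, here is a minimal sketch in Python. The handle_message() entry point is a hypothetical system boundary invented for the example; the invariant under test is simply that it never crashes and always returns a well-formed response, no matter the input.

```python
import json, os, random

def handle_message(raw: bytes) -> dict:
    """Hypothetical system boundary: parse a JSON message or return an error."""
    try:
        return {"ok": True, "payload": json.loads(raw)}
    except (UnicodeDecodeError, json.JSONDecodeError):
        return {"ok": False, "error": "malformed message"}

random.seed(42)
for _ in range(10_000):
    # Random byte strings of random length stand in for malformed traffic.
    raw = os.urandom(random.randint(0, 256))
    response = handle_message(raw)
    # The property being checked: the system always responds in an orderly fashion.
    assert isinstance(response, dict) and "ok" in response
print("10,000 random messages handled without crashing")
```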

8. Conclusion

My definition of complex systems would encompass any object or construct where the number of parts you need to keep track of to reasonably comprehend the system is large enough to overwhelm our cognitive capacities.

Reducing the number of parts by logical groupings or abstractions can be helpful if it doesn’t seriously restrain our understanding of the system and our abilities to predict its response and evolution.

Under such circumstances, the best course of action would be to admit our ignorance, practice maximal prudence, and reject obsolete unidimensional tools and paradigms, as they will do more harm than good.

Complexity is inescapable, and it has grown exponentially due to globalisation and the speed of technological progress. We have not done an excellent job of keeping up.

However, complexity is an active area of research, and progress is being made. For now, perhaps all we can do is educate ourselves on the topic and eliminate old and obsolete concepts and paradigms.

