Complex Problems: What Does the Nature of the Problem Tell Us About Its Solution

1. Overview

In The 7 Timeless Steps to Guide You Through Complex Problem Solving, we discussed a generic approach that can be systematically applied to solving complex problems. Since not all problems are complex, and many gradations of complexity exist, it is a good idea to start by defining what complex problem-solving involves and which categories of problems are most suitable to tackle using that approach. For this reason, understanding complex problems tops the list.

Practically all living organisms deal with complex problems, from single-celled amebas to societies of Homo sapiens, and surprisingly, the solution-creation process can be very similar, at least on the conceptual level. This article will elaborate on this point further, articulating the terminology and ideas often associated with how complex adaptive systems solve complex problems. More specifically, we will answer the following questions:

  • What distinguishes complex problems from simple ones?
  • What risks do we run by applying simple solutions to complex problems?

2. Agenda

This article is part of a series on complex problem-solving.

3. An Intuitive Definition of Complex Problems

We all intuitively grasp what makes a problem hard, at least at a fundamental level. For instance, we can promptly recognize that fixing a faulty washing machine is relatively simple: with basic technical skills, we can identify the faulty part, read its code on the back, order a spare, and replace it.

In simple problems, there is no uncertainty around the root cause or the solution.

On the other hand, deciding whether or not to accept a job offer is anything but simple. Firstly, you will never have sufficient information to make an optimal decision. Secondly, you cannot predict the consequences of such a decision. Finally, whichever choice you make will change your worldview, rendering any forecasts you have made of the future almost instantly obsolete.

So, what are complex problems? The following characteristics distinguish them from simple ones.

1. Non-triviality

Complex problems generally admit only non-trivial solutions, which require strong field expertise, solid analytical skills, and a high cognitive load to formulate.

2. Uncertainty

Solutions to complex problems cannot be guaranteed, as the behaviour of the system to which a solution is applied is never fully predictable.

3. Lack of Consensus

Diagnosing complex problems is especially challenging because consensus on facts, root causes, and solutions can be difficult to obtain, especially in large groups.

4. Challenges of Working With Complex Problems

Experts like Nassim Taleb, Gerd Gigerenzer, and Daniel Kahneman argue that solving complex problems becomes far more tractable once we understand which tools to apply. In their view, failures come from applying engineering methods like optimization where intuition, heuristics, biases, imitation, and many other techniques refined over millennia of evolution and accumulated wisdom would serve better.

5. Complex Problems in the Literature

Experts have extensively researched topics associated with intuition, cognitive psychology, risk management, organisational behaviour, and decision-making under uncertainty. This has left us with a rich body of knowledge popularized by best-selling authors such as Daniel Kahneman and Nassim Taleb, which will be reviewed next.

5.1 Fooled by Randomness (Taleb, 2001)

Fooled by Randomness is one of Taleb’s best-selling books, and its central story revolves around the hidden role of chance in our lives. In Taleb’s view, we grossly and routinely overestimate our capabilities to forecast future events (the turkey problem) and cope with that failure through mechanisms like the narrative fallacy and our ability to reconstruct past events based on new information.

Key takeaways from Fooled by Randomness

  • In social, financial, economic, and political systems, Gaussian distributions mislead at best by providing a comforting but shifting ground for modelling events.
  • Power laws like Pareto’s provide more suitable models for examining complex systems.
  • Time-tested heuristics, formulated through a long knowledge acquisition and refinement period, are more valuable for decision-making under uncertainty than optimisation techniques, which require a well-behaved underlying model (such as the Gaussian).

5.2 Thinking, Fast and Slow (Kahneman, 2011)

Thinking, Fast and Slow is a best-selling book by Daniel Kahneman popularizing his work in cognitive psychology about the mechanism and efficiency of human judgment and decision-making under conditions of uncertainty. His original idea revolves around modelling the human mind as two systems, which he refers to as System 1 and System 2.

Key takeaways from Thinking, Fast and Slow

  • Together, Systems 1 and 2 cover our decision-making needs for both simple and complex problems.
  • System 1 is fast and inexpensive, allowing us to make critical decisions with imperfect or unreliable data.
  • System 1 relies on heuristics and biases to compensate for unreliable information and processing time.
  • System 2 is slow and expensive but more accurate, allowing us to make decisions requiring a high cognitive load and processing larger amounts of information.

5.3 Process Consultation (Schein, 1969)

Professor Edgar Schein is a leading authority on organizational behaviour, culture, and psychology. His short but insightful book Process Consultation: Its Role in Organizational Development dedicates a full chapter to group problem-solving and decision-making, in which Schein explores how leaders and their groups tackle complex problems.

Key takeaways from Process Consultation

  • Group problem-solving presents challenges and dynamics that differ from those of individuals and is a subject in its own right.
  • In both cases, events that cause tension and anxiety trigger a solution-finding process that culminates in applying changes to the environment or the individual or group’s interactions with it.
  • However, problem formulation, solution creation and implementation differ significantly between the two cases.

6. The Information Sufficiency Problem

6.1 How Much Data Is Enough?

During the Newtonian age, physicists believed that once the initial conditions of a physical system were precisely determined, its future evolution could be predicted with arbitrary precision. For example, the laws of dynamics allow us to calculate the entire future trajectory of a point mass given its initial position and velocity.

What happens when the system consists of innumerable particles, each with a different initial speed and position? For practical reasons, we replace the individual particles with units of volume whose macroscopic properties are calculated by averaging over their constituent particles. For example, instead of registering the speed and position of every molecule in a gas container, we substitute those numbers with a temperature and pressure calculated over a coarse-grained subvolume. This coarse-graining allows us to explore the system's physical properties without drowning in data.
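
The averaging step can be sketched in a few lines of Python. This is a hypothetical toy model, not a physical simulation: particle speeds, units, and the number of subvolumes are all invented for illustration.

```python
import random

# Coarse-graining sketch: replace per-particle data with one averaged
# quantity per subvolume. All numbers are arbitrary illustration values.
random.seed(42)

N_PARTICLES = 100_000
N_CELLS = 10          # coarse-grained subvolumes
MASS = 1.0            # arbitrary units

# Each particle: (subvolume index, speed)
particles = [(random.randrange(N_CELLS), random.gauss(500.0, 100.0))
             for _ in range(N_PARTICLES)]

# Average kinetic energy per cell stands in for "temperature":
# one number per cell instead of ~10,000 individual speeds.
energy = [0.0] * N_CELLS
count = [0] * N_CELLS
for cell, v in particles:
    energy[cell] += 0.5 * MASS * v * v
    count[cell] += 1

temperature_proxy = [e / c for e, c in zip(energy, count)]
print([round(t) for t in temperature_proxy])  # 10 numbers summarize 100,000
```

The point of the sketch is the compression ratio: ten averaged values replace a hundred thousand raw measurements, at the cost of discarding the individual trajectories.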

6.2 The Rise of Statistical Mechanics and Probabilistic Models

The coarse-graining method and the impracticality of precise calculations at the molecular level gave rise to statistical mechanics, pioneered by Boltzmann and others. Under statistical mechanics, physical systems are governed by the laws of thermodynamics. The second law is the most famous, dictating that the entropy (or disorder) of an isolated system can never decrease.

The practical advantages of using coarse-graining came at a cost, as a probabilistic model replaced the classic view of deterministic evolution. In this new paradigm, a physical system is predisposed to evolve into one of numerous states. We can only predict the probability that it will be in a given future state, but we can never be sure which one.

But all is not lost. Even with a probabilistic model, we can still enumerate a system's possible future states and create contingency plans for each scenario. We might even be able to influence the outcome by applying pressure on known system levers. This assumption forms the basis of Strategic Choice Theory.

Strategic choice theory, in the realm of organizational theory, emphasizes the influence of leaders and decision-makers on an organization’s direction. It contrasts with earlier views that saw organizations solely responding to external forces.

Managing Probabilistic Systems

In probabilistic models, we assume that the system’s future states are well-defined and their probabilities are calculable. Given this information, adequate planning and optimization processes can be applied to maximize a specific utility function.
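
As a minimal illustration of this kind of planning, the sketch below enumerates a handful of future states with assumed probabilities and picks the action with the highest expected utility. The scenario names, probabilities, and payoffs are all invented for illustration.

```python
# Planning over a probabilistic system: future states and their
# probabilities are assumed known in advance, so we can choose the
# action that maximizes expected utility. All numbers are hypothetical.
scenarios = {"boom": 0.2, "steady": 0.5, "slump": 0.3}

# utility[action][scenario]: subjective payoff of each action per scenario
utility = {
    "expand": {"boom": 100, "steady": 30, "slump": -60},
    "hold":   {"boom": 40,  "steady": 25, "slump": 0},
}

def expected_utility(action):
    return sum(p * utility[action][s] for s, p in scenarios.items())

best = max(utility, key=expected_utility)
print(best, expected_utility(best))  # "hold" wins: 20.5 vs 17.0 for "expand"
```

Note that the whole calculation rests on the assumption criticized later in this section: that every future state, and its probability, is known in advance.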

6.3 Probabilistic Models Cannot Account for Innovation

Any physical, chemical, or biological system that shows innovation cannot, by definition, be analyzed using probabilistic models, as these assume that all future states are static and knowable in advance and that the probabilities of reaching those states are either fixed or vary according to well-specified rules.

Therefore, probabilistic models are not good enough to predict the future behaviour of human systems. This also spells trouble for Strategic Choice Theory, which relies on simple causal relationships between leaders’ interventions and desired consequences to achieve progress or resolve conflicts.

If a system can produce novel behaviour, it is unpredictable and, therefore, hard to manage. Ecologies of living organisms can only be understood through complexity theory and managed by principles that take this unpredictability into account.

Complex systems presenting complex problems will never offer sufficient information, and managers must make choices under uncertain conditions.

Even if we consider every atom (or elementary particle) in the universe, we still would not be able to predict the rich diversity of phenomena (including biodiversity on Earth) that we currently observe. Quantum mechanics and symmetry breaking ensure enough randomness is injected into the system to produce rich but unpredictable results.

The same applies when we try to understand the source of consciousness in our brains. Would it help to incorporate every neuron and synapse into a gigantic mathematical model? Even if this becomes practical someday, experts seem to believe that explaining how consciousness emerges from inanimate matter remains a distant goal.

In summary, there seems to be a hard limit on how much useful information, in principle and practice, can be gleaned by observing a complex system.

7. Problem Classification

7.1 Maximizing Utility Functions

Problems can present themselves in many different ways. However, we are interested in those characterized by a utility function.

A utility function is a concept primarily used in economics, decision theory, and game theory to represent an individual’s preferences over different outcomes or states of the world. It assigns a numerical (or utility) value to each possible outcome or combination, reflecting the individual’s subjective satisfaction or preference associated with those outcomes.

How Are Utility Functions Used?

  • In economic contexts, utility functions are often used to model consumer preferences, where individuals seek to maximize their utility subject to constraints such as income or budget.
  • In decision theory, utility functions help decision-makers evaluate and compare different choices, considering factors such as risk and uncertainty.
  • In game theory, utility functions represent players’ preferences over possible strategies and outcomes in strategic interactions.

Here are some key points about utility functions:

  • Measures satisfaction, not just money: while commonly used in economics and finance, utility isn't just about money. It can represent any kind of satisfaction or benefit someone gets, like winning a competition, getting enough sleep, or achieving a goal.
  • Utility values are relative: the utility function is designed so that higher numbers represent greater satisfaction, but the exact numbers are arbitrary. For instance, a utility function might assign a value of 10 to getting $100 and a value of 20 to getting $200. That doesn't necessarily mean getting $200 is twice as satisfying as getting $100; it's just more satisfying.
  • Accounts for diminishing returns: utility functions can capture the idea of diminishing returns. The first sip of water on a hot day might be incredibly satisfying, but the 10th sip might not be as pleasurable. A well-designed utility function reflects this by increasing ever more slowly as the amount of any of its parameters grows.

Using utility functions, people can compare complex options involving chance or risk and make decisions based on their preferences and risk tolerance.
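
Diminishing returns are easy to see with a concrete utility function. The sketch below uses logarithmic utility, a standard textbook choice rather than anything prescribed by this article, and the dollar amounts are arbitrary.

```python
import math

# Logarithmic utility: a common illustrative model of diminishing returns.
def utility(wealth):
    return math.log(wealth)

# The marginal utility of an extra $100 shrinks as wealth grows.
gain_poor = utility(200) - utility(100)    # first extra $100
gain_rich = utility(1100) - utility(1000)  # same $100 on a larger base
print(gain_poor > gain_rich)  # True: the same gain satisfies less when rich
```

The same shape (concave, ever-flattening) is what lets utility functions encode risk aversion: a guaranteed middle outcome can carry more utility than a gamble with the same expected monetary value.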

7.2 Ordered, Random, Chaotic, and Complex Systems

Imagine that you have the following problem. You are required to configure an air conditioning system for a data centre. The system is composed of two machines: a cooling engine and a computer connected to it. The computer has temperature and humidity sensors and various switches and dials that allow operators to set control parameters such as maximum temperature or humidity.


The engineer setting up the system must configure it to minimize power consumption while keeping the room at a given temperature and humidity level. The only issue is that the system does not have an operations guide, and the engineer has to figure out how to set it up using trial and error.

Four scenarios are possible: Ordered, Random, Chaotic, and Complex.

1. Ordered Systems

  • Changes in the switches or dials produce a clear response in the cooling machine.
  • Although some settings may impact others, the engineer can, through trial and error, understand the relationship between the controls and the outcomes.
  • Ordered systems have direct causal relationships and hard constraints between their components.
  • Problems in an ordered system can be resolved through the relationships between control parameters and the utility function.
  • Computers, watches, and washing machines are examples of such systems.

2. Random Systems

  • Changes in the switches or dials produce different responses every time. There seems to be no correlation between the settings and the outcomes.
  • Random systems have no causal links and no constraints between their components.
  • Random systems present problems that cannot be resolved; they are, by definition, unmanageable.
  • A reward system based on rolling two dice is a random system.

3. Chaotic Systems

  • Small changes in the switches or dials produce wild responses. Although the system appears random and unpredictable, it shows regular behavioural patterns over the long term.
  • Chaotic systems have causal links and hard constraints between their components in addition to non-linear dynamics.
  • Chaotic systems are also challenging to manage. However, causes can be linked to effects, and regularities can be leveraged.
  • The weather, a turbulent water flow, and three bodies rotating around each other in gravitational fields are examples of chaotic systems.

4. Complex Systems

  • Changes in the switches or dials produce different responses every time. Slight correlations can be measured between the settings and the outcomes.
  • Complex systems have indirect causal links and loose constraints between their components.
  • Complex systems come in two varieties: adaptive and non-adaptive.
  • An example of a non-adaptive complex system is the Brusselator. An example of a complex adaptive system is a microbiome.
  • Complex systems present problems that can be resolved through heuristics, safe-to-fail experimentation, and managing in the present rather than towards a desirable future state.
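
The difference between chaotic and merely random behaviour can be made concrete with the logistic map, a textbook chaotic system (not one of the examples above; the parameters here are purely illustrative).

```python
# The logistic map x_{n+1} = r * x * (1 - x) with r = 4 is deterministic
# and bounded, yet extremely sensitive to initial conditions.
def trajectory(x0, r=4.0, steps=50):
    xs = []
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-7)  # a tiny perturbation of the starting point

divergence = max(abs(p - q) for p, q in zip(a, b))
print(divergence)  # the two trajectories end up far apart...
print(all(0.0 <= x <= 1.0 for x in a))  # ...yet both stay inside [0, 1]
```

This captures the point made above: small changes in the dials produce wild responses, but the behaviour stays confined to regular long-term patterns, unlike a genuinely random system.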

8. Small Worlds, Optimization, and Unknown Unknowns

8.1 Leonard J. Savage's "Small World"

Leonard Jimmie Savage (1917–1971) was an American statistician and economist who contributed significantly to statistics, decision theory, and econometrics. His "small world" concept is frequently cited in discussions of subjective probability and decision theory.

L. J. Savage's "Small World"

  • A "small world" is a decision situation simple enough that the decision-maker can list, in advance, every relevant state of the world, every available act, and every consequence.
  • Within such a world, the decision-maker can assign a probability to every state and a utility to every outcome.
  • In such a setting, decision-making becomes transparent and fully informed, as nothing relevant lies outside the model.

The significance of Savage’s Small World lies in its implications for decision theory. It illustrates an idealized scenario where uncertainty is minimized, and individuals have perfect knowledge about the consequences of their actions. In reality, however, decision-makers often face uncertainty and incomplete information, prompting probabilistic reasoning and subjective judgment.

By contrasting Savage’s Small World with the complexities of real-world decision-making, Savage highlighted the importance of subjective probability for navigating uncertainty and making rational choices. Subjective probability allows individuals to express their beliefs and uncertainty in a formal framework, facilitating reasoned decision-making even when complete information is lacking.

8.2 Optimization Techniques

Optimization techniques can be effectively applied in a Small World scenario where all outcomes and probabilities can be precisely computed beforehand. This is because decision-makers have complete knowledge of the system, allowing them to accurately assess the consequences of their actions and choose the optimal course of action based on predetermined criteria.

In contrast, in the real world, uncertainty, complexity, and incomplete information often make it challenging to compute outcomes and probabilities precisely beforehand. As a result, optimization techniques may not be as effective, since they rely on accurate inputs to generate optimal solutions. Decision-makers must contend with uncertainty and imperfect knowledge, which can lead to suboptimal outcomes even when optimization techniques are applied.

One example where optimization relies on known outcomes and their probabilities is in the context of inventory management.

In inventory management, a retailer determines the optimal inventory level for each product to minimize costs while ensuring that customer demand is met. In this case, the utility function represents the retailer’s objective, which typically involves minimizing inventory holding costs and stockouts.

Optimisation Process in Inventory Management

Here’s a breakdown of the optimization process:

  • 1. Identify outcomes and probabilities: the retailer analyzes historical sales data and forecasts future demand for each product. Based on this analysis, the retailer can estimate the probabilities of different demand scenarios occurring during a given period, such as daily or weekly sales volumes.
  • 2. Define the utility function: the retailer specifies a utility function that captures their objectives and preferences. This function typically considers the costs of holding inventory (e.g., storage costs, capital tied up in inventory) and the costs of stockouts (e.g., lost sales, damage to customer relationships).
  • 3. Formulate the optimization problem: the retailer formulates an optimization problem to determine the optimal inventory level for each product. This involves maximizing the expected utility of the inventory system subject to constraints such as storage capacity, budget, and service level requirements (e.g., a desired level of product availability).
  • 4. Solve the optimization problem: using mathematical optimization techniques, such as dynamic or stochastic programming, the retailer finds the inventory policy that maximizes expected utility while satisfying the constraints. This may involve determining each product’s reorder points, order quantities, and safety stock levels.

By incorporating known outcomes (demand scenarios) and their probabilities into the utility function and using optimization techniques, retailers can manage their inventory effectively, minimizing costs while ensuring customer satisfaction and maintaining adequate product availability.
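
A minimal newsvendor-style sketch of these four steps, with invented numbers: enumerate demand scenarios and their probabilities, define a utility (profit minus holding and stockout costs), then search the small decision space exhaustively. A real inventory system would use dynamic or stochastic programming rather than brute force.

```python
# Step 1: demand scenarios and their probabilities (invented for illustration)
demand_scenarios = {10: 0.2, 20: 0.5, 30: 0.3}  # units -> probability

# Step 2: utility = profit from sales minus holding and stockout costs
UNIT_PROFIT = 5.0      # profit per unit sold
HOLDING_COST = 2.0     # cost per unsold unit
STOCKOUT_COST = 3.0    # penalty per unit of unmet demand

def expected_utility(stock):
    total = 0.0
    for demand, p in demand_scenarios.items():
        sold = min(stock, demand)
        leftover = stock - sold
        shortfall = demand - sold
        total += p * (UNIT_PROFIT * sold
                      - HOLDING_COST * leftover
                      - STOCKOUT_COST * shortfall)
    return total

# Steps 3–4: maximize expected utility over the feasible stock levels
best_stock = max(range(0, 41), key=expected_utility)
print(best_stock, expected_utility(best_stock))  # 30 units, EU = 87.0
```

The answer matches the classic critical-ratio logic: stock up to the demand level whose cumulative probability first reaches the ratio of underage to total cost.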

8.3 Optimisation in Complex Worlds

Optimization techniques may encounter challenges in complex situations, particularly those governed by power laws (see discussion on Gaussian Distributions vs Power Laws: Your Ultimate Guide to Making Sense of Natural and Social Phenomena and their impact on our understanding of complex natural phenomena), due to several reasons:

  • Non-linearity and non-convexity: power law relationships often exhibit non-linear and non-convex behaviour, producing difficult optimization problems. Traditional optimization algorithms may struggle to find global optima in such rugged landscapes, leading to suboptimal solutions or convergence to local optima.
  • High dimensionality: complex systems governed by power laws may involve many variables and parameters, leading to high-dimensional optimization problems. As the dimensionality increases, the computational complexity of optimization algorithms grows exponentially, making it challenging to explore the solution space efficiently and find optimal solutions within reasonable time frames.
  • Sensitivity to initial conditions: power law relationships can be highly sensitive to initial conditions and small parameter changes, leading to unpredictable behaviour in optimization algorithms. This sensitivity can exacerbate convergence challenges and make robust, reliable solutions difficult to obtain.
  • Sparse data and tail events: power law distributions often exhibit heavy tails, with a few extreme events occurring infrequently but significantly impacting the system. Estimating model parameters such as the mean and variance is challenging due to the scarcity of data in the tail region and the potential for extreme observations to bias the estimates.
  • Estimation biases and sample size requirements: power law distributions can exhibit estimation biases, particularly with finite sample sizes and sampling biases. Estimating parameters such as the exponent of a power law requires a sufficiently large and representative sample, which may be hard to obtain in practice, especially for rare events or long-tailed distributions.
Practical challenges of estimating model parameters in power laws versus Gaussians

Comparing the practical difficulties of estimating model parameters such as mean and variance in power laws versus Gaussians:

  • Mean estimation: in Gaussian distributions, the mean can be estimated directly from the sample mean, an unbiased estimator under the assumption of normality. In power law distributions, estimating the mean is harder due to extreme values and the potential for biased estimates, particularly with sparse data or sampling biases.
  • Variance estimation: in Gaussian distributions, the variance can be estimated directly from the sample variance, an unbiased estimator under the assumption of normality. In power law distributions, estimating the variance is harder still because of heavy tails, especially in the tail region where data may be sparse or dominated by extreme observations.
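
The instability of the sample mean under heavy tails can be demonstrated directly. The sketch below compares a Gaussian with a Pareto distribution (exponent α = 1.5, so the variance is infinite); the distribution parameters, seed, and sample sizes are chosen purely for illustration.

```python
import random

# Running sample mean: stable for Gaussian data, erratic for heavy-tailed
# (Pareto, alpha = 1.5) data whose variance is infinite.
random.seed(7)

def pareto_sample(alpha=1.5):
    # Inverse-CDF sampling for a Pareto with x_min = 1: x = (1 - u)^(-1/alpha)
    return (1.0 - random.random()) ** (-1.0 / alpha)

def running_mean(samples):
    total = 0.0
    means = []
    for i, x in enumerate(samples, start=1):
        total += x
        means.append(total / i)
    return means

n = 100_000
gauss_means = running_mean(random.gauss(3.0, 1.0) for _ in range(n))
pareto_means = running_mean(pareto_sample() for _ in range(n))

# How much the running mean still wanders over the last 90% of samples:
def late_spread(means):
    tail = means[n // 10:]
    return max(tail) - min(tail)

print(late_spread(gauss_means), late_spread(pareto_means))
```

The Gaussian running mean settles quickly, while the Pareto running mean keeps jumping whenever a tail event lands, which is exactly why point estimates from heavy-tailed data deserve suspicion.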

8.4 Unknown Unknowns

The concept of “unknown unknowns” refers to risks and factors we are not even aware of not knowing: they cannot be anticipated, let alone quantified in advance.

In Savage’s “Small World,” which represents an idealized scenario where decision-makers have perfect knowledge of outcomes and their probabilities, the concept of “unknown unknowns” highlights the limitations of this idealization. In a Small World, nothing new ever happens, and there can be no “Unknown Unknowns”.

In contrast, complex adaptive systems constantly display emergent behaviour: patterns that could not have been anticipated. A leader managing such a system cannot list all possible outcomes, let alone assign each a probability.

9. Subjectivity and the Role of the Observer

In decision-making, a leader’s subjective experience contrasts with their role as an objective observer. For example, in systems thinking and cybernetics, the leader must diagnose problems based on data and evidence and formulate a logical and rational solution.

A leader working on complex problems in a social group is an integral part of the system. As we have seen in previous sections, the leader is unable to gather sufficient information about the system, in principle or in practice, and whatever data they gather will be coloured by their subjective experience.

Subjective experience includes:

  • Emotional influence: leaders are emotionally invested in their organization’s success, which can colour their subjective experience. Personal goals, biases, and emotional attachments may drive them.
  • Cognitive filters: their subjective experience is shaped by cognitive filters, including past experiences and biases, which may cloud their judgment and decision-making process.
  • Individual perspective: leaders bring their personal perspectives, preferences, fears, and desires to the table, shaping their subjective interpretation of situations.

Systems Thinking in a Nutshell

Systems Thinking is a holistic approach to understanding complex systems by examining their interconnectedness, interdependencies, and dynamics. It views the leader’s role in an organization as crucial for effective strategy formulation and decision-making by emphasizing the following principles:

  • Holistic perspective: Systems Thinking encourages leaders to consider the organization a dynamic system composed of interconnected parts rather than isolated components. This holistic perspective enables leaders to understand how changes in one part of the system can affect the entire organization.
  • Interconnectedness: leaders using Systems Thinking recognize that actions taken in one part of the organization can have ripple effects throughout the system. Therefore, they strive to understand and anticipate these interconnections when formulating strategies and making decisions.
  • Feedback loops: Systems Thinking emphasizes the presence of feedback loops within the organization, where actions produce outcomes that, in turn, influence future decisions. Leaders analyze feedback mechanisms to identify reinforcing loops that amplify desired outcomes and balancing loops that counteract undesirable consequences.
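
A balancing loop can be sketched in a few lines, echoing the earlier cooling example: the controller observes the outcome of each action and lets it feed back into the next decision. The target, gain, and starting temperature are all invented.

```python
# A toy balancing feedback loop: each action opposes the gap between the
# observed state and the target, so the system settles rather than runs away.
TARGET = 21.0
GAIN = 0.3            # how strongly the controller reacts to the gap

temp = 30.0
history = [temp]
for _ in range(30):
    error = TARGET - temp   # feedback: observe the outcome of past actions
    temp += GAIN * error    # balancing action opposes the gap
    history.append(temp)

print(round(history[-1], 3))  # the temperature converges toward the target
```

Flipping the sign of the correction (acting with the gap instead of against it) turns the same structure into a reinforcing loop, where small deviations amplify instead of dying out.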

Systems Thinking addresses the paradox of the leader being part of the system being managed by acknowledging the leader’s dual role as both a participant within the system and an external observer guiding its direction. Several key principles help resolve this conflict:

  • Personal reflection: leaders continuously question their assumptions, biases, and actions within the system. By reflecting on their role and impact, leaders can mitigate potential blind spots and unintended consequences of their decisions.
  • Boundary spanning: leaders are encouraged to span boundaries within the organization, connecting disparate parts of the system and facilitating communication and collaboration across departments, teams, and hierarchies. By transcending organizational silos, leaders can gain a more comprehensive understanding of the system and its interdependencies.
  • Feedback analysis: leaders leverage feedback loops within the system to assess the effectiveness of their actions and interventions. By monitoring feedback mechanisms, leaders can adapt their strategies and behaviours in real time to align with the system’s evolving needs and dynamics.
  • Co-creation of meaning: leaders collaborate with stakeholders to develop shared mental models and narratives about the system’s purpose, goals, and values. This collective sense-making process helps align individual actions with the system’s objectives.

10. Summary

Exploring problem-solving reveals that not all problems are created equal. Distinguishing between simple and complex problems reveals the underlying nature of the systems they belong to. Simple problems are typically found within ordered systems, whereas complex problems are inherent to complex systems. These systems extend beyond biological ecologies to encompass social groups and organizations, where intricate interactions and emergent behaviours define their complexity.

One defining characteristic of complex systems is their governance by power laws, rendering traditional optimization techniques ineffective. Unlike in ordered systems, where linear solutions may suffice, complex systems defy such neat categorizations. Applying optimization strategies proves futile due to the non-linear, unpredictable dynamics governed by power laws.

Heuristics emerge as promising alternatives to optimization in navigating the labyrinth of complex systems. These intuitive, rule-of-thumb approaches allow for adaptive decision-making, acknowledging complex systems’ inherent uncertainty and non-linearity.
