Uncertainty, Randomness, and Risk: A Very Short Walkthrough

1. Overview

Uncertainty, randomness, and risk are three words we use effortlessly in everyday conversation.

While an intuitive understanding of these concepts is readily available, it pays to explore them rigorously before using them in professional settings.

This exploration is the objective of this article.

2. Uncertainty

Uncertainty is integral to our daily experience, taking several forms and producing various side effects. We learn to deal with it in different ways, sometimes through caution, at other times through contingency plans.

First, we look at the physical roots of uncertainty before investigating some of its impacts on our lives.

2.1 Stochastic Processes

The first situation in which uncertainty is experienced is when the system of interest generates its outcomes through a random (stochastic) process; all we can do to predict its outcomes is guess.

These guesses can be quantified by probability distributions, laws assigning a probability to each possible outcome. Andrey Kolmogorov published a formal axiomatisation of these laws in 1933.

The first systematic studies of uncertainty were conducted in the sixteenth century by Gerolamo Cardano, whose analysis of games of chance started the modern mathematical theory of probability.

In the early 19th century, Pierre-Simon de Laplace published an essay providing an interpretation of probability that we continue to use today.

Laplace suggested three ways to calculate the probabilities of random events:

  • A priori — by logical inference from the symmetry of the system, we determine the probability of a specific event. For example, a fair die has six identical faces; therefore, the probability of the die landing on a particular face is 1/6.
  • A posteriori — by repeating the experiment and recording the relative frequency of the favourable event over all trials (the first two methods are contrasted in the sketch below).
  • A third method postulates that, in the absence of any knowledge of the system, we assign equal probabilities to the possible events. For example, if I am unsure about the odds of a gambling game, I assign a chance of 1/2 to winning and 1/2 to losing.
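
To make the first two methods concrete, here is a minimal Python sketch contrasting the a priori value with an a posteriori estimate (the sample size of 100,000 rolls is an illustrative assumption):

import random

# A priori: a fair die has six identical faces, so each face has probability 1/6.
p_a_priori = 1 / 6

# A posteriori: repeat the experiment many times and record the relative
# frequency of the favourable event (here, rolling a six).
trials = 100_000
hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
p_a_posteriori = hits / trials

print(f"a priori:     {p_a_priori:.4f}")      # 0.1667
print(f"a posteriori: {p_a_posteriori:.4f}")  # approaches 0.1667 as trials grow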

2.2 Statistical Mechanics

The second situation where uncertainty arises is when the amount of information that would need to be processed to attain certainty about an outcome is impractically large.

An inert gas in a closed container is a perfect example of such a situation. If we could keep track of the position and velocity of every molecule, we could, in principle, precisely determine the system’s evolution in time. In practice we cannot, and the same reasoning applies to the weather, stock prices, and crowd behaviour.

In these cases, the best we can do is calculate the statistical properties (such as the average, variance, and standard deviation) of coarse units of the system.

A coarse unit spans several system agents (air molecules, employees, stock investors) grouped (in a volume or area) and treated as one unit.
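
As a toy illustration of working with coarse units, here is a minimal Python sketch (the Gaussian speed model with mean 500 m/s and the cell size of 10,000 molecules are illustrative assumptions, not a physical derivation):

import random
import statistics

random.seed(42)

# Simulate speeds for many molecules; tracking each one individually is
# impractical, so we summarise coarse units instead.
speeds = [abs(random.gauss(500.0, 100.0)) for _ in range(100_000)]  # m/s

# Treat each block of 10,000 molecules as one coarse unit and keep only its
# statistical properties: average, variance, and standard deviation.
cell_size = 10_000
for i in range(0, 3 * cell_size, cell_size):  # first three coarse units
    cell = speeds[i:i + cell_size]
    print(f"unit {i // cell_size}: mean={statistics.mean(cell):.1f} m/s, "
          f"var={statistics.variance(cell):.1f}, std={statistics.stdev(cell):.1f}")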

2.3 Heisenberg’s Uncertainty Principle

The two cases of uncertainty discussed above are familiar and part of our everyday experience.

The third and final one that we present is not so familiar and is, in fact, very counter-intuitive. It occurs on the microscopic scale and is embodied in the Heisenberg Uncertainty Principle.

This principle postulates that we can never know, with arbitrary precision, both the position and the momentum (and hence the velocity) of a quantum mechanical system (such as an electron).
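
In its standard form, the principle bounds the product of the uncertainties in position x  and momentum p  (mass times velocity):

\Delta x \, \Delta p \geq \frac{\hbar}{2}

where \hbar  is the reduced Planck constant: the more precisely we pin down the position, the less precisely we can know the momentum, and vice versa.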

Unlike in the first two situations, there is nothing we can do (unless a new theory someday replaces Quantum Mechanics) to rid ourselves of this randomness.

However, this property of nature, quantum randomness, together with quantum superposition, is part of why quantum computing can explore exponentially large state spaces so efficiently.

2.4 Probability Theory

The classical formulation of probability theory relies on the three axioms proposed by Andrey Kolmogorov in 1933 and on notions from set theory. The story goes as follows.

We have an experiment that produces one of several distinct outcomes on each run. The universe of all possible outcomes \Omega  can be defined as follows:

\Omega = \{A, B, C, \cdots \}

We define an event space F  on \Omega  as a collection of events (technically, a \sigma -algebra), each of which is a subset E \subseteq \Omega .

For example, a fair die can produce six outcomes; therefore, \Omega = \{1, 2, 3, 4, 5, 6\} . We define an event as the occurrence of an even number so that E=\{2, 4, 6\} .

To complete our definition of a probability space, we need a probability measure, which we call P , and (\Omega, F, P)  qualifies as a probability space if it satisfies the following three axioms.

Axiom 1

The probability of any event is a non-negative number.

This translates to:

P(E) \in \mathbb{R}, P(E) \geq 0 \quad  \forall E \in F

This means that the probability assigned to any event is greater than or equal to zero.

Axiom 2

The probability that at least one of the outcomes in the sample space \Omega  occurs is 1.

This can be denoted as:

P(\Omega) = 1

Axiom 3

The probability of the union of mutually exclusive (disjoint) events is the sum of the probabilities of the individual events.

Which we can write, for pairwise disjoint events E_i , as:

P(\displaystyle\bigcup_{i=0}^{\infty}E_i) = \displaystyle\sum_{i=0}^{\infty} P(E_i)

This means that the probability of obtaining a 1 or a 6 on a fair die is the sum of the two individual probabilities: 1/6 + 1/6 = 1/3.
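
As a quick sanity check, here is a minimal Python sketch verifying the three axioms for the fair-die space used as the running example above (the uniform measure is assumed):

# Sample space and uniform probability measure for a fair die.
omega = frozenset({1, 2, 3, 4, 5, 6})

def P(event):
    # Uniform measure: P(E) = |E| / |Omega|.
    return len(event) / len(omega)

# Axiom 1: every event has a non-negative probability.
even = frozenset({2, 4, 6})
assert P(even) >= 0

# Axiom 2: the probability of the whole sample space is 1.
assert P(omega) == 1

# Axiom 3: for mutually exclusive (disjoint) events, the probability
# of the union is the sum of the individual probabilities.
one, six = frozenset({1}), frozenset({6})
assert one.isdisjoint(six)
assert P(one | six) == P(one) + P(six)  # 1/6 + 1/6 = 1/3

print("all three axioms hold for the fair-die space")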

This is all so commonplace and intuitive that we cannot imagine what a negative probability might look like. In effect, negative probabilities do not occur on the macro scale but appear as formal devices in quantum systems.

3. Uncertainty in Software Projects

3.1 Internal and External Dependencies

Many factors can influence the smooth running of a software project, many of them random and with varying degrees of controllability. Below are some of the more common examples:

  • Supplier dependencies for hardware, software, or services — Think of the supply chain problem created by a global pandemic.
  • Requirement volatility — User-defined business requirements are generally vague as users only know what they want after seeing what the technology can do for them.
  • Resource and skill volatility — Talent retention can be an issue; it depends on the market, the organisational culture, and each individual. Engineers can also take leaves, planned or otherwise, and they might be reallocated to other projects.
  • Organisational priorities — Priorities can change as organisations shift resources from one initiative to another according to the demands of the current situation.
  • Stakeholder influence — Stakeholder analysis and management is a significant aspect of managing project risk, as influential and interested stakeholders can either support or obstruct your projects. Reorganisations are notoriously risky regarding the outlooks of ongoing projects or those in the delivery pipeline.

3.2 Anxiety and Organisational Culture

Edgar Henry Schein, a renowned corporate psychology expert, proposed an organisational culture model that envisages a group of individuals working in an uncertain (sometimes even hostile) environment.

In Schein’s representation, the group constantly faces anxiety-generating internal and external pressures as it struggles to overcome them.

To cope with these pressures, the group deploys their shared learning (or “assumptions”, per the author) to make sense of the world.

The environment, however, is constantly changing, influencing and influenced by the group’s actions. The mutual influence creates feedback loops allowing complexity to rise, which, in turn, renders the future more uncertain.

“Once a group has learned to hold common assumptions, the resulting automatic patterns of perceiving, thinking, feeling, and behaving provide meaning, stability, and comfort; the anxiety that results from the inability to understand or predict events happening around the group is reduced by the shared learning.”

Organisational Culture – Edgar Schein

At some point, the shared assumptions will become outdated and need a (sometimes drastic) revision, which Schein refers to as cultural transformation. The latter is not easy, generating more uncertainty and trauma.

Coping with uncertainty is a primary driver of organisational dynamics.

3.3 Predictability in Complex Systems

Human beings are complex, and groups of humans (a team, an organisation, the economy, a country) are even more so.

Complex systems are dynamic, non-linear, and highly interconnected, and their future is almost impossible to predict.

Even a system of two individuals qualifies as complex. Just think of the meetings you had with your manager. Could you have predicted their outcomes with arbitrary precision?

4. Risk

Note: this article discusses the generic notion of risk. If you would like to know more about Project Risk Management, see the separate article Technical Risk Management.

4.1 What Is Risk?

Risk is the possibility of loss arising from an uncertain situation. It can be defined as an objective and measurable quantity: the probability of observing an adverse event.

For example, if you participate in a game where you lose $10 when a fair die shows a six but gain $1 otherwise, your probability of loss is exactly 1/6, and your expected return is $1 × 5/6 − $10 × 1/6 ≈ −$0.83. This game is an example of an unfair gamble: although you win far more often than you lose, the expected return is negative.
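
The same arithmetic as a short Python check (the Monte Carlo sample size of 100,000 rounds is an illustrative assumption):

import random

# Exact expected return: win $1 with probability 5/6, lose $10 with probability 1/6.
exact = 1 * (5 / 6) - 10 * (1 / 6)
print(f"exact expected return: ${exact:.2f}")  # $-0.83

# Monte Carlo estimate of the same quantity.
rounds = 100_000
total = sum(1 if random.randint(1, 6) != 6 else -10 for _ in range(rounds))
print(f"simulated average:     ${total / rounds:.2f}")  # close to $-0.83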

4.2 Asymmetrical Risk

Risk can also be asymmetric. In this scenario, the distribution of outcomes is skewed to the left or to the right.

In a game with asymmetrical risk distributions, the participant pockets steady, small returns in most cases but may lose everything in a rare event.

A perfect example of asymmetric risk is investing in a risky stock, where a market crash means the investor loses all their money, as the sketch below illustrates.
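
Here is a hedged sketch of such a payoff profile (the +5% steady return and the 1% yearly probability of total loss are illustrative assumptions, not market estimates); note how the mean sits below the typical, median outcome because the rare crash drags it down:

import random
import statistics

random.seed(7)

# Illustrative assumption: +5% return in a typical year, but a 1% chance
# per year of a crash that wipes out the whole position (-100%).
def yearly_return():
    return -1.0 if random.random() < 0.01 else 0.05

returns = [yearly_return() for _ in range(100_000)]
print(f"median return: {statistics.median(returns):+.2%}")  # +5.00%: the typical year
print(f"mean return:   {statistics.mean(returns):+.2%}")    # ~+3.95%: crashes drag it down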

4.3 Hidden Risk

Here we refer to hidden, unforeseeable, and unpredictable risks, represented by Nassim Taleb’s familiar Black Swan.

Unpredictable risks are part of the unknown unknowns. Because they are extremely rare, they may not have been observed in the past and are, therefore, not incorporated into the risk profile in use.

A perfect example of “unknown unknowns”, which we borrow from Nassim Taleb’s excellent book Fooled by Randomness, is the following.

Imagine a Russian roulette game played many times by many players, where the revolver’s cylinder has a very large number of chambers (so you tend to forget about the bullet) and that number is unknown (so you cannot put a figure on the probability).

Another example is the reputational damage inflicted on an organisation when prospective customers turn to other vendors after a heavily publicised negative news story.

In such a rare event, the damage cannot be assessed from historical data. The event might even be so rare that people do not know it has a non-zero probability.

