- 1. Overview
- 2. Features of Megaprojects
- 3. The Planning Fallacy
- 4. Project Delivery and Management
- 5. An Agile Solution to Megaproject Delivery
- 6. Final Words
- 7. References
1. Overview
Although many of the software development professionals visiting this website may never have participated in a megaproject, they will, as I have, find the challenges these projects face quite familiar.
We therefore hope the ideas this article discusses come in handy when planning or executing your next project.
Bent Flyvbjerg has been studying megaprojects for 30 years and has published several papers and books on the topic. His work will form the basis of our subsequent analysis and discussion.
So what are megaprojects?
Megaprojects include hydroelectric dams, chemical-processing plants, enterprise IT systems, nuclear power plants, space missions, and military invasions. Their budgets typically run into billions of dollars, and they can be business- or government-sponsored.
Two characteristics set these initiatives apart from the projects we are familiar with: they are monolithic, bespoke solutions that generally rely on novel technology and design.
We conclude with some remarks on how insights gained in studies of megaprojects might apply to their smaller cousins.
2. Features of Megaprojects
Megaprojects present distinct features, according to Flyvbjerg, but, as any IT professional who has worked on large projects will recognize, these characteristics are often present in large and medium-sized initiatives as well, although perhaps to a lesser degree.
Below is a list of the most prominent characteristics defined by Flyvbjerg.
2.1 Inaccurate Forecasts and Project Risk
In papers published by Flyvbjerg in 2006 and 2014 on inaccurate forecasts and project risks, the author outlines the following ideas:
- Three predominant factors lead to inaccurate project cost and demand forecasts: technical issues (low-quality data, poor statistical models), optimism bias, and strategic misrepresentation. The author maintains that technical difficulties alone do not explain the observed data; if they did, the distribution of cost overruns would be normal and centred on zero. In reality, the observed distributions are non-normal, with means well above zero.
- Because high inaccuracies occur in both cost and demand forecasts, their impact is compounded when the two are combined, for example, in a cost/benefit analysis. This multiplicative effect can push the overall error to many times the size of the individual forecast errors, not just a few percentage points.
- The large inaccuracies in demand and cost forecasts inevitably lead to severe project risks, compounded further by long planning horizons. The author maintains that people can forecast significant events reasonably well when they fall within the following year, but forecasts running over 3-5 years or more become notoriously erroneous.
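The compounding of cost and demand errors in the second bullet can be illustrated with a short Monte Carlo sketch. All distribution parameters below are our own illustrative assumptions, not figures from Flyvbjerg's studies; the point is only that two modest, oppositely biased errors multiply into a large overall error.

```python
import random

random.seed(42)

def simulate_bcr(n=100_000):
    """Monte Carlo sketch of how cost and demand forecast errors
    compound multiplicatively in a benefit/cost ratio.
    Distribution parameters are illustrative assumptions."""
    total = 0.0
    for _ in range(n):
        # Optimism bias: actual costs tend to exceed the forecast,
        # so the cost error factor is centred above 1.0.
        cost_factor = random.lognormvariate(0.2, 0.3)
        # Actual demand tends to fall short of the forecast,
        # so the demand error factor is centred below 1.0.
        demand_factor = random.lognormvariate(-0.15, 0.3)
        # A project forecast at a benefit/cost ratio of exactly 1.0
        # actually realises this ratio instead:
        total += demand_factor / cost_factor
    return total / n

print(f"mean realised benefit/cost ratio: {simulate_bcr():.2f}")
```

Even though each individual error factor is only tens of percent off, the realised benefit/cost ratio lands well below the forecast 1.0, turning a seemingly viable project into a loss-making one.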
The author recommends using reference class forecasting instead of traditional methods to combat such challenges. This method builds on the work of psychologist and Nobel laureate Daniel Kahneman (author of the international bestseller Thinking, Fast and Slow).
We will discuss reference class forecasting in its own section.
2.2 Lack of Technical Expertise in Project Management
Project management involves complex decision-making that requires some familiarity with a project’s technical aspects and no small amount of domain expertise. Long-running projects tend to see project managers come and go, leaving each one little time to learn.
When project constraints are not met, they must be revisited and new schedules and budgets laid out. The scope might also be tightened as long as the solution remains viable. All this involves complex negotiations with business stakeholders, project sponsors, suppliers, contractors, and clients.
Inexperienced or newly appointed leadership will struggle in such high-pressure situations.
2.3 Decision-Making and Conflict of Interest
Megaprojects bring together business stakeholders, project sponsors, suppliers, contractors, and clients. This diversity makes planning particularly challenging, as people with different or conflicting interests are involved in decision-making.
2.4 The Uniqueness Bias
The uniqueness bias offers a potential explanation of why managers who view their large projects as unique are more prone to failure. The main ideas are as follows:
- A study by Budzier and Flyvbjerg in 2013 concluded that projects with significant cost overruns and delays could be sampled from the same statistical distribution as smaller ones, provided that power laws (or Pareto distributions) are used instead of Gaussian (or normal, bell-shaped) distributions. Under this regime, the authors theorize that large projects are not fundamentally different from small ones.
- Analysis in the same study showed that cost and schedule overruns of IT projects follow a power law: the longer you have waited for a project to be completed, the lower the chances that it will be completed soon. This topic is addressed at length in The Black Swan, which I invite anyone interested in the role of uncertainty in decision-making to read (more than once!).
- Flyvbjerg believes that projects implementing novel technologies or designs induce planners to view them as unique. Flyvbjerg calls this the uniqueness bias.
- Consequently, project managers believe there is little to be learned from past experiences and current best practices, thereby increasing project risk due to poor practices. Flyvbjerg believes that holding those views makes their owners a liability to the organization. For megaprojects, these are mega-liabilities.
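The practical difference between the Gaussian and power-law regimes described above can be seen in a small simulation. The parameters here are our own illustrative choices: both models are set up with the same mean overrun, yet the power-law (Pareto) model assigns dramatically higher probability to extreme outcomes.

```python
import random

random.seed(7)

N = 200_000

def tail_fraction(samples, threshold):
    """Fraction of simulated projects whose overrun factor exceeds the threshold."""
    return sum(s > threshold for s in samples) / len(samples)

# Illustrative cost-overrun factors (1.0 = exactly on budget).
# Thin-tailed (Gaussian) model: mean overrun 2x, standard deviation 0.5.
normal_overruns = [random.gauss(2.0, 0.5) for _ in range(N)]

# Heavy-tailed (power-law) model: Pareto with alpha = 2, which has the
# same mean of 2x but a polynomially decaying tail.
pareto_overruns = [random.paretovariate(2.0) for _ in range(N)]

print("P(overrun > 4x), Gaussian model: ", tail_fraction(normal_overruns, 4))
print("P(overrun > 4x), power-law model:", tail_fraction(pareto_overruns, 4))
```

Under the Gaussian model a fourfold overrun is practically impossible; under the power law it happens to a few percent of projects, which is why planners who assume bell curves are blindsided by the outliers.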
2.5 Overcommitment, Lock-In, and Escalating Commitment
Flyvbjerg and his colleagues use overcommitment to at least partially explain the significant overruns that megaprojects may incur.
Their ideas are as follows:
- Overcommitment is a psychological coping mechanism associated with the inability to withdraw from obligations. Overcommitment typically happens before sufficient data is gathered and careful analysis is made. Stakeholders might commit to a specific idea, plan, or course of action before formally announcing it.
- Overcommitment is sometimes good, as it encourages quick decision-making and avoids delays.
- Lock-in occurs when you choose not to deviate from a suboptimal path, although a better alternative exists. Lock-in is an overcommitment to a specific decision or course of action despite its inefficacy. It can be seen through the leadership’s reluctance to consider alternative and viable options, sunk costs (time and money), and the need for justification.
- Escalating commitment is another phenomenon emerging from sunk costs, an early commitment to a specific outcome, or political vulnerability. While lock-ins happen during the decision-making phase of a project, escalating commitments are observed during execution.
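To see why lock-in is irrational, consider a minimal sketch (all figures hypothetical): a rational decision compares only the costs still to be incurred, while sunk costs, however large, are irrelevant to the choice.

```python
def rational_choice(sunk_cost, remaining_cost, alternative_cost):
    """Sunk costs should not influence the decision: compare only the
    money still to be spent on each option (all figures hypothetical)."""
    return "switch" if alternative_cost < remaining_cost else "stay"

# $80M already spent, $50M still needed to finish the current path;
# a better alternative would cost $30M from scratch.
decision = rational_choice(sunk_cost=80, remaining_cost=50, alternative_cost=30)
print(decision)  # -> "switch"; lock-in would argue "stay" to justify the $80M
```

Lock-in and escalating commitment amount to letting the `sunk_cost` argument, which this function deliberately ignores, drive the decision anyway.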
2.6 Strategic Misrepresentation, the Principal-Agent Problem, and Rent-Seeking
According to Flyvbjerg’s studies, three behavioural factors contribute to our understanding of megaproject cost overruns.
- Strategic misrepresentation is the tendency to distort facts and data to make project costs and future benefits look better on paper. It is, therefore, a form of deception used to get consequential decisions approved. Strategic misrepresentation rationalizes actions that serve a strategic goal, like getting the plans for a signature building approved by presenting a budget far below the realistic cost.
- The principal-agent problem refers to situations where an agent puts their interests ahead of the principal’s. The principal is typically the owner of an asset and allows the agent to manage it at a cost, provided they keep the principal’s best interests at the forefront.
- Rent-seeking behaviour is another issue that surfaces in megaprojects when large sums are involved. In economics, rent-seekers attempt to increase their wealth without increasing production or benefiting society.
2.7 Impact of Rare Events
- The rare and unexpected has a skewed effect on projects — it’s almost impossible to think of a project where a natural catastrophe, a significant change in leadership, or a shift in organisational strategy has not adversely (never favourably) impacted delivery.
- Planning errors associated with routine tasks are smaller than those involving great novelty. But even with routine tasks, something non-routine may always happen because of an ever-changing environment.
- Rare events with high impacts are typically not thought of during planning. They are unknown unknowns that we could never have predicted and, therefore, are not risks we can mitigate through, for example, contingency planning.
2.8 Complexity
The scientific school of management, championed by mechanical engineer and business consultant Frederick Taylor, taught that any production process could be measured and quantified for efficiency and low variation in output quality.
Not long after Taylor, the intricacies of stakeholder relationships leapt to the forefront, and complexity emerged as a central concern.
Complexity arises when the human element is introduced into any mechanistic, Newtonian system, especially in the form of human social groups. Direct causal relationships disappear, and rigid control and direction become ineffective.
3. The Planning Fallacy
3.1 Problem Description
Daniel Kahneman and Amos Tversky ran experiments in 1979 demonstrating how poor people are at forecasting. This ground-breaking work contributed to Kahneman winning the Nobel Memorial Prize in Economic Sciences for his theories on behavioural economics.
Their experiments produced the following results:
- People systematically and consistently underestimate the effort required (in time and money) to complete a task. Their views are always more optimistic than reality justifies, hence the reference to Optimism Bias as a description of this psychological phenomenon.
- Even when shown reliable statistical data that contradicted their views, planners did not significantly alter their forecasts and continued to ignore the possibility of adverse events occurring in the future.
- Planners showed more pessimistic views when forecasting efforts for tasks to be completed by other groups. When describing their own futures, planners were more optimistic than the historical data supported; when predicting other people’s futures, they consistently showed more realism.
In fact, this happens all the time, even for mundane tasks that we commit to every day. This systematic underestimation of future effort leads to serious planning issues, hence the term planning fallacy to denote the illusion of having things under control.
3.2 Reference Class Forecasting
Kahneman and Tversky recommended that forecasters make every effort to frame the forecasting problem as something they are familiar with, so that all the available historical information (in the form of probability distributions) can be used.
They emphasized that drawing on distributional information from previous initiatives similar to the current one helps forecasters take an outside view, which is precisely what reference class forecasting provides. It works as follows:
- Select a reference class of past, similar projects. Attributes such as size, state of technology used (novel, mature), and industry can be used to define a similarity measure.
- Generate a probability distribution for the selected reference class. For example, if you are forecasting costs, the probability distribution should show the cost on the x-axis.
- Compare your new project with the reference class distribution to establish the most likely outcome. You must also look at variability to determine whether project risk is high enough to warrant contingency planning.
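The three steps above can be sketched in a few lines of code. Every number below is hypothetical: the reference class is a made-up list of cost-overrun factors (actual cost divided by forecast cost) for similar past projects, and the budget is set at the quantile matching the risk we are willing to accept.

```python
def required_budget(reference_overruns, base_estimate, acceptable_risk=0.2):
    """Reference class forecasting sketch: take the overrun factor at the
    (1 - acceptable_risk) quantile of the reference class and apply it as
    an uplift to the base estimate. All numbers are hypothetical."""
    ordered = sorted(reference_overruns)
    # Index of the quantile we are willing to be exceeded by.
    idx = min(int(len(ordered) * (1 - acceptable_risk)), len(ordered) - 1)
    return base_estimate * ordered[idx]

# Hypothetical reference class: cost-overrun factors of ten similar
# past projects (actual cost / forecast cost).
reference = [1.0, 1.1, 1.2, 1.2, 1.3, 1.4, 1.6, 1.8, 2.2, 3.0]

# Base estimate of $10M; we accept a 20% chance of exceeding the budget.
budget = required_budget(reference, 10_000_000, acceptable_risk=0.2)
print(f"recommended budget: ${budget:,.0f}")
```

Note how the outside view works here: the uplift comes entirely from what happened to comparable projects, not from the planner's (optimistic) inside view of their own project.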
3.3 Forecasting Without Error Rates
In his monumental work The Black Swan, Taleb discusses three fallacies that arise from looking at a forecast’s expected outcome rather than a range of probable values. These are as follows:
- Variability — Assume a software developer provides an effort estimation of 30 days for a given task. It would make a lot of difference for the project manager if this estimate were 30 plus or minus 15 or 30 plus or minus 5. Taleb emphasizes the importance of considering the variability of outcomes in a plan to account for the best-case and worst-case scenarios.
- Forecast duration — Most of us can reasonably predict what will happen over the next few weeks or even months, but when projections attempt to cover years, they go seriously awry. Cartoons from the 1980s imagined that 2025 would see fascinating technological advances and interstellar travel. In 2022, when writing this article, we can see how far off those predictions were.
- Black Swans — By definition, Black Swans (rare events with high impact) are difficult, if not impossible, to predict, yet their consequences can be tremendous. For example, a business making handsome yearly profits but with no reserves might do well for many years but go under following the first natural disaster or global recession.
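Taleb's variability point can be made concrete for a small plan. In the sketch below (our own illustrative model, not Taleb's), per-task spreads are treated as standard deviations of independent tasks, so means add while spreads combine in quadrature; two plans with identical expected totals then carry very different risk.

```python
import math

def plan_with_uncertainty(tasks):
    """Combine per-task estimates (mean_days, spread_days) into a plan range.
    Assuming independent tasks, means add and spreads add in quadrature."""
    mean = sum(m for m, _ in tasks)
    spread = math.sqrt(sum(s * s for _, s in tasks))
    return mean, spread

# Same expected total of 60 days, very different risk profiles:
tight = plan_with_uncertainty([(30, 5), (30, 5)])
loose = plan_with_uncertainty([(30, 15), (30, 15)])
print(f"tight plan: {tight[0]} +/- {tight[1]:.1f} days")
print(f"loose plan: {loose[0]} +/- {loose[1]:.1f} days")
```

A project manager who only hears "60 days" cannot distinguish these two plans, yet the worst-case buffers they demand differ by a factor of three.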
4. Project Delivery and Management
4.1 The Break-Fix Model
Flyvbjerg describes the current project delivery methodology of megaprojects with the Break-Fix model. It works as follows:
- Megaproject planners and managers use the Break-Fix model to deliver projects either because they know of no other way or because they have no incentive to plan properly (see the above discussions on Optimism Bias and Strategic Misrepresentation).
- Eventually, the project must be reorganised after the current situation spirals away from past projections and reality dawns on the project managers. To “fix” the project, the scope might be decreased, reducing overall benefits and customer value, the quality might be compromised, or additional costs incurred.
- Given the sunk costs, abandoning the project becomes impossible, and lock-in and escalating commitments are observed.
- The result is an overpriced, limited-benefit solution from which the stakeholders derive little or no business value, but on which nobody could pull the plug earlier.
4.2 Negative Learning
In his paper Make Megaprojects More Modular, Flyvbjerg tells the story of the Japanese nuclear reactor project Monju, whose construction was approved in 1983 and began in 1986.
In December 2016, three decades after construction began, the power plant was shut down indefinitely. After 34 years and 12 billion dollars, Monju had generated only an hour’s worth of electricity!
Monju’s story is a perfect example of negative learning, a situation where the more you learn, the slower you go.
The engineering teams working on the Monju project kept finding one problem after another, with each issue pushing the reopening date farther into the future until the government finally decided to terminate the project.
The problem with Monju, as with most failed megaprojects, was its monolithic and bespoke nature.
On the opposite side of the spectrum, we have Tesla’s Gigafactory 1 and Madrid’s railway expansion project, whose secret was an iterative approach and modular design based on mature technology and robust engineering. More on this in the next section.
5. An Agile Solution to Megaproject Delivery
5.1 Replicable Modularity
Consider the following approaches to delivering a megaproject.
- In the first approach, you build a monolithic, bespoke system.
  - The project can only be productive when it is 100% complete.
  - Each component is highly customised and requires massive integration efforts. Interfaces are complex, proprietary, and cannot be reused.
  - Because every element is unique, few learning opportunities present themselves, and even fewer lessons can be carried over into other project areas.
  - Experimentation and rework are challenging and expensive.
  - Under this regime, there are no fail-fast and fail-safe experiments.
  - Once the project is completed, scaling up is exceptionally challenging.
- In the second approach, the design is modular, and the entire project can be built from similar components.
  - With this approach, the project can start production as soon as its first module is installed.
  - Reusable modules allow experimentation and inexpensive learning.
  - Lessons learned can be applied in the installation of the next module.
  - The loss of some modules due to structural or installation errors does not jeopardise the entire venture.
  - The project can start small and scale on demand.
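A toy model makes the economics of the modular approach tangible. Here we assume a classic 90% learning curve (each repeated module costs 90% of the previous one); both the learning rate and the unit costs are illustrative assumptions, not data from any real project.

```python
def modular_delivery_cost(n_modules, first_module_cost, learning_rate=0.9):
    """Total cost of delivering n identical modules when each repetition
    costs learning_rate times the previous one (learning-curve assumption)."""
    total, cost = 0.0, first_module_cost
    for _ in range(n_modules):
        total += cost
        cost *= learning_rate
    return total

# Ten bespoke units with no repetition offer no learning discount:
monolithic_cost = 10 * 100.0
# Ten identical modules benefit from the learning curve:
modular_cost = modular_delivery_cost(10, 100.0, learning_rate=0.9)
print(f"monolithic: {monolithic_cost:.0f}, modular: {modular_cost:.0f}")
```

On top of the cost saving, the modular project starts producing value after the first module, while the monolithic one delivers nothing until everything is finished.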
5.2 Speed in Iterations
You typically cannot predict with arbitrary precision how users will interact with a product or how their preferences will change as they come to understand what the technology can do for them. This statement is especially true for novel systems.
For example, a Minimum Viable Product or MVP is first produced for new software products. Its objective is to get user feedback as early as possible, allowing product managers to gauge the product’s viability and provide insights into what features to build next.
In this context, speed is of the essence, usually facilitated by applying an iterative approach to releases.
Combined with modularity, an iterative approach allows engineers to constantly improve the quality of their deliveries while learning as they go.
If the modules used are the same, a self-similar architecture will emerge. Otherwise, a complicated or even complex system will take shape.
5.3 Technology Selection
We have dedicated an entire article to technology stack selection in software engineering. This section reuses some of those ideas in the context of megaprojects, gleaning further insights from Flyvbjerg’s studies.
The first idea is to use reliable and mature technology (Principle 6 from Operational Excellence in Software Engineering). While that may sound dull and not as innovative as we would like it to be, innovation can still be created by combining old ideas in new ways.
The second idea is to use technology that lends itself easily to modularity. In the energy industry, wind turbines are a perfect example of modularity. Software solution architectures can be designed for modularity and scalability. Standard interfaces facilitate integration efforts.
These parameters (tech stack selection, architecture, solution design) can all be tuned for maximum modularity.
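In software, standard interfaces are what make modules interchangeable. The sketch below (class and method names are our own illustrative choices) shows the idea with a structural interface: any service implementing it can be rolled out by the same deployment code, with no bespoke glue per component.

```python
from typing import List, Protocol

class Module(Protocol):
    """A standard interface: any component implementing these two methods
    can be integrated without bespoke glue code (names are illustrative)."""
    def start(self) -> None: ...
    def healthy(self) -> bool: ...

class PaymentsService:
    def start(self) -> None:
        self._up = True
    def healthy(self) -> bool:
        return getattr(self, "_up", False)

class SearchService:
    def start(self) -> None:
        self._up = True
    def healthy(self) -> bool:
        return getattr(self, "_up", False)

def deploy(modules: List[Module]) -> bool:
    """Roll out modules one by one; the system is useful as soon as the
    first module is up, and new modules plug into the same interface."""
    for m in modules:
        m.start()
    return all(m.healthy() for m in modules)

print(deploy([PaymentsService(), SearchService()]))  # -> True
```

Adding a new capability means writing one more class against the same interface, which is exactly the replicable modularity Flyvbjerg advocates, translated into solution design.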
6. Final Words
At the beginning of this article, we set out (as software practitioners) to explore the influential factors in the success or failure of megaprojects so that perhaps we can draw some parallels with the more manageable IT projects we are familiar with.
Below is a list of similar challenges and what we can do about them.
- Projections and forecasts — The cognitive biases and systematic flaws that drive us to underestimate megaproject efforts are generally the same regardless of the project’s size, except that for larger projects, the errors are naturally also heftier. Reference class forecasting is valuable, especially when historical data is available.
- Significance of having talented project management — The problems resulting from a lack of skilled project management and sound practices are familiar. Technical proficiency is a must if project managers are to participate effectively in decision-making sessions on changing project scope or reprioritizing features in a roadmap.
- Planning, execution, and learning — Overcommitment, lock-in, escalating commitments, and the uniqueness bias are also preponderant among project planners. Self-awareness of these flaws is vital when the stakes are high. Blindness to rare events and their adverse side effects always carries substantial risk. Use Agile where uncertainty and business-requirement volatility are highest, and a hybrid Waterfall methodology otherwise.
- Solution architecture and solution design — Design modular solutions to maximize learning opportunities. Use an iterative exploratory approach to cater for unarticulated needs. Emphasize business value over innovation. Use reliable and mature technology to serve your customers.
7. References
- Make Megaprojects More Modular — by Bent Flyvbjerg
- What You Should Know About Megaprojects and Why: An Overview — by Bent Flyvbjerg
- From Nobel Prize to Project Management: Getting Risks Right — by Bent Flyvbjerg
- Making Sense of the Impact and Importance of Outliers in Project Management through the Use of Power Laws — by Alexander Budzier and Bent Flyvbjerg
- Lock-in and Its Influence on the Project Performance of Large-Scale Transportation Infrastructure Projects: Investigating the Way in Which Lock-In Can Emerge and Affect Cost Overruns — by Bent Flyvbjerg, Chantal C. Cantarelli, Bert van Wee, Eric J E Molin
- Thinking, Fast and Slow — by Daniel Kahneman
- Antifragile — by Nassim Nicholas Taleb