Engineering Superior Production Processes: A No-Nonsense Guide for Everyone

Georges Lteif

Software Engineer

Last Updated on May 19, 2022.

1. Overview

One of the subtle and fundamental principles of software development is that no single person can deliver software by themselves; even in the most trivial cases, at least one person from the client side and one from the vendor side must be involved.

Straightforward as this seems, it implies that the constant collaboration of all parties is required, as is the case in any complex social group.

This collaboration must be effective and efficient, two concepts we will explore later in this article. Among the prerequisites for achieving these objectives, properly engineered production processes must be a top priority, and they will be the topic of our present discussion.

In the software business, these processes constitute the Software Development Lifecycle or SDLC.

Writing down the rules governing the SDLC is a significant step, but it is not sufficient by itself to guarantee success.

You need a few metrics to gauge the process's performance and a mechanism to fix or update it if it fails or becomes obsolete. Process improvement would then need to kick off and remain steady for the entire lifetime of the business.

Designing superior production processes and applying a strategy for continuous improvement can be complicated.

In the following paragraphs, we will try to provide insights that will hopefully help you design efficient and effective processes.

It is important to note that for these suggestions to be usable, they need to satisfy two main criteria.

  • Firstly, the concepts must be methodology-agnostic, i.e. they must be universal enough to apply to any project management and delivery methodology we choose (Agile, Waterfall, or DevOps).
  • Secondly, any process design that satisfies those rules must be in total harmony with the principles we laid out for Operational Excellence. For example, there is no point in optimizing our activities for maximum performance at the cost of the staff’s wellbeing.

To complete the discussion, we will examine some of the symptoms of poor processes. We will also have a few words to say on Six Sigma and perhaps borrow some powerful concepts.

Next, we describe the rules that should be applied when designing superior processes.

Finally, we briefly discuss process improvement and redesign (the topic is discussed in fuller detail in another article).


2. Performance and Poor Processes

2.1 Symptoms of Poor Processes

The following symptoms are usually observed when poorly designed and implemented processes are in play:

Symptom 1: Unable to meet basic customer requirements, let alone offer any innovative solutions.

In the customer satisfaction model proposed by Noriaki Kano, there are three levels of customer requirements.

  1. Basic Requirements (or Dissatisfiers). These are the must-haves. You get absolutely no credit for meeting these requirements but a whole lot of dissatisfaction if you don’t.
  2. Variable Requirements (or Satisfiers). These have a direct impact on your organization’s ratings. They typically include timely delivery and good customer service.
  3. Latent Requirements (or Delighters). These set you apart from competition and are usually the product of innovation.
Symptom 2: Inconsistent results that vary wildly between teams, projects, and people.

In Six Sigma parlance, this is known as variation. When variation is high, performance wobbles significantly, and constant management effort is needed to get performance back on track.

Symptom 3: Process improvements are typically local, sporadic, and short-lived; they are not properly assessed, standardized, or propagated to other teams.

The improvement of production processes must itself be a robust, well laid-out process.

It must also be sustainable so that continuous improvement, propagated across the organization, can result in a net positive and measurable difference.

Symptom 4: Scaling problems immediately arise when the team takes on bigger projects or grows in size.

Fragile systems can fracture and break under pressure. One of the challenges observed in startups, for example, is their inability to scale once they start achieving exponential growth.


Among IT projects, failure rate correlates heavily with project size. An IT project with a budget over $1M is 50% more likely to fail than one with a budget below $350,000.
Gartner

The problem is no better for larger companies as they take on bigger projects. Poor processes lead to less resilience in the face of challenges.

Symptom 5: Frequent management intervention is required to put out fires.

Constant firefighting and crisis management are sure signs of processes breaking down.

2.2 Poor Processes and Organizational Culture

Organizational culture plays a pivotal role in the performance of a group. This can make things a bit confusing as the symptoms of a toxic culture can overlap with those of poor processes. A thorough understanding of both is required to identify the subtle nuances.

A great culture coupled with refined knowledge and expertise on the leadership level creates superior processes.

Sadly, we cannot say that this would be true the other way around. In fact, good processes and great knowledge can diminish the effects of toxic cultures but will not eliminate them.


3. Modelling Production Processes

3.1 Proposed Model

The model we use is derived from the Supplier, Input, Process, Output, Customer model, also known as SIPOC.

In its basic form, we can define production processes as a set of activities that takes raw input and transforms it into consumable outputs.

Software Production Process Model

We can then break up those activities into different, specialized stages.

Each stage takes a product that’s still a Work-In-Progress (WIP), adds value to it, and pushes the modified WIP to the next stage.

Once the process is completed, the product would have increased in value by an amount that we will call:

V = \displaystyle\sum_{i=1}^{s} V_i

Where s is the total number of stages and V_i is the business value generated in stage i.

The Six Sigma model introduces the notion of opportunity. A complex product is one where more than one thing could go wrong. Each “thing” that can go wrong is one opportunity.

This view makes the comparison of the production processes of different or complex products possible.

This makes the value introduced at each stage the sum of the opportunities added:

V_i = \displaystyle\sum_{j=1}^{m_i} O_j

Where m_i is the number of opportunities introduced at stage i and O_j is the value added by opportunity j.
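To make the model concrete, here is a minimal Python sketch of the two sums above; the stage names and dollar figures are invented for illustration.

```python
# A minimal sketch of the value model: each stage contributes a list of
# "opportunities", and the stage value V_i is their sum.
# Stage names and dollar figures are hypothetical.
stages = {
    "requirements": [5_000, 2_000],
    "development":  [20_000, 8_000, 4_000],
    "testing":      [6_000],
}

stage_values = {name: sum(opps) for name, opps in stages.items()}  # V_i
total_value = sum(stage_values.values())                           # V

print(stage_values)  # {'requirements': 7000, 'development': 32000, 'testing': 6000}
print(total_value)   # 45000
```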

3.2 Defining Value-Adding Effort

3.2.1 Tangible Value

Tangible business value can be observed, measured, and compared. The most common form this value can take is the dollar figure that we believe customers will pay to acquire and use the product. In this case, the term “business value” is a synonym for “company asset”.

A product, software, idea, or service is valuable if it satisfies the following conditions:

  1. Addresses the customer’s business needs
  2. Has a low cost of ownership and maintenance
  3. Boasts excellent customer support

Tangible Business Value can be present in any of the following offerings:

  1. Software code and applications
  2. Data and insights derived from it
  3. Customer services or consultancy
  4. Intellectual property and innovations
  5. Hardware, specialized tools, or high-tech equipment
  6. A differentiating and efficient business model

3.2.2 Intangible Value

There are also other forms of value that we cannot directly observe or measure such as benefit to society.

A familiar example would be the number of families whose members have a job in the company. Another example is how certain technologies have considerably improved the quality of life of some segments of a population.

For the majority of small businesses, it is perhaps OK not to worry too much about anything other than the monetary value of their products.

Intangible value is more relevant to mega-corporations, whose products can have a significant impact on society.

3.2.3 Resellable Value

When producing IT products such as software, you will typically want to sell the same product as many times as you can.

  • In fact, one of the reasons stocks in software companies are valued based on revenue, rather than profits, is exactly their resellable value.

We can now rewrite the business value of a certain product as a combination of two elements, as follows:

V = V_0 + a V_A

Where:

  • V_0  is the monetary value that can be tapped only once
  • V_A  is the monetary value that can be received for each new sale
  • a  is the size of the Serviceable Addressable Market (SAM)

A more conservative approach would be to take the Serviceable Obtainable Market instead, which is usually a subset of the SAM.
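As a quick illustration of V = V_0 + a V_A, consider the following sketch; all figures and market sizes are hypothetical.

```python
# Resellable value: V = V_0 + a * V_A (hypothetical figures).
v0 = 250_000   # V_0: one-off value, tapped only once (e.g. a bespoke integration)
v_a = 40_000   # V_A: value received for each new sale (e.g. a license fee)

sam = 120      # Serviceable Addressable Market (number of potential customers)
som = 30       # Serviceable Obtainable Market (a realistic subset of the SAM)

value_sam = v0 + sam * v_a   # 5,050,000 -- using a = SAM
value_som = v0 + som * v_a   # 1,450,000 -- the more conservative estimate
```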

3.3 Defining Non-Value-Adding Effort, or Waste

We can define waste as any activity, expense, or overhead that requires effort but does not add any value to the overall product.

Value and Waste from Production Processes

To better understand waste, we break it down into four separate categories.

W_T = W_O + W_H + W_E + W_R

We call W_T the total amount of waste generated throughout the whole process and:

  1. W_O  is the amount of overhead required for the process to complete.
  2. W_H  is the amount of hidden waste; more on that in the next paragraph.
  3. W_E  is the waste resulting from low skill levels, waiting times, and inefficient processes.
  4. W_R  is the amount of work that needs to be redone because something was missed in the first pass.
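A minimal sketch of this decomposition, with hypothetical figures in person-hours:

```python
# W_T = W_O + W_H + W_E + W_R, all in person-hours (hypothetical figures).
w_overhead     = 120  # W_O: management, compliance, internal documentation
w_hidden       = 80   # W_H: waste that only surfaces later, e.g. technical debt
w_inefficiency = 60   # W_E: waiting times, silos, low skill levels
w_rework       = 90   # W_R: work redone because something was missed

w_total = w_overhead + w_hidden + w_inefficiency + w_rework
print(w_total)  # 350
```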

Overhead Waste

This type of waste refers to any effort expended to get the work done but which does not contribute directly to the product’s value.

A perfect example of overhead is management. Management intervention can vary depending on how smooth and resilient internal processes are.

Other forms of overhead can include internal documentation, compliance, and security measures.

Hidden Waste

When calculating the price of a certain consumable product, most companies do not factor in hidden costs such as pollution or health and safety risks. This keeps prices competitive, at a cost that will only become evident in the future.

A similar concept applies to software development, where waste generated today only becomes visible in the future. The most notorious example of such waste is technical debt.

Unfortunately, we cannot detect or measure hidden waste as easily as we would like.

This leaves us with an approximate value W_A of waste which we can think of as a fraction of the total waste W_T :

W_A = \eta W_T , \quad \eta \in [0,1]

Waste from Inefficiency

This type of waste is the result of the inefficient execution of production processes.

Some examples of such waste can be the result of:

  1. Poorly trained resources
  2. Poor cross-functional team coordination
  3. Idling and waiting times
  4. Silos
  5. Ambiguous roles/responsibilities
  6. Low-quality execution standards.

Rework Waste

Any time a development task is not completed in the first pass, some of its subtasks will need to be reworked.

The top reason for rework in software development is unclear requirements.

There are other reasons as well, such as:

  1. Not following coding and security standards
  2. Compliance problems
  3. Poor user experience.

4. Cost of Poor Quality (COPQ)

According to this report, the Cost of Poor Quality (COPQ) in the US software industry is a whopping $2.08 trillion.

COPQ in the US Software Industry, 2020

According to the same report, the COPQ divides into three major categories:

  1. Software failures – 75% or $1.56 trillion
  2. Legacy code – around $520 billion
  3. Unsuccessful development projects estimated at $260 billion

COPQ is another metric that can be used to measure the efficiency of your production processes.

Its advantages over Waste are:

  1. It provides a fair tool for comparing the performance of two processes or products
  2. It speaks a language that the leadership can understand (money)
  3. It allows the leadership to perform a cost/benefit analysis for addressing those issues using the same unit of scale (the dollar value)
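Because COPQ and the cost of fixing a process share the same unit, the cost/benefit check becomes simple arithmetic; here is a sketch with invented numbers.

```python
# Cost/benefit analysis using COPQ as the common unit (hypothetical numbers).
annual_copq    = 400_000  # estimated yearly cost of poor quality for a product
fix_cost       = 150_000  # one-off cost of the proposed process improvement
copq_reduction = 0.40     # fraction of the COPQ the fix is expected to remove

annual_savings = annual_copq * copq_reduction  # 160,000 per year
payback_years  = fix_cost / annual_savings     # ~0.9 years

print(f"Savings: ${annual_savings:,.0f}/yr, payback in {payback_years:.1f} years")
```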

5. Superior Processes

What does it mean to have Superior Processes?

What Having Superior Processes Means

Having superior processes in place means that:

  1. Processes are effective, customers are happy
  2. Product quality is consistent, variations are low
  3. Efficiency is high, the product is profitable
  4. Operations are smooth, crises are rare
  5. The processes that control the improvement of production processes are themselves smooth and efficient, so the system is capable of responding to change
  6. Performance can be tracked, Key Performance Indicators can be measured

In fact, we can define superior processes as those that exhibit the following properties.

5.1 Effectiveness

Effectiveness describes your ability to deliver on time and within budget while meeting the customer’s expectations. All three pillars (time, budget, and quality) form the basis of this metric’s assessment.

You can assess effectiveness by looking at two metrics.

The first metric is how far off we are from the project’s original estimates in terms of timeline, budget, and scope.

The second metric measures the variation of the first one between projects; here, we are looking for consistency in our performance.
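Both metrics are easy to compute if estimates and actuals are tracked per project; a sketch follows, with hypothetical figures in weeks.

```python
from statistics import mean, stdev

# Hypothetical per-project figures (estimated vs. actual duration, in weeks).
projects = [
    {"estimated": 10, "actual": 12},
    {"estimated": 20, "actual": 21},
    {"estimated": 8,  "actual": 13},
]

# Metric 1: how far off each project was from its original estimate.
overruns = [(p["actual"] - p["estimated"]) / p["estimated"] for p in projects]

# Metric 2: the variation of metric 1 between projects (consistency).
print(f"mean overrun: {mean(overruns):.0%}, spread: {stdev(overruns):.0%}")
```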

5.2 Efficiency

Efficiency describes your ability to deliver quality products within the budgeted/optimal amount of resources.

With efficient processes, the cost is tamed, profitability is high, and the price is competitive.

Superior processes are efficient if they produce little waste. To model efficiency, we propose the following formula:

E_T = \dfrac{V}{W_T}

with the per-stage efficiency defined as E_i = \dfrac{V_i}{W_i} , where W_i is the amount of waste generated at stage i.

This metric can be used to monitor existing processes. You would typically want to kick off a process improvement exercise if efficiency goes below a certain threshold, or if it exhibits a constant downward trend.

The problem with this formulation is that it uses W_T , a quantity that cannot be easily measured, rather than the measurable W_A .

Substituting W_A for W_T won’t solve the problem either; since W_A = \eta W_T \le W_T , doing so would overestimate the efficiency of our processes.
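Here is a sketch of how this metric could be monitored in practice; the η estimate and the threshold are assumptions, not part of the model above.

```python
# Estimate E_T = V / W_T when only W_A = eta * W_T is measurable.
def efficiency(value: float, measured_waste: float, eta: float = 0.7) -> float:
    """Correct the measurable waste by an estimated eta to approximate W_T."""
    estimated_total_waste = measured_waste / eta
    return value / estimated_total_waste

THRESHOLD = 2.0  # hypothetical trigger for a process-improvement exercise

e = efficiency(value=45_000, measured_waste=15_000)  # ~2.1
if e < THRESHOLD:
    print("Efficiency below threshold: kick off process improvement")
```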

5.3 Smooth Operations

When operations run smoothly, at no point does anyone have to stop and think about what to do next, nor do they have to improvise to get things moving again.

In fact, superior processes:

  1. Prevent deadlocks, where two parties can end up waiting on each other indefinitely (see the sketch after this list)
  2. Eliminate single points of failure
  3. Are rigorous enough to cover all types of scenarios, eliminating any need for improvisation
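The deadlock rule has a direct analogue in code: acquire shared resources in one global order, so two parties can never each hold a resource the other is waiting for. A minimal Python sketch:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def do_work(locks):
    # Acquire locks in a fixed global order (here, by object id) so that
    # two threads can never hold one lock each while waiting for the other.
    ordered = sorted(locks, key=id)
    for lock in ordered:
        lock.acquire()
    try:
        pass  # ... work that needs both resources ...
    finally:
        for lock in reversed(ordered):
            lock.release()

# Both threads request the locks in opposite orders, yet cannot deadlock.
t1 = threading.Thread(target=do_work, args=([lock_a, lock_b],))
t2 = threading.Thread(target=do_work, args=([lock_b, lock_a],))
t1.start(); t2.start(); t1.join(); t2.join()
```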

You can identify smooth operations very easily just by asking how many times your manager needs to intervene in your day-to-day routine to get things moving again.

5.4 Process Improvement Should Be Easy

Process improvement is essential if your organization wants to survive amidst constant internal and external pressures.

This means that executing process-improvement exercises should be smooth, easy, and non-traumatic.

The top condition for successful process improvement is to have a solid process for just that, with clear roles and responsibilities.

Companies using Six Sigma have dedicated staff (Green Belts and Black Belts) for exactly this purpose.

Process improvements should result in a measurable net benefit in efficiency over time.

To model this property, we assume that process improvement efforts are factored into W_O .

For process improvement to take place, the efficiency difference of the system over an arbitrary period of time T_P  should be positive:

\Delta_E = E_T(t + T_P) - E_T(t) > 0, \quad t > T_0

Where T_0  is the time at which the process changes were deployed into production.

While overhead may increase temporarily due to the additional effort invested in process improvement, the net overall benefit in efficiency should be noticeable in the long term.
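The condition \Delta_E > 0 translates into a simple before/after comparison, assuming efficiency is sampled periodically; the samples below are hypothetical.

```python
# Compare E_T over a period T_P before and after the change (t > T_0).
samples_before = [2.1, 2.0, 2.2]  # hypothetical E_T samples before deployment
samples_after  = [2.3, 2.4, 2.5]  # hypothetical E_T samples after deployment

def avg(xs):
    return sum(xs) / len(xs)

delta_e = avg(samples_after) - avg(samples_before)
print(f"Delta_E = {delta_e:+.2f}")  # +0.30 -> a net positive improvement
```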


6. Guide to Implementing Superior Processes

Now that we know what superior processes look like, let’s see if we can come up with some rules on how to properly build and govern them.

6.1 Accountability and Responsibility

The first rule to look at when designing superior processes is perhaps the proper allocation of ownership and accountability.

The designated accountable person for a process:

  1. Makes sure all the elements of the process are well-defined and documented. If you are using the SIPOC model, these would be the Supplier, Inputs, Outputs, Customer, internal stages, and operational guidelines.
  2. Establishes the process governing process-improvement exercises.
  3. Regularly follows up with responsible staff and initiates discussions on how things can be improved.

In terms of responsibility, the person responsible for the process (or a stage in the process) has the following duties:

  1. Ensure that the process is efficient and that it delivers on its promises.
  2. Ensure that process improvements are executed whenever required.

A few rules of thumb:

6.1.1 Every Process Must Have an Owner

This may appear trivial, but the process owner is not necessarily the person executing the process; the owner needs to be explicitly assigned.

6.1.2 Multi-Level Failures

If something goes horribly wrong, it is usually a sign of a process failure rather than an individual one. When things do go wrong, make sure you can thoroughly understand what happened.

6.2 Inputs, Outputs, and Transformations

Every stage of the process should have a well-defined set of inputs and outputs in line with the SIPOC model. These are generally referred to as artifacts.

Artifacts can be:

  1. Project documents such as project plans
  2. Technical documentation such as design documents or user guides
  3. Source code
  4. Test and automation scripts

Activities that consume inputs and produce outputs are referred to in this context as transformations or operations.
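One lightweight way to keep each stage’s inputs, outputs, and transformation explicit is to record them as structured data; here is a sketch (the field names are our own, not part of any standard).

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One process stage described in SIPOC-like terms (illustrative only)."""
    name: str
    inputs: list[str]    # artifacts consumed, e.g. design documents
    outputs: list[str]   # artifacts produced, e.g. source code
    transformation: str  # what the stage does to add value
    owner: str           # the explicitly assigned process owner

development = Stage(
    name="development",
    inputs=["functional spec", "design document"],
    outputs=["source code", "unit tests"],
    transformation="implement the design following the Development Guide",
    owner="dev lead",
)
```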

A few rules-of-thumb to follow:

6.2.1 Securing the Inputs

Do not start the process (or kick off a stage) unless the relevant inputs are ready. It is a waste of time and a source of frustration if the assumptions you have made turn out to be untrue.

6.2.2 Minimizing Byproducts

Some artifacts are produced as byproducts of the process and are not propagated between stages. They usually serve no purpose other than as a workspace for the current stage and should be minimized. Class-level unit tests are one stark example.

6.2.3 Articulate Transformations

Make sure that everybody is clear on what the transformation processes consist of. Also, people need to understand what skills are needed, what tools should be used, and when. For development, have a Development Guide ready. For testing, make sure you publish a Test Strategy.

6.3 Utility and Applicability

Superior processes should be useful and applicable in a very wide range of scenarios.

Occasions where staff need to improvise or get assistance from management to unblock a situation should be extremely rare.

Having this in place means less overhead, reduced waiting times, and higher efficiency.

This means that every distinct scenario should have a specific process or sub-process that governs it.

Example of a Different Scenario: Project Size

One example where you might want to consider having different sub-processes is project size. The number of documents that you prepare and the templates you use may vary between projects of different sizes. You might also want to apply additional stages to the bigger ones.
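Such size-based tailoring can be captured in a simple lookup table so that nobody has to improvise it per project; the tiers, artifacts, and budget thresholds below are illustrative only (the thresholds echo the Gartner figures quoted earlier).

```python
# Hypothetical tailoring table: artifacts and extra stages per project size.
PROCESS_BY_SIZE = {
    "small":  {"artifacts": ["one-page plan"],
               "extra_stages": []},
    "medium": {"artifacts": ["project plan", "test strategy"],
               "extra_stages": ["design review"]},
    "large":  {"artifacts": ["project plan", "test strategy", "risk register"],
               "extra_stages": ["design review", "architecture board"]},
}

def process_for(budget_usd: float) -> dict:
    """Pick a sub-process tier from the project budget (thresholds assumed)."""
    if budget_usd < 350_000:
        return PROCESS_BY_SIZE["small"]
    if budget_usd < 1_000_000:
        return PROCESS_BY_SIZE["medium"]
    return PROCESS_BY_SIZE["large"]

print(process_for(500_000)["artifacts"])  # ['project plan', 'test strategy']
```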

A few rules-of-thumb to follow:

6.3.1 Handling New Scenarios

If a situation arises that is not covered by existing processes, try to set up and define a process that deals with it instead of creating ad-hoc, one-time solutions.

6.3.2 Usability in Different Situations

It is quite difficult to implement a universal, one-size-fits-all process that is also efficient. Projects of different sizes and tasks of a different nature (requirements gathering, development, testing, and so on) will certainly need different handling methods.

6.4 Quality Assurance Checks and Balances

Rule number four decrees that quality problems must never be hidden or passed on to the next stage.

You might want to stay away from sophisticated metrics that measure minute details of every process.

These tools are either reminiscent of bygone ages (Taylorism) or only applicable to large manufacturing businesses.

For small software development teams, they are almost always counterproductive.

Instead, try to form a culture that rewards swift and immediate resolution of quality problems.

Another potent method to apply is the standardization of processes across your organization.

This will help you ensure that improvements are propagated between different teams and that the results of those improvements are statistically significant.

6.5 Involving Stakeholders

Processes that are meant to be followed by smart individuals need to make sense to them. This is rule number 5.

Therefore, relevant stakeholders must be involved in the decision-making that leads to those processes in the first place.

Stakeholder management and good communication ensure group buy-in and that processes will be followed. Remember: no involvement, no commitment.

Beware of Vetocracies

Do not confuse stakeholder involvement with an (ugly) form of democracy, sometimes referred to as vetocracy, where decisions can only move forward if unanimously approved.

6.6 Clarity and Accessibility

Finally, always make sure to publish an Operations Guide with all the details of the processes mentioned above.

This will help everybody get on the same page, provide constructive feedback when required, and make continuous improvements.


7. Further Reading


  • Technical Risk Management and Decision Analysis — Introduction and Fundamental Principles
  • Complexity and Complex Systems From Life on Earth to the Universe: A Brief Introduction
  • Book Review: Programming the Universe — A Quantum Computer Scientist Takes on the Cosmos
  • From Abstract Concepts to Tangible Value: Software Architecture in Modern IT Systems
  • Business Requirements and Stakeholder Management: An Essential Guide to Definition and Application in IT Projects
