Engineering Superior Production Processes: A No-Nonsense Guide for Everyone

1. Overview

One of software development’s subtle but fundamental principles is that no single person can deliver software alone; even in the most trivial cases, at least one person from the client side and one from the vendor side must be involved.

Straightforward as it seems, this principle implies that the constant collaboration of all parties is required, as is the case in any complex social group.

This collaboration must be effective and efficient, two concepts we will explore later in this article. Among the prerequisites for achieving these objectives, properly engineered production processes must be a top priority, and they are the topic of our present discussion.

These processes constitute the Software Development Lifecycle (SDLC) in the software business. Writing down the rules governing the SDLC is a significant step, but it is not sufficient by itself to guarantee success.

You also need a few metrics to gauge process performance and a mechanism to fix or update the process if it fails or becomes obsolete. Process improvement must then kick off and remain steady for the entire lifetime of the business.

Designing superior production processes and applying a strategy for continuous improvement can be complicated.

In the following paragraphs, we will try to provide insights that will hopefully help you design efficient and effective processes.

It is important to note that for these suggestions to be usable, they need to meet two main criteria.

  • Firstly, the concepts must be methodology-agnostic, i.e., universal enough to apply to any project management and delivery methodology we choose (Agile, Waterfall, or DevOps).
  • Secondly, any process design that satisfies those rules must be in total harmony with the principles we laid out for Operational Excellence. For example, there is no point in optimizing our activities for maximum performance at the cost of the staff’s wellbeing.

To complete the discussion, we will examine some of the symptoms of poor processes. We will also have a few words to say on Six Sigma and perhaps borrow some powerful concepts. Next, we describe the rules that should be applied when designing superior processes. Finally, we briefly discuss process improvement and redesign (the topic is discussed in fuller detail in another article).

2. Performance and Poor Processes

2.1 Symptoms of Poor Processes

The following symptoms are usually observed when poorly designed and implemented processes are in play:

Symptom 1: Unable to meet basic customer requirements, let alone offer any innovative solutions.

In the customer satisfaction model proposed by Noriaki Kano, there are three levels of customer requirements.

  1. Basic Requirements (or Dissatisfiers). These are the must-haves. You get no credit for meeting those requirements but a lot of dissatisfaction if you don’t.
  2. Variable Requirements (or Satisfiers). These have a direct impact on your organization’s ratings. They typically include timely delivery and good customer service.
  3. Latent Requirements (or Delighters). These set you apart from the competition and are usually the product of innovation.
Symptom 2: Results vary wildly between teams, projects, and people; consistency is lacking.

In Six Sigma parlance, this is known as variation. When variation is high, performance wobbles significantly, and management must invest constant effort to get it back on track.

Symptom 3: Process improvements are typically local, sporadic, and short-lived; they are not properly assessed, standardized, or propagated to other teams.

The improvement of production processes must itself be a robust, well-laid-out process.

It must also be sustainable so that continuous improvement, propagated across the organization, can result in a net positive and measurable difference.

Symptom 4: Scaling problems immediately arise when the team takes on bigger projects or grows.

Fragile systems can fracture and break under pressure. One of the challenges observed in startups, for example, is their inability to scale once they start achieving exponential growth.

Among IT projects, the failure rate correlates heavily with project size. An IT project with a budget over $1M is 50% more likely to fail than one with a budget below $350,000.

— Gartner

The situation is no better for larger companies as they take on bigger projects. Poor processes lead to less resilience in the face of challenges.

Symptom 5: Frequent management intervention is required to put out fires.

Constant firefighting and crisis management are sure signs of processes breaking down.

2.2 Poor Processes and Organisational Culture

Organisational culture plays a pivotal role in the performance of a group. This can confuse things as the symptoms of a toxic culture can overlap with those of poor processes. A thorough understanding of both is required to identify the subtle nuances.

A great culture, refined knowledge, and expertise on the leadership level create superior processes.

Sadly, we cannot say this would be true the other way around. Good processes and great knowledge can diminish the effects of toxic cultures but will not eliminate them.

3. Modelling Production Processes

3.1 Proposed Model

The model we use is derived from the Supplier, Input, Process, Output, Customer model, also known as SIPOC.

In its basic form, we can define production processes as activities that transform raw input into consumable outputs.

Software Production Process Model

We can then break up those activities into different, specialized stages.

Each stage takes a product that’s still a Work-In-Progress (WIP), adds value, and pushes the modified WIP to the next stage.

Once the process is completed, the product will have increased in value by an amount that we will call:

V = \displaystyle\sum_{i=1}^{s} V_i

Where s is the total number of stages, and V_i is the business value generated in stage i.

The Six Sigma model introduces the notion of opportunity. A complex product is one where more than one thing could go wrong. Each “thing” that can go wrong is one opportunity.

This view makes the comparison of the production processes of different or complex products possible.

This makes the value introduced at each stage the sum of the opportunities added:

V_i = \displaystyle\sum_{j=1}^{m_i} O_j
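
To make the model concrete, below is a minimal Python sketch of the two formulas above, where m_i is simply the number of opportunities recorded for stage i. The stage names and figures are hypothetical examples, not data from this article.

    # Hypothetical example: each stage lists the value of the opportunities (O_j) it adds.
    stages = {
        "requirements": [5.0, 3.0],
        "development": [20.0, 12.0, 8.0],
        "testing": [4.0, 2.0],
    }

    # V_i = sum of the opportunities added in stage i
    stage_values = {name: sum(opps) for name, opps in stages.items()}

    # V = sum of the value added across all s stages
    total_value = sum(stage_values.values())

    print(stage_values)   # {'requirements': 8.0, 'development': 40.0, 'testing': 6.0}
    print(total_value)    # 54.0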

3.2 Defining Value-Adding Effort

3.2.1 Tangible Value

Tangible business value can be observed, measured, and compared. The most common form of this value is the dollar figure customers will pay to acquire and use the product. In this case, “business value” is a synonym for “company asset”.

A product, software, idea, or service is valuable if it satisfies the below conditions:

  1. Addresses the customer’s business needs
  2. Has a low cost of ownership and maintenance
  3. Boasts excellent customer support

Tangible Business Value can be present in any of the following offerings:

  1. Software code and applications
  2. Data and insights derived from it
  3. Customer services or consultancy
  4. Intellectual property and innovations
  5. Hardware, specialized tools, or high-tech equipment
  6. A differentiating and efficient business model

3.2.2 Intangible Value

We cannot directly observe or measure other forms of value, such as benefit to society.

A familiar example would be the number of family members with a job in the company. Another example is how certain technologies have considerably improved the quality of life of some population segments.

For most small businesses, it is perhaps OK not to worry too much about anything other than the monetary value of their products.

Intangible value is more relevant to mega-corporations, whose products can significantly impact society.

3.2.3 Resellable Value

It is typical when producing IT products such as software that you would want to sell them as many times as possible.

  • One of the reasons stocks in software companies are valued on revenue rather than profit is precisely their resellable value.

We can now rewrite the business value of a certain product as a combination of two elements, as follows:

V = V_0 + a V_A

Where:

  • V_0  is the monetary value that can be tapped only once
  • V_A  is the monetary value that can be received for each new sale
  • a  is the size of the Serviceable Addressable Market (SAM)

A more conservative approach would be to take the Serviceable Obtainable Market instead, which is usually a subset of the SAM.
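
As a hedged illustration of this formula (all figures below are hypothetical), the one-off and resellable components can be combined as follows, using either the SAM or the more conservative SOM for a:

    # Hypothetical figures for V = V_0 + a * V_A
    v_0 = 150_000    # one-off monetary value, e.g. an initial bespoke engagement (USD)
    v_a = 2_000      # monetary value received for each new sale (USD)
    sam = 5_000      # a = size of the Serviceable Addressable Market (potential sales)
    som = 1_200      # conservative alternative: Serviceable Obtainable Market

    v_using_sam = v_0 + sam * v_a    # 10,150,000
    v_using_som = v_0 + som * v_a    # 2,550,000
    print(v_using_sam, v_using_som)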

3.3 Defining Non-Value-Adding Efforts or Waste

We can define waste as any activity, expense, or overhead that requires effort but does not add value to the overall product.

Value and Waste from Production Processes

To better understand waste, we break it down into four separate categories.

W_T = W_O + W_H + W_E + W_R

We call W_T  the total amount of waste generated throughout the whole process, where:

  1. W_O  is the amount of overhead required for the process to complete.
  2. W_H  is the amount of hidden waste. More on that in the next paragraph.
  3. W_E  refers to waste resulting from low skill levels, waiting times, and inefficient processes.
  4. W_R  is the amount of work that needs to be redone because something was missed in the first run.
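
As a purely illustrative example, if a delivery cycle generates 6 person-days of overhead, 4 of hidden waste, 8 of waste from inefficiency, and 2 of rework, then:

W_T = W_O + W_H + W_E + W_R = 6 + 4 + 8 + 2 = 20 \text{ person-days}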

Overhead Waste

This type of waste refers to any effort that is required to complete the work but does not contribute directly to the product’s value.

A perfect example of overhead is management. Management intervention can vary depending on how smooth and resilient internal processes are.

Other forms of overhead can include internal documentation, compliance, and security measures.

Hidden Waste

This is waste that is generated today but only becomes visible in the future. In software development, the most notorious example of such waste is technical debt.

Unfortunately, we cannot detect or measure hidden waste as easily as we would like.

This leaves us with an approximate value W_A of waste, which we can think of as a fraction of the total waste W_T :

W_A = \eta W_T , \quad \eta \in [0,1]
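
As a hedged numerical illustration, if we can observe and measure 16 person-days of waste ( W_A = 16 ) and believe we only ever see about 80% of the waste we actually generate ( \eta = 0.8 ), the true total is closer to:

W_T = \dfrac{W_A}{\eta} = \dfrac{16}{0.8} = 20 \text{ person-days}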

Waste from Inefficiency

This type of waste results from the inefficient execution of production processes.

Such waste can be the result of:

  1. Poorly trained resources
  2. Poor cross-functional team coordination
  3. Idling and waiting times
  4. Silos
  5. Ambiguous roles/responsibilities
  6. Low-quality execution standards.

Rework Waste

Some subtasks must be reworked when a development task is not completed in the first pass.

The top reason for rework in software development is unclear requirements.

There are other reasons as well, like:

  1. Not following coding and security standards
  2. Compliance problems
  3. Poor user experience.

5. Cost of Poor Quality (COPQ)

According to this report, the Cost of Poor Quality (COPQ) in the US software industry is a whopping $2.08 trillion.

COPQ in Software Industry in the US, 2020

The COPQ is divided, according to the same report, into three major pieces:

  1. Software failures – $1.56 trillion (75%)
  2. Legacy code – around $520 billion
  3. Unsuccessful development projects – an estimated $260 billion

COPQ is another metric that can be used to measure the efficiency of your production processes.

Its advantages over Waste are:

  1. It provides a fair tool for comparing the performance of two processes or products
  2. It speaks a language that the leadership can understand (money)
  3. It allows the leadership to perform a cost/benefit analysis for addressing those issues by using the same unit of scale (the dollar value)

6. Superior Processes

What does it mean to have Superior Processes?

What Having Superior Processes Means

Having Superior Processes in place means that:

  1. Processes are effective, and customers are happy
  2. Product quality is consistent, variations are low
  3. Efficiency is high, the product is profitable
  4. Operations are smooth, and crises are rare
  5. The processes that control the improvement of production processes are themselves smooth and efficient, so the system can respond to change
  6. Performance can be tracked, Key Performance Indicators can be measured

We can define superior processes as those that exhibit the following properties.

6.1 Effectiveness

Effectiveness describes your ability to deliver on time and within budget. Equally important is meeting the customer’s expectations. All three pillars, time, budget, and quality, form the basis of this metric’s assessment.

You can assess effectiveness by looking at two metrics.

The first metric is how far off we are from the original effort estimations of the project in terms of timeline, budget, and scope.

The second metric measures the variation of the first one between projects. Here we are looking for consistency in our performance.
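
Below is a minimal Python sketch of how these two metrics could be computed, assuming we record estimated and actual effort per project; the project names and figures are hypothetical, and the same calculation can be repeated for budget and scope.

    from statistics import mean, pstdev

    # Hypothetical estimated vs. actual effort per project (person-days)
    projects = {
        "project_a": {"estimate": 100, "actual": 120},
        "project_b": {"estimate": 80, "actual": 88},
        "project_c": {"estimate": 150, "actual": 210},
    }

    # Metric 1: relative deviation from the original estimate, per project
    deviations = {
        name: (p["actual"] - p["estimate"]) / p["estimate"]
        for name, p in projects.items()
    }

    # Metric 2: how much that deviation varies between projects (consistency)
    average_deviation = mean(deviations.values())
    spread = pstdev(deviations.values())

    print(deviations)                 # {'project_a': 0.2, 'project_b': 0.1, 'project_c': 0.4}
    print(average_deviation, spread)  # ~0.233, ~0.125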

6.2 Efficiency

Efficiency describes your ability to deliver quality products within the budgeted/optimal resources.

With efficient processes, the cost is tamed, profitability is high, and the price is competitive.

Superior processes are efficient if they produce little waste. To model efficiency, we propose the following formula:

E_T = \dfrac{V}{W_T} = \displaystyle\sum_{i=1}^{s} \dfrac{V_i}{W_i}

Where W_i is the waste generated at stage i .

This metric can be used to monitor existing processes. You would typically want to kick off a process improvement exercise if efficiency drops below a certain threshold or exhibits a constant downward trend.

This formulation uses W_T , a quantity that cannot be easily measured.

Substituting the measurable W_A  for W_T  does not fully solve the problem either; since W_A  captures only a fraction of the total waste, doing so would overestimate the efficiency of our processes.
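
Here is a minimal sketch of how E_T could be tracked, assuming value and waste can be estimated per stage in the same units. All figures are hypothetical, and because only the approximate waste is usually measurable, the result should be read as an upper bound on the true efficiency.

    # Hypothetical per-stage value and waste estimates (same units, e.g. person-days)
    stages = {
        "requirements": {"value": 8.0, "waste": 2.0},
        "development": {"value": 40.0, "waste": 10.0},
        "testing": {"value": 6.0, "waste": 4.0},
    }

    total_value = sum(s["value"] for s in stages.values())
    total_waste = sum(s["waste"] for s in stages.values())

    overall_efficiency = total_value / total_waste                  # E_T = V / W_T = 54 / 16
    per_stage = {name: s["value"] / s["waste"] for name, s in stages.items()}

    print(round(overall_efficiency, 2))   # 3.38
    print(per_stage)                      # testing (1.5) is the least efficient stage here

Tracking this figure over successive reporting periods is what lets you spot the threshold breach or downward trend mentioned above.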

6.3 Smooth Operations

When running smooth operations, at no point does anyone have to stop and think about what to do next, nor do they have to improvise to get things moving again.

Superior processes:

  1. Prevent deadlocks, where two resources end up waiting on each other indefinitely
  2. Eliminate single points of failure
  3. Are rigorous enough to cover all types of scenarios, eliminating any need for improvisation

You can easily identify smooth operations by asking how often your manager needs to intervene in your day-to-day routine to get things moving again.

6.4 Process Improvement Should Be Easy

Process improvement is essential if your organization wants to survive amidst constant internal and external pressures.

This means that executing process-improvement exercises should be easy and non-traumatic.

The top condition for successful process improvement is to have a solid process for just that, with clear roles and responsibilities.

Companies using Six Sigma have dedicated staff (Green Belts and Black Belts) for just that.

Process improvements should result in a measurable net benefit in efficiency over time.

To model this property, we assume that process improvement efforts are factored into W_O .

For process improvement to have genuinely taken place, the change in the system’s efficiency over an arbitrary period T_P  should be positive:

\Delta_E = E_T(t + T_P) - E_T(t) > 0, \quad t > T_0

Where T_0  is the time at which the process changes were deployed into production.
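
As a hedged numerical illustration, if efficiency was measured at E_T(t) = 3.4 shortly after the change went live and at E_T(t + T_P) = 3.8 one reporting period later, then:

\Delta_E = 3.8 - 3.4 = 0.4 > 0

and the change can be considered a net improvement for that period.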

While the overhead effort may increase temporarily due to the additional investment in process improvement, the net overall benefit in efficiency should be noticeable in the long term.

7. Guide to Implementing Superior Processes

Now that we know what superior processes look like, let’s see if we can develop some rules to properly build and govern them.

7.1 Accountability and Responsibility

The first rule to consider when designing superior processes is properly allocating ownership and accountability.

The designated accountable person(s) for a process is someone who:

  1. Makes sure all the elements of the process are well-defined and documented. If you use the SIPOC model, these would be the Suppliers, Inputs, Outputs, Customers, internal stages, and operational guidelines.
  2. Makes sure the process governing process improvement exercises is established.
  3. Regularly follows up with the responsible staff and initiates discussions on improving things.

In terms of responsibility, the person responsible for the process (or a stage in the process) has the following duties:

  1. Ensuring that the process is efficient and that it delivers on its promises.
  2. Ensuring that process improvements are executed whenever required.

A few rules of thumb:

7.1.1 Every Process Must Have an Owner

This may seem trivial, but the process owner is not necessarily the person executing the process, and he/she needs to be explicitly assigned.

7.1.2 Multi-Level Failures

When something goes wrong, the failure is usually multi-level and is therefore a sign of a process failure rather than an individual one. It is good to thoroughly understand what went wrong when things do go wrong.

7.2 Inputs, Outputs, and Transformations

Every processing stage should have a well-defined set of inputs and outputs in line with the SIPOC model. These are generally referred to as artifacts.

Artifacts can be:

  1. Project documents such as project plans
  2. Technical documentation, such as design documents or user guides
  3. Source code
  4. Test and automation scripts

Activities that consume inputs and produce outputs are referred to in this context as transformations or operations.
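
As an illustration only (the class, field names, and example values below are hypothetical rather than a prescribed format), a processing stage with its owner, inputs, outputs, and transformation could be captured as simply as:

    from dataclasses import dataclass, field

    # Hypothetical, minimal description of one SIPOC-style processing stage
    @dataclass
    class ProcessStage:
        name: str
        owner: str                # explicitly assigned, per the ownership rule above
        inputs: list[str]         # artifacts consumed by the stage
        outputs: list[str]        # artifacts produced and passed downstream
        transformation: str       # what the stage actually does
        guides: list[str] = field(default_factory=list)  # e.g. a Development Guide

    development = ProcessStage(
        name="Development",
        owner="Tech Lead",
        inputs=["design document", "requirements specification"],
        outputs=["source code", "unit test results"],
        transformation="Implement the design in line with coding and security standards",
        guides=["Development Guide"],
    )

Keeping this information written down and versioned makes it easy to review inputs and outputs whenever the process is improved.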

Here are a few rules of thumb to follow:

7.2.1 Securing the Inputs

Do not start the process (or kick off a stage) unless the relevant inputs are ready. Starting on assumptions that turn out to be untrue only wastes time and creates frustration.

7.2.2 Minimizing Byproducts

Some artifacts are produced as process byproducts and are not propagated between the stages. They usually serve no purpose other than as a workspace for the current stage and should be minimized. Class-level Unit Tests are one stark example.

7.2.3 Articulate Transformations

Make sure that everybody is clear on what the transformation processes consist of. Also, people must understand what skills are needed, what tools should be used, and when. For development, have a Development Guide ready. For testing, make sure you publish a Test Strategy.

7.3 Utility and Applicability

Superior processes should be useful and applicable in a wide range of scenarios.

Situations where staff need to improvise or get assistance from management to unblock a task should be extremely rare.

This means less overhead, reduced waiting times, and higher efficiency.

This means that for every scenario, you will have a specific process or sub-process governing it.

Example of a Different Scenario: Project Size

Project size is one example where you might consider having different sub-processes. The number of documents you prepare and the templates you use may vary between projects of different sizes. You might also want to apply additional stages to the bigger ones.
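
A sketch of what such a size-dependent sub-process might look like is shown below; the tiers, document names, and extra stages are hypothetical, and the budget thresholds simply reuse the figures quoted earlier for illustration.

    # Hypothetical mapping from project size to the documents and extra stages applied
    SUBPROCESS_BY_SIZE = {
        "small": {"documents": ["one-page brief"], "extra_stages": []},
        "medium": {"documents": ["project plan", "test strategy"],
                   "extra_stages": ["design review"]},
        "large": {"documents": ["project plan", "test strategy", "risk register"],
                  "extra_stages": ["design review", "architecture board approval"]},
    }

    def subprocess_for(budget_usd: float) -> dict:
        """Pick the governing sub-process from the project budget (illustrative thresholds)."""
        if budget_usd < 350_000:
            return SUBPROCESS_BY_SIZE["small"]
        if budget_usd < 1_000_000:
            return SUBPROCESS_BY_SIZE["medium"]
        return SUBPROCESS_BY_SIZE["large"]

    print(subprocess_for(500_000)["documents"])   # ['project plan', 'test strategy']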

A few rules of thumb to follow:

7.3.1 Handling New Scenarios

If a scenario is not covered by existing processes, try to set up and define a process that deals with it instead of creating ad-hoc, one-time solutions.

7.3.2 Usability in Different Situations

It’s quite difficult to implement a universal, one-size-fits-all, efficient process. Projects of different sizes and tasks (requirement gathering, development, testing…) will need different handling methods.

7.4 Quality Assurance Checks and Balances

Rule number four decrees that there should be no instances where quality problems can be hidden or passed on to the next stage.

You might want to avoid sophisticated metrics that measure minute details of every process.

These tools are either reminiscent of a bygone age (Taylorism) or only applicable to large manufacturing businesses.

In small software development teams, these methods are almost always counterproductive.

Instead, try to form a culture that rewards swift and immediate resolution of quality problems.

Another potent method to apply is the standardization of processes across your organization.

This will help you ensure that improvements are propagated between different teams and that the results of those improvements are statistically significant.

7.5 Involving Stakeholders

Processes that are meant to be followed by smart individuals need to make sense to them. This is rule number 5.

Therefore, relevant stakeholders must be involved in the decision-making that leads to those processes in the first place.

Stakeholder management and good communication ensure group compliance and that processes will be followed. Remember: no involvement, no commitment.

Beware of Vetocracies

Do not confuse stakeholder involvement with an (ugly) form of democracy, sometimes called a vetocracy, where decisions can only move forward if unanimously approved.

7.6 Clarity and Accessibility

Finally, always publish an Operations Guide with all the details of the processes mentioned above.

This will help everybody get on the same page, provide constructive feedback when required, and continuously improve.

8. Further Reading
