Process Management, Improvement, and Redesign: The Essential Guide for Boosting Your Performance

1. Overview

One of the earliest, if not the earliest, descriptions of continuous process improvement was laid out in Frederick W. Taylor’s The Principles of Scientific Management (1911):

[…] whenever a workman proposes an improvement, it should be the management’s policy to carefully analyse the new method and, if necessary, conduct a series of experiments to accurately determine the relative merit of the new suggestion and the old standard. And whenever the new method is markedly superior to the old, it should be adopted as the standard for the whole establishment.

— Principles of Scientific Management – Frederick Taylor

In this short paragraph, we can observe a glimpse of several notions that will become seminal in later works on production and quality management, like Six Sigma.

Process Improvement, Re-Design, and Management

For example:

Closed-loop Systems: A constant stream of feedback from the worker on the ground is collected, analyzed, and assessed for potential improvements to existing processes.

Continuous Improvement: Taylor recommends that continuous improvement become integral to the management’s policies rather than ad-hoc, short-term solutions.

Piloting: A method of testing modifications on a small scale to confirm anticipated results before deploying into production.

Process Standardization: achieved through the adoption of superior methods throughout the organization. Process standardization might seem too obvious, but the facts suggest otherwise: “Lack of consistent processes” has been cited repeatedly as a prime obstacle to Agile adoption.

Bottom-up Approach: Finally, Taylor acknowledges that workers on the ground can suggest improvements to their work. Interestingly, such thoughts appeared in the early days of the manufacturing industry, where class separation, for example, was still the norm.

In this article, we will take the discussion beyond Taylor’s initial ideas and make a detailed, technical tour of the classical process management methods.

Process management involves two activities: improvement, which revolves around small, incremental changes, and redesign, where processes are re-invented and re-imagined.

Specifically, the following topics will be discussed:

  1. Why do we need process management?
  2. The prerequisites for success
  3. Process management in the software business
  4. The DMAIC technique
  5. The Toyota Way of continuous improvement

3. The Purpose of Process Management

Unless you are in the research business, you generally do not want to tamper with running processes. The risk and cost involved might be too high.

The purpose of process management, which invariably includes changing existing ways of doing things, is usually in response to environmental changes.

Process Management: Improvement vs Redesign

There are two types of process improvements, each with a specific purpose:

Incremental changes: small-scale, limited-scope changes that aim to improve the current system’s effectiveness or efficiency. Their purpose might also cover team safety and morale.

Redesign, or Transformation: Large scale, broad scope type of changes whose purpose is to address a potential threat to the business’s survival or benefit from a significant opportunity. Transformations might require changes in the level of organisational culture.

Process management is the vehicle that supports Operational Excellence as defined earlier: a continuous improvement process that allows the business to survive in an ever-changing environment.

4. Prerequisites for Success

For any process change to succeed, it must satisfy three conditions we present hereafter.

Granted, minor changes might require smaller doses of these conditions. Nevertheless, experts still believe in the necessity of these conditions in some quantity or another.

4.1 Existential Threat or Major Opportunity

Existential threats disrupt the current equilibrium of the system and introduce enough anxiety or guilt to compel the leadership to respond.

All human systems attempt to maintain equilibrium and to maximize their autonomy vis-à-vis their environment. Coping, growth, and survival all involve maintaining the integrity of the system in the face of a changing environment that is constantly causing varying degrees of disequilibrium.

Organisational Culture and Leadership – Edgar Schein

These threats can take the form of internal or external pressures. Below are some common examples:

Rising competition: this was amplified enormously by globalization and the vast capital available to entrepreneurs and startups. Customers are not restricted anymore to local vendors where new competition cannot quickly arise. Think of Huawei and Siemens.

Disruptive technology: the emergence of new technologies is a constant threat to all businesses. Examples abound: although people still like to read books or watch movies, online purchasing/streaming has completely changed the landscape and put many bookstores and movie rental places out of business almost overnight.

New regulation: Industries such as banking, finance, or insurance are heavily regulated. These regulations are also constantly updated, and for businesses to remain compliant, their processes and products must be maintained continuously.

Existential threats are not the only drivers for change. Sometimes, the emergence of innovative technologies can open new markets, which, in turn, spike the demand for new products.

Some examples are the smartphone and the first car from Ford.

If I had asked people what they wanted, they would have said faster horses.

— Henry Ford

4.2 Psychological Safety

Change is difficult because it requires the unlearning of previous assumptions. Accepting these assumptions as valid can be the basis of inclusion in your group.

On the other hand, abandoning them for new ones might mean being expelled from that group and compromising your identity and integrity.

Suppose an organization is forced to make major changes. In that case, it needs to have enough psychological safety to see the opportunity of solving its problems without losing its identity or compromising its integrity.

4.3 Support from Leadership

This applies mainly to small and medium-sized changes, since major changes are, by definition, carried out by the organization’s senior leadership.

The following arguments justify this requirement.

Changes can be disruptive: Leadership needs to be on board if you face challenges or resistance from coworkers.

Change costs time and effort: A clear business case must be presented to the leadership showing the benefits of implementing those changes. This will help secure the resources and funds you need.

Changes can be risky: Difficult decisions might need to be made in the face of unanticipated challenges that can emerge while the changes are being applied.

Benefits might need a lot of time to appear: In this case, support from leadership helps maintain the momentum and avoid the premature abandonment of the project.

5. Process Management in Software

Many of the ideas we have presented so far come from the manufacturing industry. Let’s see whether these concepts can be transferred to software and services.

5.1 Process Improvement Challenges

Below is a list of the top reasons why process management and improvement can be difficult.

Some of these have been inspired by the State of Agile Report 2021 and the State of DevOps Report 2021, both of which dedicate a portion of their analysis to understanding the reasons for the slow adoption of what appears to be superior methodologies.

Inconsistencies in Processes and Practices: The lack of standardized processes makes problem definition, measurement, and analysis very difficult. Under this regime, solutions and changes remain localized rather than spreading to the organization. Finally, the lack of consistent processes makes testing the efficacy and usability of any improvements challenging.

Cultural Clashes: Both reports cite “cultural” blockers such as risk aversion, resistance to change, insufficient feedback loops, unclear responsibilities, and failure to share best practices as major obstacles to adopting better software delivery methodologies.

Dynamic Nature of Software Businesses: People believe that introducing substantial changes in a dynamic and rapidly-changing environment is ill-advised. There is, however, one major caveat in this argument. The double assumption that things will inevitably cool down and that there is an “ideal” time for making improvements is just incorrect. If we can learn anything from experience, bad habits tend to stick more the longer they are kept.

Lack of Skill and Experience: especially in managing people and change. Fluency in organisational culture, cultural transformations, process management, and process improvement techniques is essential for change success.

Support from Senior Management: this is one of the three conditions we listed earlier for success. Senior management in software, as in other places, is under constant pressure to perform: sales, profit, and revenue take precedence over long-term, risky, and costly changes where results may not be immediately apparent.

5.2 Improvement vs Redesign

As we have mentioned, there are two types of process changes: the first is incremental improvement, which is limited in scope, while the second involves major transformation or redesign.

This is quite true in software as well. The table below shows the impact level of some of the more common improvement techniques.

Technique                     Improvement   Redesign
Switching to Agile                              X
Implementing DevOps                             X
Six Sigma                                       X
Test-Driven Development            X
New Tools/Technologies             X
Improving Documentation            X
Training and workshops             X

Process Improvement Techniques

6. The DMAIC Technique

6.1 Overview

Key info on DMAIC:

DMAIC stands for Define, Measure, Analyze, Improve, and Control.

DMAIC emerged with Six Sigma in the 1980s and is closely related to the Plan-Do-Check-Act (PDCA) technique, also known as the Deming Wheel after the legendary quality expert W. Edwards Deming, who popularized it.

PDCA and DMAIC both refer to systematic problem-solving techniques for optimizing production processes and are cornerstones of continuous improvement.

DMAIC, an essentially data-driven approach, is the core tool to drive Six Sigma projects.

The below figure illustrates the process:

DMAIC Stages and Flow

Establishing a Baseline

Before attempting to kick off the DMAIC process, a definite baseline must be established against which future gains could be measured. The baseline could be something like A) how much a project is delayed as a percentage of its estimated total duration or B) how many features had to be reworked due to unclear requirements.
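As a minimal sketch, baseline (A) could be computed as follows; the project names and figures below are hypothetical:

```python
# Sketch of establishing a DMAIC baseline: schedule slip as a
# percentage of the estimated total duration, averaged over projects.
# All project data here is hypothetical.

def schedule_slip_pct(estimated_days: float, actual_days: float) -> float:
    """Delay as a percentage of the estimated total duration."""
    return (actual_days - estimated_days) / estimated_days * 100

projects = [
    {"name": "alpha", "estimated": 40, "actual": 52},
    {"name": "beta", "estimated": 60, "actual": 75},
]

baseline = sum(
    schedule_slip_pct(p["estimated"], p["actual"]) for p in projects
) / len(projects)

print(f"Baseline schedule slip: {baseline:.1f}%")
```

Future gains would then be measured against this single number after each improvement cycle.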

In the next five sections, we will discuss each stage of the DMAIC process. We will also compare and contrast Six Sigma and The Toyota Way.

Since this blog is mainly concerned with software problems, we will also try to provide some useful examples from the industry.

6.2 Step 1: Define

The first step in the DMAIC process is defining the problem. Here are some key ideas to consider:

The problem needs to be reproducible and confirmed by data. This is essential for manufacturing businesses but might be difficult to stick to with smaller-sized companies where data/observations are scarce. Nevertheless, an effort should be made to establish whether a certain issue is persistent or just a one-off.

The problem we are trying to solve should come from the Voice of the Customer. It is easy to get side-tracked by issues that are inconsequential for the customer and do not add value to the product.

Any change to existing processes should improve the following areas: A) efficiency, B) effectiveness, C) team morale or safety, or D) address internal or external threats.

A few examples of issues from the software world illustrate the point.

Issue: Projects take significantly longer to deliver
  A) Directly impacts the customer’s budget, timeline, and time-to-market.
  B) Processes are not efficient and drive the profit down.

Issue: Delivered value is not up to the customer’s expectations
  Impacts the customer’s user experience of the product. Customers are frustrated by using software that doesn’t address their business challenges.

Issue: A significant amount of rework on product features
  Processes are not efficient.

Issue: Regression issues in new releases
  A) Has an impact on customer project cost, as releases for testing contain many broken features that need to be reported and retested.
  B) Is not efficient.

Issue: Aggressive deadlines
  Have a long-term impact on team morale and mental well-being.

Software Development Issues with Direct Customer/Employee Morale Impact

6.3 Step 2: Measure

Six Sigma is a data-driven, problem-solving technique that relies heavily on measurement and collected data.

The three main challenges with measurement are:

  1. What to measure: For a single outcome, you can collect multiple statistics: mean, standard deviation, median, etc. You can also transform data from continuous numbers into categories like “bad”, “average”, and “great”.
  2. How to measure: Is it enough to take a single snapshot in time or multiple snapshots? If multiple, how much is enough? How can you compare performance metrics between different products or processes? Do you adjust your results to eliminate bias? What is the best way to quantify subjective attributes such as customer satisfaction?
  3. How to collect data: some measurements are not easy to perform. Can you use proxies to measure hidden variables? How would you determine if you have enough data or not?
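To make the “what to measure” question concrete, here is a small sketch using the standard library: several statistics derived from a single outcome (hypothetical cycle times in days), plus a transformation into categories. The thresholds are illustrative assumptions, not recommendations:

```python
# Multiple views of the same outcome: summary statistics, and a
# transformation of continuous numbers into categories.
import statistics

cycle_times = [3.5, 4.0, 2.5, 8.0, 3.0, 12.5, 4.5]  # hypothetical, in days

print("mean:  ", statistics.mean(cycle_times))
print("median:", statistics.median(cycle_times))
print("stdev: ", statistics.stdev(cycle_times))

def bucket(days: float) -> str:
    """Map a continuous cycle time onto a category (thresholds assumed)."""
    if days <= 3:
        return "great"
    if days <= 7:
        return "average"
    return "bad"

print([bucket(t) for t in cycle_times])
```

Each representation answers a different question: the mean tracks overall trend, the standard deviation tracks consistency, and the categories are easier to communicate to stakeholders.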

These challenges make Six Sigma a highly specialized technique that may not be suitable for every process.

They are also, in our view, a severe limitation. As we will try to argue in the next section, analysing data from your desk is much inferior to checking things for yourself.

6.4 Step 3: Analyse

The techniques explained in this section are heavily data-dependent and widely used in Six Sigma processes.

6.4.1 Statistical Analysis

Sometimes, it is possible to extract meaningful insights from data just by looking at it. Tools like Microsoft Excel allow the instant generation of scatter plots, histograms, pivot tables, and many other visualizations from large data sets.

These visualizations can help you spot patterns, trends, and correlations without requiring an advanced degree in statistics.

If these observations confirm your gut feeling, you can make reasonably safe conclusions allowing you to adjust your processes accordingly.

For other, more complex situations, these tools might not be enough. This is where the statistician’s toolset can offer rich and generous solutions.

Advanced statistical analysis tools like the t-test, \chi^2-test, Analysis of Variance (ANOVA), regression, or correlation analysis allow you to test complex hypotheses.

The t-test, \chi^2-test, and Analysis of Variance (ANOVA) tests, for example, allow you to infer whether a variation in an observed sample is due to pure chance or whether something is indeed different between this sample and the population it has supposedly come from.

Basic Example of the Application of Statistical Tests

A familiar example would be to determine whether a coin that has yielded heads (or tails) five times in a row is biased; here, the null hypothesis is that the coin is fair.

Another Example of the Application of Statistical Tests

A more sophisticated example could be this: we have observed a DPMO (Defects Per Million Opportunities) of 6.5 during the last week. Given that our benchmark is 3.5, does the 6.5 indicate that our quality control has regressed, or is this observation due to chance alone? Regression to the mean alone suggests that next week’s observation will probably be smaller.

The result is usually accompanied by a significance level, say 5%. This would mean that, if the coin were fair, the probability of observing such a run would be less than 5%, leading us to reject the null hypothesis.
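The coin example can be worked out directly; this is a one-sided sketch where the probability of the run under the fair-coin null hypothesis is simply 0.5 raised to the number of heads:

```python
# Hypothesis test sketch for the coin example.
# Null hypothesis: the coin is fair, i.e. P(heads) = 0.5.
p_heads = 0.5
n_heads_in_a_row = 5

# Probability of observing 5 heads in a row under the null (one-sided):
p_value = p_heads ** n_heads_in_a_row   # 0.03125

alpha = 0.05  # significance level
if p_value < alpha:
    print(f"p = {p_value}: reject the null; the coin looks biased.")
else:
    print(f"p = {p_value}: not enough evidence of bias.")
```

Since 0.03125 falls below the 5% threshold, five heads in a row is already enough (under this one-sided setup) to reject the fair-coin hypothesis.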

The regression analysis tests whether an output variable can be expressed as a linear or non-linear combination of input parameters:

y = ax_1 + bx_2 + cx_3

This equation would then allow us to predict the outcome y' for a new set of inputs (x'_1, x'_2, x'_3) .

Finally, correlation tests measure the degree to which two variables agree. A positive correlation means they rise and fall together, while a negative correlation signifies that, as one rises, the other falls, and vice versa.
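A one-variable sketch of both ideas, computed from first principles with the standard library; the data points are hypothetical and follow y = 2x + 1 exactly:

```python
# Least-squares regression and Pearson correlation for one input
# variable; the data is hypothetical (y = 2x + 1 exactly).
import math
import statistics

x = [1, 2, 3, 4]
y = [3, 5, 7, 9]

mean_x, mean_y = statistics.mean(x), statistics.mean(y)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
syy = sum((yi - mean_y) ** 2 for yi in y)

slope = sxy / sxx                      # a in y = a*x + b
intercept = mean_y - slope * mean_x    # b

# Predict the outcome for a new input x' = 5:
y_pred = slope * 5 + intercept

# Pearson correlation: +1 means the variables rise and fall together.
r = sxy / math.sqrt(sxx * syy)

print(slope, intercept, y_pred, r)
```

With perfectly linear data, the fit recovers the slope and intercept exactly and the correlation is +1; real data would yield noisier estimates.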

The insights derived from the data must make sense. Otherwise, it might be safe to assume that some calculation or data collection error has occurred somewhere.

6.4.2 Pareto Principle or 80/20 Rule

Vilfredo Pareto was an Italian civil engineer, sociologist, economist, political scientist, and philosopher.

In 1906, he made one of his most notable observations. Indeed, he had found that 80% of Italy’s land was owned by 20% of the people.

This ratio of wealth distribution was not restricted to Italy but was also observed in different places.

The heavily-skewed distribution was also seen in many social and natural phenomena areas.

Pareto Principle or 80-20 Rule

Joseph Juran, the quality management guru, turned it into a principle now known as the Pareto Principle, the 80/20 rule, or sometimes the law of the vital few.

The idea behind the Pareto Principle is simple enough: 80% of all effects are caused by only 20% of total causes. So how does that help us in improving our processes?

The answer is not hard to understand. If we collect all the various causes and list them in descending order, we can isolate the top players (usually on the left-hand side) and eliminate those first.

By focusing on the major sources of problems, we gain two advantages. First, we avoid being side-tracked by a larger number of valid, albeit smaller, causes that, even when fully addressed, do not add up to much in terms of overall efficiency improvement. Second, we earn the highest gains in the earliest stages of the improvement process.
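A Pareto analysis can be sketched in a few lines: sort the causes in descending order and keep the “vital few” that account for roughly 80% of the effects. The defect causes and counts below are hypothetical:

```python
# Pareto analysis sketch: find the "vital few" causes that account
# for ~80% of observed defects. All counts are hypothetical.
causes = {
    "unclear requirements": 120,
    "regression bugs": 45,
    "environment issues": 20,
    "flaky tests": 10,
    "other": 5,
}

total = sum(causes.values())
ranked = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0
vital_few = []
for cause, count in ranked:
    cumulative += count
    vital_few.append(cause)
    if cumulative / total >= 0.8:
        break

print(vital_few)   # ['unclear requirements', 'regression bugs']
```

Here, two of the five causes already cover more than 80% of the defects, so they are the ones to attack first.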

6.4.3 Traveller’s Notebook

The traveller’s notebook is a great tool for analyzing a process. It consists of registering useful information while the product being worked on moves between the consecutive stages of the production processes.

For example, while a new feature is being developed, you can keep track of the following items at each stage of the SDLC.

  1. How many people were involved: a coder and a reviewer were involved during development.
  2. How much time was spent waiting: waiting for the code to compile?
  3. How much time was spent working: active development or testing?
  4. How many defects have been observed/fixed?
  5. How much rework occurred: because the requirement was missing or unclear.

The traveller’s notebook is especially useful when trying to shorten production cycles.
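The items above can be captured in a simple per-stage record; this is a minimal sketch, with hypothetical stage names and figures:

```python
# Traveller's notebook sketch: one record per SDLC stage, tracking
# the items listed above. All numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class StageRecord:
    stage: str
    people_involved: int
    waiting_hours: float
    working_hours: float
    defects_found: int
    rework_hours: float

notebook = [
    StageRecord("development", 2, 4.0, 30.0, 3, 6.0),
    StageRecord("testing", 1, 8.0, 12.0, 5, 2.0),
]

# Waiting time vs working time is the key ratio for shortening cycles:
total_wait = sum(r.waiting_hours for r in notebook)
total_work = sum(r.working_hours for r in notebook)
print(f"waiting: {total_wait}h, working: {total_work}h")
```

Summing the waiting column across stages quickly shows where the production cycle loses the most time.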

6.4.4 Process Value Analysis

This type of analysis inspects every task that employees perform and tries to place it in one of the following three buckets:

Value-adding: to take an example from software development, this could be the code, test cases, and automation scripts developed for a new feature. An increase in the Intellectual Property of a software code base increases the product’s value.

Overhead: These tasks enable the value-adding functions to be performed, but they do not add value themselves. Examples of overhead charges abound. Team management, project management, documentation, and meetings are all activities that help get the work done, but customers typically would not be happy to pay for them.

Such activities are essential in large teams and complex environments. It is sometimes easy to confuse value-adding tasks with overhead to justify the existence of a specific department or job. Care should be exercised in that area.

Non-value-adding (or waste): Such activities can include delays, internal documentation, reporting, waiting for code to compile, fixing issues in the CI/CD pipelines after an upgrade, upgrading your OS, and many more. 

The result of a typical task, contrary to what one might think, would yield something like this:

Types of Tasks in a Process

Process Value Analysis aims to determine which tasks can be eliminated or reduced. These are typically in the Overhead or Non-value-adding groups.

Eliminating unnecessary tasks helps increase the system’s efficiency by reducing cost and decreasing delivery timeframes.
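A minimal sketch of the three-bucket analysis, with hypothetical task names and hours:

```python
# Process Value Analysis sketch: place each task in one of the three
# buckets and see where the effort actually goes. Data is hypothetical.
tasks = [
    ("feature code", "value-adding", 25.0),
    ("test automation", "value-adding", 10.0),
    ("status meetings", "overhead", 6.0),
    ("project reporting", "overhead", 4.0),
    ("waiting for builds", "non-value-adding", 5.0),
]

totals: dict[str, float] = {}
for _, bucket, hours in tasks:
    totals[bucket] = totals.get(bucket, 0.0) + hours

grand_total = sum(totals.values())
for bucket, hours in totals.items():
    print(f"{bucket}: {hours / grand_total:.0%}")
```

The elimination candidates are then read off the overhead and non-value-adding buckets, which in this hypothetical split account for 30% of the effort.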

6.4.5 Logical Analysis

Logical analysis is a process improvement technique that involves looking at the process details and figuring out the root cause of the problem by following cause/effect trials.

Fishbone Diagram

Kaoru Ishikawa, a well-known organisational theorist and quality management expert, invented an excellent process analysis tool called the Fishbone diagram.

  • Starting at the head of the “fish”, we write down what we believe is the effect or problem we would like to fix. We draw up the major potential causes of creating such a problem from the main lines.
  • Once that is completed, we brainstorm each of those causes separately by asking “why” five times.

Using this technique, we aim to solve the problem by finding the root cause and not just the symptoms.
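One branch of the fishbone drill-down can be represented as a simple chain; the problem and the five “why” steps below are hypothetical:

```python
# "Ask why five times" sketch: one cause-and-effect chain from a
# fishbone diagram, ending at the root cause. The chain is hypothetical.
problem = "Release shipped with a regression"
whys = [
    "Why? The broken feature was not covered by tests.",
    "Why? The test plan missed that requirement.",
    "Why? The requirement changed late in the cycle.",
    "Why? The change was only communicated verbally.",
    "Why? There is no process for recording requirement changes.",
]

print(problem)
for step in whys:
    print("  " + step)

# The last answer, not the first symptom, is what gets fixed:
root_cause = whys[-1]
print("Root cause:", root_cause)
```

Note how the fix suggested by the root cause (a process for recording requirement changes) differs entirely from the fix suggested by the symptom (patching the one regression).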

6.5 Step 4: Improve

Now that we have found the problem’s root cause, let’s look at some different techniques for resolving it.

6.5.1 Simplification

Process simplification removes redundant, obsolete, or non-value-adding tasks such as overhead or waste.

For example, you can eliminate unnecessary internal reporting or internal documentation in software development.

Categorizing tasks into value-adding, overhead, or waste can be challenging. This challenge is primarily due to the insulated environments that people work in, where the end-to-end process is not visible to everyone.

In such situations, people can easily justify the tasks they spend their efforts on.

6.5.2 Automation

Automating repetitive and labour-intensive activities is an excellent way of improving process efficiency.

Automation, however, comes at a cost. It requires the introduction of new infrastructure and the hiring of special skills. You can also expect much pushback from people whose jobs might be at risk.

In software development, for example, automation is at the heart of DevOps practices. Test automation can tremendously affect your testing and overall product quality.

6.5.3 Managing Bottlenecks

Bottlenecks are chokepoints in the process flow, where the load or the cycle time at the bottleneck is significantly higher than elsewhere. The result is a slowing down of the overall process.

Below are some options you might use to widen the bottleneck:

  1. Increasing the resources or capacity
  2. Reducing the cycle time
  3. Adjusting the product itself

A typical example of a software development bottleneck is obtaining input from highly-skilled engineers such as solution architects or senior developers.

The solution, in this case, is to manage their time wisely while training other developers to assist them when required.
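In a sequential process, the stage with the highest cycle time caps the overall throughput; the sketch below illustrates this with hypothetical stage names and cycle times:

```python
# Bottleneck identification sketch: in a sequential flow, steady-state
# throughput is limited by the slowest stage. All figures hypothetical.
cycle_times = {          # hours per work item, per stage
    "design": 4.0,
    "development": 8.0,
    "architecture review": 16.0,
    "testing": 6.0,
}

bottleneck = max(cycle_times, key=cycle_times.get)
throughput = 1 / cycle_times[bottleneck]   # items per hour, steady state

print("Bottleneck:", bottleneck)
print(f"Max throughput: {throughput} items/hour")
```

Halving any non-bottleneck stage’s cycle time changes nothing here; only widening the bottleneck (more capacity, shorter cycle time, or a simpler product) raises throughput.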

6.5.4 Parallelization

Parallelization can increase process efficiency by optimizing resource usage. Things that can be completed independently are carried out in parallel and integrated later in the process.

Parallel processing is expensive, as it requires more overhead to manage the parallel streams and some regular handshaking to keep vital information synchronized between them.

Test-Driven Development (TDD) is one area in software development that has enjoyed success in optimizing development and testing effort.

Used wisely, it allows testing and development to be completed in parallel rather than in sequence. This parallelization of activities should, theoretically at least, reduce the testing and bug-fixing time significantly.
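The pattern of running independent activities concurrently and integrating later can be sketched with the standard library; the two “activities” below are hypothetical stand-ins for writing tests and writing code against the same specification:

```python
# Parallelization sketch: two independent streams run concurrently
# and are integrated afterwards. The activities are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def write_tests(feature: str) -> str:
    return f"tests for {feature}"

def write_code(feature: str) -> str:
    return f"code for {feature}"

with ThreadPoolExecutor() as pool:
    tests_future = pool.submit(write_tests, "login")
    code_future = pool.submit(write_code, "login")
    # Integration happens later, once both streams complete:
    results = [tests_future.result(), code_future.result()]

print(results)
```

The overhead the text mentions shows up even in this toy version: the executor, the futures, and the final synchronization point are all machinery that a sequential flow would not need.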

6.5.5 Standardization

In software development, process standardization allows you, among other things, to switch developers and testers between projects relatively quickly and with little or no training.

Another significant advantage of having standardized processes is the ability to propagate and test improvements across many teams.

Without standard processes, it is impossible to make measurements and collect meaningful data that can be compared across multiple periods and between different teams or projects.

6.5.6 Improving Resiliency

Resilient processes can withstand internal or external pressures without breaking. They also cater for a wide array of possible scenarios. In the odd case where a novel situation evolves, mechanisms are in place to help the system return to its nominal mode.

Small software businesses can be particularly exposed to new situations as every project or customer can be unique.

You can turn average processes into superior ones by implementing a few steps, which we have discussed in this article.

6.6 Step 5: Control

The control phase involves modifying your current processes to include the latest updates. This modification can mean implementing new infrastructure and retraining your staff.

At the end of the control phase, based on established criteria and new data collected after the updates, you must decide whether or not your changes are responsible for driving the observed improvement.

Applying process changes is not very challenging from a hardware or software perspective, as the available tools abound in software development. You might, however, face other sorts of challenges; these can mainly be pushbacks from your team.

The cultural dimension in process improvement can be a dominant factor and has much to do with organisational culture.

7. The Toyota Way of Process Improvement

7.1 Overview

The way Toyota approaches process improvement is not unique to the organization.

It is deeply rooted in the Plan-Do-Check-Act (PDCA) problem-solving routine, also known as the Deming or Shewhart cycle, which W. Edwards Deming championed and encouraged Japanese car manufacturers to follow.

Toyota’s leaders modified and enriched the PDCA technique over the decades. It came to include simple yet powerful techniques that contrast with Six Sigma.

Specifically, we will discuss four concepts:

  1. Go and See for Yourself, or Genchi Genbutsu
  2. Continuous Improvement, or Kaizen
  3. Ask Why Five Times to find the root cause
  4. Self-Reflection, or Hansei

7.2 Go and See for Yourself (Genchi Genbutsu)

To define a problem, first-hand information is deemed essential, and reliance on data from reports should only be used to confirm what one has personally observed.

Once enough observation has been made, expert judgement and expertise are used to evaluate the situation.

Let’s see these techniques in action.

Tadashi Yamashina was the president of Toyota Technical Center. Principle 3 of the ten management principles that he laid out read as follows:

  • Principle 3: Think and speak based on verified information and data:
    • Go and confirm the facts for yourself.
    • You are responsible for the information you report to others.

Taiichi Ohno placed more emphasis on facts than data, as the latter is just a manifestation of the underlying process. Data is one step removed from facts:

  • Data is very important in manufacturing, but I emphasise facts most.

The commitment to these principles (and the benefits achieved) can be seen in how Toyota resolved their Sienna car’s sales problem in North America.

The car was unsuccessful, and in 2003, Honda outsold it two to one.

Here is what happened: Yuji Yokoya, the chief engineer for the 2004 Sienna, started a road trip in 2001 where he drove the old Sienna and competitors’ vehicles 53,000 miles through every state in the U.S., Canada, and Mexico.

He wanted to get first-hand information on how the old Sienna fared on North American roads and how it compared with its top competitors.

The new Sienna had at least eight major improvements in aerodynamics, noise control, navigation systems, safety, and maneuverability which could probably never have been detected by analyzing charts on a desk in Japan.

7.3 Ask Why Five Times

The default method that Toyota engineers use for finding the root cause of particular problems is quite simple. It’s referred to as Ask Why Five Times.

Sophisticated statistical analysis is used here and there, but it’s pretty different from Six Sigma’s clear structure and total commitment to its data-driven, complex methods.

Asking why five times requires a painstaking effort to get to the bottom of the problem. You need a solid grasp of the underlying processes to appreciate the information you receive and to make full use of it.

7.4 Self-Reflection (Hansei)

Hansei is a word in the Japanese language that we can break up into two. First, there is Han, meaning to change, turn over, or turn upside down. Then there is Sei, which means to look back, review, and examine oneself.

It embodies the act of self-reflection on a mistake you have committed and a commitment you make to remedy the situation and guarantee it doesn’t occur in the future.

Hansei is an alien, and not entirely welcome, concept in Western culture.

  • For starters, it seems to focus too much on the negative side of a given thing.
  • Second, Hansei doesn’t seem to encourage celebrating the small wins, a morale-boosting exercise.
  • Third, Hansei doesn’t dilute the responsibility of an individual by teamwork. On the contrary, it expects individuals to assume personal responsibility for mistakes, not to punish them, but to improve their work.

In the words of Yamashina: “Without Hansei, it’s impossible to have Kaizen”. This thought is very subtle and requires some unpacking. It seems that Kaizen is not about improving others but improving oneself.

A final thought, in the Toyota culture, even if you do a job successfully, there will still be a need for Hansei-kai, a meeting for evaluating the project, and a retrospective:

An inability to identify issues is usually seen as an indication that you did not stretch to meet or exceed expectations, that you were not sufficiently critical or objective in your analysis, or that you lack modesty and humility. Within the process, no problem is itself a problem.

Toyota Magazine – Hansei

8. Conclusion

Process management is one of the essential pillars on which Operational Excellence rests. It can keep your business afloat during times of internal and external change, whether you are onboarding new clients or facing emerging technologies and new competition.

Luckily, when it comes to tools and techniques, you have several options from which to choose.

The first option is Six Sigma, a data-driven, effort-intensive framework that may or may not suit the type of business you are running.

We published some thoughts on the suitability of Six Sigma for the software industry. You can follow the link if you are interested in exploring it further.

The second option is The Toyota Way, which, as we have seen, offers a somewhat different set of tools. It’s also more encompassing as it includes other aspects of the business, such as people and leadership. 

The Toyota Way emphasises three areas in particular when it comes to handling process improvement:

  1. First is the individual responsibility of every team member, which can be seen in the practice of Hansei.
  2. Second, a hands-on, close-up approach to problem-solving, in contrast with Six Sigma’s method of analysing a challenging situation from behind a desk.
  3. Third, a visible striving for perfection when rooting out problems or making decisions.

The third and final option is DMAIC, a straightforward, no-nonsense methodology for continuous improvement. DMAIC can be observed at Toyota in one form or another.

Unlike the Toyota Way, DMAIC is limited to managing and improving processes. This limitation makes DMAIC easy to explain and apply; it is also a perfectly suitable solution for small businesses that can’t afford more elaborate frameworks.

9. Further Reading
