Software Testing and Quality Assurance: A Modern Analysis of Its Internal Dynamics and Impact on Delivery

Georges Lteif

Software Engineer

Last Updated on May 19, 2022.

1. Overview

To motivate this discussion, we will touch upon three topics and the central role software testing plays in their success: value proposition, continuous delivery, and software product value.

We briefly introduced the term value proposition in our article on solution architecture. Michael Lanning and Edward Michaels coined the term Value Proposition (VP) in their 1988 staff paper for McKinsey & Company.

The paper stated that “a business is a value delivery system“, and a value proposition is “a clear, simple statement of the benefits, both tangible and intangible, that the company will provide, along with the approximate price it will charge each customer segment for those benefits”.

Your value proposition is what makes your offering unique. Naturally, you want to offer maximum value at the lowest price if you wish to stay ahead of the competition.

Anybody who has worked in software testing can appreciate the impact of testing activities on a project's budget, especially when testing is a predominantly manual activity that relies on outdated methods.

In summary, software testing using efficient processes will strengthen your value proposition, especially when you can demonstrate it via automated integration pipelines.

Continuous delivery is the ability to deliver software frequently and on demand. It requires successfully implementing continuous integration and deployment pipelines using sophisticated DevOps tools.

It also requires DevOps practices in which development, build, testing, and operations are done by more or less the same people, a significant deviation from the silo approach that separated both worlds.

There is no place for inefficient manual testing in the DevOps world. Testing must be completed in several stages, targeting different areas, stopped as soon as an issue is deemed a blocker, and automatically restarted with every new release.

Finally, we close this introduction by asking the reader the following question: given two companies developing two software products that offer the same digital services, would you pay more for the company that has invested significantly more in unit, integration, and functional test suites that are all available within the source code and can be run in automated pipelines?

If the answer is in the affirmative, this article is for you.

The first section of this article introduces the V-Model, an intuitive paradigm for understanding software testing. The following section will discuss the two stages of verification and validation before moving on to specific topics on the evolution of software testing throughout the product lifecycle. Finally, we explain the testing process used in Agile and contrast it with Waterfall-based approaches.

3. The V-Model

The V-Model is a diagram representing the main stages of the Software Development Lifecycle.

V-Model for Software Development

The left arm of the V-Model depicts the concept of operations, requirements gathering and definition, architecture and decomposition, low-level design, and implementation stages.

The right arm represents the different types of testing: code review, unit tests, system integration tests (SIT), progression, regression, performance tests, and finally, user acceptance tests (UAT).

Whether you are using Agile or Waterfall to manage your deliveries, the V-model remains a faithful representation of the various stages that a project goes through, albeit with minor variations such as the inclusion of an iterative element for Agile-based methodologies.

The different types of testing can be defined as follows:

  1. Code Review: verification of requirements by manual inspection of code changes, typically done on feature branches before merging into the master branch.
  2. Unit Tests: used to test subcomponents of the application, such as classes and functions. Despite its high cost of ownership, unit testing remains a valuable tool when applied in the right situations.
  3. System Integration Tests (SIT): verify the connectivity and adherence to specifications of interfaces between different systems or components.
  4. System Testing: ensures the solution works by verifying all functional and non-functional requirements, including performance testing. It also ensures that existing features have not been broken by running regression test suites.
  5. User Acceptance Testing (UAT): validates the operability of the solution. During this stage, stakeholders validate the system under conditions similar to production.
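To make the unit-test category concrete, here is a minimal sketch. The `discount_price` function and its tests are hypothetical examples, not taken from any specific codebase; the point is that a unit test exercises a single subcomponent in isolation:

```python
import unittest

def discount_price(price: float, percent: float) -> float:
    """Apply a percentage discount; reject out-of-range input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountPriceTest(unittest.TestCase):
    """A unit test targets one function, with no external dependencies."""

    def test_applies_discount(self):
        self.assertEqual(discount_price(200.0, 25), 150.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)
```

Run with `python -m unittest`. Because such a test touches no database, network, or filesystem, it can run on every commit at negligible cost.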

4. Verification and Validation

Software testing comprises two distinct phases: verification and validation. During the verification phase:

  • The solution is tested in a controlled environment under laboratory conditions. The tests are designed to work under specific conditions that may or may not resemble production.
  • Testing is completed by the developers, testers, or DevOps engineers.

On the other hand, the validation phase:

  • Requires production-like conditions (infrastructure, configuration, artifacts, user data). Can be run in a live environment in the form of a Pilot.
  • Is used to ensure the operability of the solution, the satisfaction of business needs, and the validation of the concept of operations. It serves as the basis for user acceptance and the transfer of ownership to the client.
  • Is performed on the end-to-end solution. Mocking is rarely used.
  • Is completed by the end-user or the client and validated by the stakeholders.

Testing sometimes completes with open failures. If the time allocated to the testing phase has been consumed, the risk management team makes a go/no-go decision on the release to production.

The risk is assessed based on the severity of the issue and its impact on operations, functionality, safety (security, compliance, reliability), as well as the timeframe in which a patch to fix it could be released.

A waiver is typically requested from regulatory authorities if the application has to run in a non-compliant mode until the fix is released.

5. The Business Value of Software Testing

5.1 The Value of Manual Testing

Software testing is a labour-intensive activity, particularly when it consists chiefly of manual effort. In that situation, it adds little or no value to the product.

When QA activities are predominantly manual and must be repeated for every change, they exist in an enabling or supportive role; they are not objectives per se.

5.2 Generating Value from Testing

If the software testing process is improved, you can derive genuine value from it: it minimizes costs and increases the reliability of your deliveries, thereby strengthening your value proposition.

Business Value of Software Testing

Several ingredients are required to make that happen.

  • The first is an automated test strategy, which removes any constraints on when and how often you can run your tests, allowing for (virtually) unlimited testing capacity.
  • The second is constant refactoring to keep technical debt down. This requirement does not work without automated testing and a reliable battery of test cases that gives you the confidence you need to touch production code.
  • The third is efficient test case design, which reduces the total cost of ownership while maximizing utility. A key topic in this discussion is unit tests, as they have a propensity to grow quickly in number and to couple with implementations.
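One way to keep the cost of ownership down is to test observable behaviour rather than implementation details. A sketch, using a hypothetical `Cart` class invented for illustration:

```python
class Cart:
    """Hypothetical shopping cart used to illustrate behaviour-focused tests."""

    def __init__(self):
        self._items = []  # internal detail; tests should not poke at this

    def add(self, name: str, price: float) -> None:
        self._items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self._items)

# Behaviour-focused: this test survives a refactoring of the internal list.
def test_total_sums_item_prices():
    cart = Cart()
    cart.add("book", 12.5)
    cart.add("pen", 2.5)
    assert cart.total() == 15.0

# Implementation-coupled (avoid): this would break if _items became a dict.
# def test_items_is_a_list():
#     assert isinstance(Cart()._items, list)
```

Tests written against the public interface stay valid through refactoring, which is exactly what keeps their long-term ownership cost low.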

6. Dynamics of Software Testing

This section investigates the relationship between testing effort and product size and feature richness over the product's lifecycle.

Analysing the dynamics of software testing efforts starts by answering the two questions:

  1. How does the testing effort required to achieve adequate coverage increase with the number and size of new features?
  2. How does the quality of the testing change with the size and complexity of new features?

We will answer those questions next.

6.1 Testing Effort vs Software Features

Any increase in cost, effort, or liability should grow at most linearly so that it remains a viable investment and does not overwhelm the value generated. This requirement applies to the rate of change of the testing effort as a function of lines of code.

New code must be written not just for the feature itself but also for the test cases around it. New test cases are required to test new functionality and to ensure that existing functionality is not broken.

We can model the total effort of testing features A and B (E_T(A) and E_T(B), respectively) as the sum of the individual efforts plus an additional amount proportional to the coupling between the two features:

E_T(A+B) = E_T(A) + E_T(B) + E_T(A, B)

where E_T(A, B) is the effort required to test the coupling between features A and B.

It is easy to see that this coupling effect can span not just features A and B but every single feature in the system. Great architecture strives to ensure minimal coupling of system components and, therefore, can potentially reduce the overall effort.

In reality, this reduction is usually hard to achieve. Complex architecture is not uncommon, and soon enough, it will be challenging to keep track of all the dependencies between the different subsystems.

To mitigate any risk resulting from inadequate coverage, the optimal strategy would be to run all kinds of tests. The cost of the extra coverage is that E_T(A, B) increases faster than we would like.
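The non-linear growth can be made concrete with a toy model. All numbers below are hypothetical, and the coupling factor is an illustrative assumption, not an empirical constant; the model simply sums per-feature efforts plus a coupling term for every pair of features:

```python
from itertools import combinations

def total_test_effort(feature_efforts, coupling=0.2):
    """Toy model: total effort = sum of per-feature efforts plus a
    coupling term for every pair of features."""
    efforts = list(feature_efforts.values())
    base = sum(efforts)
    pairwise = sum(coupling * min(a, b) for a, b in combinations(efforts, 2))
    return base + pairwise

# With n features there are n*(n-1)/2 pairs, so the coupling term
# eventually dominates as the product grows.
efforts = {f"feature_{i}": 10.0 for i in range(1, 5)}  # 4 equal features
print(total_test_effort(efforts))  # 4*10 + 6 pairs * (0.2*10) = 52.0
```

The base effort grows linearly with the number of features, but the pairwise term grows quadratically, which is the mechanism behind the curve in the figure below.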

Dynamics of Efficient Software Testing

The figure above roughly shows the dynamics of this change. Theoretically, this metric increases non-linearly with new features. An initial investment in test automation might seem prohibitive, but in the long run it will keep the total effort down.

If you have an excellent test automation infrastructure in place, the additional effort for new features is reduced to writing (but not running!) the new test cases.

Although new test cases should cover existing functionality and coupling effects, you do not have to compromise on the scope of testing to keep the overall effort manageable; you can run the entire suite anytime you wish. This option facilitates the early detection of broken features.

6.2 Quality vs Size and Complexity

Now we look at the second question: how testing quality varies with respect to the size and complexity of the features under test. First, a few words on what we mean by size and complexity in this context.

Size and complexity may refer to things like high availability, cloud deployment, clustering, and complex (perhaps third-party) integration of many subsystems.

Software Test Quality vs Feature Size and Complexity

The requirements to test this category of features are a bit more involved than the typical functional tests of user stories.

What we generally see in this case is a sudden drop in quality for a number of reasons:

  • Lack of adequate technical knowledge: testers may not have sufficient knowledge of such complex setups, or perhaps not enough experience, and thus underestimate the effort and difficulties involved.
  • Lack of testing rigour: a typical example is deploying and configuring a fresh test environment for each new build and each test run. That is almost never done when the processes are manual; instead, testers reuse existing systems that may be polluted by previous test runs.

Luckily for us today, tools such as Jenkins or GitLab CI/CD are designed to help solve such problems.

You can write scripts that Jenkins uses to spin up fresh environments, apply customer-specific configuration, deploy the application, and run entire test suites.

That is a powerful way to ensure the quality, consistency, and reliability of your releases.
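Tools differ in syntax, but as an illustrative sketch only (the stage names and helper scripts below are hypothetical, not from any specific project), a GitLab CI pipeline that provisions a fresh environment, deploys, and tests on every push could look like this:

```yaml
stages:
  - provision
  - deploy
  - test

provision_env:
  stage: provision
  script:
    - ./scripts/create_fresh_environment.sh   # hypothetical helper script
    - ./scripts/apply_customer_config.sh

deploy_app:
  stage: deploy
  script:
    - ./scripts/deploy_application.sh

run_tests:
  stage: test
  script:
    - ./scripts/run_full_test_suite.sh
  artifacts:
    reports:
      junit: reports/junit.xml
```

Because every run starts from a freshly provisioned environment, no state from previous test runs can pollute the results, which directly addresses the rigour problem described above.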

7. QA and Operational Excellence

7.1 Why Bother?

In the context of software development, Operational Excellence is the flawless execution of an IT implementation project. Achieving it will strengthen your value proposition and give you an edge over the competition.

If you are unsure why attaining such high standards is necessary, even crucial, for the survival of your business, consider the following question. Would you ride with an Uber driver who has less than five stars? How about four or three?

Operational Excellence guarantees minimum variation in your project success rates, and prospective customers will be comfortable doing business with you.

To attain high standards of project execution, every step in the delivery process should run at a gold standard, up to and including software testing and quality assurance.

Software testing and quality assurance are barriers you erect in your production processes to prevent faults from flowing over to production.

7.2 Agile, DevOps, and Flawless Execution

The standard software testing process is iterative: it starts with a software release, followed by testing and bug fixing, then further releases and testing, until all issues have been closed.

Software Development and Testing Cycles

Several options are available to improve the process and make it more efficient:

  1. Reuse of existing software test suites that are appropriately managed, documented, and maintained.
  2. Leveraging test automation as a stage of Continuous Integration/Continuous Deployment pipelines. Test automation allows you to run complete suites on-demand for the best coverage on every feature branch.
  3. Effective knowledge management through staff training, talent retention, key staff redundancy, knowledge documentation and sharing.
  4. Streamlining testing activities with development by using Agile practices and Test-Driven Development.

What does Agile bring to the world of testing? The Agile manifesto stresses the importance of face-to-face conversations over documentation. That’s a great start.

With practices like stand-up meetings, people can share their problems, prioritize their work, organize their schedules, and coordinate their efforts for the best results.

Most importantly, with Agile practices like daily stand-up meetings, people can shout out if they need help instead of just leaving a note in the ticketing system.

Face-to-face interactions are potent and help eliminate barriers and overcome misunderstandings. They also allow groups to self-organize, leading to better decision-making.

7.3 Team Cooperation and Collaboration

Testing and development were, until recently, separate exercises where the parties involved made little effort to coordinate their work and share their pains.

Both teams operated in silos, and significant overhead was spent trying to coordinate their efforts.

The testing process with all the paperwork involved (issue assessment, documentation, reporting, developer-tester dialogues) does not add value to the product. It will, however, weaken the value proposition, especially when it impacts the cost or schedule or increases project risk.

Aside from the time and material effort spent, much frustration usually follows, impacting team morale and motivation, both of which can be contagious in complex social or organizational groups.

A combination of Agile practices, Test-Driven Development, and Test Automation should be enough to remove most of the hurdles and improve the overall efficiency and synergy of the development and testing teams.

7.4 Building a Career Path for Testers

Manual repetitive testing is the most boring job anybody could have. There is not much glory, fame, or career growth if you work in that space.

Building and supporting test automation systems requires some IT and coding skills at a minimum.

Challenging as it is, a career as an automation expert – or, better still, a DevOps engineer – is a significant step up from a manual test engineer.

What does DevOps add to the mix? DevOps took Quality Assurance quite a bit further by placing operations under its scope. Testing now incorporates different platforms, configurations, deployment methods, and upgrade procedures in addition to traditional functional testing.

Because Operational Excellence also concerns itself with the welfare and morale of employees, this point becomes crucial.

It’s not just good for the people, but it’s also great for the business as it allows smart and hard-working employees to build themselves a career in testing.

This means they now have the option (and the desire) to stay in the organization and grow with it, thus increasing talent retention and improving knowledge management.

8. Final Words

Testing has historically been viewed as an auxiliary exercise, a second-rate tedious non-glamorous task, and a necessary evil to maintain software quality and prevent bugs from making it into production.

Unfortunately, it is also the first place to cut corners when a project is late or over budget. This is not by chance but a consequence of the relatively high budget and extended timeframe allocated to testing.

With Agile and DevOps practices becoming mainstream, testing activities in modern, progressive organisations have been elevated and blended with development and operations.

For example, DevOps engineers are now among the highest-paid, most sought-after technical experts on the market.

Specialised software and best practices have also been developed and perfected to provide testers with all the necessary tools for efficiently completing their work.

9. References

  • Technical Risk Management and Decision Analysis — Introduction and Fundamental Principles
  • Complexity and Complex Systems From Life on Earth to the Universe: A Brief Introduction
  • Book Review: Programming the Universe — A Quantum Computer Scientist Takes on the Cosmos
  • From Abstract Concepts to Tangible Value: Software Architecture in Modern IT Systems
  • Business Requirements and Stakeholder Management: An Essential Guide to Definition and Application in IT Projects

