To motivate this discussion, we will touch upon three topics, and the central role software testing plays in their success. These topics are value proposition, continuous delivery, and software product value.
We briefly introduced the term value proposition in our article on solution architecture. Michael Lanning and Edward Michaels coined the term Value Proposition (VP) in their 1988 staff paper for McKinsey & Company.
The paper stated that “a business is a value delivery system“, and a value proposition is “a clear, simple statement of the benefits, both tangible and intangible, that the company will provide, along with the approximate price it will charge each customer segment for those benefits”.
Your value proposition is what makes your offering unique. Naturally, you want to offer maximum value at the lowest price if you wish to stay ahead of the competition.
Anybody who has worked in software testing can appreciate the impact of testing activities on a project’s budget, especially when testing is a predominantly manual activity that relies on outdated methods.
Continuous delivery is the ability to deliver software frequently and on demand. It requires well-established continuous integration and deployment pipelines built with sophisticated DevOps tools.
It also requires DevOps practices in which development, build, testing, and operations are done by more or less the same people, a significant deviation from the silo approach that separated both worlds.
There is no place for inefficient manual testing in the DevOps world. Testing must be completed in several stages, targeting different areas, stopped as soon as an issue is deemed a blocker, and automatically restarted with every new release.
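The staged process described above, where testing stops as soon as an issue is deemed a blocker and restarts with every new release, can be sketched in a few lines. This is an illustrative model only; the `Stage` and `run_pipeline` helpers are made up for this article and do not belong to any real pipeline tool.

```python
# Hypothetical sketch of multi-stage testing: run stages in order and
# stop at the first blocking failure, as a DevOps pipeline would.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    run: Callable[[], bool]        # returns True if the stage passes
    blocker_on_fail: bool = True   # a failure here halts the pipeline

def run_pipeline(stages: List[Stage]) -> List[str]:
    """Run stages in order; stop as soon as a blocking stage fails."""
    results = []
    for stage in stages:
        passed = stage.run()
        results.append(f"{stage.name}: {'pass' if passed else 'fail'}")
        if not passed and stage.blocker_on_fail:
            break  # a blocker stops the run; a new release restarts it
    return results

results = run_pipeline([
    Stage("unit", lambda: True),
    Stage("integration", lambda: False),  # simulated blocking failure
    Stage("system", lambda: True),        # never reached
])
```

In a real pipeline each stage targets a different area (unit, integration, system), and the restart-on-release behaviour is provided by the CI/CD tool rather than by application code.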
Finally, we close this introduction by asking the reader the following question: given two companies developing software products that offer the same digital services, would you pay more for the product of the company that has invested significantly more in unit, integration, and functional test suites, all available within the source code and runnable in automated pipelines?
If the answer is in the affirmative, this article is for you.
The first section of this article introduces the V-Model, an intuitive paradigm for understanding software testing. The following section will discuss the two stages of verification and validation before moving on to specific topics on the evolution of software testing throughout the product lifecycle. Finally, we explain the testing process used in Agile and contrast it with Waterfall-based approaches.
2. Table of Contents
- 1. Overview
- 2. Table of Contents
- 3. The V-Model
- 4. Verification and Validation
- 5. The Business Value of Software Testing
- 6. Dynamics of Software Testing
- 7. QA and Operational Excellence
- 8. Final Words
- 9. References
- 10. Featured Articles
3. The V-Model
The V-Model is a diagram representing the main stages of the Software Development Lifecycle.
Whether you are using Agile or Waterfall to manage your deliveries, the V-model remains a faithful representation of the various stages that a project goes through, albeit with minor variations such as the inclusion of an iterative element for Agile-based methodologies.
The different types of testing can be defined as follows:
- Code Review: verification of requirements by manual inspection of code changes. Typically done on feature branches and before merging into the master branch.
- Unit Tests: verify subcomponents (such as classes and functions) of the application. Despite their high cost of ownership, unit tests remain a valuable tool when applied in the right situations.
- System Integration Tests (SIT) verify the connectivity and adherence to specifications of interfaces between different systems or components.
- System Testing ensures the solution works by verifying all functional and non-functional requirements, including performance testing. It also ensures that existing features have not been broken by running regression test suites.
- User Acceptance Testing (UAT) validates the operability of the solution. During this stage, stakeholders exercise the system under conditions similar to production.
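To make the unit-testing rung of the V-Model concrete, here is a minimal sketch using Python’s standard `unittest` module. The `apply_discount` function is a made-up example, not taken from any real product.

```python
# Illustrative only: a tiny unit under test and its unit tests,
# written with Python's built-in unittest module.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 15), 170.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

# Run the suite programmatically (a CI pipeline would use a test runner).
suite = unittest.TestLoader().loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Tests like these live next to the source code and run in seconds, which is what makes them suitable for automated pipelines.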
4. Verification and Validation
Software testing comprises two discrete phases: verification and validation. During the verification phase:
- The solution is tested in a controlled environment under laboratory conditions. The tests are designed to work under specific conditions that may or may not resemble production.
- Tests are run on isolated subsystems or components, where some dependencies are mocked out of convenience or necessity.
- This phase includes all the types of testing mentioned earlier (code review, unit testing, SIT, stress and load testing) except for UAT.
- It is completed by developers, testers, or DevOps engineers.
On the other hand, the validation phase:
- Requires production-like conditions (infrastructure, configuration, artifacts, user data). Can be run in a live environment in the form of a Pilot.
- Is used to ensure the operability of the solution, the satisfaction of business needs, and the validation of the concept of operations. It is used as the basis for user acceptance and the transfer of ownership to the client.
- Is performed on the end-to-end solution. Mocking is rarely used.
- Is completed by the end-user or the client and validated by the stakeholders.
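The mocking of dependencies that characterises the verification phase can be illustrated with `unittest.mock` from Python’s standard library. The `checkout` function and the payment gateway dependency are hypothetical examples for this sketch.

```python
# Sketch of verification-phase isolation: the unit under test is cut off
# from its real dependency (a payment gateway) using a mock object.
from unittest.mock import Mock

def checkout(cart_total: float, gateway) -> str:
    """Charge the gateway and report the outcome."""
    ok = gateway.charge(cart_total)
    return "confirmed" if ok else "declined"

# In verification we never touch the real payment system:
gateway = Mock()
gateway.charge.return_value = True

status = checkout(49.99, gateway)
gateway.charge.assert_called_once_with(49.99)  # interaction is verified
```

In the validation phase, by contrast, the same flow would run against the real (or production-like) gateway end to end, with mocking rarely used.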
It is not uncommon for testing to conclude with unresolved failures. If the time allocated to the testing phase has been consumed, the risk management team makes a go/no-go decision on the release to production.
The risk is assessed based on the severity of the issue and its impact on operations, functionality, safety (security, compliance, reliability), as well as the timeframe in which a patch to fix it could be released.
A waiver is typically requested from regulatory authorities if the application has to run in a non-compliant mode until the fix is released.
5. The Business Value of Software Testing
5.1 The Value of Manual Testing
Software testing is a labour-intensive activity, especially when it consists chiefly of manual effort. In that situation, it adds little or no value to the product.
When QA activities are predominantly manual and must be repeated for every change, they exist in an enabling or supportive role; they are not objectives per se.
5.2 Generating Value from Testing
Four ingredients are required to generate value from testing.
- The first ingredient is an automated test strategy, which removes any constraints on when and how often you can run your tests, allowing (virtually) unlimited testing capacity.
- The second ingredient is applying Test-Driven Development (TDD), a practice specifically designed to make resource usage and production processes more efficient.
- The third ingredient is constant refactoring to keep technical debt down. This particular requirement does not work without automated testing and a reliable battery of test cases that gives you the confidence you need to touch production code.
- The fourth ingredient is efficient test design, which reduces the total cost of ownership of the test cases while maximizing their utility. A key topic in this discussion is unit tests, as they tend to grow quickly in number and to couple with implementation details.
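One way to keep the cost of ownership of unit tests down, as the fourth ingredient demands, is to test observable behaviour through the public interface rather than implementation details. The `Stack` class below is a made-up example; the point is that the test never touches internal storage and therefore survives refactoring.

```python
# Sketch: testing behaviour, not implementation, to reduce the coupling
# that makes unit suites expensive to own. Stack is an illustrative class.
class Stack:
    def __init__(self):
        self._items = []  # internal detail, free to change in refactoring

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def __len__(self):
        return len(self._items)

# Behavioural test: it only uses the public interface, so a rewrite of
# Stack's internals (e.g. to a linked list) leaves it unchanged.
s = Stack()
s.push("a")
s.push("b")
assert s.pop() == "b"  # last in, first out
assert len(s) == 1
```

Tests coupled to `_items` directly would break on every internal change, which is exactly the refactoring friction the article warns about.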
6. Dynamics of Software Testing
This section investigates how testing effort relates to product size and feature richness over the product’s lifecycle.
Analysing the dynamics of software testing effort starts with two questions:
- How does the testing effort required to achieve adequate coverage increase with the number and size of new features?
- How does the quality of the testing change with the size and complexity of new features?
We will answer those questions next.
6.1 Testing Effort vs Software Features
Any increase in cost, effort, or liability should grow at most linearly if it is to remain a viable investment and not overwhelm the value generated. This requirement applies to the rate at which testing effort grows as a function of lines of code.
New lines of code would need to be added not just to support a new feature but also to the test cases around it. New test cases are required to test new functionality and ensure that existing functionality is not broken.
We can model the total effort of testing features A and B (E_A and E_B, respectively) as the sum of the individual efforts plus an additional amount proportional to the coupling of both features:

E_total = E_A + E_B + E_AB

where E_AB is the effort required to test the coupling between features A and B.
It is easy to see that this coupling effect can span not just features A and B but every single feature in the system. Great architecture strives to ensure minimal coupling of system components and, therefore, can potentially reduce the overall effort.
In reality, this reduction is usually hard to achieve. Complex architecture is not uncommon, and soon enough, it will be challenging to keep track of all the dependencies between the different subsystems.
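The coupling model above can be sketched numerically. With n features there are up to n(n-1)/2 pairwise coupling terms, which is what makes the total effort grow faster than linearly. The effort figures below are invented purely for illustration.

```python
# Numerical sketch of the effort model: total effort is the sum of
# per-feature efforts plus one coupling term per coupled pair of features.
from itertools import combinations

def total_effort(individual: dict, coupling: dict) -> float:
    """individual: feature -> effort; coupling: (a, b) -> extra effort."""
    pair_effort = sum(
        coupling.get((a, b), 0) + coupling.get((b, a), 0)
        for a, b in combinations(sorted(individual), 2)
    )
    return sum(individual.values()) + pair_effort

efforts = {"A": 5.0, "B": 3.0, "C": 4.0}
couplings = {("A", "B"): 2.0, ("B", "C"): 1.0}  # A and C are decoupled
total = total_effort(efforts, couplings)        # 5 + 3 + 4 + 2 + 1 = 15.0
```

A well-decoupled architecture keeps most coupling terms at zero, which is precisely the effort reduction the previous paragraph describes as desirable but hard to achieve.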
To mitigate any risk resulting from inadequate coverage, the optimal strategy would be to run all kinds of tests. The cost associated with the extra coverage is that the total effort increases at a faster rate than we would like.
The figure above shows roughly the dynamics of this change. Theoretically, the total effort increases non-linearly with new features. An initial investment in test automation might seem prohibitive, but in the long run it serves to keep the total effort down.
If you have an excellent test automation infrastructure in place, the additional effort for new features is limited to writing (not running!) the new test cases.
Although new test cases should cover existing functionality and coupling effects, you do not have to compromise on the scope of testing to keep the overall effort manageable; you can run the entire suite anytime you wish. This option facilitates the early detection of broken features.
6.2 Quality vs Size and Complexity
Now we look at the second question: how testing quality varies with respect to the size and complexity of the features under test. First, a few words on what we mean by size and complexity in this context.
Size and complexity may refer to things like high availability, cloud deployment, clustering, and complex (perhaps third-party) integration of many subsystems.
The requirements to test this category of features are a bit more involved than the typical functional tests of user stories.
What we generally see in this case is a sudden drop in quality for a number of reasons:
- Lack of adequate technical knowledge: testers may not have adequate knowledge of such complex setups, or perhaps not enough experience, and thus underestimate the effort and difficulties involved.
- Testing rigour: a typical example is deploying and configuring a fresh test environment for each new build and each test run. That is almost never done when the processes are manual; instead, testers use existing systems that may be polluted by previous test runs.
- Missing non-functional requirements: such as performance, fault tolerance, deployment options, security, or support for multiple OS platforms.
Luckily for us today, tools such as Jenkins or GitLab CI/CD are designed to help solve such problems.
You can write scripts that Jenkins uses to spin up fresh environments, apply customer-specific configuration, deploy the application, and run entire test suites.
That’s a very powerful way to ensure the quality, consistency, and reliability of your software.
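The fresh-environment-per-run discipline described above can be shown in miniature with a temporary workspace. In a real Jenkins or GitLab pipeline the same idea is applied at the infrastructure level (fresh containers or VMs per run); the helper functions here are hypothetical and only demonstrate the principle.

```python
# Miniature of "a fresh test environment for each run": every test gets a
# brand-new workspace, so pollution from one run cannot leak into the next.
import shutil
import tempfile
from pathlib import Path

def run_in_fresh_env(test) -> bool:
    """Give the test a brand-new workspace, then tear it down."""
    workspace = Path(tempfile.mkdtemp(prefix="testenv_"))
    try:
        return test(workspace)
    finally:
        shutil.rmtree(workspace)  # nothing survives into the next run

def leaves_a_file(ws: Path) -> bool:
    (ws / "state.txt").write_text("polluted")  # deliberately dirties the env
    return True

def expects_clean_env(ws: Path) -> bool:
    return not (ws / "state.txt").exists()  # passes only if the env is fresh

ok_first = run_in_fresh_env(leaves_a_file)
ok_second = run_in_fresh_env(expects_clean_env)  # fresh env: still passes
```

Run against a shared, reused environment, the second test would fail; the teardown-and-recreate discipline is what makes results reproducible.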
7. QA and Operational Excellence
7.1 Why Bother?
In the context of software development, Operational Excellence is the flawless execution of an IT implementation project. Achieving Operational Excellence will strengthen your value proposition and give you an edge over the competition.
If you are unsure why attaining such high standards is necessary, even crucial, for the survival of your business, consider the following question. Would you ride with an Uber driver who has less than five stars? How about four or three?
To attain high standards of project execution, every step in the delivery process should be executed to a gold standard, up to and including software testing and quality assurance.
Software testing and quality assurance are barriers you erect in your production processes to prevent faults from flowing over to production.
7.2 Agile, DevOps, and Flawless Execution
The standard software testing process consists of an iterative approach that starts with software releases, followed by testing, bug fixing, more releases and testing until all issues have been closed.
Several options are available to improve the process and make it more efficient:
- Reuse of existing software test suites that are appropriately managed, documented, and maintained.
- Leveraging test automation as a stage of Continuous Integration/Continuous Deployment pipelines. Test automation allows you to run complete suites on-demand for the best coverage on every feature branch.
- Effective knowledge management through staff training, talent retention, key staff redundancy, knowledge documentation and sharing.
- Streamlining testing activities with development by using Agile practices and Test-Driven Development.
What does Agile bring to the world of testing? The Agile manifesto stresses the importance of face-to-face conversations over documentation. That’s a great start.
With practices like stand-up meetings, people can share their problems, prioritize their work, organize their schedules, and coordinate their efforts for the best results.
Most importantly, people can shout out if they need help instead of just leaving a note in the ticketing system.
7.3 Team Cooperation and Collaboration
Testing and development were, until recently, separate exercises where the parties involved made little effort to coordinate their work and share their pains.
Both teams operated in silos, and significant overhead was spent trying to coordinate their efforts.
The testing process with all the paperwork involved (issue assessment, documentation, reporting, developer-tester dialogues) does not add value to the product. It will, however, weaken the value proposition, especially when it impacts the cost or schedule or increases project risk.
A combination of Agile practices, Test-Driven Development, and Test Automation should be enough to remove most of the hurdles and improve the overall efficiency and synergy of the development and testing teams.
7.4 Building a Career Path for Testers
Manual, repetitive testing is one of the most tedious jobs anybody could have. There is not much glory, fame, or career growth in that space.
Building and supporting test automation systems requires some IT and coding skills at a minimum.
Challenging as it is, a career as an automation expert – or, even better, as a DevOps engineer – is a significant step up from that of a manual test engineer.
What does DevOps add to the mix? DevOps took Quality Assurance quite a bit further by placing operations under its scope. Testing now incorporates different platforms, configurations, deployment methods, and upgrade procedures in addition to traditional functional testing.
Because Operational Excellence also concerns itself with the welfare and morale of employees, this point becomes crucial.
It’s not just good for the people, but it’s also great for the business as it allows smart and hard-working employees to build themselves a career in testing.
This means they now have the option (but also the desire) to stay in the organization and grow with it, thus increasing talent retention and strengthening knowledge management.
8. Final Words
Testing has historically been viewed as an auxiliary exercise, a second-rate tedious non-glamorous task, and a necessary evil to maintain software quality and prevent bugs from making it into production.
Unfortunately, it is also the first place to cut corners when the project is late or has gone over budget. This is not by chance but due to the relatively high budget and the extended timeframe allocated for testing.
Fortunately, that perception is shifting: we now see DevOps engineers becoming the highest-paid, most sought-after technical experts on the market.
Specialised software and best practices have also been developed and perfected to provide testers with all the necessary tools for efficiently completing their work.
9. References
- NASA Systems Engineering Handbook
- MIT OpenCourseWare, Verification and Validation lecture