
Key Methods for Optimizing the Software Testing Lifecycle

Dan Holloran March 17, 2020


Software testing, both automated and manual, is essential for QA, DevOps and IT practitioners looking to maintain CI/CD pipelines without hurting the reliability of their underlying applications and services. Testing can be incorporated across all aspects of software development and delivery, not simply maintained in a silo by your QA and testing team. Testing is like overall service reliability in DevOps – everyone is accountable for its success.

So, we thought it’d be worthwhile to talk about maintaining an agile, effective software testing lifecycle (STLC). Especially with a large number of teams working remotely right now, breakdowns in communication can happen more often and lead to more problems in testing. Automation can lead to fewer complications with the software testing lifecycle, but it’s not a be-all and end-all solution.

There isn’t a QA or testing situation in the world that can eliminate every bug, error or incident. But, DevOps, SRE, and IT engineers all over can use the following methods to reduce the likelihood of a production incident and improve the way they fix issues when they do occur.

What is the software testing lifecycle (STLC)?

Before we can look at improving the software testing lifecycle, we need to understand what it is. The STLC is a process adopted by DevOps and QA teams to outline the specific steps required for adequate testing of their applications and infrastructure in order to achieve the desired quality of systems and services. We’ve outlined the most generally accepted six steps of the software testing lifecycle in the flowchart below.

[Flowchart: the six steps of the software testing lifecycle]

1) Requirement analysis

What are the requirements for testing? Not only should you look at the requirements for the entire system’s software testing lifecycle, but you should also look at the requirements around specific tests across your applications and infrastructure. Are the requirements even testable? What’s the scope of your testing? It’s also necessary to look at what you might be okay with not testing – especially if you’re setting up automation in your testing process.
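As a rough illustration, requirements can be captured in a lightweight, structured form so the team can see at a glance what is testable, what is in scope and what is deliberately excluded. The sketch below is a minimal, hypothetical Python example – the field names and requirement IDs are assumptions, not part of any specific tool.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """A single requirement reviewed during requirement analysis."""
    req_id: str
    description: str
    testable: bool   # can we actually verify this with a test?
    in_scope: bool   # will we cover it in this test cycle?

# Hypothetical requirements, used only to illustrate the analysis step.
requirements = [
    Requirement("REQ-1", "Login responds within 2 seconds", testable=True, in_scope=True),
    Requirement("REQ-2", "Checkout survives a payment-service outage", testable=True, in_scope=True),
    Requirement("REQ-3", "UI 'feels' fast on slow connections", testable=False, in_scope=False),
]

# Flag anything that can't be verified so the team can rewrite it or accept the risk.
for req in requirements:
    if not req.testable:
        print(f"{req.req_id} is not testable as written: {req.description}")
```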

2) Test planning

Once you know the requirements involved with testing, you need to start planning the tests themselves. Also, what are the metrics you’re looking at in order to measure a “successful” testing practice? The software testing plan you decide to implement needs to be based on the overall strategy of your QA and testing process and take the risk of not testing certain things into account. How much risk are you comfortable taking on? A lot of this depends on customer expectations and the underlying services and applications your team builds and supports.
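One way to make the plan concrete is to record coverage, risk tolerance and success metrics alongside each area under test. The snippet below is only a sketch of that idea in Python; the area names and thresholds are made up for illustration.

```python
# Hypothetical test plan: which areas are covered, how risky it is to skip them,
# and what metric defines a "successful" testing practice for each area.
test_plan = {
    "checkout": {"covered": True,  "risk_if_untested": "high",   "target_pass_rate": 1.00},
    "search":   {"covered": True,  "risk_if_untested": "medium", "target_pass_rate": 0.98},
    "admin-ui": {"covered": False, "risk_if_untested": "low",    "target_pass_rate": None},
}

# Surface the risk the team is consciously taking on by not testing something.
accepted_risks = [area for area, plan in test_plan.items() if not plan["covered"]]
print("Deliberately untested areas:", accepted_risks)
```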

3) Test case development

Design and analyze the outcomes to be achieved from the test conditions. How will you actually run this test and how should you define all of the elements involved with the test case? Look at the levels of depth of testing and the complexity of the product or service you’re testing. At this stage, you’ll create the metrics for coverage and you’ll identify the ideal test environment. Look at the potential test cases and see which can be used for regression testing, security testing, automated testing, etc.
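For teams using a framework like pytest (an assumption here – the STLC itself doesn’t prescribe tooling), a test case can be parametrized and tagged so it’s easy to pull into a regression or other suite later. The function under test and the marker name below are hypothetical.

```python
import pytest

def normalize_username(raw: str) -> str:
    """Hypothetical function under test."""
    return raw.strip().lower()

# Custom markers like "regression" would normally be registered in pytest.ini.
@pytest.mark.regression
@pytest.mark.parametrize(
    "raw, expected",
    [
        ("  Alice ", "alice"),
        ("BOB", "bob"),
        ("carol", "carol"),
    ],
)
def test_normalize_username(raw, expected):
    # One test case design covers several concrete inputs via parametrization.
    assert normalize_username(raw) == expected
```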

4) Environment setup

Now, set up the test environment. If you’re feeling really edgy, maybe you’ll test in production. Figure out the ideal environment for your team to run the test cases and then let it rip. It’s important to look at your environments in the context of how you can mitigate blast radius and risk in case something goes wrong. Risk isn’t only about something going wrong during the test itself – there’s also risk in not identifying bugs or flaws because of the scope or complexity of the tests you’ve run.
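A common way to keep the blast radius small is to stand the environment up fresh for each test and tear it down afterwards. The pytest fixture below sketches that pattern with an in-memory SQLite database; the schema is invented purely for illustration.

```python
import sqlite3
import pytest

@pytest.fixture
def test_db():
    # Ephemeral, isolated environment: nothing is shared with production,
    # so a failed or destructive test has no blast radius outside this connection.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    yield conn
    conn.close()  # teardown runs even if the test fails

def test_insert_user(test_db):
    test_db.execute("INSERT INTO users (name) VALUES ('alice')")
    count = test_db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 1
```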

5) Test execution

Now’s the fun part – you get to execute the tests and log the bugs, defects and vulnerabilities they find. You’ll run the tests, monitoring them to ensure they don’t fail and that they run how they’re supposed to. Additionally, you’ll have to keep updated reports and logs around the tests and the problems they detect. You’ll ensure the tests meet the criteria you developed at the start, and you’ll update your traceability metrics to track the test’s progress.
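If the suite is driven by pytest (again, an assumption about tooling), execution can be kicked off programmatically and the results written to a machine-readable report that feeds the logs and traceability metrics mentioned above. The marker and file name below are placeholders.

```python
import sys
import pytest

# Run only the tests tagged for this cycle and write a JUnit-style XML report
# that can be archived, diffed between cycles and fed into dashboards.
exit_code = pytest.main([
    "-m", "regression",        # hypothetical marker selecting this cycle's tests
    "--junitxml=report.xml",   # machine-readable results for logs and traceability
    "-q",
])

# A non-zero exit code signals failed tests (or collection errors) to the CI pipeline.
sys.exit(exit_code)
```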

6) Test cycle closure

The test is done! You’ve compiled and logged all of the test’s results and you’ve analyzed these reports for actionable insights. Are all of the planned deliverables delivered? Did you get the results from the test case you were looking for? If so, now you need to archive the test environment and process to understand how you conducted this test in case you want to use it in the future. Hold a retrospective, just like you would hold a post-incident review for a production incident. How do you learn from these tests to pass critical information on to the rest of the DevOps and IT teams in order to help bolster service reliability without slowing the development speed more than necessary?
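At closure time it helps to turn the raw report into a short summary the whole team can review in the retrospective. The sketch below reads the hypothetical report.xml produced during execution, using only the Python standard library and the common JUnit XML attribute names.

```python
import xml.etree.ElementTree as ET

# Summarize the archived test report for the test cycle retrospective.
# Assumes a JUnit-style report.xml was produced during test execution.
root = ET.parse("report.xml").getroot()
suite = root if root.tag == "testsuite" else root.find("testsuite")

total = int(suite.get("tests", 0))
failures = int(suite.get("failures", 0))
errors = int(suite.get("errors", 0))
skipped = int(suite.get("skipped", 0))
passed = total - failures - errors - skipped

print(f"Cycle summary: {passed}/{total} passed, "
      f"{failures} failures, {errors} errors, {skipped} skipped")
```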


Goals of a software testing lifecycle

The primary goal of a software testing lifecycle should be to expose bugs, vulnerabilities and performance errors in order to improve the overall quality of applications and services. Secondary goals could include things like creating a repeatable process for finding these problems or discovering ways to improve software development and delivery lifecycles. But, at the end of the day, software testing is meant to deliver better experiences to the end-users of your production environment.

It can be difficult to quantify reliability or to determine the overall importance of an efficient software testing lifecycle. Measuring pure downtime vs. uptime for an entire service doesn’t portray the complete picture of reliability. QA engineers and software engineers in the testing world need to think creatively about what their tests should truly uncover in order to improve the lives of customers. Then, when a test appears to successfully deliver the information you need, automate the automatable. The more you can proactively run tests through your environments and identify problems before customers become aware of them, the better your customers will feel about using your product or service.

Quality assurance through better, automated testing

QA and DevOps are similar in the sense that automation should be a core philosophy for each practice. QA is an important element of engineering teams that adopt a DevOps mindset. And, automation is a basic requirement for QA engineers who are implementing a software testing lifecycle for DevOps-centric organizations. DevOps requires the IT and software development teams to work closely together, reduce feedback loops and maintain a reliable service without hindering the speed of development.

Historically speaking, software developers and IT practitioners in charge of deployment and release have often been frustrated with QA engineers for running tests that slow down the CI/CD process. But QA is a crucial part of delivering code to production reliably. So, how do you strike this balance between speed and reliability? A strong implementation of the DevOps mindset – applied by all engineers and IT operations teammates across planning, development, testing, deployment, release and production upkeep – will allow you to have both.

The right level of automation in QA can appease developers and operations teammates. QA engineers can focus on customized test cases for major problems or new services being worked on while automation can consistently maintain a reliability standard across other services and workflows. QA testers, software developers, operations teams and IT security practitioners can now collaborate more in real-time around the largest concerns while automation handles most of the menial tasks in the CI/CD pipeline.

Addressing the entire software testing lifecycle

The entire team is responsible for maintaining the resilience of the services they support. QA engineers aren’t the only ones who should be looking at the overall application and infrastructure quality. Developers can use practices like unit testing to break testing into small chunks of code and continuously test during the development process.
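As a simple illustration of testing in small chunks, a developer can pair each small function with a unit test and run it continuously while coding, long before anything reaches a formal QA pass. The function below is invented for the example; the test uses Python’s built-in unittest module.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical small unit of code a developer might test while writing it."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()  # run continuously during development, before code ever hits QA
```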

Continuous testing shouldn’t only be applied by a QA or testing team – tests can and should be adopted by other IT professionals and developers as well. Then, after tests are conducted, it’s important that cross-functional engineering disciplines come together to discuss what the test results mean for their respective departments. This is often where site reliability engineering (SRE) teams can help improve the overall observability of a team’s applications and services while driving better collaboration between engineering departments.

Incident management preparation

Unfortunately, no matter how robust your software testing lifecycle is, your team will still encounter production incidents. Because of QA’s deep involvement across multiple applications and numerous aspects of a service’s infrastructure, QA engineers are fairly well-informed about a system’s dependencies and connections. So, involving them in the on-call process alongside the IT practitioners and developers who maintain production environments can also lead to more reliable systems. Better testing can help reduce the number of incidents in production but, if reports and post-test retrospectives are done well, it can also lead to a more prepared on-call incident management and response team.


Learn how DevOps and IT operations teams are helping QA engineers get more out of software testing and drive improved incident management. Sign up for a 14-day, free trial of VictorOps to learn about a collaborative approach to maintaining continuous testing and a more reliable CI/CD pipeline through better incident response.

Let us help you make on-call suck less.
