Shift-left testing is a software testing approach in which QA and development teams discover defects earlier in the cycle. Software engineers benefit by building early checks into the code and handing over more mature functionality to QA for more extensive test coverage. In practice, this typically means automated UI and API testing run early and on every regression cycle. But to get to this stage, you’ll want to consider five related areas to cover the testing bases effectively.
Cover all basic functionality of the product
The key to testing any new product is to highlight its core functionality and any new features added in that release. Some functionality is obvious and some is not, so always refer to the product documentation and requirements to define which areas need coverage. Test coverage is effectively unbounded: you can never cover every edge case before your customers find one in production. But if you lay down a strategy that starts with basic functionality and then expands coverage as you plan further, you can realistically target upward of 95% coverage by following the documentation and your coverage guidelines. Here are a few tips on how to identify the basic functionality:
Consider the categories of applications.
The same application may offer different features depending on the platform it runs on. The most common platforms are desktop, web, and mobile applications:
- Desktop: consider UI, business logic, databases, reporting, usability, functionality, performance, and security.
- Web: UI functionality may be limited to the browser of choice, but consider cross-browser functionality, regression, compatibility, and multilanguage support. In addition, place extra emphasis on security, performance, and load.
- Mobile: consider speed of use and lighter UI functionality, limited CPU/GPU-intensive functionality, and a wide range of device compatibility given the variety of screen sizes, orientations, and battery life. Of course, also cover the other functional areas like regression, integration, exploratory, network, and location-based services.
Apply application testing techniques:
Group your test cases into test approaches such as black box, white box, and grey box. These groups break down further as follows:
Black Box testing
- Primarily user acceptance scenarios: execute the test cases that mirror what a manual user would actually encounter.
- Boundary Value Analysis – define min/max boundaries for inputs and verify that values pass or fail correctly inside, at, and just outside the defined regions.
- Equivalence Partitioning – a method of dividing input data into equivalence classes that should behave identically. Define the ranges and values up front, then execute one representative value per class and verify it is accepted or rejected as expected.
- State transition testing – model the application under test as a finite set of states and verify that each test sequence of events triggers the expected transitions.
- Exploratory testing – apply natural logic and common sense to the product, testing the application as a user would under real-world conditions.
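The boundary-value and equivalence-partitioning techniques above can be sketched in a few lines. The `is_eligible` age check below is a hypothetical example invented for illustration, not part of any real product:

```python
# Hypothetical validator used only for illustration: accepts ages 18-65.
def is_eligible(age):
    """Return True when age falls in the accepted range [18, 65]."""
    return 18 <= age <= 65

# Boundary Value Analysis: probe values at and just outside each boundary.
boundary_cases = {17: False, 18: True, 65: True, 66: False}
for age, expected in boundary_cases.items():
    assert is_eligible(age) == expected, f"boundary case {age} failed"

# Equivalence Partitioning: one representative value per input class.
assert not is_eligible(5)    # class: below range
assert is_eligible(40)       # class: within range
assert not is_eligible(90)   # class: above range
```

The same pattern scales to any input with a defined valid range: a handful of boundary probes plus one representative per partition catches most off-by-one defects cheaply.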
White Box Testing
- Code coverage – focus on validating the business logic through its code paths. This can take the form of unit tests, API tests, and stepping through functions to confirm the logic is actually executed.
- Path coverage – similar to code coverage, this traces the paths through the application’s functionality and identifies where coverage ends. It typically involves applying data sets and sample tests to walk through each functional path.
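As a sketch of path coverage, the toy `shipping_cost` function below (an assumption for illustration, not from any real system) has two decision points, giving four distinct paths; one assertion per path exercises all of them:

```python
def shipping_cost(weight_kg, express):
    """Toy business-logic function with two decision points,
    giving four distinct paths through the code."""
    if weight_kg <= 1.0:
        base = 5.0
    else:
        base = 5.0 + (weight_kg - 1.0) * 2.0
    if express:
        base *= 1.5
    return base

# One unit test per path gives full path coverage of this function.
assert shipping_cost(0.5, False) == 5.0    # light package, standard
assert shipping_cost(0.5, True) == 7.5     # light package, express
assert shipping_cost(3.0, False) == 9.0    # heavy package, standard
assert shipping_cost(3.0, True) == 13.5    # heavy package, express
```

Note that path count grows multiplicatively with decision points, which is why path coverage is usually targeted at small, high-value functions rather than whole modules.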
Basic functionality is defined in the product requirements, so consider that the place to start your shift-left testing plan. Begin with positive test coverage, usually minimal and in the form of “smoke tests”. As functionality is added to builds, spin off more test cases and create more categories for each core functional area as you work down into other end-user test scenarios. Document every test case you write, but prioritize which tests must run on every build versus less frequently once a build is stable. Create a test traceability matrix across platforms to track the full set of scenarios that can be exercised and documented during testing. Regression testing won’t cover every scenario, and some test cases are a large time sink to set up and execute, so choose wisely which shift-left test cases you invest in. And of course, automate as many test cases as possible to avoid manual coverage and duplication in future test rounds.
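One lightweight way to prioritize smoke versus regression runs and keep a simple traceability matrix is a tagged test registry. The test names, priorities, and platforms below are illustrative assumptions, not from the article:

```python
# Hypothetical test registry: every name, priority, and platform here
# is an assumption made up for this sketch.
TEST_CASES = [
    {"name": "login_succeeds",      "priority": "smoke",      "platforms": {"web", "mobile"}},
    {"name": "checkout_flow",       "priority": "smoke",      "platforms": {"web"}},
    {"name": "report_export_pdf",   "priority": "regression", "platforms": {"desktop", "web"}},
    {"name": "rotate_screen_state", "priority": "regression", "platforms": {"mobile"}},
]

def select_tests(priority, platform):
    """Pick the tests to run for a given build: smoke on every build,
    full regression on a slower cadence."""
    return [t["name"] for t in TEST_CASES
            if t["priority"] == priority and platform in t["platforms"]]

print(select_tests("smoke", "web"))       # per-build suite
print(select_tests("regression", "web"))  # scheduled suite
```

Because each entry records both priority and platforms, the same registry doubles as a minimal traceability matrix: filtering by platform shows at a glance which scenarios each platform exercises.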
Perform Test and Code Reviews
Developing your test plan is best complemented by a test plan review or code review. This step is meant for collaborating with other stakeholders to determine whether coverage is sufficient or missing certain angles. Just as engineers perform code reviews with their peers to identify security holes, potential defects, code coverage gaps, and incomplete functionality, a shift-left testing review is meant to qualify similar matters. Test reviews (and code reviews for automation) are a proactive way to identify missing areas of testing and to get on the same page with your stakeholders about execution.
Test reviews should always be viewed as a positive step in the process of effective shift-left testing coverage. They are not meant to misguide, judge, or criticize anyone over missing functionality or misunderstood coverage. Here are a few tips on how to best approach test and code reviews with your peers:
- Organize your document. Prepare your document in an organized manner, defining your workflow from top to bottom. Your audience should be able to follow the order of your approach and understand that certain test cases may come later if you are demonstrating a sequence of test events. Group your test cases into functional areas that are easy to follow. If test cases depend on other areas, state those dependencies and back them up with links and references to the relevant section. Define each test case with the environment it requires (e.g., test harness, physical space, data sets). Your audience can only help if they can follow an organized workflow.
- Follow other testing examples. When doing a code review, it’s common practice to follow the organization’s conventions for how tests and code are developed. Build your tests within existing development frameworks where they exist. For example, writing Python tests inside a JavaScript framework only creates extra porting and interoperability work; writing the tests in JavaScript instead leaves room for faster integration. In addition, there’s no such thing as too many unit tests. Don’t worry so much about code-coverage percentages; focus instead on test coverage that helps find regressions when they surface later. On the flip side, unit tests are also a good reminder for developers to keep writing testable units (methods, classes, components) that accommodate a wide range of valid and invalid inputs. This strengthens the partnership between QA and developers, building stronger test infrastructure hand in hand as more code lands.
- Seek guidance, not criticism. During the review, make it clear that you are seeking your peers’ opinions and looking for ways to improve your document. Don’t get defensive if your peers are missing context or don’t understand your approach. Walk through your structured approach, but leave room for constructive criticism and feedback. Your audience may see the functionality differently, and viewing it from another perspective can help catch edge cases. Take notes and follow up afterward on applicable feedback.
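As a sketch of the valid-and-invalid-input point above, the hypothetical `parse_quantity` helper below (invented for this example) pairs each accepted input class with inputs that must be loudly rejected:

```python
def parse_quantity(raw):
    """Hypothetical helper: parse a user-entered quantity string,
    rejecting anything that is not a positive integer."""
    if not isinstance(raw, str) or not raw.strip().isdigit():
        raise ValueError(f"invalid quantity: {raw!r}")
    value = int(raw.strip())
    if value == 0:
        raise ValueError("quantity must be positive")
    return value

# Valid inputs parse cleanly.
assert parse_quantity("3") == 3
assert parse_quantity(" 12 ") == 12

# Invalid inputs fail loudly instead of corrupting downstream state.
for bad in ["", "-1", "1.5", "abc", None]:
    try:
        parse_quantity(bad)
        raise AssertionError(f"{bad!r} should have been rejected")
    except ValueError:
        pass
```

Units written this way give reviewers something concrete to probe: each rejected class in the test is an invitation to ask “what other invalid input did we miss?”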
Remember, shift-left testing and code reviews are about partnering with teammates who share your goal: ship quality code and catch defects earlier. By communicating your plan and approach to team members, they can help identify areas you missed early on, saving time and cost when issues are found and addressed at an earlier stage in the cycle.
Cover non-functional, single-user testing like performance and load scenarios
Consider non-functional test coverage as part of your shift-left strategy. Many applications have a backend or service component that must keep working under high load or stress on the system. While networking and production services are typically covered in a DevOps environment, with some form of performance testing baked into the continuous integration process, that rarely includes single-user performance testing at the application level. A simple test case that checks whether a web application takes longer than 5 seconds to render can run independently as part of a smoke test. When a test like this fails on its own, it saves a lot of the time and energy that would otherwise be spent diagnosing it in a production-based environment. Take the time to design lightweight performance and load scenarios in emulated or local environments, minimizing external factors like web servers or cloud services. Before you blame a slow network, make sure your independent tests pass first.
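A minimal single-user performance smoke test might look like the sketch below. Only the 5-second budget comes from the text; `load_dashboard` is a made-up stand-in for the real page-load call:

```python
import time

def load_dashboard():
    """Stand-in for a real page-load call; this stub and its timing
    are assumptions made for the sketch."""
    time.sleep(0.05)  # simulate rendering work
    return "<html>dashboard</html>"

# Lightweight, single-user performance smoke test: fail fast locally
# before blaming a slow network or production environment.
BUDGET_SECONDS = 5.0
start = time.perf_counter()
page = load_dashboard()
elapsed = time.perf_counter() - start

assert page, "page failed to load at all"
assert elapsed < BUDGET_SECONDS, f"load took {elapsed:.2f}s, budget is {BUDGET_SECONDS}s"
```

Because it needs no network, load generator, or deployed environment, a check like this can run on every build alongside the functional smoke tests.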
Incorporate CI/CD into your test
Continuous Integration and Continuous Deployment (CI/CD) approaches are key to faster deployment and real-time updates without disrupting the workflow. By shifting more testing into the development environment, you can catch many more issues with security, functionality, and delayed handoffs before pushing to production. While this is more of a DevOps procedure, here are a few best practices for adopting shift-left testing in your CI/CD pipelines:
- Integrate security. Static application security testing (SAST) analyzes static code so issues can be fixed as they appear in the pipelines. Tools like GitLab can generate SAST reports and display vulnerabilities between the source and its branches.
- Reduce toolchain complexity. Complex toolchains open the system to more security risks: the more tools and custom integrations added to the CI/CD environment, the more ways an attacker can penetrate the different layers of abstraction. By reducing the toolchain, you reduce the number of potential points of failure. Tests can be developed and maintained to identify these holes.
- Secure the system against attacks. Your tests should not only exercise the application’s functionality but also attempt code injection and probe for security holes that would exist in a production environment. Many of these tests won’t reveal anything offline or in a local branch, but the issues surface once the application connects to the network. Keep designing test cases with security in mind, especially for deployments in a live environment made up of live accounts, not just test data.
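As a sketch of a security-minded test, the snippet below uses Python’s built-in sqlite3 module to show a parameterized query treating a classic injection payload as plain data. The table, rows, and payload are illustrative assumptions:

```python
import sqlite3

# Throwaway in-memory database seeded with illustrative data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user(name):
    """Parameterized lookup: the driver escapes the value, so classic
    injection strings are treated as literal data, not SQL."""
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# Functional check: the happy path still works.
assert find_user("alice") == [("alice",)]

# Security check: a classic injection payload must return nothing
# instead of dumping every row.
payload = "alice' OR '1'='1"
assert find_user(payload) == []
```

Checks like these belong in the pipeline next to the functional suite, so a refactor that quietly switches to string-concatenated SQL fails the build immediately.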
Remember that the objective of shift-left testing is to identify defects early and often. The combined approach of test planning, automation, non-functional coverage, and leveraging production-like environments attacks bugs from all angles of code and non-code discovery. Work with different audiences to communicate what’s needed for thorough application coverage. Remember, your audiences will differ (from product to engineering to UX), so be sure to listen and incorporate all angles of feedback into your test planning. Finally, implement your tests early in the cycle and automate accordingly, so the suite can evolve and keep up with the schedule. You don’t want to wait until all the code has landed before executing tests and identifying defects. By working closely with engineering and DevOps on testing sooner, the cost and severity of issues will drop significantly by the time your product is scheduled to go live.