Key Takeaways
- Smoke testing helps identify potential defects early in the software development process.
- Prioritizing and automating critical test cases ensures efficient defect detection.
- Continuous integration and analyzing key metrics improve the overall quality of the software build.
Introduction to smoke testing
Smoke testing is a type of software testing you conduct after new builds or code changes to evaluate basic functionality and stability. The purpose of smoke testing is to identify potential major defects early in the development cycle. As the name suggests, such testing scans your software and looks for “smoke”. If you see smoke, you should halt the build and investigate the source of the issue.
Smoke tests focus on critical test cases and workflows to verify that the major functions of the software are operating as expected. The goal is to determine if the build is stable enough to proceed with more in-depth testing. Smoke testing provides a quick initial assessment to reveal major failures or issues that would prevent further testing.
Smoke testing differs from other software testing approaches in its purpose, scope, and execution:
- Purpose: Smoke testing evaluates overall stability and basic functionality, not the performance of every little feature in a software package.
- Scope: The test suite covers the most important flows, not exhaustive scenarios.
- Execution: Smoke testing is done quickly after new builds, not on a fixed schedule.
So while more rigorous regression testing aims to systematically validate new code changes, smoke testing provides a lightweight pass/fail evaluation to detect major defects before investing in further testing.
Caption: Example of where smoke testing fits in the broader automated software testing workflow.
Types of smoke testing
Smoke testing can take various forms depending on the stage of software development and your testing objectives. The main types of smoke testing are:
Functional testing
Functional smoke tests validate that the critical functions of your software work as expected. They focus on the main workflows, user journeys, and application features. Even though test coverage is limited, significant functional gaps can be detected.
Non-functional testing
Non-functional smoke testing evaluates key quality attributes like performance, security, and reliability at a basic level. Common non-functional smoke tests include load testing, penetration testing, and failover testing. This provides confidence in non-functional aspects before detailed testing.
Integration testing
Integration smoke testing verifies application modules and components work together correctly. Critical integrations between frontend, backend, databases, APIs, and other services are checked to avoid integration defects upfront.
The specific type of smoke testing depends on the project lifecycle stage and testing priorities. But in general, smoke tests exercise the most important functionality across multiple quality dimensions.
When to perform smoke testing
You can perform smoke tests at various stages in the software development life cycle to validate builds and catch defects early. Some key points where smoke tests add value:
Integration testing stage
Conducting smoke tests after integrating components or services is important to check for defects and stability issues arising from integration. This is especially useful in complex systems built from multiple components.
System testing stage
Before progressing to more rigorous system testing, smoke tests help validate that the integrated system meets basic requirements. This weeds out major issues before spending further time on in-depth system testing.
User acceptance testing stage
Smoke tests give confidence that the system is ready for user acceptance testing. By confirming main functions work, smoke testing indicates the system is stable enough for acceptance testing.
Continuous integration
Executing automated smoke tests on every new build is a best practice for continuous integration. This provides rapid feedback on potential regressions and defects introduced in new commits.
In summary, well-timed smoke testing reduces the number of defects that escape into later testing stages. Conducting smoke tests at multiple levels ensures quality and stability across progressive builds.
Smoke testing process and techniques
Smoke testing follows a systematic process to ensure it’s effective and provides maximum value. Here are the key steps involved in carrying out smoke testing:
Identifying critical test cases
The first step is to analyze the system under test and identify the critical test cases that you should include in the smoke test suite. These test cases should cover the most important functionality that provides core value to end users. For example, for a website, critical test cases would include user login, search, adding items to a cart, and checkout.
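The critical test cases above can be sketched as a small suite in Python. The stub `app` dict and test names are illustrative assumptions; in a real project, each test would drive the live application through a browser or API client.

```python
# Stub standing in for the application; each flag marks whether that part of
# the system is currently working. Real tests would hit the live app instead.
app = {"login": True, "search": True, "cart": True, "checkout": True}

def test_login():
    assert app["login"], "user login failed"

def test_search():
    assert app["search"], "search returned no results"

def test_add_to_cart():
    assert app["cart"], "could not add item to cart"

def test_checkout():
    assert app["checkout"], "checkout flow failed"

# The smoke suite is just the ordered list of critical checks.
SMOKE_SUITE = [test_login, test_search, test_add_to_cart, test_checkout]
```

Keeping the suite this small is deliberate: smoke tests cover only the flows whose failure would make further testing pointless.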
Prioritizing tests
Once the critical test cases are identified, they need to be prioritized so that the most important functions are tested first. High-priority test cases usually involve core workflows and frequently used features. Prioritization ensures that critical regressions are caught early if the build is unstable.
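A priority-ordered smoke run might look like the following sketch, which assumes priorities are plain integers (1 = most critical) and halts the run when a critical test fails:

```python
def run_smoke_tests(tests):
    """Run (priority, name, callable) triples in priority order (1 = highest).

    Stops at the first failure of a priority-1 test, since the build is then
    clearly too unstable for the remaining checks to be meaningful."""
    results = []
    for priority, name, test in sorted(tests, key=lambda t: t[0]):
        try:
            test()
            results.append((name, "pass"))
        except AssertionError as exc:
            results.append((name, f"fail: {exc}"))
            if priority == 1:  # critical regression: halt the smoke run
                break
    return results
```

The fail-fast behavior avoids spending time on lower-priority checks once the build is clearly unstable.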
Executing tests
You then execute the prioritized set of test cases on the build to verify that it works as expected. It is best to automate these smoke tests so that they can be run quickly whenever a new build is available. With automation, multiple rounds of smoke testing can be performed every day.
Analyzing results
After test execution, the results need to be analyzed to identify which test cases passed or failed. Failed tests indicate potential regressions or defects introduced in the new build. Any crash, error message, or unexpected behavior is carefully examined.
Reporting defects
All defects found during smoke testing are then consolidated into a test report that is shared with developers. Defect reports should clearly describe the issue and steps to reproduce it along with screenshots or other data. This enables developers to quickly diagnose and fix the problems.
Automating smoke tests
Automating smoke tests provides several key benefits compared to manual smoke testing:
- Increased speed and efficiency - Automated tests can run much faster than manual tests, allowing for more frequent smoke testing. Tests can be run on every code change.
- Improved test consistency - Automated tests perform exactly the same steps every time, reducing human error and fluctuations.
- Enhanced test coverage - Automated testing makes it practical to cover more test cases, scenarios and configurations.
- Earlier defect detection - With frequent automated testing, issues can be caught earlier in the development cycle.
- Reduced testing costs - Automation requires more upfront investment but saves on ongoing manual testing efforts.
Popular test automation tools include Selenium, Appium, Cucumber, TestComplete, and Ranorex. These tools support writing automated UI tests, API tests, unit tests, and integration tests. Frameworks like JUnit and TestNG are also commonly used to structure and run tests.
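In Python, the standard library's unittest module plays a role similar to JUnit and TestNG: tests are grouped as methods on a test class. A minimal sketch, with a stub standing in for the real system under test:

```python
import unittest

# Stub responses standing in for the system under test; a real suite would
# exercise the application through Selenium, an API client, etc.
FAKE_RESPONSES = {"/": 200, "/login": 200}

class SmokeTests(unittest.TestCase):
    def test_homepage_loads(self):
        self.assertEqual(FAKE_RESPONSES["/"], 200)

    def test_login_page_loads(self):
        self.assertEqual(FAKE_RESPONSES["/login"], 200)
```

A CI step can run such a suite on every build with `python -m unittest`, giving the quick pass/fail signal smoke testing is meant to provide.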
When automating smoke tests, you should consider these common challenges:
- Maintenance of tests - Automated tests need to be updated whenever application features change.
- Brittle tests - Automated tests can break with minor UI changes. The page object model pattern can make them more resilient.
- Debugging failures - More effort may be needed to analyze and debug failed automated tests.
- Slow test execution - Very large test suites can slow down test execution and feedback.
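The page object model mentioned above can be sketched as follows: each page class owns its locators and interactions, so a UI change is fixed in one place rather than in every test. The driver here is a stub, and the locators and credentials are hypothetical; with Selenium the driver would be a WebDriver instance.

```python
class StubDriver:
    """Stand-in for a Selenium WebDriver; records typed text and fakes a login."""
    def __init__(self):
        self.fields = {}

    def type(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        # Pretend a click on submit logs in when the stub credentials match.
        return self.fields.get("#user") == "alice" and self.fields.get("#pass") == "s3cret"

class LoginPage:
    # Locators live in one place; a UI change means editing only this class.
    USER_FIELD, PASS_FIELD, SUBMIT = "#user", "#pass", "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.type(self.USER_FIELD, username)
        self.driver.type(self.PASS_FIELD, password)
        return self.driver.click(self.SUBMIT)
```

Tests call `LoginPage(driver).login(...)` and never touch locators directly, which is what keeps them from breaking on minor UI changes.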
Best practices for automated smoke testing include:
- Prioritizing critical test cases for automation
- Using a test pyramid approach with more lower-level tests
- Applying patterns like page object model to improve maintainability
- Tagging and documentation for understanding tests
- Integrating testing into the build and deploy process
- Analyzing test results and optimizing over time
Smoke testing metrics
Smoke testing aims to provide quick feedback on the health of a software build. To measure the effectiveness of smoke testing, teams often track key metrics like:
- Time to Execute - How long does it take to run the entire smoke test suite? The goal is to minimize execution time while still providing adequate coverage. Smoke tests should run quickly.
- Defects Detected - The number of defects or failures identified during smoke testing. More defects indicate problems with the build, and the trend of defects over time shows whether quality is improving or worsening.
- Test Coverage - What percentage of critical components, features, or flows are covered by the smoke test suite? Higher coverage means more comprehensive testing, but too much coverage increases test duration.
- Test Pass Percentage - The pass rate or percentage of smoke tests that execute successfully. A high pass rate signals a stable build suitable for further testing. A low pass rate flags quality issues.
Teams should track these metrics over multiple test runs to identify patterns and trends. Failing tests, low coverage, long duration, or decreasing pass rate may indicate inadequate smoke testing. Optimizing these metrics improves early defect detection.
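These metrics are straightforward to compute from a run's results. A sketch, assuming results are recorded as (test name, passed?) pairs and that coverage is counted over critical flows:

```python
def smoke_metrics(results, started, finished, flows_covered, total_flows):
    """Summarize a smoke run.

    results: list of (test name, passed?) pairs
    started/finished: run timestamps in seconds
    flows_covered/total_flows: critical flows exercised vs. total critical flows
    """
    passed = sum(1 for _, ok in results if ok)
    return {
        "time_to_execute_s": finished - started,
        "defects_detected": len(results) - passed,
        "test_coverage_pct": 100.0 * flows_covered / total_flows,
        "pass_percentage": 100.0 * passed / len(results),
    }
```

Logging this dict per build makes the trend analysis above a simple query over past runs.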
Smoke testing vs. sanity testing
Smoke testing and sanity testing are two techniques used in software testing to evaluate system stability and functionality. While they share some similarities, there are important differences between the two approaches:
Definitions
- Smoke testing refers to preliminary testing performed on a new software build to reveal simple failures that show the build is not ready for further testing. The “smoke” metaphor refers to looking for early signs of problems, just as smoke indicates the presence of fire.
- Sanity testing is a cursory testing effort to determine if a new software build is stable enough to accept more thorough testing. Sanity tests confirm that basic functionality works without worrying about more complex tests.
Differences
- Scope: Smoke tests have a broader scope, examining the entire system. Sanity tests focus on a subset of functionality.
- Test cases: Smoke tests cover critical test cases and primary functions. Sanity tests use a small sample of tests.
- Rigor: Smoke tests take a high-level approach. Sanity testing is more rigorous on the tested features.
- Timing: Smoke tests come first, providing a green light for more testing. Sanity testing comes next to double-check before detailed testing.
When to use each
- Use smoke testing after new builds or integration to quickly find major bugs before further testing.
- Use sanity testing after smoke testing to add lightweight confirmation of core functions and readiness for detailed testing.
Smoke testing provides breadth, while sanity testing adds depth for key features. Use both for a lightweight yet thorough validation.
Smoke testing vs. regression testing
Smoke testing and regression testing are two important techniques used in software testing, but they serve different purposes.
Caption: This shows one example of how smoke tests and regression tests might be set up in a CI pipeline.
Definitions
- Regression testing is a type of software testing that seeks to uncover bugs in existing functionality that may have been introduced due to code changes. It involves re-running previously completed tests to ensure existing features still work as expected.
Key differences
The main differences between smoke and regression testing are:
- Scope: Smoke tests verify a limited set of core functionality, while regression tests cover a wider range of features.
- Test cases: Smoke tests focus on high-priority test cases, while regression testing includes both high- and low-priority tests.
- Frequency: Smoke tests are executed with each new build, while regression testing occurs less frequently.
- Purpose: Smoke testing checks basic stability, while regression testing looks for breakages in existing features.
- Effort: Smoke testing requires less time and fewer resources than regression testing.
Relationship
While smoke and regression testing have distinct purposes, they complement each other in the testing process. Smoke tests provide confidence in basic functionality before regression tests are run. Issues caught in smoke testing can prevent wasted effort running full regression suites on unstable builds. Regression testing provides more comprehensive validation after smoke tests pass. Used together, they improve software quality and developer productivity.
Real-world smoke testing examples
Smoke testing can be applied to various software systems and applications. Here are some real-world examples of how smoke testing is utilized:
Smoke testing web applications
For web applications like e-commerce sites, social media platforms, or content management systems, smoke tests focus on:
- Verifying critical pages load properly
- Testing user login and authentication
- Checking major site navigation and menus
- Validating core site functionality like search, shopping cart, and checkout
- Testing integrations with payment gateways, APIs, external services
Smoke tests for web apps aim to simulate basic end user workflows and ensure the app is stable enough for broader testing. Popular tools used include Selenium, Cypress, and Postman.
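A minimal web-app smoke check can be written with nothing but the Python standard library. The paths below are assumptions for a typical e-commerce site; a real suite would also exercise login and checkout flows with a tool like Selenium or Cypress.

```python
from urllib.error import URLError
from urllib.request import urlopen

# Hypothetical critical paths for an e-commerce site.
CRITICAL_PATHS = ["/", "/login", "/search", "/cart", "/checkout"]

def smoke_check(base_url, paths=CRITICAL_PATHS, timeout=5):
    """Fetch each critical page; record its HTTP status or the error seen."""
    results = {}
    for path in paths:
        try:
            with urlopen(base_url.rstrip("/") + path, timeout=timeout) as resp:
                results[path] = resp.status
        except URLError as exc:
            results[path] = str(exc)
    return results

def is_stable(results):
    """The build passes the smoke check only if every critical page returns 200."""
    return all(status == 200 for status in results.values())
```

A run like `smoke_check("https://staging.example.test")` gives the quick stable/unstable verdict that gates further testing.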
Smoke testing mobile apps
On mobile apps, smoke tests are designed to:
- Confirm the app launches correctly on target devices/emulators
- Check that key screens open without crashing
- Validate major app features and flows
- Test connectivity with remote servers and services
- Verify integration with device capabilities like camera, GPS, notifications
The goal is to spot serious mobile app issues before more extensive testing. Appium is a common automation framework used.
Smoke testing enterprise software
For complex enterprise software like ERP, CRM, or HRIS systems, smoke testing focuses on:
- Making sure core modules/functions deploy successfully
- Checking database connectivity and queries
- Validating APIs and integration points
- Testing authentication, security, and access controls
- Confirming essential reports generate properly
Robust smoke testing is crucial for enterprise systems to catch issues early before full testing begins. REST Assured and SoapUI are popular tools.
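A database-connectivity smoke check can be sketched as below. SQLite stands in for the enterprise database so the example is self-contained; in practice you would swap in your actual driver and connection string.

```python
import sqlite3

def check_database(dsn=":memory:"):
    """Connect and run a trivial query; any failure fails the smoke test."""
    try:
        conn = sqlite3.connect(dsn, timeout=5)
        row = conn.execute("SELECT 1").fetchone()
        conn.close()
        return row == (1,)
    except sqlite3.Error:
        return False
```

The point is not the query itself but the round trip: a successful `SELECT 1` proves the build can reach and talk to its database before deeper tests run.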
Best practices for smoke testing
Define a comprehensive smoke test suite
The smoke test suite should cover the most important flows and functions of the application. Critical user journeys, core functionality, key integrations, and major non-functional aspects like performance and security should all be part of the test suite. Having thorough test coverage ensures that any major issues will be caught during smoke testing.
Maintain and update smoke tests
As the application evolves, you need to maintain and update smoke tests accordingly. You should add new critical flows to the test suite and remove outdated tests. Your teams should review and refresh smoke tests during each release cycle to account for new features or UI changes. Having current tests prevents false positives.
Integrate smoke testing into the development process
For maximum benefit, you should bake smoke testing into the development process. Executing smoke tests should be part of the build verification before any new release. Automated smoke tests can be triggered as part of continuous integration. By making smoke testing a mandatory step, you can fix defects proactively rather than reactively.
Following these smoke testing best practices will improve product quality, reduce bugs in production, and give you more confidence when releasing software.
But to mitigate risk and bolster your release confidence even further, we advise running controlled tests in production in addition to smoke testing.
Controlled testing in production as a complement to smoke testing
The purpose of smoke testing—or any software testing—is to improve the reliability of your applications. It’s to catch bugs, vulnerabilities, and defects before they impact customers. Running controlled tests in a live production environment can help you achieve that goal and strengthen your overall testing program. We advise low-risk production testing with feature flags.
No matter how many pre-production tests you run, whether it’s unit tests, integration tests, or smoke tests, errors will still occur when you expose your software to production traffic. You can’t simulate every edge case.
That’s why, at LaunchDarkly, we encourage developers to break their releases down into individual features, wrap those features in feature flags, and then roll them out to a small subset of users in production. If the rollout leads to a performance degradation, you can toggle off the flag for the problematic feature. And you don’t need to push a change through your deployment pipeline; you resolve the issue instantly at runtime. (Read our deep-dive on testing in production.)
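The bucketing idea behind a percentage rollout can be illustrated in a few lines. This toy sketch is not the LaunchDarkly SDK; the flag name and rollout percentage are hypothetical. Hashing the user key gives each user a stable bucket, so the same users stay in (or out of) the rollout across requests.

```python
import hashlib

# Hypothetical flag configuration: the feature is on for 10% of users.
FLAGS = {"new-checkout": {"enabled": True, "rollout_pct": 10}}

def flag_is_on(flag_key, user_key):
    flag = FLAGS.get(flag_key)
    if not flag or not flag["enabled"]:
        return False  # the kill switch: toggling "enabled" off disables the feature
    # Stable bucketing: the same user always lands in the same bucket (0-99).
    digest = hashlib.sha256(f"{flag_key}:{user_key}".encode()).hexdigest()
    return int(digest, 16) % 100 < flag["rollout_pct"]
```

Flipping `enabled` to False acts as the kill switch: the feature turns off for everyone without a redeploy.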
You can automate this workflow with Release Guardian in LaunchDarkly. In this same scenario, Release Guardian would monitor the operational health of your application during the feature rollout. And upon detecting a regression, it would notify you and automatically roll back the release (i.e., disable the bug-causing feature).
You achieve the greatest software reliability by implementing a holistic testing program that includes controlled production testing and, of course, smoke testing.