Testing
Why we need to test
Testing is essential to verify the quality of an application against both its functional and non-functional requirements. Of course, we can test manually, but the problems are:
- It is time-consuming and easy to overlook bugs because of the human factor.
- When we ship new features or a bug-fix release, we may introduce more bugs, and things need to be retested.
That’s why we start to automate tests and write them as code. Automation gives us confidence that we made no mistake along the path of a use case, and it speeds up the whole process. Both developers and QA engineers write tests. Even so, we still need manual testing because not everything can be covered by automation, so we also have manual QA testers.
Types of Software Testing
Software Testing is a crucial phase of the software development cycle. There are many different types of software testing, and each has its own purpose. The type of software testing you choose depends on your testing objective, the complexity and functionality of your software, and your testing team. The image below lists some of the most common types of software testing used today.
Source: TatvaSoft - What are the Types of Software Testing?
Functional Testing
There must be something that defines what is acceptable behavior and what is not.
This is specified in a functional or requirement specification: a document that describes what a user is permitted to do, so that the conformance of the application or system to it can be determined. Sometimes this also entails validating actual business-side scenarios.
Therefore, functional testing can be carried out via two popular techniques:
- Testing based on Requirements: Contains all the functional specifications which form a basis for all the tests to be conducted.
- Testing based on Business scenarios: Contains the information about how the system will be perceived from a business process perspective.
Testing and Quality Assurance are a huge part of the SDLC process. As testers, we need to be aware of all the types of testing, even if we're not directly involved with them daily.
Non-Functional Testing
Non-Functional Testing is a type of testing used to evaluate a software application's performance, usability, dependability, and other non-functional characteristics. It is intended to test a system's readiness according to non-functional criteria that functional testing never considers.
Non-functional testing is essential for confirming the software's reliability and overall quality. The Software Requirements Specification (SRS) serves as the basis for this software testing method, which enables quality assurance teams to check whether the system complies with user requirements. Increasing the product's usability, effectiveness, maintainability, and portability is the goal of non-functional testing. It also helps lower the production risk associated with the product's non-functional components.
There are many kinds of non-functional testing; to name a few well-known ones: performance testing, security testing, reliability testing, volume testing, recovery testing, and visual testing. You can check the full list on BrowserStack.
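As a small illustration of the performance category, a crude timing check can be written as an ordinary test. This is only a minimal sketch: `search_products` and the 200 ms budget are invented for the example, and real performance testing would use dedicated tools (e.g., JMeter or Locust) under realistic load.

```python
import time

def search_products(query):
    # Hypothetical stand-in for the operation under test.
    return [p for p in ["apple", "apricot", "banana"] if query in p]

def test_search_is_fast_enough():
    start = time.perf_counter()
    search_products("ap")
    elapsed = time.perf_counter() - start
    # Arbitrary budget for illustration; real thresholds come from the NFRs.
    assert elapsed < 0.2
```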
End to End Testing
End to end testing (E2E testing) is a software testing method that involves testing an application’s workflow from beginning to end. This method aims to replicate real user scenarios to validate the system for integration and data integrity.
Essentially, the test goes through every operation the application can perform to test how the application communicates with hardware, network connectivity, external dependencies, databases, and other applications. Usually, E2E testing is executed after functional and system testing is complete.
Naturally, detecting bugs in a complex workflow entails challenges. The two major ones are explained below:
- Creating workflows: To examine an app's workflow, test cases in an E2E test suite must be run in a particular sequence. This sequence must match the path of the end-user as they navigate through the app. Creating test suites to match this workflow can be taxing, especially since they usually involve creating and running thousands of tests.
- Accessing Test Environment: It is easy to test apps in dev environments. However, every application has to be tested in client or production environments. Chances are that prod environments are not always available for testing. Testers must install local agents and log into virtual machines even when they are. Testers must also prepare for and prevent issues like system updates that might interrupt test execution.
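To make the workflow idea concrete, here is a minimal E2E-style sketch using Selenium's Python bindings. The URL, element IDs, and credentials are hypothetical placeholders; a real suite would chain many such steps in the end-user's order.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_end_to_end():
    driver = webdriver.Chrome()  # requires a local ChromeDriver setup
    try:
        # Hypothetical app URL, element IDs, and credentials.
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys("demo-user")
        driver.find_element(By.ID, "password").send_keys("demo-pass")
        driver.find_element(By.ID, "submit").click()
        # The workflow would continue: dashboard loads, data is fetched, etc.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```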
Functional Testing Types
Top 5 Functional Testing Types
Testing Type | Description | Example Use Case | Common Tools |
---|---|---|---|
Unit Testing | Tests individual components or functions in isolation to ensure they work as expected. | Verifying that a calculateTotal() function correctly sums values. | JUnit, NUnit, pytest, Jest |
Integration Testing | Tests the interaction between integrated components to identify issues in their interactions. | Checking that a user can log in and retrieve their profile from a database. | JUnit, TestNG, Mocha, Postman |
System Testing | Validates the complete and integrated software system to check if it meets the specified requirements. | Testing the entire booking system of a flight reservation application. | Selenium, TestComplete, QTP |
End-to-End (E2E) Testing | Verifies the entire application workflow from start to finish to ensure all components work together as expected. | Ensuring a user can search for a product, add it to the cart, and complete the purchase. | Cypress, Selenium, TestCafe |
Acceptance Testing | Verifies the system against user requirements and business processes; often performed by end-users or stakeholders. | Ensuring the application meets business requirements before going live. | Cucumber, FitNesse, Manual Testing |
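As a minimal illustration of the unit-testing row, here is what a pytest test for the calculateTotal() example might look like (written in Python's snake_case as calculate_total; the implementation itself is invented for the sketch):

```python
# calculate_total is a hypothetical function invented for this example.
def calculate_total(items):
    return sum(price * qty for price, qty in items)

def test_calculate_total_sums_values():
    # Unit test: the function is exercised in isolation, with no I/O or database.
    assert calculate_total([(10.0, 2), (5.0, 1)]) == 25.0

def test_calculate_total_empty_cart():
    assert calculate_total([]) == 0
```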
Acceptance Testing vs. E2E Testing
Both acceptance testing and E2E testing are essential for validating that the application functions correctly according to specified requirements and workflows. Here’s a deeper look at their roles, importance, and manual implementation options.
Aspect | Acceptance Testing | End-to-End (E2E) Testing |
---|---|---|
Purpose | Validates the system against business requirements and user acceptance criteria. | Verifies complete system workflows and integration points. |
Stakeholders | End users, business analysts, or clients. | Testers or developers. |
Scope | High-level business requirements. | Comprehensive user workflows. |
Typical Questions Addressed | Does the system meet user needs and business objectives? | Does the entire workflow function correctly from start to end? |
Execution | Can be manual or automated. | Can be manual or automated. |
Example | Verify a new feature adheres to business rules. | Verify the process of ordering a product online works as expected. |
- Since we can implement UAT and E2E testing manually, which one is harder to automate?
- Both acceptance testing and E2E testing have their own complexities when implemented in code, but generally, E2E testing is harder to automate due to its broader scope and complexity.
- Why is UAT necessary if E2E testing covers the entire software system?
- While E2E testing ensures that all components of the software function together, UAT specifically validates if the software aligns with end-user requirements and expectations, providing a final check from the user’s perspective.
- How do UAT and E2E testing differ in their testing environments?
- User Acceptance Testing (UAT) typically occurs in a controlled environment that closely mirrors production and focuses on validating business requirements with end-users, while End-to-End (E2E) testing is performed in a comprehensive test environment to validate complete workflows, including system integrations and dependencies.
Regression Testing
Regression testing is considered a software testing practice, not a specific testing type. It involves re-testing to confirm that recent changes haven't introduced new defects in previously working functionalities.
Source: Regression testing
Regression testing verifies that recent code changes haven't adversely affected existing functionality. It involves re-running tests on the modified software to ensure stability and correctness of the system. This testing detects unintended side effects from bug fixes, enhancements, or new features. Automated regression tests are often integrated into CI/CD pipelines for efficiency. By ensuring that new updates do not break existing features, regression testing helps maintain software quality and reliability throughout the development lifecycle.
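One common way to wire regression tests into a CI/CD pipeline is to tag them so the pipeline can select them explicitly. A minimal sketch with pytest, where the marker name and the test itself are illustrative assumptions:

```python
import pytest
from decimal import Decimal

@pytest.mark.regression  # custom marker; register it in pytest.ini to avoid warnings
def test_invoice_rounding_unchanged():
    # Re-run after every change to guard previously working behavior.
    assert Decimal("19.99") + Decimal("0.01") == Decimal("20.00")

# In CI, the regression subset can then be selected explicitly:
#   pytest -m regression
```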
Smoke Testing and Sanity Testing
Smoke testing and sanity testing are both relevant and commonly used in *mobile app build testing*, as they help ensure the stability and functionality of the app at different stages of the development process.
After each mobile app update, the app must undergo smoke testing and sanity testing to validate its functionality. Although the definitions of sanity testing and smoke testing are quite similar, they should not be used interchangeably.
A good distinction between sanity testing and smoke testing is the depth of the testing objectives. Sanity testing is about making sure that a specific change or bug fix works as intended after a code change, while smoke testing verifies whether the app works at its bare minimum. Sanity testing has a narrower scope than smoke testing.
Source: guru99 - Sanity Testing Vs. Smoke Testing – Difference Between Them
Smoke Test Example: Verify that the mobile app launches successfully, the user can log in using default credentials, and the main dashboard loads correctly. This involves checking that critical functions like navigation between primary screens, basic user interactions, and essential features (e.g., viewing a list of items) are operational. If these tests pass, the build is considered stable for further testing.
Sanity Test Example: After fixing a bug in the user profile update feature, run tests to ensure users can now update their profiles correctly, the changes are reflected immediately, and no new issues arise in the profile management area. This focuses only on the updated functionality to confirm the fix works.
Regression testing is a software testing practice rather than a testing type in itself. It contains multiple types of tests, with sanity tests serving as a checkpoint in the process that decides whether the build can proceed to the next level of testing.
Basically, sanity testing works like regression testing but deals with a smaller test suite. This subset contains critical test cases that are run first, before the whole package of regression tests is examined.
Reference: Katalon - Sanity Testing vs Smoke Testing: In-depth Comparison
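In code, this layering is often expressed by tagging tests so the smoke subset runs first, a sanity subset next, and the full regression suite last. A sketch assuming pytest, with illustrative marker names and placeholder bodies:

```python
import pytest

# Marker names are illustrative; they would be registered in pytest.ini.

@pytest.mark.smoke
def test_build_smoke_app_boots():
    # Broad and shallow: does the build work at its bare minimum?
    app = {"status": "running"}  # stand-in for launching the real app
    assert app["status"] == "running"

@pytest.mark.sanity
def test_sanity_profile_update_fix():
    # Narrow and deep: did the specific change land correctly?
    profile = {"name": "old"}
    profile["name"] = "new"      # stand-in for the fixed update flow
    assert profile["name"] == "new"

# Typical pipeline ordering (illustrative):
#   pytest -m smoke    -> gate: is the build testable at all?
#   pytest -m sanity   -> gate: does the recent fix behave?
#   pytest             -> full regression suite
```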
How to know what to test?
Here is a comprehensive guide on determining what to test in software development, emphasizing practical and efficient testing strategies. The article covers key concepts, strategies, and practical examples to help developers focus their testing efforts effectively.
I found Charlie Roberts' video on "Testing Software and Systems at Amazon" particularly insightful, especially his analogy that effective testing is akin to asking the right questions.
Other interesting topics he touches on include mutation testing in unit tests and validating that local test data is representative of real-life situations.
Key Points
- Practical Testing Pyramid: Aligns with the traditional testing pyramid but emphasizes a practical approach tailored to your application's needs.
- End-to-End (E2E) Tests: Focus on high confidence tests that run through the entire system.
- Integration Tests: Verify the interaction between different parts of the system.
- Unit Tests: Test individual functions or components in isolation.
- Code Coverage vs. Use Case Coverage:
- Code Coverage: Measures how much of your code is executed by tests. It's a useful metric but not the ultimate goal.
- Use Case Coverage: Focuses on covering the real user scenarios and workflows that the application must support. This is more aligned with user needs and ensures meaningful testing.
- How to Decide What to Test:
- Identify Critical Paths: Focus on the most crucial user interactions and workflows in your application.
- Test the Happy Path First: Start with scenarios where everything goes right to ensure the basic functionality works (a short sketch follows this list).
- Consider Edge Cases: Once the happy path is covered, think about less common scenarios or potential failure points.
- Use Realistic Data: Tests should use data that reflects actual usage to be meaningful and effective.
- Automated Testing Strategies:
- Write Tests that Add Value: Focus on tests that provide meaningful feedback and confidence, rather than achieving high code coverage for its own sake.
- Maintainable Tests: Tests should be easy to understand, maintain, and adapt to changes in the codebase.
- Test in the Right Place: Ensure tests are located at the appropriate level of the testing pyramid to balance speed and confidence.
- Testing Tools and Techniques:
- Static Analysis Tools: Use tools like ESLint and Prettier to catch potential issues early.
- Linting and Formatting: Ensure code consistency and catch errors before they become problematic.
- Continuous Integration (CI): Integrate tests into your CI pipeline to catch issues early in the development process.
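To ground the happy-path and edge-case items above, here is a short sketch; the `withdraw` function is invented for the example:

```python
import pytest

def withdraw(balance, amount):
    # Hypothetical function invented for this example.
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdraw_happy_path():
    # Happy path first: the common, everything-goes-right scenario.
    assert withdraw(100, 30) == 70

def test_withdraw_edge_cases():
    # Edge cases next: boundaries and failure points.
    assert withdraw(100, 100) == 0   # exact balance boundary
    with pytest.raises(ValueError):
        withdraw(100, 101)           # overdraw
    with pytest.raises(ValueError):
        withdraw(100, 0)             # non-positive amount
```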
Code Coverage vs. Use Case Coverage
Aspect | Code Coverage | Use Case Coverage |
---|---|---|
Definition | Measures the percentage of code executed by tests | Measures how well tests cover actual user scenarios |
Focus | Lines, branches, and functions | Real-world user interactions and workflows |
Advantages | Identifies untested code | Ensures meaningful, user-focused testing |
Disadvantages | May miss real user issues | Harder to measure quantitatively |
Best Practice | Use as a metric, not a goal | Focus on critical paths and user experience |
Test Coverage in Unit Testing
Test coverage in unit testing measures the extent to which the source code of an application is tested by a particular set of tests. It quantifies how much of your code is exercised by unit tests, helping to identify untested parts of a codebase. Here’s an overview:
Types of Test Coverage
- Statement/Line Coverage
- Definition: Measures the percentage of executable statements that have been executed by the tests.
- Formula:
(Number of executed statements / Total number of statements) * 100
- Example: In a function with 10 executable lines of code, if 8 lines are executed by the tests, the statement coverage is 80%.
- Branch Coverage
- Definition: Measures the percentage of branches (if-else conditions) that have been executed by the tests.
- Formula:
(Number of executed branches / Total number of branches) * 100
- Example: If a function has 5 decision points and tests execute 4 of them, the branch coverage is 80%.
- Condition Coverage
- Definition: Measures the percentage of boolean expressions evaluated to both true and false.
- Formula:
(Number of executed conditions / Total number of conditions) * 100
- Example: In an if-statement `if (a && b)`, condition coverage checks whether both `a` and `b` have been evaluated as true and false.
Example in Python
Here’s a function that calculates discounts based on different conditions:
def calculate_discount(price, customer_type):
    discount = 0
    # Regular customers: 10% over 100, 5% over 50, otherwise no discount.
    if customer_type == "regular":
        if price > 100:
            discount = 10
        elif price > 50:
            discount = 5
    # VIP customers: 20% over 200, 15% over 100, otherwise 5%.
    elif customer_type == "vip":
        if price > 200:
            discount = 20
        elif price > 100:
            discount = 15
        else:
            discount = 5
    return price - (price * discount / 100)
Here are test cases that do not fully cover all statements, branches, and conditions:
def test_calculate_discount():
    assert calculate_discount(150, "regular") == 135.0  # regular customer, price > 100
    assert calculate_discount(70, "regular") == 66.5    # regular customer, 50 < price <= 100
    assert calculate_discount(50, "vip") == 47.5        # vip customer, price <= 100
    # Commented out tests:
    # assert calculate_discount(250, "vip") == 200.0    # would test vip customer, price > 200
    # assert calculate_discount(120, "vip") == 102.0    # would test vip customer, 100 < price <= 200
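To measure which of these branches the suite actually exercises, the code can be run under coverage.py. Below is a minimal sketch of its Python API, assuming `calculate_discount` lives in a module named `discounts` (the module name is an assumption); in practice the same result is usually obtained by running `pytest --cov` via the pytest-cov plugin.

```python
# Sketch using coverage.py (pip install coverage); "discounts" is an
# assumed module name for the calculate_discount code above.
import coverage

cov = coverage.Coverage(branch=True)  # measure branch coverage as well
cov.start()

from discounts import calculate_discount
calculate_discount(150, "regular")
calculate_discount(70, "regular")
calculate_discount(50, "vip")

cov.stop()
# Prints a table of missed lines/branches, e.g. the vip price > 200 and
# 100 < price <= 200 branches (and the regular price <= 50 path) that
# the commented-out tests would have covered.
cov.report(show_missing=True)
```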