CS472 - Software Testing
Labs
This individual assignment is due Sept 12th, 2025
In this lab you will practice writing unit tests and analyzing test coverage using the Python programming language. You will also continue working with Git and GitHub. You will submit your final report for this lab in the team repository you created and used in the Git and GitHub lab.
Software testing
Software testing is the process of evaluating and verifying that a software product or application does what it is supposed to do. The benefits of good testing include preventing bugs and improving performance (IBM).
Tests: Your Life Insurance!
Tests are a crucial part of software engineering. They help to:
- Detect unwanted side effects when modifying code.
- Gain a deeper understanding of a system’s inner workings.
However, the presence of automated tests alone does not guarantee software quality. Important questions to consider include:
- Do the tests cover the entire system, or are some parts left untested?
- To what extent are different parts of the system covered?
Measuring test coverage is a valuable and necessary practice in assessing the effectiveness of a test suite, ensuring that critical components of the software are thoroughly tested.
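To make this concrete, here is a small illustrative example (not part of the lab's starter code) of what line coverage measures. The test below exercises only the `if` branch of the hypothetical function, so a coverage report would flag the `else` line as missing:

```python
# Illustrative example only; names are made up for this sketch.
def classify_balance(balance):
    """Classify an account balance."""
    if balance >= 0:
        return "ok"
    else:
        return "overdrawn"   # never executed by the test below, so reported as missing

def test_classify_positive_balance():
    """Covers only the 'ok' branch; the 'overdrawn' branch stays untested."""
    assert classify_balance(100) == "ok"
```

Running the suite under a coverage tool (for example, `pytest --cov`) lists the statements that were never executed, pointing you at branches like the one above that still need tests.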
Materials & Tools Used for this Session
- Install Python >= 3.8. This exercise was tested with Python versions 3.8.1, 3.9.5, 3.9.6, 3.9.7, and 3.10.10, but any version of Python 3.8+ should work without major configuration issues.
- Download and install an IDE of your choice. Popular options are Microsoft Visual Studio Code and IntelliJ IDEA.
- pytest, the most popular Python testing framework. It makes it easy to write small, readable tests and can scale to support complex functional testing for applications and libraries (see the minimal example after this list).
- Flask, a web framework: a Python module that lets you develop web applications easily.
- (Optional) Read about RESTful APIs.
- Test Coverage repository.
- Test Driven Development repository.
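Before diving in, here is a minimal, hypothetical pytest example to show the style you will be writing in; the file and function names are made up for illustration:

```python
# test_sample.py - a minimal pytest example (hypothetical file).
def add(a, b):
    """A trivial function under test."""
    return a + b

def test_add():
    """pytest discovers any test_* function and runs plain assert statements."""
    assert add(2, 3) == 5

def test_add_negative():
    """Multiple small, focused tests are idiomatic in pytest."""
    assert add(-1, 1) == 0
```

Run it with `pytest test_sample.py`; pytest collects every `test_*` function automatically and reports each failure with detailed assertion introspection.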
Task 1 – Test Coverage
In this task, you will practice writing tests and improving your test coverage in Python. You will generate a test coverage report, interpret it to determine which lines of code lack test cases, and write test cases to cover those lines.
Task 1.1: Set Up Your Team Repository for the Test Coverage Lab (5 pts)
This lab is a continuation of the previous Git and GitHub lab. You will continue working in your existing team repository, organizing files using folders rather than creating a new repository.
1. Organize Your Repository
- Each team member should create a dedicated folder named `test_coverage_lab` in their local clone of the team repository to store all files related to the Test Coverage Lab.
- Use the exact spelling `test_coverage_lab` to maintain consistency and avoid duplicate folders when pushing to the repository.
2. Copy and Set Up the Starter Files
- Each team member should clone the provided Test Coverage repository and copy the starter files into the `test_coverage_lab/` folder in their local clone of the team repository.
- While copying, ensure that the `.git/` directory is NOT included, to avoid creating a nested repository.
- No need to push changes to the main repository yet; this step is just for setting up your local environment.
3. Build and Verify the Setup
- Navigate into the `test_coverage_lab` folder and install the required dependencies from `requirements.txt`:

```bash
cd test_coverage_lab
pip install -r requirements.txt       # For stable versions
```

  or

```bash
pip install -r requirements-dev.txt   # For latest updates
```

- Run `pytest` to check that the provided test cases pass.
- Ensure that everything builds successfully before proceeding.
- If any issues arise, debug and resolve them as needed.
- If your tests run successfully, you should see output similar to this:

```
tests/test_account.py::test_account_role_assignment PASSED              [100%]

---------- coverage: platform darwin, python 3.9.7-final-0 -----------
Name                 Stmts   Miss  Cover   Missing
--------------------------------------------------
models/__init__.py       7      0   100%
models/account.py       45     18    60%   30, 34, 47-49, 53-55, 59-63, 67, 71, 76, 81-82
--------------------------------------------------
TOTAL                   52     18    65%
----------------------------------------------------------------------
1 passed in 1.06s
```

- Commit the successful build to your forked repository:

```bash
git add .
git commit -m "Successful setup and initial test run"
git push origin main
```
4. Include in Your Lab Report
As the first task in your final lab report, include the following:
- **Screenshot of Your Terminal**
  Provide a screenshot that shows:
  - The `test_coverage_lab` folder inside your repository.
  - A successful build of the setup files (i.e., running `pytest` without errors).
- **Commit Link from Your Forked Repository**
  Submit a commit link that includes:
  - The `test_coverage_lab` folder added to your repository.
  - Your commit history reflecting the setup process.
This section serves as proof that you successfully set up the environment before proceeding to Task 1.2.
Task 1.2: Working with Python Test Coverage
In this task, you will improve test coverage by writing new test cases. All work will be done in the `test_coverage_lab/` folder, and you will submit your changes through a pull request (PR) from your branch to the team repository.
1. Understanding the Current Test Coverage
- Refer to your previous `pytest` test coverage report from Task 1.1.
- Ensure you understand which lines of code are missing test cases before proceeding.
2. Assigning Test Cases
Your team will divide the uncovered code areas among students. Below are suggested tests that need to be implemented:
| Student # | Description | Target Method |
|---|---|---|
| Student 1 | Test account serialization | to_dict() |
| Student 2 | Test invalid email input | validate_email() |
| Student 3 | Test missing required fields | Account() initialization |
| Student 4 | Test positive deposit | deposit() |
| Student 5 | Test deposit with zero/negative values | deposit() |
| Student 6 | Test valid withdrawal | withdraw() |
| Student 7 | Test withdrawal with insufficient funds | withdraw() |
| Student 8 | Test password hashing | set_password() / check_password() |
| Student 9 | Test account deactivation/reactivation | deactivate() / reactivate() |
| Student 10 | Test email uniqueness enforcement | validate_unique_email() |
| Student 11 | Test deleting an account | delete() |
Your team should discuss who will implement each test.
3. Writing Your Test Case
- Open `tests/test_account.py` and add your assigned test case.
- Include your details at the top of your test case, as in the example below:

```python
# ===========================
# Test: Account Role Assignment
# Author: John Businge
# Date: 2025-01-30
# Description: Ensure roles can be assigned and checked.
# ===========================

def test_account_role_assignment():
    """Test assigning roles to an account"""
    account = Account(name="John Businge", email="johnbusinge@example.com", role="user")
    db.session.add(account)
    db.session.commit()

    # Retrieve from database
    retrieved_account = Account.query.filter_by(email="johnbusinge@example.com").first()
    assert retrieved_account.role == "user"

    # Change role and verify
    retrieved_account.change_role("admin")
    db.session.commit()

    updated_account = Account.query.filter_by(email="johnbusinge@example.com").first()
    assert updated_account.role == "admin"
```
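As a second illustration, here is a minimal sketch for one of the assigned tests (Student 5: deposit with zero/negative values). The behavior of `deposit()` is an assumption here; check the actual implementation in `models/account.py` and adjust the expected exception (or return value) accordingly. The sketch also assumes `pytest` is imported at the top of the test module.

```python
# ===========================
# Test: Deposit with Zero/Negative Values (sketch)
# Author: <your name>
# Date: <date>
# Description: Ensure deposit() rejects non-positive amounts.
# NOTE: Assumes deposit() raises ValueError for invalid amounts;
# adapt this to whatever models/account.py actually does.
# ===========================

def test_deposit_rejects_non_positive_amounts():
    """Deposits of zero or negative amounts should be rejected"""
    account = Account(name="Jane Doe", email="janedoe@example.com", role="user")
    db.session.add(account)
    db.session.commit()

    with pytest.raises(ValueError):
        account.deposit(0)
    with pytest.raises(ValueError):
        account.deposit(-50)
```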
4. Committing and Pushing Your Test Case
- **Create a new branch for your test case:**

```bash
git checkout -b add-test-<your-test-name>
```

- Commit your changes.
- **Push to your forked repository:**

```bash
git push -u origin <your-branch-name>
```
5. Submitting a Pull Request
- Open a Pull Request (PR) from your branch to the team repository.
- In the PR description, include:
  - A brief summary of what your test case does.
  - The line(s) of code covered in `models/account.py`.
6. Lab Report Submission
As the second task in the report, include:
- A link to your Pull Request.
- A copy of your test case.
- A brief explanation of what your test does and why it is important.
Task 2: Test-Driven Development (TDD) Lab Assignment (15 pts)
🔍 Overview
This lab follows a Test-Driven Development (TDD) approach where you will:
- Write a test case first for a missing feature.
- Run the test and observe it fail (Red Phase).
- Implement the feature to make the test pass (Green Phase).
- Refactor the code to improve structure (Refactor Phase).
Each student will be responsible for one test case and the corresponding implementation in the Counter API.
Refer to the README.md file in the TDD repository for setup instructions and common errors & solutions.
Task 2.1: Setting Up Your Work Environment (5 pts)
1. Organize Your Repository
- Each team member should create a new folder in their local clone of the team repository, similar to the previous lab.
- The folder name should be `tdd_lab` (use exact spelling for consistency).
- This folder will store all files related to this TDD Lab.
2. Copy the Starter Files
- Copy all files from the root of the provided TDD repository and place them inside your newly created `tdd_lab/` folder.
- While copying, ensure that the `.git/` directory is NOT included, to avoid creating a nested repository.
3. Install Dependencies & Verify Setup
- Navigate into the `tdd_lab/` folder and install the required dependencies:

```bash
cd tdd_lab
pip install -r requirements.txt
```

- Run pytest (even though the tests will be empty initially):

```bash
pytest --cov=src
```

- Expected output (since no tests exist yet):

```
collected 0 items
```

- If any errors arise, refer to the README.md file in the repository for troubleshooting.
4. Commit & Push the Initial Setup
- After successfully setting up your environment, commit your changes to your forked repository:

```bash
git add .
git commit -m "Successful TDD setup and initial test run"
git push origin main
```
Task 2.2. Introduction to TDD (Worked Example)
- Read the instructions in the README.md to ensure Flask is running before you start testing.
- Before you begin writing your own test case, let’s go through a guided example.
- The provided `test_counter.py` file will initially be empty.
Step 1: Create the src/counter.py File
```bash
touch src/counter.py
```
- Add the following code to `src/counter.py`:

```python
"""
Counter API Implementation
"""
from flask import Flask, jsonify
from . import status

app = Flask(__name__)
```

- Now the `counter.py` file exists, but it does nothing yet.
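The `status` module imported above is provided by the starter repository. If you are curious what it contains (or yours is missing), a minimal sketch is below; it just defines named constants for standard HTTP status codes. Prefer the repository's own version if present.

```python
# src/status.py - HTTP status code constants (minimal sketch).
# The starter repository ships its own version; use that one if present.
HTTP_200_OK = 200
HTTP_201_CREATED = 201
HTTP_204_NO_CONTENT = 204
HTTP_404_NOT_FOUND = 404
HTTP_405_METHOD_NOT_ALLOWED = 405
HTTP_409_CONFLICT = 409
```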
Step 2: Write a Failing Test
- Before implementing a new feature, write a test that fails.
- Add the following test case in `tests/test_counter.py`:

```python
import pytest
from src import app
from src import status

@pytest.fixture()
def client():
    """Fixture for Flask test client"""
    return app.test_client()

@pytest.mark.usefixtures("client")
class TestCounterEndpoints:
    """Test cases for Counter API"""

    def test_create_counter(self, client):
        """It should create a counter"""
        result = client.post('/counters/foo')
        assert result.status_code == status.HTTP_201_CREATED
```
- Run `pytest --cov=src`.
- Expected failure (RED): `AssertionError: 404 != 201`
- The test fails because the endpoint does not exist yet.
Step 3: Implement the Minimum Code
- Modify `src/counter.py` to implement the missing endpoint. Add the code below:

```python
COUNTERS = {}

@app.route('/counters/<name>', methods=['POST'])
def create_counter(name):
    """Create a counter"""
    if name in COUNTERS:
        return jsonify({"error": f"Counter {name} already exists"}), status.HTTP_409_CONFLICT
    COUNTERS[name] = 0
    return jsonify({name: COUNTERS[name]}), status.HTTP_201_CREATED
```
- Run `pytest --cov=src` again.
- GREEN: all tests pass ✅
Step 4: Refactor for Reusability
- Refactor the counter-existence check into a helper function:

```python
def counter_exists(name):
    """Check if counter exists"""
    return name in COUNTERS
```

- Now update the API to use this function:

```python
@app.route('/counters/<name>', methods=['POST'])
def create_counter(name):
    """Create a counter"""
    if counter_exists(name):
        return jsonify({"error": f"Counter {name} already exists"}), status.HTTP_409_CONFLICT
    COUNTERS[name] = 0
    return jsonify({name: COUNTERS[name]}), status.HTTP_201_CREATED
```
- This makes the code cleaner and reusable.
Your tasks
Task Overview
Each student will be responsible for implementing one test case and the corresponding function.
| Student # | Test Case Description | Target API Method |
|---|---|---|
| Student 1 | Create a new counter | POST /counters/<name> |
| Student 2 | Prevent duplicate counters | POST /counters/<name> |
| Student 3 | Retrieve an existing counter | GET /counters/<name> |
| Student 4 | Return 404 for non-existent counter | GET /counters/<name> |
| Student 5 | Increment a counter | PUT /counters/<name> |
| Student 6 | Prevent updating non-existent counter | PUT /counters/<name> |
| Student 7 | Delete a counter | DELETE /counters/<name> |
| Student 8 | Prevent deleting non-existent counter | DELETE /counters/<name> |
| Student 9 | Reset all counters | POST /counters/reset |
| Student 10 | List all counters | GET /counters |
| Student 11 | Handle invalid HTTP methods | Unsupported HTTP Methods |
Each student must:
- Write a test case inside `tests/test_counter.py`.
- Implement the feature inside `src/counter.py`.
- Run `pytest --cov=src` to verify the test case passes.
- Submit a Pull Request (PR) to the team repository. All PRs should be opened against the main branch of the team repository.
- Write a report for the activities.
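To make the red-green-refactor loop concrete for these tasks, here is a minimal sketch for Student 5 (increment a counter). The route and response shape are assumptions patterned on the worked example above, not the official solution; adapt them to your team's conventions.

```python
# tests/test_counter.py (sketch, Student 5).
# If you keep the TestCounterEndpoints class, move this in as a method
# (adding the self parameter); as written it uses the client fixture directly.
def test_increment_counter(client):
    """It should increment an existing counter"""
    client.post('/counters/bar')                      # create the counter first
    result = client.put('/counters/bar')              # then increment it
    assert result.status_code == status.HTTP_200_OK
    assert result.get_json()['bar'] == 1
```

```python
# src/counter.py (sketch, Student 5).
@app.route('/counters/<name>', methods=['PUT'])
def update_counter(name):
    """Increment a counter"""
    if not counter_exists(name):
        return jsonify({"error": f"Counter {name} does not exist"}), status.HTTP_404_NOT_FOUND
    COUNTERS[name] += 1
    return jsonify({name: COUNTERS[name]}), status.HTTP_200_OK
```

Write the test first, watch it fail (RED), then add the route to make it pass (GREEN), and refactor anything you duplicated.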
Step 1: Create a Branch
```bash
git checkout -b add-test-<your-feature>
git push -u origin <your-branch-name>
```
Final Report Submission (10 pts)
Each student must submit one single report that documents both the Test Coverage tasks and the Test-Driven Development (TDD) tasks. The report should be comprehensive: it must include all the required details for both tasks in a single PDF.
What to Include in Your Report:
- Test Coverage Lab Results
- A screenshot or commit link showing your repository setup.
- A summary of the test coverage before and after your test contributions.
- A description of the test cases you wrote, including the function(s) they cover.
- A link to your Pull Request (PR) for the Test Coverage Lab.
- Test-Driven Development Lab Results
- A link to your Pull Request (PR) for the TDD Lab.
- A copy of the test case you wrote.
- A brief explanation of what your test case does and how it contributes to the Counter API.
- A summary of your RED-GREEN-REFACTOR process, including:
- The failing test (RED Phase).
- The implemented feature that made the test pass (GREEN Phase).
- Any code improvements or refactoring you made (REFACTOR Phase).
Submission Instructions
- Your report must include both Task 1 (Test Coverage) and Task 2 (TDD).
- Do not submit separate reports for each task. Submit one PDF covering all required details.
- Ensure your report is clear and self-contained, so it can be understood without running your code.
- Upload your final report as a PDF on Canvas.