
Test Plan

JLR AI Benchmarking Dashboard


Version 1.0 – July 2025

1. Purpose
This document describes the testing strategy for the JLR-AI-Benchmark-Prediction project to ensure software quality, correctness, performance, and stability. It includes unit, integration, regression, and UI testing.

2. Scope
• Verify functionality of all ETL scripts and ML model pipelines

• Ensure predictions are consistent across platforms

• Validate Streamlit dashboard components and user interactions

• Ensure data correctness, bias correction accuracy, and UI responsiveness

3. Test Strategy
• Unit Tests: Test Python functions/modules independently

• Integration Tests: Test end-to-end workflows (data ingestion → model inference)

• Regression Tests: Run periodically to verify no functionality has broken after updates (see the baseline-comparison sketch after this list)

• UI Tests: Test user interface behavior manually or via screenshot comparison
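
As a concrete illustration of the regression tests above, predictions can be checked against a stored baseline. The sketch below is illustrative only: the baseline path (tests/baselines/predictions.json) and the predict_from_specs entry point are assumptions, not confirmed project APIs.

import json

import pytest

# Assumed entry point; the real predictor_engine module may expose a different API
from scripts.predictor_engine import predict_from_specs

BASELINE_PATH = "tests/baselines/predictions.json"  # hypothetical baseline file


# Register the "regression" marker in pytest.ini to avoid unknown-marker warnings
@pytest.mark.regression
def test_predictions_match_baseline():
    # Baseline predictions captured from a previously approved build
    with open(BASELINE_PATH) as f:
        baseline = json.load(f)

    for case in baseline:
        result = predict_from_specs(case["specs"])
        # Small relative tolerance so retraining noise does not break the suite
        assert result["latency"] == pytest.approx(case["latency"], rel=0.05)
        assert result["throughput"] == pytest.approx(case["throughput"], rel=0.05)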

4. Test Environment
• Python Version: 3.10+

• Libraries: pytest, pandas, xgboost, streamlit, joblib

• Database: PostgreSQL 14+

• Tools: VS Code, Streamlit, pgAdmin, GitHub Actions (optional)
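
For reference, a minimal requirements.txt covering the libraries listed above could look like the following (versions are deliberately left unpinned here; the team should pin them for reproducible CI runs):

pytest
pandas
xgboost
streamlit
joblib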


5. Unit Test Coverage
• scripts/cleaning.py – test data formatting functions
• feature_engineering.py – test metric generation and normalization
• bias_correction.py – test bias application logic
• train_xgboost.py – test model training with synthetic inputs
• predictor_engine.py – test response structure

6. Sample Unit Test (Pytest)


import pandas as pd

from scripts.bias_correction import normalize_scores


def test_normalize_scores():
    # Build a minimal one-row frame and confirm the normalized column is added
    test_data = {'vendor': ['NVIDIA'], 'score': [100]}
    df = pd.DataFrame(test_data)
    result = normalize_scores(df)
    assert 'normalized_score' in result.columns
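
Assuming the repository root is on the import path (for example via an empty conftest.py at the top level), the test above runs with a plain pytest invocation from the project root.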

7. Integration Test Plan


• Input: gpu.json
• Process:
1. Clean and normalize raw CSVs
2. Generate feature set and apply bias correction
3. Train model and test predictions from UI
• Expected Output:
– Valid latency, throughput, and efficiency predictions
– Confidence score > 90%
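
A hedged sketch of this end-to-end flow is shown below. All pipeline function names (clean_data, build_features, apply_bias_correction, train_model, predict) are placeholders chosen for illustration and should be replaced with the real module APIs.

import json

# NOTE: assumed imports; adjust to the actual functions exposed by each module
from scripts.cleaning import clean_data
from scripts.feature_engineering import build_features
from scripts.bias_correction import apply_bias_correction
from scripts.train_xgboost import train_model
from scripts.predictor_engine import predict


def test_end_to_end_prediction():
    # Step 1: load the raw GPU specs used as pipeline input
    with open("gpu.json") as f:
        raw = json.load(f)

    # Step 2: clean, build features, and apply bias correction
    features = apply_bias_correction(build_features(clean_data(raw)))

    # Step 3: train the model and request a prediction for the first row
    model = train_model(features)
    result = predict(model, features.iloc[[0]])

    # Expected output: valid metrics plus a confidence score above 90%
    for key in ("latency", "throughput", "efficiency"):
        assert key in result
    assert result["confidence"] > 0.90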

8. UI Testing Checklist
• Input panel loads and accepts JSON/GPU specs
• Predict button generates valid metrics
• Graphs render properly (bar chart, radar chart)
• Model selector updates outputs dynamically
• No component crashes under malformed input
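
Parts of this checklist can be automated with Streamlit's testing framework (streamlit.testing.v1, available in Streamlit 1.28 and later). The sketch below assumes the dashboard entry point is app.py; the file name and widget layout are assumptions about the project, not confirmed details.

from streamlit.testing.v1 import AppTest


def test_dashboard_renders_without_errors():
    # Run the dashboard script in a headless test session
    at = AppTest.from_file("app.py")  # assumed entry point
    at.run()

    # The app should start without raising any exception
    assert not at.exception

    # If a predict button is present, clicking it should not crash the app
    if at.button:
        at.button[0].click().run()
        assert not at.exception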
9. Bug Reporting Protocol
• Reproduce the error and note environment

• Attach screenshot/log output

• File a GitHub Issue with:

– Reproduction steps
– Observed vs expected behavior
– Affected files/modules

10. Test Automation (Optional)


• Use pytest with CI tools like GitHub Actions

• Trigger tests on pull request or push to dev

• Sample .github/workflows/test.yml:

name: Test Pipeline

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          # Quoted so YAML does not parse 3.10 as the float 3.1
          python-version: "3.10"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run tests
        run: pytest

11. Test Ownership


• Unit Tests: Each developer for their module

• Integration Tests: Lead developer(s)

• Dashboard/UX Tests: Frontend or Streamlit specialist

• Review: Code reviewer must verify test coverage before merging


12. Approval and Review
• Test plan must be reviewed by supervisor or technical lead

• Coverage reports should be generated if using CI
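
If coverage reports are needed, the pytest-cov plugin can produce them as part of the same run, for example with pytest --cov=scripts --cov-report=term-missing (pytest-cov would need to be added to the test dependencies).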
