This directory contains comprehensive testing infrastructure for the Integral Philosophy Publishing System. The infrastructure is designed to support all phases of testing from unit tests to end-to-end integration tests.
```
tests/
├── conftest.py              # Pytest configuration and shared fixtures
├── pytest.ini               # Test execution settings
├── requirements.txt         # Test dependencies
├── run_tests.py             # Test runner script
├── utils/
│   ├── __init__.py
│   ├── test_helpers.py      # Common test utilities
│   └── base_test_classes.py # Abstract base test classes
├── fixtures/
│   ├── __init__.py
│   ├── sample_data/         # Sample test data files
│   ├── mock_data/           # Mock responses and data
│   └── test_files/          # Test input/output files
├── reports/                 # Test reports and outputs
├── unit/                    # Unit tests
│   ├── __init__.py
│   ├── validators/
│   ├── scripts/
│   ├── api/
│   └── web_interface/
├── integration/             # Integration tests
│   ├── __init__.py
│   ├── pipeline/
│   ├── api_web/
│   └── validators/
├── functional/              # Functional tests
│   ├── __init__.py
│   ├── scraping/
│   ├── conversion/
│   ├── tei_uml/
│   └── validation/
├── e2e/                     # End-to-end tests
│   ├── __init__.py
│   ├── workflows/
│   └── user_scenarios/
├── performance/             # Performance tests
│   ├── __init__.py
│   ├── load/
│   ├── stress/
│   └── benchmark/
└── security/                # Security tests
    ├── __init__.py
    ├── auth/
    ├── injection/
    └── xss/
```
The shared test plumbing in `conftest.py` and `utils/test_helpers.py` provides:

- Pytest Configuration: Comprehensive pytest settings with markers, coverage, and reporting
- Shared Fixtures: Reusable fixtures for common test scenarios
- Custom Markers: Organize tests by type and requirements
- Test Data Management: Create and manage test files and directories
- WebDriver Management: Selenium WebDriver setup and cleanup
- Mock Server: HTTP server for testing API endpoints
- Performance Monitoring: CPU, memory, and timing metrics
- File Comparison: Tools for comparing files and directories
- Async Testing: Utilities for asynchronous test scenarios
The abstract base classes in `utils/base_test_classes.py` include:

- BaseTestCase: Common functionality for all tests
- APITestCase: HTTP API testing with request/response helpers
- WebTestCase: Flask web interface testing
- SeleniumTestCase: Browser automation testing
- PerformanceTestCase: Performance testing with metrics
- IntegrationTestCase: System integration testing
- SecurityTestCase: Security vulnerability testing
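A hedged sketch of what the `BaseTestCase` layer might look like. The real class lives in `utils/base_test_classes.py`; the `create_file` helper and the temp-directory scheme here are illustrative assumptions:

```python
import shutil
import tempfile
import unittest
from pathlib import Path

class BaseTestCase(unittest.TestCase):
    """Illustrative base class: per-test scratch workspace plus small helpers."""

    def setUp(self):
        # Each test gets an isolated scratch directory.
        self.test_dir = Path(tempfile.mkdtemp(prefix="ipps-test-"))

    def tearDown(self):
        # Clean up test artifacts so runs do not leak files.
        shutil.rmtree(self.test_dir, ignore_errors=True)

    def create_file(self, name, content=""):
        """Drop a small input file into the test workspace."""
        path = self.test_dir / name
        path.write_text(content)
        return path

class ExampleTest(BaseTestCase):
    def test_workspace_is_isolated(self):
        f = self.create_file("doc.md", "# Title")
        self.assertTrue(f.exists())
        self.assertEqual(f.read_text(), "# Title")
```

Subclasses such as `APITestCase` or `SeleniumTestCase` would extend `setUp`/`tearDown` with their own clients and drivers on top of this common workspace handling.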
Fixture data under `fixtures/` includes:

- Sample Documents: Markdown, HTML, LaTeX, JSON test data
- Mock Responses: Pre-configured API responses
- Test Configurations: Various component configurations
- Sample AST Data: Abstract Syntax Tree test data
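As an illustration of the last item, a sample AST fixture might look like the following. The node shape is hypothetical; the real schema depends on the project's converters:

```python
import json

# Hypothetical AST fragment for the markdown input "# Title\n\nHello *world*."
sample_ast = {
    "type": "document",
    "children": [
        {"type": "heading", "level": 1,
         "children": [{"type": "text", "value": "Title"}]},
        {"type": "paragraph", "children": [
            {"type": "text", "value": "Hello "},
            {"type": "emphasis", "children": [{"type": "text", "value": "world"}]},
            {"type": "text", "value": "."},
        ]},
    ],
}

# Fixtures like this would be stored as JSON under tests/fixtures/sample_data/.
serialized = json.dumps(sample_ast, indent=2)
```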
```bash
# Install test dependencies
pip install -r tests/requirements.txt

# Set up the test environment
python tests/run_tests.py setup
```

Then run test suites by category:

```bash
# Run all tests
python tests/run_tests.py all

# Run unit tests only
python tests/run_tests.py unit

# Run integration tests
python tests/run_tests.py integration

# Run functional tests
python tests/run_tests.py functional

# Run end-to-end tests
python tests/run_tests.py e2e

# Run performance tests
python tests/run_tests.py performance

# Run security tests
python tests/run_tests.py security

# Run quick tests (excluding slow and selenium tests)
python tests/run_tests.py quick

# Run CI-optimized tests
python tests/run_tests.py ci
```

Coverage and parallel runs:

```bash
# Run tests with coverage analysis; the report is generated in tests/reports/htmlcov/
python tests/run_tests.py all --coverage

# Run tests in parallel for faster execution
python tests/run_tests.py all --parallel
```

Unit tests exercise individual components in isolation:
- Format converters
- TEI generators
- UML generators
- Validation functions
- Utility functions
Example:

```python
class TestFormatConverter(ComponentTestCase):
    def test_markdown_to_html(self):
        result = self.format_converter.convert(
            self.markdown_file,
            "html"
        )
        assert result.success
        assert result.output_file.exists()
```

Integration tests verify component interactions:
- API to Web Interface integration
- Pipeline component interactions
- Validator integration
- Data flow between components
Example:

```python
class TestPipelineIntegration(IntegrationTestCase):
    def test_full_conversion_pipeline(self):
        # Test the markdown → AST → HTML → TEI pipeline
        pass
```

Functional tests target specific features and functionality:
- Web scraping capabilities
- Format conversion accuracy
- TEI generation correctness
- UML diagram generation
Example:

```python
class TestWebScraping(FunctionalTestCase):
    def test_scrape_website(self):
        result = self.web_scraper.scrape_url("https://example.com")
        assert len(result["pages"]) > 0
```

End-to-end tests exercise complete user workflows:
- Document processing pipeline
- API job submission to completion
- Web interface user journeys
- System-wide workflows
Example:

```python
class TestDocumentProcessing(E2ETestCase):
    def test_markdown_to_publication_pipeline(self):
        # Test the complete workflow from upload to publication
        pass
```

Performance tests measure system performance and scalability:
- Load testing
- Stress testing
- Benchmark comparisons
- Resource usage monitoring
Example:

```python
class TestPerformance(PerformanceTestCase):
    def test_conversion_performance(self):
        with self.performance_monitor:
            result = self.format_converter.convert(large_file, "html")
        metrics = self.performance_monitor.get_metrics()
        assert metrics["duration_seconds"] < 10.0
```

Security tests probe for vulnerabilities:
- Authentication bypass
- Injection attacks
- XSS vulnerabilities
- Access control
Example:

```python
class TestSecurity(SecurityTestCase):
    def test_xss_prevention(self):
        malicious_input = "<script>alert('xss')</script>"
        response = self.web_client.post("/process", data=malicious_input)
        self.assert_no_xss_vulnerability(response.get_data(as_text=True))
```

The following pytest markers are available for organizing tests:

- `@pytest.mark.unit` - Unit tests
- `@pytest.mark.integration` - Integration tests
- `@pytest.mark.functional` - Functional tests
- `@pytest.mark.e2e` - End-to-end tests
- `@pytest.mark.performance` - Performance tests
- `@pytest.mark.security` - Security tests
- `@pytest.mark.slow` - Slow-running tests
- `@pytest.mark.selenium` - Tests requiring Selenium WebDriver
- `@pytest.mark.requires_components` - Tests requiring all system components
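For instance, a browser-driven functional test that should be excluded from the quick profile might combine markers like this (the test name and body are illustrative):

```python
import pytest

@pytest.mark.functional
@pytest.mark.slow
@pytest.mark.selenium
def test_full_page_render():
    # Would drive a real browser; the slow/selenium markers keep it
    # out of the "quick" run profile.
    pass
```

Markers stack freely, so one test can belong to several selection groups at once.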
`conftest.py` contains shared fixtures for:
- Test data and directories
- Mock WebDriver instances
- API and web clients
- Component instances
- Performance monitoring
`pytest.ini` holds the pytest configuration, with:
- Test discovery patterns
- Coverage settings
- Output formatting
- Marker definitions
- Timeout and async settings
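A minimal sketch of what such a `pytest.ini` might contain. The marker names come from the list above; the paths and options are illustrative, and `--cov`/`timeout` would rely on the `pytest-cov` and `pytest-timeout` plugins being installed from the test requirements:

```ini
[pytest]
testpaths = tests
python_files = test_*.py
addopts = --cov --cov-report=html:tests/reports/htmlcov
markers =
    unit: Unit tests
    integration: Integration tests
    slow: Slow-running tests
    selenium: Tests requiring Selenium WebDriver
timeout = 300
```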
`tests/requirements.txt` lists test-specific dependencies:
- Testing frameworks
- Selenium WebDriver
- Performance tools
- Security scanners
- Reporting tools
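A plausible excerpt of `tests/requirements.txt` covering these categories. The package choices are illustrative, not a copy of the real file, though `webdriver-manager` is also referenced in the troubleshooting section below:

```text
pytest                # testing framework
pytest-cov            # coverage analysis
pytest-xdist          # parallel execution
pytest-timeout        # per-test timeouts
selenium              # browser automation
webdriver-manager     # WebDriver installation
pytest-html           # HTML reporting
bandit                # security scanning
```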
Create and manage test files and directories with `TestDataManager`:

```python
data_manager = TestDataManager()
file_path = data_manager.create_sample_file("test.md", "# Test Content")
# Automatically cleaned up after the test
```

Manage Selenium WebDriver instances with `WebDriverManager`:
```python
driver_manager = WebDriverManager(headless=True)
driver = driver_manager.get_chrome_driver()
# Driver automatically cleaned up
```

Monitor performance during tests with `PerformanceMonitor`:
```python
monitor = PerformanceMonitor()
monitor.start_monitoring()
# Run test code here
metrics = monitor.stop_monitoring()
assert metrics["peak_memory_mb"] < 100
```

Run a mock HTTP server for API testing with `MockServer`:
```python
server = MockServer()
server.add_response("/test", {"data": "value"})
server.start()
# Make requests to localhost:8080/test
server.stop()
```

Test organization best practices:

- Use appropriate test categories
- Follow naming conventions
- Group related tests
- Use descriptive test names
Test data best practices:

- Use fixtures for reusable data
- Create minimal test data
- Clean up test artifacts
- Avoid hardcoded paths
Assertion best practices:

- Use specific assertion methods
- Include helpful error messages
- Test edge cases
- Verify state changes
Performance best practices:

- Keep tests fast
- Use parallel execution when possible
- Mark slow tests appropriately
- Monitor test performance
Security testing best practices:

- Test for common vulnerabilities
- Validate input sanitization
- Test authentication/authorization
- Check for information disclosure
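A couple of the assertion and sanitization guidelines above in miniature. The `sanitize` helper is hypothetical (here just stdlib HTML escaping), standing in for whatever input sanitization the system actually performs:

```python
import html

def sanitize(text):
    """Hypothetical sanitizer: escape HTML-significant characters."""
    return html.escape(text)

def test_sanitizer_neutralizes_script_tags():
    malicious = "<script>alert('xss')</script>"
    cleaned = sanitize(malicious)
    # Specific assertion with a helpful message, per the guidelines above.
    assert "<script>" not in cleaned, f"unsanitized markup survived: {cleaned!r}"
    # Verify the resulting state, not just the absence of an error.
    assert cleaned == "&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;"
```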
The test infrastructure is designed for CI/CD pipelines:
```yaml
# Example GitHub Actions workflow
- name: Run Tests
  run: python tests/run_tests.py ci --coverage
- name: Upload Coverage
  uses: codecov/codecov-action@v3
  with:
    file: tests/reports/coverage.xml
```

Troubleshooting common issues:
- Selenium WebDriver issues:

  ```bash
  # Install drivers
  pip install webdriver-manager
  export CHROME_DRIVER_VERSION=latest
  ```

- Import errors:

  ```bash
  # Check PYTHONPATH
  export PYTHONPATH="${PYTHONPATH}:$(pwd)"
  ```

- Permission errors:

  ```bash
  # Create necessary directories
  mkdir -p tests/reports api_jobs web_jobs
  ```

- Timeout issues:

  ```bash
  # Increase the timeout
  python tests/run_tests.py all --args="--timeout=600"
  ```
Run tests with verbose output and debugging:

```bash
python tests/run_tests.py unit --verbose --args="-s --pdb"
```

When adding new tests:
- Choose appropriate test category
- Use existing fixtures and utilities
- Follow naming conventions
- Add proper documentation
- Update this README if needed
This test infrastructure is part of the Integral Philosophy Publishing System and follows the same licensing terms.