Testing
This document provides comprehensive guidance for testing the PyLint Sort Functions plugin using all available testing approaches, from unit tests to integration testing with the Docker validation system.
Overview
The plugin provides multiple testing approaches for different use cases:
- Unit Testing (tests/test_*.py): Fast, isolated tests using pytest and PyLint’s CheckerTestCase framework
- Integration Testing (tests/integration/): End-to-end tests validating CLI functionality and cross-module behavior using pytest
- Plugin Integration Testing: Testing the plugin with real PyLint execution in production scenarios
- Documentation Validation (test-validation/): Docker-based system for validating all configuration examples against real frameworks
- Framework Integration Testing: Testing decorator exclusions with real framework code (Flask, Django, FastAPI, etc.)
Quick Start
# Run all tests (unit + integration)
make test-all
# Run unit tests only
make test
# Run integration tests only
make test-integration
# Run with coverage
make coverage
# Test plugin with PyLint
make test-plugin
# Validate all documentation examples
make test-documentation
Unit Testing
PyLint Testing Framework
The plugin uses PyLint’s CheckerTestCase framework for unit testing:
import astroid
from pylint.testutils import CheckerTestCase, MessageTest

from pylint_sort_functions.checker import FunctionSortChecker

class TestFunctionSortChecker(CheckerTestCase):
    CHECKER_CLASS = FunctionSortChecker

    def test_unsorted_functions(self):
        module = astroid.parse("""
        def zebra_function():
            pass

        def alpha_function():
            pass
        """)
        with self.assertAddsMessages(
            MessageTest(
                msg_id="W9001",
                node=module,
            ),
            ignore_position=True,
        ):
            self.checker.visit_module(module)
Test Structure
Tests are organized in the tests/ directory with clear separation between unit and integration tests:
tests/
├── integration/ # Integration tests (pytest)
│ ├── test_privacy_cli_integration.py # CLI integration tests
│ ├── test_privacy_fixer_integration.py # Privacy fixer API tests
│ └── test_privacy_fixer_simple.py # Simplified CLI tests
├── files/ # Test data files
│ ├── classes/ # Class test cases
│ ├── import_analysis/ # Import analysis test data
│ └── modules/ # Module test cases
├── test_auto_fix.py # Auto-fix functionality
├── test_checker.py # Main checker functionality
├── test_cli.py # CLI tool unit tests
├── test_coverage_gaps.py # Coverage gap validation
├── test_decorator_exclusions.py # Decorator exclusion testing
├── test_init.py # Plugin initialization tests
├── test_privacy_fixer.py # Privacy fixer unit tests
├── test_privacy_integration.py # Privacy integration tests
└── test_utils.py # Utility function tests
Running Unit Tests
# Run all tests
pytest tests/
# Run specific test file
pytest tests/test_checker.py
# Run with coverage
coverage run -m pytest tests/
coverage report -m
# Using make targets
make test
make coverage
The project enforces 100% test coverage of source code in the src/ directory.
Coverage Configuration
Test coverage is configured to measure only source code quality, not test file execution:
# pyproject.toml
[tool.coverage.run]
source = ["src"]
omit = ["tests/*"]
[tool.coverage.report]
fail_under = 100
Rationale: Coverage measures how well tests exercise source code, following industry-standard practice. Test files themselves are excluded from coverage measurement because:
- Logical Purpose: The goal is measuring source code quality, not test execution completeness
- Meaningful Metrics: Focuses coverage reports on actionable insights about production code
- Industry Standard: Most Python projects exclude test directories from coverage measurement
- Cleaner Reports: Eliminates noise from incomplete integration test execution
Coverage Scope: Only files in src/pylint_sort_functions/ are measured, ensuring 100% coverage reflects comprehensive testing of the actual plugin code.
Coverage Strategy
The project maintains 100% test coverage through a pragmatic approach combining targeted testing with strategic exclusions:
- Targeted Testing: Focus on business logic, API contracts, and user-facing functionality
- Strategic Exclusions: Use # pragma: no cover for code that’s genuinely hard to test or of low business value
Pragma Usage Guidelines
# pragma: no cover is used appropriately for:
# Defensive error handling
def can_parse(file_path):
    try:
        content = parse_file(file_path)
    except (SyntaxError, UnicodeDecodeError):  # pragma: no cover
        return False  # pragma: no cover
    return content is not None

# Broad exception fallbacks for robustness
def analyze(self, func):
    try:
        return perform_analysis(func)
    except Exception:  # pylint: disable=broad-exception-caught # pragma: no cover
        return self._fallback_heuristics(func)  # pragma: no cover

# Edge case fallbacks for malformed input
if not module_parts:
    return fallback_pattern_match(pattern)  # pragma: no cover

# Unimplemented placeholder methods
def _has_dynamic_references(self, candidate):
    # TODO: Implement dynamic reference detection
    return False  # pragma: no cover
When NOT to use pragma:
- Core business logic and user-facing APIs
- Configuration and plugin registration code
- Data processing and validation logic
- Public method implementations
Coverage Achievement Strategy:
- Write comprehensive tests for all business logic
- Add pragmas judiciously for defensive/error-handling code
- Validate test quality: ensure tests cover realistic scenarios
- Document exclusions: pragma comments should be self-explanatory
This approach achieves 100% coverage efficiently while maintaining focus on valuable test coverage that prevents regressions and validates functionality.
Integration Testing
Integration tests validate end-to-end functionality and CLI behavior. These tests are located in tests/integration/ and use pytest exclusively, with a comprehensive fixture-based architecture.
Test Types
- CLI Integration Tests: Test command-line interface functionality with real file systems
- Privacy Fixer Integration: Test privacy detection and fixing workflows. The privacy fixer implementation is complete, with full cross-module import analysis and comprehensive CLI support
- Method Categorization Integration: Test framework presets and custom categorization features end-to-end
- Configuration Validation Integration: Test configuration handling, error recovery, and edge cases
- Cross-Module Testing: Test functionality across multiple Python modules and packages
Fixture-Based Integration Testing
The integration testing system uses a comprehensive set of pytest fixtures to eliminate code duplication and provide consistent test infrastructure.
Core Fixtures
test_project Fixture
Creates a temporary Python project structure for testing:
@pytest.fixture
def test_project(tmp_path: Any) -> Generator[Path, None, None]:
    """Create temporary Python project with src/ directory structure."""
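A minimal sketch of how such a fixture might be implemented (the real tests/integration/conftest.py may differ; the src/ layout is the only documented assumption):
# Hypothetical implementation sketch for tests/integration/conftest.py
from collections.abc import Generator
from pathlib import Path
from typing import Any

import pytest

@pytest.fixture
def test_project(tmp_path: Any) -> Generator[Path, None, None]:
    """Create temporary Python project with src/ directory structure."""
    (tmp_path / "src").mkdir()  # the documented src/ layout
    yield tmp_path              # tmp_path is cleaned up by pytest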
Usage:
def test_feature(test_project):
    # test_project is a Path to the temporary project root
    src_dir = test_project / "src"
    assert src_dir.exists()
file_creator Fixture
Factory fixture for creating test files within the project:
def test_with_files(file_creator):
    # Create test files with content
    module_path = file_creator("src/module.py", """
def zebra_function():
    pass

def alpha_function():
    pass
""")
    assert module_path.exists()
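A plausible sketch of the factory fixture itself (hypothetical; the real conftest.py may handle dedenting or encodings differently):
# Hypothetical factory-fixture sketch
import textwrap
from collections.abc import Callable
from pathlib import Path

import pytest

@pytest.fixture
def file_creator(test_project: Path) -> Callable[[str, str], Path]:
    def _create(relative_path: str, content: str) -> Path:
        path = test_project / relative_path
        path.parent.mkdir(parents=True, exist_ok=True)  # create src/ etc. as needed
        path.write_text(textwrap.dedent(content))
        return path
    return _create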
config_writer Fixture
Factory fixture for creating PyLint configuration files:
def test_with_config(config_writer):
    # Create .pylintrc configuration
    config_path = config_writer("pylintrc", """
[MASTER]
load-plugins = pylint_sort_functions

[function-sort]
enable-method-categories = yes
framework-preset = pytest
category-sorting = declaration
""")
pylint_runner Fixture
Factory fixture for running PyLint with the plugin loaded:
def test_pylint_execution(pylint_runner):
    returncode, stdout, stderr = pylint_runner(
        ["src/module.py"],
        extra_args=["--enable=unsorted-methods"]
    )
    assert returncode == 0
    assert "unsorted-methods" not in stdout
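One way such a runner could work, shown as a hedged sketch using subprocess (the real fixture may invoke PyLint differently):
# Hypothetical runner sketch
import subprocess
import sys
from pathlib import Path

import pytest

@pytest.fixture
def pylint_runner(test_project: Path):
    def _run(targets: list[str], extra_args: list[str] | None = None):
        cmd = [
            sys.executable, "-m", "pylint",
            "--load-plugins=pylint_sort_functions",
            *(extra_args or []),
            *targets,
        ]
        # Run inside the temp project so pylintrc/pyproject.toml are picked up
        result = subprocess.run(
            cmd, cwd=test_project, capture_output=True, text=True, check=False
        )
        return result.returncode, result.stdout, result.stderr
    return _run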
cli_runner Fixture
Factory fixture for running CLI commands in the test project:
def test_cli_functionality(cli_runner):
    returncode, stdout, stderr = cli_runner(
        ["--dry-run", "src/module.py"]
    )
    assert returncode == 0
sample_test_class Fixture
Provides sample test class code for framework preset testing:
def test_framework_preset(sample_test_class, file_creator):
    # Get pytest-style test class code
    pytest_code = sample_test_class["pytest"]
    file_creator("src/test_example.py", pytest_code)
IntegrationTestHelper Class
The IntegrationTestHelper class provides utilities for complex test scenarios:
from tests.integration.conftest import IntegrationTestHelper
def test_complex_scenario(file_creator):
    # Create an import chain for cross-module testing
    modules = IntegrationTestHelper.create_import_chain(file_creator)
    # modules contains {"module_a": path, "module_b": path, "module_c": path}
    assert len(modules) == 3
Available Helper Methods:
- create_import_chain(file_creator): Creates modules with import dependencies for testing cross-module functionality
- create_multi_module_project(file_creator, num_modules=3): Creates a project with multiple interdependent modules for scalability testing
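For illustration, create_import_chain might be implemented along these lines (a hypothetical sketch; only the module_a/b/c names come from the documented return value):
# Hypothetical sketch of create_import_chain inside IntegrationTestHelper
@staticmethod
def create_import_chain(file_creator):
    module_c = file_creator("src/module_c.py", """
def helper_c():
    pass
""")
    module_b = file_creator("src/module_b.py", """
from module_c import helper_c

def helper_b():
    return helper_c()
""")
    module_a = file_creator("src/module_a.py", """
from module_b import helper_b

def helper_a():
    return helper_b()
""")
    return {"module_a": module_a, "module_b": module_b, "module_c": module_c}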
Framework Preset Testing Patterns
Integration tests include comprehensive framework preset testing with proper configuration patterns:
def test_pytest_framework_preset(
    pylint_runner, file_creator, config_writer, sample_test_class
):
    """Test pytest framework preset with correct configuration."""
    # Create test file with pytest-style class
    file_creator("src/test_example.py", sample_test_class["pytest"])

    # CRITICAL: Include category-sorting = declaration
    config_content = """[MASTER]
load-plugins = pylint_sort_functions

[function-sort]
enable-method-categories = yes
framework-preset = pytest
category-sorting = declaration
"""
    config_writer("pylintrc", config_content)

    # Run PyLint - should accept conventional pytest ordering
    returncode, stdout, stderr = pylint_runner(
        ["src/test_example.py"],
        extra_args=["--enable=unsorted-methods"]
    )
    assert "unsorted-methods" not in stdout
Framework Preset Configuration Requirements
When testing framework presets, always include category-sorting = declaration:
- pytest preset: Test fixtures → test methods → helpers → private methods
- unittest preset: setUp/tearDown → test methods → helpers → private methods
- pyqt preset: __init__ → properties → event handlers → public → private methods
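For instance, a pytest-style class in the preset’s expected order might look like the following (an illustrative sketch; the real sample_test_class fixture content may differ):
import pytest

class TestUserService:
    @pytest.fixture
    def service(self):                     # test fixtures first
        return object()

    def test_creates_user(self, service):  # then test methods
        assert service is not None

    def make_payload(self):                # then helper methods
        return {"name": "alice"}

    def _reset_state(self):                # private methods last
        pass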
Best Practices for Integration Tests
- Use Fixtures Consistently: Always use the provided fixtures rather than manual setup
- Framework Configuration: Include the proper category-sorting setting for presets
- Test Real Scenarios: Use realistic code examples in test cases
- Validate Output: Check both return codes and output content
- Performance Awareness: Integration tests should complete quickly (<5 seconds total)
Running Integration Tests
# Run all integration tests
make test-integration
# Run specific integration test file
pytest tests/integration/test_method_categorization_integration.py -v
# Run integration tests with verbose output
pytest tests/integration/ -v
# Run all tests (unit + integration)
make test-all
# Run with fixture debugging (if needed)
pytest tests/integration/ -v -s --tb=short
Current Status: ✅ All integration tests pass. The privacy fixer implementation is complete, with cross-module import analysis, comprehensive CLI integration, and 100% test coverage. The full integration suite completes in roughly 4 seconds.
Integration Test Success Metrics:
- Test Count: Multiple integration test files covering all features
- Success Rate: 100% (all tests passing)
- Execution Time: ~4.10 seconds for the full suite
- Coverage Areas: CLI integration, privacy fixer workflows, framework presets, method categorization
- Test Categories: API integration, command-line interface, configuration validation
Plugin Integration Testing
Testing with PyLint
Test the plugin with real PyLint execution:
# Basic plugin testing
pylint --load-plugins=pylint_sort_functions src/
# Enable only our messages
pylint --load-plugins=pylint_sort_functions \
--disable=all \
--enable=unsorted-functions,unsorted-methods,mixed-function-visibility \
src/
# Using make targets
make test-plugin # Production-ready testing
make test-plugin-strict # Development testing (shows all issues)
make self-check # Same as test-plugin
Configuration Testing
Test different configuration approaches:
# Test with .pylintrc
echo "[MASTER]\nload-plugins = pylint_sort_functions" > .test-pylintrc
pylint --rcfile=.test-pylintrc src/
# Test with pyproject.toml
pylint src/ # Uses existing pyproject.toml configuration
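For reference, the pyproject.toml form of the same plugin-loading configuration typically looks like this (modern PyLint reads the [tool.pylint.main] table; adjust to your PyLint version):
# pyproject.toml
[tool.pylint.main]
load-plugins = ["pylint_sort_functions"]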
CLI Tool Testing
The standalone CLI tool provides auto-fix functionality:
# Dry-run (show what would be changed)
python -m pylint_sort_functions.cli --dry-run src/
# Apply fixes
python -m pylint_sort_functions.cli --fix src/
# With decorator exclusions (CLI-only feature)
python -m pylint_sort_functions.cli --fix \
--ignore-decorators "@app.route" src/
See Auto-Fix Tool Reference for complete CLI documentation.
Docker Validation System
The project includes a comprehensive Docker-based validation system for testing all configuration examples in the documentation, ensuring that every documented PyLint configuration works correctly with the plugin implementation.
See Docker Validation System for complete documentation, including:
- Architecture and Components: Containerized testing environment details
- Running Validation Tests: Using make test-documentation and manual control
- API Endpoints: REST API for configuration testing
- Configuration Extraction Process: How examples are extracted from documentation
- Validation Reports: Understanding test results and metrics
- Troubleshooting: Common issues and solutions
Quick usage:
# Run complete validation workflow
make test-documentation
Framework Integration Testing
Testing Decorator Exclusions
The Docker validation system includes comprehensive framework testing:
Flask Example (test-validation/test-projects/flask-project/):
# These should be excluded from sorting due to @app.route
@app.route('/users/<int:user_id>')  # More specific route
def get_user(user_id):
    pass

@app.route('/users')  # Less specific route
def list_users():
    pass

# These regular functions should still trigger violations
def zebra_helper():  # Should come after alpha_helper
    pass

def alpha_helper():
    pass
- Expected Behavior:
  - Decorated functions (get_user, list_users) should be excluded from sorting
  - Regular functions (zebra_helper, alpha_helper) should trigger W9001: unsorted-functions
- Current Reality:
  - PyLint Plugin: Decorator exclusion doesn’t work (generates config errors)
  - CLI Tool: Decorator exclusion works correctly with --ignore-decorators
Testing Custom Frameworks
To test decorator exclusions with your own framework:
Create Test Project:
test-validation/test-projects/myframework-project/
├── src/
│   └── framework_code.py
├── .pylintrc                # or pyproject.toml
└── expected_results.json
Add Configuration:
[MASTER]
load-plugins = pylint_sort_functions

[MESSAGES CONTROL]
enable = unsorted-functions,unsorted-methods

[PYLINT_SORT_FUNCTIONS]
ignore-decorators = @myframework.route,@myframework.command
Test in Container:
make run-docker-container
curl -X POST http://localhost:8080/test/myframework-project
Continuous Integration Testing
GitHub Actions Integration
The validation system integrates with CI/CD:
# .github/workflows/validate-docs.yml
name: Documentation Validation

on: [push, pull_request]

jobs:
  validate-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build validation container
        run: make build-docker-image
      - name: Run documentation tests
        run: make test-documentation
      - name: Upload validation report
        uses: actions/upload-artifact@v3
        with:
          name: validation-report
          path: test-validation/reports/
Pre-commit Integration
Validation tests can run in pre-commit hooks:
# .pre-commit-config.yaml
repos:
  - repo: local
    hooks:
      - id: validate-docs
        name: Validate documentation examples
        entry: make test-documentation
        language: system
        pass_filenames: false
Performance Testing
Load Testing the API
Test the validation API under load:
# Install hey (HTTP load testing tool)
go install github.com/rakyll/hey@latest
# Load test health endpoint
hey -n 1000 -c 10 http://localhost:8080/health
# Load test project testing
hey -n 100 -c 5 -m POST http://localhost:8080/test/minimal-project
Container Resource Usage
Monitor container performance:
# View container resource usage
docker stats pylint-validation-container
# View container logs
docker logs pylint-validation-container
# Execute commands in container
docker exec -it pylint-validation-container bash
Benchmark Plugin Performance
# Time plugin execution
time pylint --load-plugins=pylint_sort_functions large_project/
# Profile with Python's cProfile (-m module support requires Python 3.7+)
python -m cProfile -o profile.stats -m pylint \
    --load-plugins=pylint_sort_functions src/
Troubleshooting
Common Issues
Docker Build Failures
# Clear Docker cache
docker system prune -f
# Rebuild without cache
docker build --no-cache -t pylint-sort-functions-validation .
Container Won’t Start
# Check container logs
docker logs pylint-validation-container
# Check if port is in use
lsof -i :8080
# Use different port
docker run -p 8081:8080 pylint-sort-functions-validation
Plugin Not Found in Container
# Verify plugin installation
docker exec pylint-validation-container pylint --list-extensions
# Check Python path
docker exec pylint-validation-container python -c "
import pylint_sort_functions; print(pylint_sort_functions.__file__)
"
Test Failures
# Run tests with verbose output
pytest tests/ -v -s
# Run specific failing test
pytest tests/test_checker.py::TestFunctionSortChecker::test_specific_case -v
# Debug with pdb
pytest tests/ --pdb
Debug Mode
Enable debug output in various components:
# PyLint debug output
pylint --load-plugins=pylint_sort_functions --verbose src/
# API debug logs
docker logs pylint-validation-container
# Test runner debug
python test-validation/test-runner.py --verbose
Contributing to Tests
Adding New Test Cases
- Unit Tests: Add to the appropriate file in tests/
- Integration Tests: Add new test files to tests/integration/ using the fixture patterns
- Framework Tests: Create framework-specific test projects in test-validation/test-projects/
Adding New Integration Tests
When adding new integration tests, follow the fixture-based pattern:
#!/usr/bin/env python3
"""
Integration tests for new feature.

Description of what the tests validate.
"""
from typing import Any

class TestNewFeature:
    """Integration tests for new feature functionality."""

    def test_basic_functionality(
        self, pylint_runner: Any, file_creator: Any, config_writer: Any
    ) -> None:
        """Test basic functionality with proper fixtures."""
        # Create test file
        file_creator("src/test_module.py", """
def function_to_test():
    pass
""")
        # Create configuration
        config_content = """[MASTER]
load-plugins = pylint_sort_functions

[function-sort]
enable-new-feature = yes
"""
        config_writer("pylintrc", config_content)

        # Test with PyLint
        returncode, stdout, stderr = pylint_runner(
            ["src/test_module.py"],
            extra_args=["--enable=new-feature-message"]
        )
        assert returncode == 0
        assert "expected-behavior" in stdout
Adding Framework Preset Tests
For framework preset testing, always use the sample_test_class fixture and proper configuration:
def test_new_framework_preset(
    self,
    pylint_runner: Any,
    file_creator: Any,
    config_writer: Any,
    sample_test_class: Any,
) -> None:
    """Test new framework preset."""
    # Use sample test class or create framework-specific code
    file_creator("src/framework_code.py", sample_test_class["new_framework"])

    # CRITICAL: Include category-sorting = declaration for framework presets
    config_content = """[MASTER]
load-plugins = pylint_sort_functions

[function-sort]
enable-method-categories = yes
framework-preset = new_framework
category-sorting = declaration
"""
    config_writer("pylintrc", config_content)

    # Validate framework preset behavior
    returncode, stdout, stderr = pylint_runner(
        ["src/framework_code.py"],
        extra_args=["--enable=unsorted-methods"]
    )
    assert "unsorted-methods" not in stdout
Extending IntegrationTestHelper
To add new helper methods to IntegrationTestHelper:
# In tests/integration/conftest.py
class IntegrationTestHelper:
    """Helper class with utilities for integration testing."""

    @staticmethod
    def create_new_test_scenario(
        file_creator: Callable[[str, str], Path],
        scenario_type: str,
    ) -> dict[str, Path]:
        """Create a new type of test scenario."""
        # Implementation here
        return {"file1": path1, "file2": path2}
Test Guidelines
- 100% Coverage Required: All new code must include tests
- PyLint Framework: Use CheckerTestCase for plugin tests
- Fixture Consistency: Always use the provided fixtures for integration tests
- Framework Configuration: Include the proper category-sorting setting for preset tests
- Real Examples: Use realistic code in test cases
- Edge Cases: Test boundary conditions and error cases
- Performance: Keep integration tests fast (<5 seconds total runtime)
- Documentation: Update this guide when adding new testing approaches
See Also
- Developer Guide: Plugin development and architecture
- Auto-Fix Tool Reference: Command-line tool usage
- Documentation Validation System: Detailed validation system architecture
- User Guide: Configuration examples and usage