Execution Framework
The testing-api execution framework provides a standardized environment for running automated validation suites against the Supervised AI platform. It is designed to be triggered via CLI, integrated into CI/CD pipelines, or used as a library for custom test orchestration.
Prerequisites
Before executing tests, ensure the following requirements are met:
- Python: version 3.9 or higher.
- Dependencies: All requirements installed via `pip install -r requirements.txt`.
- Connectivity: Network access to the Supervised AI API endpoints.
Configuration
The framework utilizes environment variables for configuration. You must define these in a .env file or export them to your shell environment before execution.
| Variable | Type | Description |
| :--- | :--- | :--- |
| SUPERVISED_AI_BASE_URL | String | The root URL of the API (e.g., https://api.supervised.ai/v1). |
| API_KEY | String | Your platform authentication token. |
| TEST_ENVIRONMENT | String | Environment context: staging, production, or development. |
| LOG_LEVEL | String | Granularity of logs: DEBUG, INFO, WARNING, ERROR. |
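As a starting point, the variables can be exported in the shell (or placed in a `.env` file without the `export` keyword). The values below are placeholders, not real credentials:

```shell
# Example configuration (placeholder values)
export SUPERVISED_AI_BASE_URL="https://api.supervised.ai/v1"
export API_KEY="your-platform-token"
export TEST_ENVIRONMENT="staging"
export LOG_LEVEL="INFO"
```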
Running Tests
The framework is built on top of pytest. You can execute tests globally or target specific modules and markers.
Standard Execution
To run the entire test suite with default configurations:
```shell
pytest
```
Targeted Execution
To run tests for a specific feature or module:
```shell
pytest tests/api/v1/inference/
```
Running by Tags (Markers)
Tests are categorized using pytest markers. Use the `-m` flag to filter execution:

```shell
# Run only critical smoke tests
pytest -m smoke

# Run all tests except those marked as long_running
pytest -m "not long_running"
```
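For pytest to accept these markers without emitting unknown-mark warnings, they are typically registered in the project's pytest configuration. A sketch of such a registration (the marker descriptions here are illustrative; only `smoke`, `long_running`, `regression`, `inference`, and `performance` appear in this document):

```ini
# pytest.ini — register custom markers so pytest does not warn about unknown marks
[pytest]
markers =
    smoke: critical smoke tests
    long_running: slow tests excluded from quick runs
    regression: regression suite
    inference: inference API tests
    performance: performance tests
```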
Command Line Arguments
The framework supports several custom flags to modify runtime behavior:
| Flag | Description |
| :--- | :--- |
| --env | Overrides the TEST_ENVIRONMENT variable. |
| --report | Generates an HTML report in the /reports directory. |
| --fail-fast | Stops the execution immediately after the first failure. |
Example usage with flags:
```shell
pytest --env production -m "regression" --report
```
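The `--env` flag takes precedence over the `TEST_ENVIRONMENT` variable. That precedence can be sketched with a small helper (`resolve_environment` is hypothetical, written only to illustrate the override order; it is not part of the framework's API):

```python
import os
from typing import Optional

def resolve_environment(cli_env: Optional[str] = None) -> str:
    """Return the --env value if supplied, otherwise fall back to the
    TEST_ENVIRONMENT variable, then to a 'development' default."""
    return cli_env or os.environ.get("TEST_ENVIRONMENT", "development")

# The CLI flag wins over the environment variable
os.environ["TEST_ENVIRONMENT"] = "staging"
print(resolve_environment("production"))  # -> production
print(resolve_environment())              # -> staging
```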
Programmatic Usage
While the CLI is the primary interface, you can trigger test execution from within other Python services using the internal TestRunner wrapper.
```python
from testing_api.core.runner import TestRunner

# Initialize the runner with specific configuration
runner = TestRunner(
    environment="staging",
    markers=["inference", "performance"],
    output_dir="./custom_logs",
)

# Execute and capture results
results = runner.run_suite()
print(f"Passed: {results.passed}, Failed: {results.failed}")
```
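The object returned by `run_suite()` exposes at least `passed` and `failed` counts. A minimal stand-in showing how callers might consume such a result (`SuiteResults` and its properties are illustrative, not the framework's actual class):

```python
from dataclasses import dataclass

@dataclass
class SuiteResults:
    """Illustrative result container with the attributes used above."""
    passed: int
    failed: int

    @property
    def total(self) -> int:
        return self.passed + self.failed

    @property
    def succeeded(self) -> bool:
        return self.failed == 0

# Example: fail the calling service if any test failed
results = SuiteResults(passed=12, failed=1)
print(f"Passed: {results.passed}, Failed: {results.failed}")  # Passed: 12, Failed: 1
if not results.succeeded:
    print(f"{results.failed} of {results.total} tests failed")
```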
Test Reporting
By default, execution summaries are printed to stdout. Detailed logs and artifacts are handled as follows:
- JUnit XML: Generated for CI/CD integration at `reports/results.xml`.
- HTML Reports: If `--report` is passed, a visual summary is generated at `reports/report.html`.
- JSON Logs: Structured logs for programmatic ingestion are stored in `logs/execution.json`.
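Because the JUnit XML follows the standard `<testsuite>` schema, a downstream script can extract pass/fail counts with only the standard library. This is a sketch assuming the common JUnit attribute names (`tests`, `failures`, `errors`):

```python
import xml.etree.ElementTree as ET

def summarize_junit(xml_text: str) -> dict:
    """Extract test counts from a JUnit-style XML report."""
    suite = ET.fromstring(xml_text)
    if suite.tag == "testsuites":  # pytest may wrap suites in a root element
        suite = suite.find("testsuite")
    tests = int(suite.get("tests", 0))
    failures = int(suite.get("failures", 0))
    errors = int(suite.get("errors", 0))
    return {"tests": tests, "failures": failures,
            "errors": errors, "passed": tests - failures - errors}

sample = '<testsuite tests="5" failures="1" errors="0"/>'
print(summarize_junit(sample))  # {'tests': 5, 'failures': 1, 'errors': 0, 'passed': 4}
```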
CI/CD Integration
To integrate this framework into a GitHub Action or GitLab CI pipeline, use the following template:
```yaml
# GitHub Actions Example
steps:
  - uses: actions/checkout@v3
  - name: Set up Python
    uses: actions/setup-python@v4
    with:
      python-version: '3.9'
  - name: Install dependencies
    run: pip install .
  - name: Run Automated Tests
    env:
      API_KEY: ${{ secrets.SUPERVISED_AI_KEY }}
      SUPERVISED_AI_BASE_URL: "https://api.supervised.ai"
    run: pytest --junitxml=reports/results.xml
```