Automated Testing Suites
Overview
The testing-api includes a comprehensive suite of automated tests designed to validate the integrity, performance, and schema consistency of the Supervised AI platform. These tests ensure that any changes to the API structure do not break downstream dependencies and that all endpoints adhere to the official specification.
Prerequisites
Before executing the testing suites, ensure your environment meets the following requirements:
- Runtime: Python 3.8+ (or Node.js 16+, depending on your specific implementation environment).
- Authentication: Valid API credentials for the Supervised AI platform.
- Environment Variables: A .env file configured with the necessary target URLs and tokens.
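A minimal .env might look like the following. The values are placeholders; the variable names match those documented in the Configuration section below:

```shell
# .env -- placeholder values, replace with credentials for your environment
BASE_URL=https://staging.api.supervised.ai/v1
API_KEY=your_test_token_here
TIMEOUT=30
STRICT_MODE=false
```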
Running the Tests
The test suite is categorized into three main levels: Unit, Integration, and End-to-End (E2E).
1. Standard Execution
To run the entire suite against the default environment, use the following command:
# Using pytest (the standard runner for this suite)
pytest tests/
2. Targeted Testing
You can run specific test blocks based on the component you are developing or validating.
Validate specific endpoints:
pytest tests/api/v1/test_models.py
Run tests by severity or tag:
pytest -m "smoke"
3. Schema Validation
One of the primary functions of this API structure is ensuring JSON schema consistency. To run only the schema validation suite:
pytest tests/schema/
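To illustrate the kind of check the schema suite performs, here is a minimal, dependency-free sketch. The real suite may rely on a library such as jsonschema, and the field names below are invented for the example:

```python
def validate_schema(payload, required):
    """Return a list of schema problems found in `payload` (empty list = valid)."""
    problems = []
    for field, expected_type in required.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(
                f"wrong type for {field}: {type(payload[field]).__name__}"
            )
    return problems

# Illustrative schema -- not the platform's actual model schema.
MODEL_SCHEMA = {"id": str, "name": str, "created": int}
```

A conforming payload such as `{"id": "m1", "name": "demo", "created": 1700000000}` yields an empty problem list, while a payload missing `name` reports exactly that field.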
Configuration
The testing suite behavior is controlled via environment variables. These allow you to switch between local, staging, and production environments.
| Variable | Description | Default |
| :--- | :--- | :--- |
| BASE_URL | The target API root URL. | https://api.supervised.ai/v1 |
| API_KEY | Your platform access token. | None |
| TIMEOUT | Request timeout in seconds. | 30 |
| STRICT_MODE | If true, fails on minor schema warnings. | false |
To apply these, export them to your shell or include them in your .env file:
export BASE_URL="https://staging.api.supervised.ai/v1"
export API_KEY="your_test_token_here"
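Inside a Python test, these variables would typically be read with defaults mirroring the table above. A minimal sketch:

```python
import os

# Suite configuration; the fallbacks mirror the defaults in the table above.
BASE_URL = os.environ.get("BASE_URL", "https://api.supervised.ai/v1")
API_KEY = os.environ.get("API_KEY")  # no default: must be supplied
TIMEOUT = int(os.environ.get("TIMEOUT", "30"))
STRICT_MODE = os.environ.get("STRICT_MODE", "false").lower() == "true"
```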
Test Results and Reporting
Upon completion, the suite generates a summary in the console. For detailed debugging, the system produces a structured log file and an optional HTML report.
- Console Summary: Provides a pass/fail count and execution time.
- JUnit XML: Generated for CI/CD integration.
- HTML Reports: (If configured) found in the reports/ directory.
# Generating an HTML report
pytest --html=reports/report.html
CI/CD Integration
This suite is designed to be integrated into your deployment pipeline. A standard integration looks like this:
- Build: Spin up a temporary environment or point to Staging.
- Test: Execute pytest tests/ --junitxml=result.xml.
- Evaluate: The pipeline should fail if any critical tests in the testing-api suite do not pass.
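The Test and Evaluate steps amount to running the suite and acting on its exit code. A minimal Python wrapper, assuming the suite is invoked with the command above:

```python
import subprocess
import sys

def run_and_evaluate(cmd):
    """Run a test command; return True only if every test passed (exit code 0)."""
    result = subprocess.run(cmd)
    return result.returncode == 0

# In the pipeline, failing the build is a matter of exiting non-zero:
#   ok = run_and_evaluate([sys.executable, "-m", "pytest", "tests/",
#                          "--junitxml=result.xml"])
#   sys.exit(0 if ok else 1)
```

Most CI systems apply the same rule natively: any step that exits non-zero fails the job, so invoking pytest directly in a pipeline step is usually sufficient.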