# Validation Workflow
The validation workflow ensures that any new features or endpoints added to the Supervised AI platform adhere to the official API schema and performance benchmarks. Follow these steps to validate your implementation before opening a pull request.
## 1. Define the Feature Specification

Before running automated validation, ensure your new feature has a corresponding JSON schema definition. The `testing-api` library uses these schemas to verify input/output integrity.
- Path: `/schemas/features/{feature_name}.json`
- Requirement: Define all required fields, data types, and constraint patterns (e.g., regex for IDs).
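The kind of check such a schema drives can be sketched in plain Python. This is illustrative only: the field names and the schema layout below are hypothetical, and the real schema files under `/schemas/features/` are consumed by the `testing-api` library itself.

```python
# Hypothetical feature schema: required fields plus their expected types.
FEATURE_SCHEMA = {
    "required": ["feature_id", "output"],
    "types": {"feature_id": str, "output": dict},
}

def check_required_fields(payload, schema):
    """Return a list of human-readable errors for missing or mistyped fields."""
    errors = []
    for field in schema["required"]:
        if field not in payload:
            errors.append(f"missing: {field}")
        elif not isinstance(payload[field], schema["types"][field]):
            errors.append(f"wrong type: {field}")
    return errors
```

A well-formed payload such as `{"feature_id": "f1", "output": {}}` produces an empty error list; dropping `output` yields `["missing: output"]`.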
## 2. Environment Configuration
Set up your environment to point to your local or staging instance of the API.
```bash
# Set the target API base URL
export SUPERVISED_AI_TEST_URL="http://localhost:8080"

# Provide your authentication token for protected routes
export SUPERVISED_AI_AUTH_TOKEN="your_test_token"
```
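If you need to read this configuration from your own test scripts, a minimal helper might look like the following. The variable names match the exports above; the fallback values are an assumption for local development, and the helper itself is not part of `testing-api`.

```python
import os

def get_test_config():
    """Resolve test configuration from the environment, with local defaults."""
    return {
        "base_url": os.environ.get("SUPERVISED_AI_TEST_URL", "http://localhost:8080"),
        "auth_token": os.environ.get("SUPERVISED_AI_AUTH_TOKEN", ""),
    }
```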
## 3. Initialize the Validator

Use the `ValidationEngine` to load the official structure and compare it against your live endpoint.
```python
from testing_api import ValidationEngine

# Initialize engine with the official structure
engine = ValidationEngine(version="v1")

# Target the specific feature you are developing
report = engine.validate_feature(
    feature_id="new-model-inference",
    endpoint="/api/v1/predict",
)
```
## 4. Running Validation Tests
The validation suite performs three primary checks: Schema Compliance, Data Integrity, and Response Latency.
Run the full suite via the CLI for detailed output:
```bash
python -m testing_api.validate --feature "new-model-inference" --verbose
```
Validation Levels:
| Level | Description | Criteria for Success |
| :--- | :--- | :--- |
| Schema | Structural check of JSON responses. | Must match the official `.json` schema exactly. |
| Type | Strict type checking for AI model outputs. | Float precision and tensor shapes must align. |
| Header | Verification of security and metadata headers. | `X-Request-ID` and `Content-Type` must be present. |
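The Type and Header levels can be sketched in plain Python. This is a conceptual sketch, not the library's implementation; note the real suite reports JSON type names such as `"string"`, while this sketch uses Python type names.

```python
def check_types(payload, expected):
    """Strict type check; `type(x) is T` deliberately rejects bool-as-int."""
    errors = []
    for field, typ in expected.items():
        value = payload.get(field)
        if type(value) is not typ:
            errors.append({
                "field": field,
                "error_type": "TypeMismatch",
                "expected": typ.__name__,
                "received": type(value).__name__,
            })
    return errors

def check_headers(headers, required=("X-Request-ID", "Content-Type")):
    """Return the required headers missing from a response."""
    return [h for h in required if h not in headers]
```

For example, a response carrying `"confidence_score": "0.9"` (a string) against an expected `float` produces one `TypeMismatch` error, mirroring the report format shown in section 5.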
## 5. Interpreting Results
The workflow generates a `validation_report.json` in the `outputs/` directory.
- Success: If all checks pass, the CLI will exit with code `0`.
- Failure: A list of `ValidationErrors` will be provided, pinpointing the exact field or header that deviated from the official structure.
Example Error Output:
```json
{
  "status": "failed",
  "errors": [
    {
      "field": "output.confidence_score",
      "error_type": "TypeMismatch",
      "expected": "float",
      "received": "string"
    }
  ]
}
```
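A script consuming the report can mirror the CLI's exit-code contract. This helper is hypothetical, and since the source only shows a `"failed"` status value, it keys off the `errors` list rather than guessing the success status string.

```python
import json

def exit_code_from_report(report_text):
    """Return 0 when the report contains no errors, 1 otherwise."""
    report = json.loads(report_text)
    return 0 if not report.get("errors") else 1
```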
## 6. Final Compliance Check
Once technical validation passes, ensure your feature meets the Supervised AI "Definition of Done":
- Documentation: All new parameters are documented in the `/docs` directory.
- Error Handling: All standard error codes (400, 401, 429, 500) return the official error object structure.
- Performance: Response time for the validation payload is within the 200ms threshold for standard inference.
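A simple way to spot-check the 200 ms budget locally is to time a single call, for instance with Python's `time.perf_counter`. This helper is an illustrative sketch, not part of the official suite, and a one-shot wall-clock measurement is only a rough proxy for the benchmark's methodology.

```python
import time

LATENCY_THRESHOLD_MS = 200.0

def within_latency_budget(call, threshold_ms=LATENCY_THRESHOLD_MS):
    """Time one invocation of `call` and check it against the budget."""
    start = time.perf_counter()
    call()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return elapsed_ms <= threshold_ms, elapsed_ms
```

Pass it a zero-argument callable that issues the validation payload request and inspect the returned flag and elapsed milliseconds.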