Supervised AI Integration
Platform Architecture Role
The testing-api serves as the standardized communication bridge between the Supervised AI Orchestrator and individual model environments. By adhering to this structure, developers ensure that their testing modules are compatible with the platform's automated evaluation pipelines, metric aggregators, and observability dashboards.
The integration follows a provider-consumer pattern:
- Supervised AI Platform (Consumer): Dispatches test payloads, triggers evaluation suites, and collects performance telemetry.
- Testing API (Provider): Receives standardized requests, executes the underlying model logic, and returns formatted evaluations based on the platform's schema requirements.
Interface Specifications
To integrate with the Supervised AI platform, the API must implement a series of standardized endpoints. These endpoints allow the platform to perform health checks, execute tests, and retrieve metadata about the testing environment.
Test Execution Endpoint
This is the primary interface used by the platform to send inference requests or validation sets.
- Endpoint: /v1/execute-test
- Method: POST
- Payload Type: TestPayload
Request Schema:

```json
{
  "test_id": "string",
  "model_config": {
    "version": "string",
    "parameters": "object"
  },
  "input_data": "array",
  "metadata": "object"
}
```
Response Schema:

```json
{
  "status": "success | failure",
  "results": [
    {
      "input_id": "string",
      "output": "any",
      "metrics": {
        "latency": "float",
        "tokens": "int"
      }
    }
  ],
  "error": "string (optional)"
}
```
Integration Workflow
1. Registration
When a new model is deployed within the Supervised AI ecosystem, the platform registers the service URL of the testing-api instance. The platform then verifies the integration by querying the /health or /status endpoint to confirm that the API version matches what it expects.
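A minimal sketch of this verification step, assuming a simple JSON health payload (the field names "status" and "api_version" are illustrative, not mandated by the platform):

```python
# Hypothetical /health response body and the platform-side version check.
API_VERSION = "v1"

def health_check() -> dict:
    """Return the payload a GET /health handler would serve."""
    return {"status": "ok", "api_version": API_VERSION}

def platform_verifies(expected_version: str, health: dict) -> bool:
    """Mimic the platform-side check: service is healthy and versions match."""
    return health.get("status") == "ok" and health.get("api_version") == expected_version
```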
2. Handshake and Configuration
The platform utilizes the metadata field in the API structure to pass environment variables and authentication tokens required for the test run. This ensures that sensitive credentials are not hardcoded within your testing implementation.
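One way to consume that metadata field is sketched below: credentials are pulled from the request at runtime rather than baked into the code. The key names ("auth_token", "env") are assumptions for illustration only.

```python
import os

def configure_from_metadata(metadata: dict) -> dict:
    """Extract credentials and environment overrides from the request metadata.

    Hypothetical keys: 'auth_token' carries the platform-issued token,
    'env' carries environment variables for this test run.
    """
    token = metadata.get("auth_token")
    if token is None:
        raise ValueError("missing auth_token in metadata")
    # Merge run-specific variables over the current process environment.
    env = dict(os.environ, **metadata.get("env", {}))
    return {"token": token, "env": env}
```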
3. Telemetry Feedback
As tests are executed, the testing-api sends real-time feedback to the Supervised AI Platform. This interaction enables:
- Progress Tracking: Monitoring the completion percentage of large-scale batch tests.
- Resource Management: Dynamic scaling of testing infrastructure based on the load reported by the API.
- Metric Normalization: Standardizing metrics like "accuracy" or "hallucination scores" so they can be compared across different models in the platform UI.
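The metric-normalization step can be sketched as a simple rescaling across models. Min-max scaling is one plausible choice; the actual normalization scheme used by the platform is not specified in this document.

```python
# Hedged sketch: rescale a raw metric to [0, 1] across a batch of models
# so values are comparable in the platform UI. Min-max scaling is an
# assumption for illustration.
def normalize_metric(values: dict[str, float]) -> dict[str, float]:
    lo, hi = min(values.values()), max(values.values())
    span = hi - lo
    if span == 0:
        # All models scored identically; treat them all as the maximum.
        return {k: 1.0 for k in values}
    return {k: (v - lo) / span for k, v in values.items()}

scores = normalize_metric({"model-a": 0.72, "model-b": 0.90, "model-c": 0.81})
```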
Usage Example
To implement a basic integration using this structure, your service should wrap its model logic within the standardized response handler provided by this package:
```python
from testing_api import SupervisedIntegration, TestResponse

# Initialize the integration helper
integration = SupervisedIntegration(api_key="PLATFORM_SECRET")

def handle_test_request(request_data):
    # 1. Platform sends data to your testing-api
    inputs = request_data.get("input_data")

    # 2. Execute your model logic
    results = my_ai_model.predict(inputs)

    # 3. Format and return using the official platform structure
    return TestResponse(
        status="success",
        results=results,
        metrics={"latency": 120.5},
    ).to_json()
```
Security and Authentication
The interaction between the platform and the testing-api is secured via Mutual TLS (mTLS) or Header-based API Key validation. Ensure that your implementation of the testing-api structure includes a middleware layer to validate the X-Supervised-AI-Signature header to prevent unauthorized test injections.
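For the header-based option, the check could look like the sketch below. The exact signing scheme behind X-Supervised-AI-Signature is not specified here; HMAC-SHA256 over the raw request body is a common convention and is an assumption in this example.

```python
import hashlib
import hmac

def sign(body: bytes, secret: bytes) -> str:
    """Compute the hex HMAC-SHA256 signature of a request body (assumed scheme)."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_signature(body: bytes, header_value: str, secret: bytes) -> bool:
    """Validate the X-Supervised-AI-Signature header against the body.

    compare_digest performs a constant-time comparison, avoiding
    timing side channels during signature checks.
    """
    return hmac.compare_digest(sign(body, secret), header_value)
```

In practice this check would run in middleware before the request body reaches the /v1/execute-test handler, rejecting unsigned or tampered requests early.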