Architectural Principles
Core Design Philosophy
The testing-api is designed to provide a unified, predictable interface for interacting with the Supervised AI platform’s testing infrastructure. Its primary goal is to abstract the complexities of diverse AI model architectures into a standardized communication layer, ensuring that testing scripts and integration tools remain decoupled from underlying model implementations.
The architecture is guided by three main pillars: Predictability, Extensibility, and Machine-Readability.
Standardized Response Patterns
To ensure seamless integration with automated testing suites, the API adheres to a strict response structure. Regardless of the internal logic or the specific model being tested, all responses follow a common envelope pattern.
Success Response Structure
Successful requests return a 200 OK status code with a JSON body containing the status, data, and metadata objects.
{
  "status": "success",
  "data": {
    "result": "string | object",
    "score": "float",
    "latency_ms": "integer"
  },
  "metadata": {
    "request_id": "uuid",
    "timestamp": "iso-8601",
    "version": "v1.x"
  }
}
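Client code can rely on this envelope when unwrapping results. A minimal sketch in Python: the field names follow the schema above, while the helper name `unwrap_success` and the sample values are illustrative, not part of the API.

```python
import json

def unwrap_success(body: str):
    """Parse a success envelope and return (result, score, latency_ms)."""
    payload = json.loads(body)
    if payload.get("status") != "success":
        raise ValueError(f"expected success envelope, got {payload.get('status')!r}")
    data = payload["data"]
    return data["result"], data["score"], data["latency_ms"]

# Illustrative envelope matching the documented structure.
sample = json.dumps({
    "status": "success",
    "data": {"result": "ok", "score": 0.97, "latency_ms": 42},
    "metadata": {
        "request_id": "00000000-0000-0000-0000-000000000000",
        "timestamp": "2024-01-01T00:00:00Z",
        "version": "v1.0",
    },
})
result, score, latency = unwrap_success(sample)
```

Raising on an unexpected `status` field keeps a misrouted error envelope from being silently treated as a test pass.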
Error Handling
Errors are categorized by standard HTTP status codes (4xx/5xx) and return a standardized error object to allow for automated retry logic or debugging.
{
  "status": "error",
  "error": {
    "code": "string_identifier",
    "message": "Human-readable explanation",
    "details": {}
  }
}
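Because every error carries both an HTTP status and a machine-readable code, retry logic can branch on them mechanically. A hedged sketch, assuming the common convention that 5xx responses are transient and retryable while 4xx responses are caller errors; the error code `model_unavailable` is a hypothetical example, not a documented identifier.

```python
import json

# Assumption: server-side (5xx) faults are transient and worth retrying.
RETRYABLE_STATUSES = range(500, 600)

def should_retry(http_status: int, body: str) -> bool:
    """Decide whether a failed request is worth retrying."""
    payload = json.loads(body)
    if payload.get("status") != "error":
        return False  # not a standardized error envelope
    return http_status in RETRYABLE_STATUSES

# Illustrative error envelope matching the documented structure.
err = json.dumps({
    "status": "error",
    "error": {"code": "model_unavailable",
              "message": "Human-readable explanation",
              "details": {}},
})
```

With this split, a 503 carrying the envelope above is retried, while a 400 is surfaced to the caller immediately.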
Interface Abstraction
The API treats every model as a "Black Box" via a common interface. This allows users to swap between different model versions or architectures (e.g., NLP, Computer Vision) without changing their client-side testing implementation.
Key Principles:
- Statelessness: Each request contains all the information necessary for processing. The API does not persist session state between test calls, ensuring test isolation.
- Uniform Resource Naming: Endpoints are structured by functionality (e.g., /v1/predict, /v1/validate) rather than by model name, providing a consistent entry point for all supervised tasks.
- Type Safety: The API enforces strict schema validation for inputs. Users are expected to provide data in the format defined by the specific test suite configuration.
Data Interaction Guidelines
Input Requirements
All write/test operations require an application/json payload. The API utilizes strict schema enforcement to ensure that invalid data is caught at the gateway level before reaching the compute resources.
| Field | Type | Description |
| :--- | :--- | :--- |
| input_data | Object/Array | The primary payload to be processed by the AI model. |
| parameters | Object | (Optional) Configuration flags such as temperature, threshold, or top_k. |
| context | Object | (Optional) Metadata or environmental variables required for the test case. |
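The field table above maps directly onto a JSON body. A minimal construction helper; `build_test_payload` is an illustrative name, and the parameter values (temperature, top_k) are examples drawn from the table, not required settings.

```python
import json

def build_test_payload(input_data, parameters=None, context=None) -> str:
    """Assemble an application/json payload per the field table:
    input_data is required; parameters and context are optional."""
    body = {"input_data": input_data}
    if parameters is not None:
        body["parameters"] = parameters
    if context is not None:
        body["context"] = context
    return json.dumps(body)

# Optional fields are simply omitted rather than sent as null,
# which plays well with strict gateway-level schema enforcement.
payload = build_test_payload(
    {"text": "sample input"},
    parameters={"temperature": 0.2, "top_k": 5},
)
```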
Output Types
The API primarily outputs structured JSON. While internal components may handle binary data or streams, the public testing interface flattens these into serializable objects or URI references to ensure compatibility with standard HTTP clients and testing frameworks.
Security and Authentication
While this is a testing interface, security is baked into the architectural design:
- Bearer Token Auth: Users must authenticate via an Authorization: Bearer <token> header.
- Internal Service Mesh: While the API is accessible via standard protocols, it acts as a gateway to internal microservices. Internal routing and service discovery are handled transparently; users should interact only with the public-facing endpoints.