REST API Reference
Overview
The Testing API provides a programmatic interface to manage, execute, and analyze tests within the Supervised AI platform. This API allows developers to automate quality assurance workflows for machine learning models, ensuring performance consistency across different datasets and environments.
Base URL
All requests are made to the following base URL:
https://api.supervised.ai/v1/testing
Authentication
The API uses Bearer Token authentication. Include your API key in the Authorization header of all requests.
Authorization: Bearer <YOUR_API_KEY>
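As a minimal sketch, the header can be assembled once and reused across requests. The helper name `auth_headers` and the `SUPERVISED_API_KEY` environment variable are our own conventions, not part of the API:

```python
import os

BASE_URL = "https://api.supervised.ai/v1/testing"

def auth_headers(api_key: str) -> dict:
    """Build the headers required by every Testing API request."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# Read the key from the environment; "demo-key" is a placeholder fallback.
headers = auth_headers(os.environ.get("SUPERVISED_API_KEY", "demo-key"))
print(headers["Authorization"])
```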
Tests
Create a Test Run
Initiates a new testing process for a specific model against a designated dataset.
Method: POST
Endpoint: /runs
Request Body
| Field | Type | Description |
| :--- | :--- | :--- |
| model_id | string | The unique identifier of the model to be tested. |
| dataset_id | string | The ID of the validation or test dataset. |
| config | object | Configuration parameters (e.g., thresholds, metrics to calculate). |
| tags | array of strings | Optional metadata tags for filtering results later. |
Example Usage
{
  "model_id": "mod-99283",
  "dataset_id": "ds-4412",
  "config": {
    "metrics": ["accuracy", "f1_score", "latency"],
    "threshold": 0.85
  },
  "tags": ["production-candidate", "sprint-4"]
}
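A small Python sketch for assembling and validating this request body. The helper `build_run_payload` is illustrative rather than part of any SDK, and the HTTP call itself (e.g. a POST to `/runs` with an HTTP client of your choice) is omitted so the example stays self-contained:

```python
import json

# Fields the API requires in every create-run request.
REQUIRED_FIELDS = ("model_id", "dataset_id")

def build_run_payload(model_id, dataset_id, config=None, tags=None):
    """Assemble and validate the request body for POST /runs."""
    payload = {"model_id": model_id, "dataset_id": dataset_id}
    if config is not None:
        payload["config"] = config
    if tags is not None:
        payload["tags"] = tags
    for field in REQUIRED_FIELDS:
        if not payload.get(field):
            raise ValueError(f"'{field}' is required")
    return payload

body = build_run_payload(
    "mod-99283",
    "ds-4412",
    config={"metrics": ["accuracy", "f1_score", "latency"], "threshold": 0.85},
    tags=["production-candidate", "sprint-4"],
)
print(json.dumps(body, indent=2))
```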
Get Test Status
Retrieves the current state and progress of a specific test run.
Method: GET
Endpoint: /runs/{run_id}
Path Parameters
| Parameter | Type | Description |
| :--- | :--- | :--- |
| run_id | string | The unique ID returned during test creation. |
Response Schema
| Field | Type | Description |
| :--- | :--- | :--- |
| status | string | Current state: queued, running, completed, or failed. |
| progress | float | Fraction of the run completed, from 0.0 to 1.0. |
| created_at | timestamp | ISO 8601 timestamp of initiation. |
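Because a run moves through queued and running before reaching a terminal state, clients typically poll this endpoint. A sketch of that loop, with the fetch function injected (so it can be simulated here without network access; `wait_for_run` is our own helper, not an API feature):

```python
import time

def wait_for_run(fetch_status, run_id, poll_interval=0.0, max_polls=100):
    """Poll GET /runs/{run_id} until the run reaches a terminal state.

    fetch_status is any callable returning the response dict described
    above; injecting it keeps the sketch testable without a live API.
    """
    for _ in range(max_polls):
        status = fetch_status(run_id)
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"run {run_id} did not finish after {max_polls} polls")

# Simulated responses standing in for successive real API calls.
responses = iter([
    {"status": "queued", "progress": 0.0},
    {"status": "running", "progress": 0.5},
    {"status": "completed", "progress": 1.0},
])
final = wait_for_run(lambda run_id: next(responses), "run-abc-123")
print(final["status"])
```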
Results
Fetch Test Results
Retrieves the detailed performance metrics and evaluation logs once a test run has reached the completed state.
Method: GET
Endpoint: /runs/{run_id}/results
Query Parameters
| Parameter | Type | Description |
| :--- | :--- | :--- |
| include_logs | boolean | Whether to include raw execution logs. Defaults to false. |
Example Response
{
  "run_id": "run-abc-123",
  "summary": {
    "accuracy": 0.92,
    "f1_score": 0.89,
    "latency_ms": 142
  },
  "status": "passed",
  "details": "All metrics within defined thresholds."
}
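A client can re-check the summary against its own threshold before promoting a model. This sketch assumes, as in the example above, that `latency_ms` is a timing measurement rather than a score in [0, 1]; `within_thresholds` is an illustrative helper, not part of the API:

```python
def within_thresholds(summary, threshold):
    """Check whether every quality metric meets the configured threshold.

    latency_ms is skipped because it is measured in milliseconds,
    not on the same 0.0-1.0 scale as the score metrics.
    """
    return all(
        value >= threshold
        for metric, value in summary.items()
        if metric != "latency_ms"
    )

result = {
    "run_id": "run-abc-123",
    "summary": {"accuracy": 0.92, "f1_score": 0.89, "latency_ms": 142},
    "status": "passed",
}
print(within_thresholds(result["summary"], 0.85))
```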
Datasets
List Available Datasets
Returns a list of datasets currently registered for testing.
Method: GET
Endpoint: /datasets
Response Schema
| Field | Type | Description |
| :--- | :--- | :--- |
| datasets | array | A list of dataset objects. |
| total_count | integer | Total number of datasets available. |
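The shape of each dataset object is not specified above, so the `id` and `name` fields in this sketch are assumptions made for illustration. A minimal helper for indexing the listing by dataset ID:

```python
def index_datasets(response):
    """Index dataset objects by ID from a GET /datasets response.

    Assumes each dataset object carries an "id" field; the exact
    object shape is not documented here, so this is illustrative.
    """
    return {ds["id"]: ds for ds in response.get("datasets", [])}

# Hypothetical response shaped like the schema above.
listing = {
    "datasets": [{"id": "ds-4412", "name": "validation-set"}],
    "total_count": 1,
}
by_id = index_datasets(listing)
print(sorted(by_id))
```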
Error Handling
The Testing API uses standard HTTP response codes to indicate the success or failure of an API request.
| Code | Description |
| :--- | :--- |
| 200 | OK: Request successful. |
| 201 | Created: Resource successfully created. |
| 400 | Bad Request: Invalid parameters or body format. |
| 401 | Unauthorized: Missing or invalid API key. |
| 404 | Not Found: The requested resource (Run/Dataset) does not exist. |
| 500 | Internal Server Error: An error occurred on the Supervised AI platform. |
Error Response Body
{
  "error": {
    "code": "invalid_parameter",
    "message": "The field 'model_id' is required but was not provided."
  }
}
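A client will usually want to pull out the machine-readable code and decide whether a retry makes sense. This sketch assumes the error body follows the shape shown above and treats only 500 as retryable, which is a policy choice on the client side, not something the API prescribes:

```python
# Status codes worth retrying; 4xx errors indicate a client-side problem.
RETRYABLE = {500}

def classify_error(status_code, body):
    """Extract the error code and message and flag whether to retry.

    Assumes the error body matches the example shape above.
    """
    err = body.get("error", {})
    return {
        "code": err.get("code", "unknown"),
        "message": err.get("message", ""),
        "retryable": status_code in RETRYABLE,
    }

info = classify_error(400, {
    "error": {
        "code": "invalid_parameter",
        "message": "The field 'model_id' is required but was not provided.",
    },
})
print(info["code"], info["retryable"])
```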