Error Handling Protocols
The Testing API uses standard HTTP status codes combined with a structured JSON error response body. This ensures that failures in supervised learning workflows, from data validation to model inference, are predictable and easy to debug.
Error Response Schema
All failed requests return a consistent JSON object. Use the `code` field for programmatic handling and the `message` field for logging or user display.
```json
{
  "status": "error",
  "error": {
    "code": "STRING_IDENTIFIER",
    "message": "A descriptive error message.",
    "context": {
      "field": "Optional metadata or payload snippets",
      "suggestion": "How to resolve the issue"
    }
  }
}
```
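A small helper can pull the machine-readable parts out of a response body that follows this schema. This is a minimal sketch; the function name `parse_error` and its defaults are illustrative, not part of the API.

```python
# Hypothetical helper: extract (code, message, context) from an error
# response body shaped like the schema above.
def parse_error(body: dict) -> tuple[str, str, dict]:
    """Return the code, message, and context from an error response."""
    err = body.get("error", {})
    return (
        err.get("code", "UNKNOWN"),
        err.get("message", ""),
        err.get("context", {}),
    )
```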
Standard HTTP Status Codes
| Status Code | Meaning | Common Cause |
| :--- | :--- | :--- |
| 400 Bad Request | Invalid Syntax | The request payload is malformed or missing required parameters. |
| 401 Unauthorized | Authentication Failed | Missing or invalid API key/Token. |
| 403 Forbidden | Permission Denied | The authenticated user does not have access to the specific model or dataset. |
| 404 Not Found | Resource Missing | The requested `test_id`, `model_id`, or endpoint does not exist. |
| 422 Unprocessable Entity | Validation Error | The JSON is valid, but the data fails supervised learning constraints (e.g., label mismatch). |
| 429 Too Many Requests | Rate Limiting | The request frequency exceeds the allocated quota for the testing tier. |
| 500 Internal Server Error | Platform Error | An unexpected failure within the Supervised AI infrastructure. |
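The table above implies a client-side retry policy: `429` and `500` are transient and worth retrying, while the `4xx` client errors are not. A minimal sketch, assuming exponential backoff (the function names and the three-attempt default are assumptions, not API behavior):

```python
# Status codes from the table above that are safe to retry.
RETRYABLE = {429, 500}

def should_retry(status_code: int, attempt: int, max_attempts: int = 3) -> bool:
    """Retry only transient failures, up to max_attempts tries."""
    return status_code in RETRYABLE and attempt < max_attempts

def backoff_seconds(attempt: int, base: float = 1.0) -> float:
    # Exponential backoff: 1s, 2s, 4s, ...
    return base * (2 ** attempt)
```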
Supervised Learning Specific Error Codes
In addition to HTTP status codes, the API provides domain-specific error codes to assist in troubleshooting machine learning pipelines.
DATASET_SCHEMA_MISMATCH
- Cause: The input data for testing does not match the features or dimensions expected by the model.
- Resolution: Verify that the column names and data types in your test set match the training schema.
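One way to apply this resolution is a pre-flight check that compares the test set's columns and dtypes against the training schema before calling the API. The dict-of-column-to-dtype schema format here is an assumption for illustration:

```python
# Hypothetical pre-flight check for DATASET_SCHEMA_MISMATCH.
def find_schema_mismatches(training_schema: dict, test_schema: dict) -> list[str]:
    """Return human-readable descriptions of column/type mismatches."""
    problems = []
    for col, dtype in training_schema.items():
        if col not in test_schema:
            problems.append(f"missing column: {col}")
        elif test_schema[col] != dtype:
            problems.append(
                f"type mismatch for {col}: expected {dtype}, got {test_schema[col]}"
            )
    for col in test_schema:
        if col not in training_schema:
            problems.append(f"unexpected column: {col}")
    return problems
```

An empty return value means the test set is schema-compatible and safe to submit.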
LABEL_OOD_ERROR
- Cause: "Out of Distribution" labels detected. The testing API found labels in the test set that were not present in the supervised training set.
- Resolution: Filter the test set to include only supported classes or update the model configuration.
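The "filter the test set" resolution can be sketched as a one-pass filter that drops rows whose label was not seen during supervised training (the row/label structure here is an assumption):

```python
# Sketch: keep only rows whose label appeared in the training set,
# avoiding LABEL_OOD_ERROR on submission.
def filter_supported_labels(rows, supported_labels, label_key="label"):
    supported = set(supported_labels)
    return [r for r in rows if r.get(label_key) in supported]
```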
INFERENCE_TIMEOUT
- Cause: The model took too long to return a prediction, exceeding the testing threshold.
- Resolution: Reduce the batch size of the testing request or optimize the model's forward pass.
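Reducing the batch size usually means splitting one large test payload into several smaller requests. A minimal chunking sketch:

```python
# Sketch: split a large test payload into batches no larger than
# batch_size, to be submitted as separate requests.
def chunk_payload(records, batch_size):
    """Yield successive slices of records, each at most batch_size long."""
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]
```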
MODEL_NOT_READY
- Cause: Attempting to run a test against a model that is still in the `TRAINING` or `DEPLOYING` state.
- Resolution: Poll the model status endpoint until the state is `ACTIVE` before triggering tests.
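The polling resolution can be sketched as a loop that waits for the `ACTIVE` state. The status fetcher is injected as a callable so the logic stays testable; in practice it would call the model status endpoint. The timeout and interval defaults are assumptions:

```python
import time

# Sketch: poll until the model reports ACTIVE, bail out on FAILED,
# and give up after `timeout` seconds.
def wait_until_active(get_status, timeout=300.0, interval=5.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_status()
        if state == "ACTIVE":
            return True
        if state == "FAILED":
            raise RuntimeError("model entered FAILED state")
        time.sleep(interval)
    return False
```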
Troubleshooting Steps
When an error is encountered, follow these protocols to resolve the issue:
- Validate Request Schema: Use the `context` field in the error response to identify which field failed validation. Cross-reference your payload with the [API Reference Documentation].
- Check Model State: Ensure the target model is currently `ACTIVE`. Testing requests against `ARCHIVED` or `FAILED` models will return a `404` or `422`.
- Inspect Batch Size: For `504 Gateway Timeout` or `INFERENCE_TIMEOUT` errors, reduce the number of records in your `test_payload` and retry.
- Verify Permissions: Ensure your API key has `scope:test_read` and `scope:test_write` permissions for the specific project ID.
- Log Trace IDs: For `500 Internal Server Error` responses, include the `x-request-id` header in your support ticket to help the Supervised AI team locate the logs.
Usage Example: Handling a Validation Error
```python
import requests

response = requests.post(
    "https://api.supervised.ai/v1/testing/run",
    json={"model_id": "mod_123", "data": []}  # Empty dataset
)

if response.status_code == 422:
    error_data = response.json().get('error')
    print(f"Error [{error_data['code']}]: {error_data['message']}")

    # Logical check for specific codes
    if error_data['code'] == 'DATASET_EMPTY':
        # Implement fallback or alert
        pass
```