# Debugging Guide

## Common Integration Pitfalls
Integrating with the Testing API requires precision in payload structure and authentication. Below are the most common issues encountered by developers and strategies to resolve them.
### 1. Authentication Failures (401/403)
Most authentication issues stem from incorrect token placement or expired credentials.
- Invalid Header: Ensure you are using the `Authorization` header with the `Bearer` prefix: `Authorization: Bearer <YOUR_ACCESS_TOKEN>`
- Token Scope: Verify that your API key or token has `write` access if you are attempting to initiate a new test suite, or `read` access for retrieving results.
- Environment Mismatch: Ensure your token corresponds to the environment you are hitting (e.g., Development vs. Production).
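The header requirement above can be sketched in a small helper. This is illustrative only: the helper name is made up, the `Content-Type` header is an assumption, and `<YOUR_ACCESS_TOKEN>` is a placeholder.

```python
# Minimal sketch of attaching the bearer token to a request.
# The helper name and the Content-Type header are assumptions,
# not part of the documented API surface.

def build_auth_headers(token: str) -> dict:
    """Return the headers the Testing API expects for authentication."""
    return {
        "Authorization": f"Bearer {token}",  # note the 'Bearer ' prefix
        "Content-Type": "application/json",
    }

headers = build_auth_headers("<YOUR_ACCESS_TOKEN>")
```

Keeping header construction in one place makes it harder to accidentally omit the `Bearer` prefix on some code path.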
### 2. Payload Schema Validation Errors (400 Bad Request)
The Testing API enforces strict JSON schema validation for all incoming requests.
- Missing Required Fields: Ensure your request body contains all mandatory fields defined in the API contract (e.g., `model_id`, `dataset_version`).
- Type Mismatches: Check that numeric values are not passed as strings and that arrays contain the expected object types.
- Validation Tooling: Use a JSON validator or the provided SDK's built-in validation methods before sending the request.
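A minimal client-side pre-flight check along these lines is sketched below. The required field names (`model_id`, `dataset_version`) come from this guide; the expected types are assumptions, so align them with the actual API contract.

```python
# Client-side pre-flight validation sketch. Field names come from the
# guide above; the type expectations here are assumptions.

REQUIRED_FIELDS = {"model_id": str, "dataset_version": str}

def validate_payload(payload: dict) -> list:
    """Return a list of human-readable schema problems (empty if valid)."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            errors.append(f"missing required field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return errors
```

Running a check like this before the request turns an opaque 400 response into an actionable local error message.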
### 3. Asynchronous Execution and Polling
Since AI testing can be time-intensive, most endpoints return a 202 Accepted status rather than immediate results.
- Premature Polling: Avoid polling the status endpoint more than once every 5 seconds. High-frequency polling may trigger rate limiting.
- Handling `PENDING` vs. `FAILED`: If a test remains in a `PENDING` state indefinitely, check the `status_message` field in the response. This often contains details regarding resource exhaustion or model loading timeouts.
- Incorrect Resource ID: Ensure you are using the `job_id` returned from the initial `POST` request to poll for updates, not the `model_id`.
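The polling rules above can be combined into one loop, sketched here with the status call injected as a callable (`fetch_status` stands in for your GET against the status endpoint, whose path is not specified in this guide).

```python
import time

# Polling sketch: poll with the job_id (not the model_id) at intervals
# of at least 5 seconds, and surface status_message if the job stalls.
# `fetch_status` is a stand-in for your HTTP call to the status endpoint.

def poll_job(job_id, fetch_status, interval=5.0, max_attempts=60,
             sleep=time.sleep):
    """Poll until the job leaves PENDING/RUNNING; return the final status."""
    status = None
    for _ in range(max_attempts):
        status = fetch_status(job_id)
        if status["state"] not in ("PENDING", "RUNNING"):
            return status
        sleep(interval)  # never faster than once every 5 seconds
    # Stuck in PENDING: include status_message to aid diagnosis.
    raise TimeoutError(
        f"job {job_id} still pending: {status.get('status_message')}"
    )
```

Injecting `sleep` and `fetch_status` keeps the loop unit-testable without real network calls.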
### 4. Rate Limiting and Quotas (429 Too Many Requests)
The Supervised AI platform imposes rate limits to ensure stability across the testing infrastructure.
- Back-off Strategy: Implement an exponential back-off strategy when encountering a `429` error.
- Header Inspection: Check the `X-RateLimit-Remaining` and `Retry-After` headers in the response to determine when it is safe to resume requests.
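One way to combine the two points above: prefer the server's `Retry-After` value when present, and fall back to exponential back-off otherwise. The base delay, cap, and jitter fraction below are illustrative choices, not documented values.

```python
import random

# Back-off sketch for 429 responses: honour Retry-After when the server
# sends it, otherwise double the delay each attempt (capped, with jitter).
# base/cap/jitter values are assumptions, not documented limits.

def next_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    """Seconds to wait before retry number `attempt` (0-based)."""
    if retry_after is not None:
        return float(retry_after)            # server-specified wait wins
    delay = min(cap, base * (2 ** attempt))  # 1, 2, 4, 8, ... capped
    return delay + random.uniform(0, delay * 0.1)  # jitter avoids sync
```

The jitter spreads retries from concurrent clients so they do not all hit the rate limiter again at the same instant.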
### 5. Connection Timeouts
Large datasets or complex model evaluations can lead to connection timeouts if not handled correctly.
- Timeout Configuration: Set your client-side timeout to at least 30 seconds for initial handshakes.
- Payload Size: If you are uploading test cases directly in the API call, ensure the payload does not exceed 5MB. For larger datasets, use the URI-based upload method to reference external storage.
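The 5MB cutoff can be enforced before sending, as in this sketch. The `dataset_uri` field name is an assumption standing in for whatever the URI-based upload method actually expects.

```python
import json

# Sketch of the 5MB inline-payload cutoff described above.
# The `dataset_uri` field name is an assumption; check the API contract
# for the real URI-based upload field.

INLINE_LIMIT_BYTES = 5 * 1024 * 1024  # 5MB

def prepare_body(test_cases, dataset_uri=None):
    """Inline small payloads; reference external storage for large ones."""
    encoded = json.dumps({"test_cases": test_cases}).encode("utf-8")
    if len(encoded) <= INLINE_LIMIT_BYTES:
        return {"test_cases": test_cases}
    if dataset_uri is None:
        raise ValueError("payload exceeds 5MB; upload it and pass a URI")
    return {"dataset_uri": dataset_uri}  # URI-based upload path
```

Measuring the encoded size (rather than guessing from object counts) is what actually matches the limit the server enforces.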
## Diagnostic Headers
When contacting support or debugging internally, always capture the following headers from the API response:
| Header | Description |
| :--- | :--- |
| `X-Request-ID` | The unique identifier for your request. Essential for log tracing. |
| `X-Trace-ID` | Used to track the request across internal microservices. |
| `X-Runtime` | The time taken by the server to process the request (in milliseconds). |
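A small helper can capture these three headers from a response for logging or a support ticket. This sketch assumes the header mapping preserves the canonical casing shown above; real HTTP clients often expose case-insensitive mappings.

```python
# Helper sketch: pull the diagnostic headers out of a response's header
# mapping so they can be logged or attached to a support ticket.
# Assumes canonical header casing; many HTTP clients normalise this.

DIAGNOSTIC_HEADERS = ("X-Request-ID", "X-Trace-ID", "X-Runtime")

def capture_diagnostics(headers):
    """Return only the diagnostic headers present in the response."""
    return {name: headers[name] for name in DIAGNOSTIC_HEADERS
            if name in headers}
```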
## Debugging with the CLI (Internal Tooling)

If you have access to the internal `testing-api` CLI, you can use the `--verbose` flag to inspect raw request/response cycles:

```shell
# Example of debugging a test trigger
supervised-test trigger --id <MODEL_ID> --verbose
```
This will output the full cURL-equivalent command, including headers and the exact payload being sent to the server.