# Data Flow Models

## Overview of Data Flow
The testing-api serves as the structural contract between your test suites and the Supervised AI platform. It ensures that requests sent to simulated environments adhere to the platform's schema and that responses from testing mocks are predictable and type-safe.
The flow is designed to intercept standard API calls and route them through a validation layer before they reach the mock execution engine.
## The Request Lifecycle
The following steps outline the journey of a data packet from your local testing environment to the mock service:
- Client Initiation: The user initiates a request using the testing-api client interfaces.
- Schema Validation: The request object is validated against the official Supervised AI platform models to ensure compatibility with production endpoints.
- Context Injection: Testing metadata (e.g., mock latency, specific error triggers, or model versioning) is appended to the request headers or payload.
- Mock Resolution: The request is routed to the corresponding testing mock based on the endpoint definition.
- Response Synthesis: The mock service returns a structured response that mimics actual AI platform behavior, including confidence scores, token usage, and metadata.
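The five steps above can be sketched as a minimal pipeline. All of the names here (`validate`, `injectContext`, the `mocks` registry, and the request/response shapes) are illustrative assumptions, not part of the actual testing-api surface:

```typescript
// Illustrative shapes -- not the real testing-api type definitions.
interface RawRequest { endpoint: string; payload: Record<string, unknown>; }
interface InjectedRequest extends RawRequest { meta: { mockLatencyMs: number }; }
interface MockResponse { status: 'ok' | 'error'; data: unknown; }

// Step 2: schema validation (simplified here to a required-field check).
function validate(req: RawRequest): RawRequest {
  if (!req.endpoint) throw new Error('Validation failed: missing endpoint');
  return req;
}

// Step 3: context injection -- append testing metadata to the request.
function injectContext(req: RawRequest, latencyMs: number): InjectedRequest {
  return { ...req, meta: { mockLatencyMs: latencyMs } };
}

// Step 4: mock resolution -- route by endpoint definition.
const mocks: Record<string, (req: InjectedRequest) => MockResponse> = {
  '/completions': (req) => ({ status: 'ok', data: { echo: req.payload } }),
};

// Step 5: response synthesis.
function run(raw: RawRequest): MockResponse {
  const req = injectContext(validate(raw), 200);
  const handler = mocks[req.endpoint];
  if (!handler) return { status: 'error', data: 'no mock registered' };
  return handler(req);
}
```

The key design point is that validation happens before routing, so a malformed request never reaches a mock handler.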
## Component Interaction

### 1. Client to API Layer
The user interacts primarily with the public interfaces. This layer abstracts the HTTP/S complexity, providing a typed method for every supported platform capability.
```typescript
import { TestingClient } from 'testing-api';

// Initialize the client with a specific testing context.
const client = new TestingClient({
  environment: 'sandbox',
  version: 'v1.2'
});
```
### 2. Validation and Transformation
Before the data leaves the client, it is transformed into a standardized internal representation. This ensures that even if the testing structure changes, the user's public-facing code remains stable.
| Phase | Input Type | Description |
| :--- | :--- | :--- |
| Input | RawRequestPayload | The raw data provided by the user (JSON/Object). |
| Validation | ValidatedModel | Verification against the Supervised AI schema. |
| Output | MockRequest | A decorated object ready for the mock engine. |
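The three phases in the table can be expressed as a pair of functions. The field names (`model`, `prompt`, `decoratedAt`) are assumptions made for illustration; only the phase type names come from the table:

```typescript
// Phase types mirroring the table: raw input, validated model, decorated request.
type RawRequestPayload = { model?: string; prompt?: string };
type ValidatedModel = { model: string; prompt: string };
type MockRequest = ValidatedModel & { decoratedAt: number };

// Validation phase: verify required fields against the schema.
function validatePayload(raw: RawRequestPayload): ValidatedModel {
  if (!raw.model || !raw.prompt) {
    throw new Error('Schema validation failed: model and prompt are required');
  }
  return { model: raw.model, prompt: raw.prompt };
}

// Output phase: decorate the validated model for the mock engine.
function decorate(validated: ValidatedModel): MockRequest {
  return { ...validated, decoratedAt: Date.now() };
}
```

Because the public type is `RawRequestPayload` and the internal one is `MockRequest`, the internal representation can evolve without breaking user code, which is the stability guarantee described above.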
### 3. Mock Service Execution

The testing-api routes the validated request to a mock handler. This handler processes the input and selects an appropriate response model based on the configured testing scenario.
```typescript
// Example of the data flow in a completion test
const response = await client.completions.create({
  model: "supervised-pro-1",
  prompt: "Analyze this dataset.",
  mockOptions: {
    simulateError: false,
    delay: 200 // ms
  }
});

// The response flows back as a structured TestingResponse type.
console.log(response.data.choices[0].text);
```
## Data Model Specifications

### Request Flow Model
When sending data to the API, the structure follows a strict hierarchy to ensure the Supervised AI platform can parse the intent:
- Identity Layer: Contains API keys and environment identifiers.
- Config Layer: Model parameters (temperature, max tokens, etc.).
- Payload Layer: The actual content/data for the AI to process.
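The three-layer hierarchy can be modeled as nested interfaces. The individual field names below are hypothetical; only the layer names come from the list above:

```typescript
// Hypothetical reconstruction of the three-layer request hierarchy.
interface IdentityLayer { apiKey: string; environment: 'sandbox' | 'production'; }
interface ConfigLayer { temperature: number; maxTokens: number; }
interface PayloadLayer { content: string; }

interface FlowRequest {
  identity: IdentityLayer;   // API keys and environment identifiers
  config: ConfigLayer;       // Model parameters
  payload: PayloadLayer;     // Content for the AI to process
}

const example: FlowRequest = {
  identity: { apiKey: 'test-key', environment: 'sandbox' },
  config: { temperature: 0.7, maxTokens: 256 },
  payload: { content: 'Analyze this dataset.' },
};
```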
### Response Flow Model
The return data flow provides more than just the AI output; it includes testing-specific telemetry:
- Status: Standardized success/failure codes.
- Payload: The simulated AI response.
- Diagnostics: Metadata regarding the mock execution (e.g., `mock_id`, `process_time`).
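A response carrying this telemetry might look like the following sketch. The nested field names other than `mock_id` and `process_time` are assumptions for illustration:

```typescript
// Hypothetical shape of a response with testing telemetry attached.
interface TestingResponseShape {
  status: { code: number; ok: boolean };                    // success/failure codes
  payload: { choices: { text: string }[] };                 // simulated AI output
  diagnostics: { mock_id: string; process_time: number };   // mock execution metadata
}

const sample: TestingResponseShape = {
  status: { code: 200, ok: true },
  payload: { choices: [{ text: 'Simulated output.' }] },
  diagnostics: { mock_id: 'mock-001', process_time: 12 },
};
```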
## Error Propagation

In the event of a validation failure or a triggered mock error, the data flow is interrupted, and a `TestingAPIException` is propagated back to the client. This object includes the specific point of failure in the lifecycle (Validation, Network Simulation, or Mock Logic).
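A minimal sketch of such an exception, assuming the three lifecycle phases named above are carried as a `phase` field (the constructor shape and `guardTemperature` helper are hypothetical):

```typescript
type FailurePhase = 'Validation' | 'NetworkSimulation' | 'MockLogic';

// Hypothetical reconstruction of the exception described above.
class TestingAPIException extends Error {
  constructor(public phase: FailurePhase, message: string) {
    super(`[${phase}] ${message}`);
    this.name = 'TestingAPIException';
  }
}

// Example guard that interrupts the flow at the Validation phase.
function guardTemperature(t: number): number {
  if (t < 0 || t > 2) {
    throw new TestingAPIException('Validation', 'temperature out of range');
  }
  return t;
}
```

Carrying the phase on the exception lets a test suite assert not only that a request failed, but where in the lifecycle it failed.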