System Topology
The testing-api acts as the primary orchestration layer between external testing suites and the Supervised AI core infrastructure. It facilitates the ingestion of test datasets, triggers model evaluations, and aggregates performance metrics for reporting.
Architectural Overview
The system follows a hub-and-spoke model where the Testing API serves as the central gateway. It abstracts the complexities of the underlying Supervised AI inference engines and evaluation protocols, providing a unified interface for developers.
- Client Application/SDK: The user-facing component that sends test configurations and data payloads.
- Testing API (This Service): Validates requests, manages test lifecycles, and interfaces with internal evaluation services.
- Supervised AI Core Services: High-performance engines that execute model inference and compute evaluation metrics (e.g., accuracy, latency, drift).
- Results Store: A persistent layer where test outputs and historical benchmarks are stored for comparative analysis.
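The hub-and-spoke flow above can be pictured as a chain of handoffs. The sketch below models it with plain TypeScript types; every name here is illustrative and not part of the actual SDK.

```typescript
// Illustrative model of the hub-and-spoke topology (all names are assumptions).
type TestPayload = { modelId: string; dataset: string };
type ExecutionTask = TestPayload & { taskId: string };
type TestResult = { taskId: string; metrics: Record<string, number> };

// Client -> Testing API: validate the request and assign a lifecycle ID.
function testingApi(payload: TestPayload): ExecutionTask {
  if (!payload.modelId || !payload.dataset) {
    throw new Error("Invalid test configuration");
  }
  return { ...payload, taskId: `task-${payload.modelId}` };
}

// Testing API -> Core Services: execute inference and compute metrics (stubbed).
function coreServices(task: ExecutionTask): TestResult {
  return { taskId: task.taskId, metrics: { accuracy: 0.0, latency: 0.0 } };
}

// Core Services -> Results Store: persist outputs for comparative analysis.
const resultsStore: TestResult[] = [];
function persist(result: TestResult): void {
  resultsStore.push(result);
}

// One full hub-and-spoke round trip:
persist(coreServices(testingApi({ modelId: "m1", dataset: "d1" })));
```

The point of the sketch is the direction of the handoffs, not the stubbed bodies: the Testing API never computes metrics itself, and the Core services never talk to the client directly.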
Component Interactions
The components below interact in sequence during a standard testing lifecycle:
1. Testing API Interface (Public)
The public surface area is designed for stateless communication. Users interact primarily with execution endpoints to submit batches of test cases.
- Input: JSON-based test configurations including model identifiers, dataset references, and evaluation parameters.
- Output: Execution IDs and real-time status objects.
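A minimal sketch of those request and response shapes in TypeScript; the field names follow the schema table later in this document, but the exact wire format is an assumption.

```typescript
// Illustrative shapes for the public surface (field names are assumptions
// based on the schema table in this document; the real wire format may differ).
interface TestRequest {
  modelId: string;
  dataset: string | string[];
  metrics: string[];
  parameters?: Record<string, unknown>;
}

type TestStatus = "QUEUED" | "RUNNING" | "COMPLETED" | "FAILED";

interface TestStatusResponse {
  testId: string;
  status: TestStatus;
  results?: Record<string, number>;
}

// Basic client-side validation before submitting a batch.
function isValidRequest(req: TestRequest): boolean {
  return req.modelId.length > 0 && req.metrics.length > 0;
}
```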
2. Core Service Integration (Internal)
While the testing-api handles the "how" of the test, the Supervised AI Core handles the "what." The API translates user-defined test logic into execution tasks optimized for the platform's infrastructure.
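As a rough sketch of that translation step, a user-level configuration might fan out into one execution task per requested metric. This is purely illustrative; the real task format is internal to the platform.

```typescript
// Hypothetical fan-out of a user config into per-metric execution tasks.
interface UserConfig {
  modelId: string;
  dataset: string;
  metrics: string[];
}

interface CoreTask {
  modelId: string;
  dataset: string;
  metric: string;
}

function toCoreTasks(config: UserConfig): CoreTask[] {
  return config.metrics.map((metric) => ({
    modelId: config.modelId,
    dataset: config.dataset,
    metric,
  }));
}

const tasks = toCoreTasks({
  modelId: "gpt-4-eval-v1",
  dataset: "validation-set-alpha",
  metrics: ["accuracy", "latency"],
});
// tasks contains one entry per requested metric
```

Fanning out per metric lets each task be scheduled independently on the Core infrastructure, which is one plausible reading of "execution tasks optimized for the platform."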
3. Reporting & Telemetry
Upon completion of a test cycle, the API aggregates raw data from the Core services into structured reports. These are accessible via the results endpoints or exported to external logging providers.
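The aggregation step can be sketched as folding raw per-task records into a structured report. The record shape here is an assumption, not the API's actual output format.

```typescript
// Illustrative aggregation of raw metric records into a structured report.
interface RawRecord {
  metric: string;
  value: number;
}

// Average each metric across all records from the Core services.
function aggregate(records: RawRecord[]): Record<string, number> {
  const sums: Record<string, { total: number; count: number }> = {};
  for (const { metric, value } of records) {
    const entry = (sums[metric] ??= { total: 0, count: 0 });
    entry.total += value;
    entry.count += 1;
  }
  const report: Record<string, number> = {};
  for (const [metric, { total, count }] of Object.entries(sums)) {
    report[metric] = total / count;
  }
  return report;
}

const report = aggregate([
  { metric: "accuracy", value: 80 },
  { metric: "accuracy", value: 90 },
  { metric: "latency", value: 120 },
]);
// report.accuracy === 85, report.latency === 120
```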
Usage Example: Initializing a Test Run
To initiate a test through the system topology, users instantiate the TestingAPI client and submit a test configuration. This triggers the workflow across the components described above.
```typescript
import { TestingAPI } from '@supervised-ai/testing-api';

const client = new TestingAPI({
  apiKey: process.env.SUPERVISED_AI_KEY,
  environment: 'production'
});

// Defining the test configuration for the topology to process
const testConfig = {
  modelId: "gpt-4-eval-v1",
  dataset: "validation-set-alpha",
  metrics: ["accuracy", "latency", "f1-score"],
  parameters: {
    temperature: 0.7,
    maxTokens: 500
  }
};

// The API routes this to Core Services and returns a tracking ID
const run = await client.runTest(testConfig);
console.log(`Test initiated. Track status at: ${run.statusUrl}`);
```
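After a run is initiated, a client typically polls until the execution leaves the QUEUED/RUNNING states. The loop below is a generic sketch; `fetchStatus` stands in for whatever status-check call the SDK exposes (an assumption, not a documented method), and the demo drives it with a stubbed status sequence instead of a live API.

```typescript
// Illustrative polling loop; `fetchStatus` is a placeholder for an SDK call.
type Status = "QUEUED" | "RUNNING" | "COMPLETED" | "FAILED";

async function pollUntilDone(
  fetchStatus: () => Promise<Status>,
  intervalMs = 1000,
  maxAttempts = 10
): Promise<Status> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (status === "COMPLETED" || status === "FAILED") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Polling timed out");
}

// Demo with a stubbed status sequence instead of a live API call.
const sequence: Status[] = ["QUEUED", "RUNNING", "COMPLETED"];
let call = 0;
const demo = pollUntilDone(async () => sequence[call++], 1);
```

In real use, the interval and attempt cap should match the expected runtime of the submitted batch, since long evaluations can sit in RUNNING well past a default timeout.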
Data Schema
| Object | Field | Type | Description |
| :--- | :--- | :--- | :--- |
| Request | modelId | string | The unique identifier of the model within Supervised AI. |
| | dataset | string \| Array | The source data to be used for the test run. |
| Response | testId | uuid | A unique identifier for the specific test execution. |
| | status | enum | The current state (e.g., QUEUED, RUNNING, COMPLETED, FAILED). |
| | results | object | Aggregated metrics returned from Core Services. |
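The response fields in the table can be mirrored as a TypeScript type with a small status guard. This is illustrative only; it simply encodes the four states listed above and rejects anything else.

```typescript
// TypeScript mirror of the response rows in the schema table (illustrative).
const STATUSES = ["QUEUED", "RUNNING", "COMPLETED", "FAILED"] as const;
type Status = (typeof STATUSES)[number];

interface TestResponse {
  testId: string;                     // unique identifier for the execution
  status: Status;                     // current state of the run
  results?: Record<string, number>;   // aggregated metrics, once available
}

// Narrow an untrusted string to the Status union.
function isStatus(value: string): value is Status {
  return (STATUSES as readonly string[]).includes(value);
}
```

A guard like this is handy at the API boundary, where status values arrive as plain strings from JSON.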