Cognitive Orchestration
The HX47 Runtime uses a distributed orchestration model to bridge the gap between deterministic IoT protocols and stochastic AI reasoning.
The gRPC Bridge
Communication between the TS Kernel and the Python Sidecar is strictly typed over gRPC: the schema gives well-defined, robust error handling, while the bidirectional streaming transport keeps event-observation latency in the sub-millisecond range.
Proto Definition
The interface is defined in hx47_runtime.proto:
service CognitiveRuntime {
  rpc Orchestrate(stream RuntimeEvent) returns (stream CognitionProposal);
}
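The Python side of this bidirectional stream can be sketched as a generator that consumes events and yields proposals. The dataclasses below are illustrative stand-ins for the protoc-generated messages, and the handler name and fields are assumptions; the real sidecar would subclass the generated CognitiveRuntime servicer.

```python
from dataclasses import dataclass
from typing import Iterator


@dataclass
class RuntimeEvent:
    """Stand-in for the protoc-generated RuntimeEvent message."""
    device_id: str
    payload: str


@dataclass
class CognitionProposal:
    """Stand-in for the protoc-generated CognitionProposal message."""
    device_id: str
    action: str


def orchestrate(events: Iterator[RuntimeEvent]) -> Iterator[CognitionProposal]:
    """Mirror the Orchestrate RPC shape: consume a stream of events and
    yield a stream of proposals, one per observed event."""
    for event in events:
        # Placeholder reasoning step; the actual inference pipeline
        # (hydration, model selection, proposal) replaces this line.
        yield CognitionProposal(device_id=event.device_id,
                                action=f"ack:{event.payload}")
```

Because both directions are streams, the sidecar can emit proposals incrementally instead of waiting for the event stream to close.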
Inference Pipeline
The Python sidecar implements a multi-stage inference pipeline:
- Context Window Hydration: Ingests the recent graph state and device telemetry.
- Model Selection: Chooses between Primary (Complex reasoning) and Lightweight (Fast response) models via OpenRouter.
- Autonomous Proposal: Generates a set of possible actions based on the tenant's intent.
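The model-selection stage can be sketched as a pure routing function. The signature, the token threshold, and the requires_reasoning flag are illustrative assumptions, not the runtime's actual heuristics:

```python
def select_model(primary: str, lightweight: str,
                 context_tokens: int, requires_reasoning: bool) -> str:
    """Route reasoning-heavy or large-context requests to the primary
    model; everything else goes to the lightweight model for fast
    responses. Threshold of 2048 tokens is an illustrative assumption."""
    if requires_reasoning or context_tokens > 2048:
        return primary
    return lightweight
```

In practice the two model identifiers would come from the HX47_MODEL_PRIMARY and HX47_MODEL_LIGHTWEIGHT environment variables described under Configuration.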
Safety Gateway
Every cognitive proposal is intercepted by the Safety Gateway in the TS Kernel before being converted into a signed HxTP command.
Validation Steps
- Manifest Check: Does the proposed action exist in the device's capability manifest?
- Rate Limiting: Has the device exceeded its command budget for the current window?
- Tenant Lockdown: Is the tenant's account currently suspended?
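Although the Safety Gateway lives in the TS Kernel, its three checks can be sketched in Python. All parameter names and the sliding-window rate-limit shape are assumptions for illustration:

```python
def validate_proposal(action: str,
                      manifest: set[str],
                      command_timestamps: list[float],
                      budget: int,
                      window_seconds: float,
                      tenant_suspended: bool,
                      now: float) -> tuple[bool, str]:
    """Apply the gateway checks in order; the first failure rejects the
    proposal before it can be converted into a signed HxTP command."""
    # 1. Manifest check: the action must exist in the capability manifest.
    if action not in manifest:
        return False, "action not in capability manifest"
    # 2. Rate limiting: count commands inside the current window.
    recent = [t for t in command_timestamps if now - t < window_seconds]
    if len(recent) >= budget:
        return False, "command budget exceeded"
    # 3. Tenant lockdown: suspended tenants cannot issue commands.
    if tenant_suspended:
        return False, "tenant suspended"
    return True, "ok"
```

Ordering the cheap manifest lookup first means malformed proposals are rejected without touching rate-limit state.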
Configuration
The orchestration is controlled via these environment variables in .env.prod:
HX47_MODEL_PRIMARY="meta-llama/llama-3.2-3b-instruct:free"
HX47_MODEL_LIGHTWEIGHT="google/gemma-2-9b-it:free"
HX47_MODEL_ORCHESTRATION="openrouter/free"
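A sidecar might read these variables at startup roughly as follows; the loader function is a hypothetical sketch, with the .env.prod values above used as fallback defaults:

```python
import os


def load_model_config() -> dict[str, str]:
    """Read the orchestration model identifiers from the environment,
    falling back to the documented .env.prod defaults."""
    return {
        "primary": os.environ.get(
            "HX47_MODEL_PRIMARY", "meta-llama/llama-3.2-3b-instruct:free"),
        "lightweight": os.environ.get(
            "HX47_MODEL_LIGHTWEIGHT", "google/gemma-2-9b-it:free"),
        "orchestration": os.environ.get(
            "HX47_MODEL_ORCHESTRATION", "openrouter/free"),
    }
```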