Binary Interpreted Abstract Syntax
A universal LLM data gateway with multi-format support.
Reduce token usage by 44-52% while maintaining 100% semantic accuracy.
BIAS is a production-ready encoding format that transforms verbose structured data into ultra-compact, LLM-friendly representations. With 8 complete format adapters (JSON, YAML, TOML, HTML, Markdown, XML, CSV, JSON-RPC), comprehensive test coverage, and proven performance, BIAS has evolved from a JSON optimizer into a general-purpose LLM data layer with automatic format detection and conversion.
Production-ready adapters for JSON, YAML, TOML, HTML, Markdown, XML, CSV, and JSON-RPC. Automatic detection with confidence-based format recognition (0.0-1.0 scale). All adapters tested and optimized for production use.
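Confidence-based detection can be pictured with a minimal sketch. The function name, scores, and heuristics below are illustrative assumptions, not the real BIAS detector: each candidate format gets a score on the 0.0-1.0 scale, and the highest-confidence format wins.

```python
# Illustrative sketch of confidence-based format detection.
# NOT the real BIAS API: names, scores, and heuristics are assumptions.
import json

def detect_format(text: str) -> tuple[str, float]:
    """Return (format_name, confidence) for a text payload."""
    stripped = text.strip()
    scores: dict[str, float] = {}

    # JSON: a successful strict parse is near-certain evidence.
    try:
        json.loads(stripped)
        scores["json"] = 0.95
    except ValueError:
        scores["json"] = 0.0

    # XML/HTML: angle-bracket framing is strong but not conclusive.
    if stripped.startswith("<") and stripped.endswith(">"):
        scores["xml"] = 0.7

    # CSV: a consistent comma count across lines gives moderate confidence.
    lines = [ln for ln in stripped.splitlines() if ln]
    if len(lines) > 1 and len({ln.count(",") for ln in lines}) == 1 and lines[0].count(",") > 0:
        scores["csv"] = 0.6

    best = max(scores, key=scores.get, default="unknown")
    return (best, scores.get(best, 0.0))
```

A real detector runs one scorer per adapter and only commits to a format above a confidence threshold.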
**2.7µs** JSON simple roundtrip, **10.3µs** nested. Even complex documents process in microseconds: Markdown (6.5µs), HTML (8.8µs), TOML (13.8µs). Measured with criterion.rs on real hardware (Nov 25, 2025).
Perfect round-trip conversion guaranteed. Every byte, every structure, every semantic meaning preserved with deterministic encoding.
44-52% average token reduction vs JSON. That's $316/month savings per 1M API calls. Scale to 100M calls per month? Save $379,200 annually.
Comprehensive test coverage with 100% success rate, DoS protection (max depth 128, max entities 100K), reserved namespace (_bias_*), and validation across Gemini, Claude, GPT, and Llama models.
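The documented DoS limits (max depth 128, max entities 100K) amount to a bounded traversal of the input. Here is a minimal sketch of such a guard; the function and its signature are hypothetical, not the real BIAS validator:

```python
# Hypothetical guard mirroring the documented limits: depth <= 128,
# total entities <= 100,000. Not the real BIAS validation API.
MAX_DEPTH = 128
MAX_ENTITIES = 100_000

def validate(node, depth=0, seen=0) -> int:
    """Walk a parsed document, counting entities; raise on limit breach."""
    if depth > MAX_DEPTH:
        raise ValueError("max depth exceeded")
    seen += 1
    if seen > MAX_ENTITIES:
        raise ValueError("max entities exceeded")
    if isinstance(node, dict):
        for value in node.values():
            seen = validate(value, depth + 1, seen)
    elif isinstance(node, list):
        for value in node:
            seen = validate(value, depth + 1, seen)
    return seen
```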
gRPC server with Python, TypeScript, and JavaScript bindings. Simple API, drop-in replacement for your existing data pipeline.
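The "drop-in replacement" claim boils down to swapping `json.dumps`/`json.loads` for BIAS encode/decode at the pipeline boundary. A sketch of that pattern, where the `bias_client` module and its `encode`/`decode` calls are assumed for illustration (the real bindings may differ):

```python
# Drop-in wrapper sketch. `bias_client` and its encode/decode signatures
# are ASSUMPTIONS for illustration; consult the actual bindings.
import json

try:
    import bias_client  # hypothetical gRPC binding
except ImportError:
    bias_client = None

def encode(payload: dict) -> str:
    if bias_client is not None:
        return bias_client.encode(payload, fmt="json")
    return json.dumps(payload)  # graceful fallback to plain JSON

def decode(blob: str) -> dict:
    if bias_client is not None:
        return bias_client.decode(blob)
    return json.loads(blob)

doc = {"id": 1, "title": "Hello World"}
assert decode(encode(doc)) == doc  # lossless round trip either way
```

The fallback path means the wrapper degrades to plain JSON when the BIAS service is unavailable, which keeps the pipeline functional during rollout.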
BIAS transforms structured data through a multi-stage pipeline optimized for transformer models.
1. Input (JSON/YAML/TOML/HTML/Markdown)
2. Automatic format detection (confidence-based)
3. Parse → canonical graph representation
4. Graph → BIAS encoding (low-entropy, rigid sequence)
5. Optimized output (44-52% smaller)

Reverse: BIAS → Graph → Original format (100% lossless)
```json
{
  "user": {
    "id": 12345,
    "name": "Alice Smith",
    "email": "alice@example.com",
    "preferences": {
      "theme": "dark",
      "notifications": true
    }
  },
  "posts": [
    {"id": 1, "title": "Hello World", "views": 1523},
    {"id": 2, "title": "BIAS Guide", "views": 892}
  ]
}
```
(Compact, binary-like encoding; not human-readable.) The actual BIAS output is a highly compressed graph representation optimized for transformer token efficiency, not for display.
BIAS uses a proprietary graph-based encoding that preserves 100% semantic accuracy while achieving 44-52% token reduction across all major LLM providers.
November 2025 - Multi-Format Adapter Architecture
BIAS reduces token usage for every supported format, not just JSON. Consistent savings across all 8 adapters with 100% semantic accuracy.
Measured with Claude 3.5 Sonnet tokenizer across comprehensive test corpus. All formats achieve lossless round-trip conversion with 100% semantic accuracy.
Cost savings at scale
| Usage Volume | JSON Cost | BIAS Cost | Annual Savings |
|---|---|---|---|
| 1M calls/month | $600 | $284 | $3,792 |
| 10M calls/month | $6,000 | $2,840 | $37,920 |
| 100M calls/month | $60,000 | $28,400 | $379,200 |
* Based on average token prices across major LLM providers (GPT-4, Claude, Gemini)
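The table rows follow from one arithmetic step: savings scale linearly with call volume, at $600 − $284 = $316 saved per million calls per month.

```python
# Worked example reproducing the savings table, assuming the blended
# prices above: $600 (JSON) vs $284 (BIAS) per 1M calls per month.
JSON_COST_PER_1M = 600.0
BIAS_COST_PER_1M = 284.0

def annual_savings(calls_per_month_millions: float) -> float:
    monthly = (JSON_COST_PER_1M - BIAS_COST_PER_1M) * calls_per_month_millions
    return monthly * 12

assert annual_savings(1) == 3_792.0
assert annual_savings(10) == 37_920.0
assert annual_savings(100) == 379_200.0
```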
Start using BIAS today and join the teams already saving thousands on their AI infrastructure.