
Token-Oriented Object Notation (TOON)

CI npm version SPEC v1.4 npm downloads (total) License: MIT

Token-Oriented Object Notation is a compact, human-readable serialization format designed for passing structured data to Large Language Models with significantly reduced token usage. It's intended for LLM input as a lossless, drop-in representation of JSON data.

TOON's sweet spot is uniform arrays of objects – multiple fields per row, same structure across items. It borrows YAML's indentation-based structure for nested objects and CSV's tabular format for uniform data rows, then optimizes both for token efficiency in LLM contexts. For deeply nested or non-uniform data, JSON may be more efficient.

TOON achieves CSV-like compactness while adding explicit structure that helps LLMs parse and validate data reliably.

Tip

Think of TOON as a translation layer: use JSON programmatically, convert to TOON for LLM input.


Why TOON?

AI is becoming cheaper and more accessible, and larger context windows invite ever-larger data inputs. But LLM tokens still cost money – and standard JSON is verbose and token-expensive:

{
  "users": [
    { "id": 1, "name": "Alice", "role": "admin" },
    { "id": 2, "name": "Bob", "role": "user" }
  ]
}

TOON conveys the same information with fewer tokens:

users[2]{id,name,role}:
  1,Alice,admin
  2,Bob,user
Why create a new format?

For small payloads, JSON/CSV/YAML work fine. TOON's value emerges at scale: when you're making hundreds of LLM calls with uniform tabular data, eliminating repeated keys compounds savings significantly. If token costs matter to your use case, TOON reduces them. If not, stick with what works.

When NOT to use TOON

TOON excels with uniform arrays of objects, but there are cases where other formats are better:

  • Deeply nested or non-uniform structures (tabular eligibility β‰ˆ 0%): JSON-compact often uses fewer tokens. Example: complex configuration objects with many nested levels.
  • Semi-uniform arrays (~40–60% tabular eligibility): Token savings diminish. Prefer JSON if your pipelines already rely on it.
  • Flat CSV use-cases: CSV is smaller than TOON for pure tabular data. TOON adds minimal overhead (~5-10%) to provide structure (length markers, field headers, delimiter scoping) that improves LLM reliability.

See benchmarks for concrete comparisons across different data structures.

Key Features

  • πŸ’Έ Token-efficient: typically 30–60% fewer tokens than JSON
  • 🀿 LLM-friendly guardrails: explicit lengths and fields enable validation
  • 🍱 Minimal syntax: removes redundant punctuation (braces, brackets, most quotes)
  • πŸ“ Indentation-based structure: like YAML, uses whitespace instead of braces
  • 🧺 Tabular arrays: declare keys once, stream data as rows

Benchmarks

Tip

Try the interactive Format Tokenization Playground to compare token usage across CSV, JSON, YAML, and TOON with your own data.

Benchmarks are organized into two tracks to ensure fair comparisons:

  • Mixed-Structure Track: Datasets with nested or semi-uniform structures (TOON vs JSON, YAML, XML). CSV excluded as it cannot properly represent these structures.
  • Flat-Only Track: Datasets with flat tabular structures where CSV is applicable (CSV vs TOON vs JSON, YAML, XML).

Token Efficiency

Token counts are measured using the GPT-5 o200k_base tokenizer via gpt-tokenizer. Savings are calculated against formatted JSON (2-space indentation) as the primary baseline, with additional comparisons to compact JSON (minified), YAML, and XML. Actual savings vary by model and tokenizer.

The benchmarks test datasets across different structural patterns (uniform, semi-uniform, nested, deeply nested) to show where TOON excels and where other formats may be better.

Mixed-Structure Track

Datasets with nested or semi-uniform structures. CSV excluded as it cannot properly represent these structures.

πŸ›’ E-commerce orders with nested structures  β”Š  Tabular: 33%
   β”‚
   TOON                β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘    72,743 tokens
   β”œβ”€ vs JSON          (βˆ’33.1%)               108,731 tokens
   β”œβ”€ vs JSON compact  (+5.5%)                 68,936 tokens
   β”œβ”€ vs YAML          (βˆ’14.1%)                84,724 tokens
   └─ vs XML           (βˆ’40.5%)               122,313 tokens

🧾 Semi-uniform event logs  β”Š  Tabular: 50%
   β”‚
   TOON                β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘   153,223 tokens
   β”œβ”€ vs JSON          (βˆ’15.0%)               180,196 tokens
   β”œβ”€ vs JSON compact  (+19.9%)               127,740 tokens
   β”œβ”€ vs YAML          (βˆ’0.8%)                154,514 tokens
   └─ vs XML           (βˆ’25.2%)               204,800 tokens

🧩 Deeply nested configuration  β”Š  Tabular: 0%
   β”‚
   TOON                β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘       631 tokens
   β”œβ”€ vs JSON          (βˆ’31.3%)                   919 tokens
   β”œβ”€ vs JSON compact  (+11.9%)                   564 tokens
   β”œβ”€ vs YAML          (βˆ’6.2%)                    673 tokens
   └─ vs XML           (βˆ’37.4%)                 1,008 tokens

──────────────────────────────────── Total ────────────────────────────────────
   TOON                β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘   226,597 tokens
   β”œβ”€ vs JSON          (βˆ’21.8%)               289,846 tokens
   β”œβ”€ vs JSON compact  (+14.9%)               197,240 tokens
   β”œβ”€ vs YAML          (βˆ’5.5%)                239,911 tokens
   └─ vs XML           (βˆ’30.9%)               328,121 tokens

Flat-Only Track

Datasets with flat tabular structures where CSV is applicable.

πŸ‘₯ Uniform employee records  β”Š  Tabular: 100%
   β”‚
   CSV                 β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘    46,956 tokens
   TOON                β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ    49,827 tokens   (+6.1% vs CSV)
   β”œβ”€ vs JSON          (βˆ’60.7%)               126,854 tokens
   β”œβ”€ vs JSON compact  (βˆ’36.8%)                78,850 tokens
   β”œβ”€ vs YAML          (βˆ’50.0%)                99,701 tokens
   └─ vs XML           (βˆ’66.0%)               146,440 tokens

πŸ“ˆ Time-series analytics data  β”Š  Tabular: 100%
   β”‚
   CSV                 β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘     8,396 tokens
   TOON                β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ     9,128 tokens   (+8.7% vs CSV)
   β”œβ”€ vs JSON          (βˆ’59.0%)                22,258 tokens
   β”œβ”€ vs JSON compact  (βˆ’35.8%)                14,224 tokens
   β”œβ”€ vs YAML          (βˆ’48.9%)                17,871 tokens
   └─ vs XML           (βˆ’65.7%)                26,629 tokens

⭐ Top 100 GitHub repositories  β”Š  Tabular: 100%
   β”‚
   CSV                 β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘     8,513 tokens
   TOON                β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ     8,745 tokens   (+2.7% vs CSV)
   β”œβ”€ vs JSON          (βˆ’42.3%)                15,145 tokens
   β”œβ”€ vs JSON compact  (βˆ’23.7%)                11,455 tokens
   β”œβ”€ vs YAML          (βˆ’33.4%)                13,129 tokens
   └─ vs XML           (βˆ’48.8%)                17,095 tokens

──────────────────────────────────── Total ────────────────────────────────────
   CSV                 β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘    63,865 tokens
   TOON                β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ    67,700 tokens   (+6.0% vs CSV)
   β”œβ”€ vs JSON          (βˆ’58.8%)               164,257 tokens
   β”œβ”€ vs JSON compact  (βˆ’35.2%)               104,529 tokens
   β”œβ”€ vs YAML          (βˆ’48.2%)               130,701 tokens
   └─ vs XML           (βˆ’64.4%)               190,164 tokens
View detailed examples

πŸ“ˆ Time-series analytics data

Savings: 13,130 tokens (59.0% reduction vs JSON)

JSON (22,258 tokens):

{
  "metrics": [
    {
      "date": "2025-01-01",
      "views": 7708,
      "clicks": 595,
      "conversions": 69,
      "revenue": 15369.93,
      "bounceRate": 0.35
    },
    {
      "date": "2025-01-02",
      "views": 5894,
      "clicks": 381,
      "conversions": 21,
      "revenue": 2112.12,
      "bounceRate": 0.3
    },
    {
      "date": "2025-01-03",
      "views": 6835,
      "clicks": 422,
      "conversions": 35,
      "revenue": 4525.73,
      "bounceRate": 0.5
    },
    {
      "date": "2025-01-04",
      "views": 5325,
      "clicks": 305,
      "conversions": 22,
      "revenue": 2445.3,
      "bounceRate": 0.44
    },
    {
      "date": "2025-01-05",
      "views": 2974,
      "clicks": 61,
      "conversions": 6,
      "revenue": 956.57,
      "bounceRate": 0.47
    }
  ]
}

TOON (9,128 tokens):

metrics[5]{date,views,clicks,conversions,revenue,bounceRate}:
  2025-01-01,7708,595,69,15369.93,0.35
  2025-01-02,5894,381,21,2112.12,0.3
  2025-01-03,6835,422,35,4525.73,0.5
  2025-01-04,5325,305,22,2445.3,0.44
  2025-01-05,2974,61,6,956.57,0.47

⭐ Top 100 GitHub repositories

Savings: 6,400 tokens (42.3% reduction vs JSON)

JSON (15,145 tokens):

{
  "repositories": [
    {
      "id": 28457823,
      "name": "freeCodeCamp",
      "repo": "freeCodeCamp/freeCodeCamp",
      "description": "freeCodeCamp.org's open-source codebase and curriculum. Learn math, programming,…",
      "createdAt": "2014-12-24T17:49:19Z",
      "updatedAt": "2025-10-28T11:58:08Z",
      "pushedAt": "2025-10-28T10:17:16Z",
      "stars": 430886,
      "watchers": 8583,
      "forks": 42146,
      "defaultBranch": "main"
    },
    {
      "id": 132750724,
      "name": "build-your-own-x",
      "repo": "codecrafters-io/build-your-own-x",
      "description": "Master programming by recreating your favorite technologies from scratch.",
      "createdAt": "2018-05-09T12:03:18Z",
      "updatedAt": "2025-10-28T12:37:11Z",
      "pushedAt": "2025-10-10T18:45:01Z",
      "stars": 430877,
      "watchers": 6332,
      "forks": 40453,
      "defaultBranch": "master"
    },
    {
      "id": 21737465,
      "name": "awesome",
      "repo": "sindresorhus/awesome",
      "description": "😎 Awesome lists about all kinds of interesting topics",
      "createdAt": "2014-07-11T13:42:37Z",
      "updatedAt": "2025-10-28T12:40:21Z",
      "pushedAt": "2025-10-27T17:57:31Z",
      "stars": 410052,
      "watchers": 8017,
      "forks": 32029,
      "defaultBranch": "main"
    }
  ]
}

TOON (8,745 tokens):

repositories[3]{id,name,repo,description,createdAt,updatedAt,pushedAt,stars,watchers,forks,defaultBranch}:
  28457823,freeCodeCamp,freeCodeCamp/freeCodeCamp,"freeCodeCamp.org's open-source codebase and curriculum. Learn math, programming,…","2014-12-24T17:49:19Z","2025-10-28T11:58:08Z","2025-10-28T10:17:16Z",430886,8583,42146,main
  132750724,build-your-own-x,codecrafters-io/build-your-own-x,Master programming by recreating your favorite technologies from scratch.,"2018-05-09T12:03:18Z","2025-10-28T12:37:11Z","2025-10-10T18:45:01Z",430877,6332,40453,master
  21737465,awesome,sindresorhus/awesome,😎 Awesome lists about all kinds of interesting topics,"2014-07-11T13:42:37Z","2025-10-28T12:40:21Z","2025-10-27T17:57:31Z",410052,8017,32029,main

Retrieval Accuracy

Benchmarks test LLM comprehension across different input formats using 201 data retrieval questions on 4 models.


Dataset Catalog

Dataset                                   Rows  Structure     CSV Support  Eligibility
Uniform employee records                   100  uniform       βœ“            100%
E-commerce orders with nested structures    50  nested        βœ—            33%
Time-series analytics data                  60  uniform       βœ“            100%
Top 100 GitHub repositories                100  uniform       βœ“            100%
Semi-uniform event logs                     75  semi-uniform  βœ—            50%
Deeply nested configuration                 11  deep          βœ—            0%

Structure classes:

  • uniform: All objects have identical fields with primitive values
  • semi-uniform: Mix of uniform and non-uniform structures
  • nested: Objects with nested structures (nested objects or arrays)
  • deep: Highly nested with minimal tabular eligibility

CSV Support: βœ“ (supported), βœ— (not supported – would require lossy flattening)

Eligibility: Percentage of arrays that qualify for TOON's tabular format (uniform objects with primitive values)
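The eligibility rule can be sketched in TypeScript. This is an illustrative helper – `isTabularEligible` is not part of the library's API – that applies the rule as stated: every element is a plain object, all objects share the same fields, and every value is a primitive.

```typescript
// Illustrative check: does an array qualify for TOON's tabular format?
// Rule: non-empty, every element is a plain object, all objects share
// the same keys, and every value is a primitive.
type Json = null | boolean | number | string | Json[] | { [k: string]: Json }

function isPrimitive(v: Json): boolean {
  return v === null || typeof v !== 'object'
}

function isTabularEligible(arr: Json[]): boolean {
  if (arr.length === 0) return false
  const objects = arr.filter(
    (v): v is { [k: string]: Json } =>
      typeof v === 'object' && v !== null && !Array.isArray(v)
  )
  if (objects.length !== arr.length) return false
  const keys = Object.keys(objects[0]).join(',')
  return objects.every(
    (o) =>
      Object.keys(o).join(',') === keys &&
      Object.values(o).every(isPrimitive)
  )
}

console.log(isTabularEligible([{ id: 1, name: 'Ada' }, { id: 2, name: 'Bob' }])) // true
console.log(isTabularEligible([{ id: 1 }, { id: 2, name: 'Bob' }]))              // false — differing fields
console.log(isTabularEligible([{ id: 1, meta: { a: 1 } }]))                      // false — nested value
```

Arrays that fail this check fall back to TOON's list format instead of the tabular one.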

Efficiency Ranking (Accuracy per 1K Tokens)

Each format's overall performance, balancing accuracy against token cost:

TOON           β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“   15.6  β”‚  68.7% acc  β”‚  4,389 tokens
CSV            β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“   15.3  β”‚  62.3% acc  β”‚  4,080 tokens
JSON compact   β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–‘β–‘β–‘   13.5  β”‚  67.2% acc  β”‚  4,982 tokens
YAML           β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–‘β–‘β–‘β–‘β–‘β–‘   11.2  β”‚  66.7% acc  β”‚  5,976 tokens
JSON           β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘    9.0  β”‚  65.7% acc  β”‚  7,260 tokens
XML            β–“β–“β–“β–“β–“β–“β–“β–“β–“β–“β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘    8.1  β”‚  66.8% acc  β”‚  8,251 tokens

TOON achieves 68.7% accuracy (vs JSON's 65.7%) while using 39.5% fewer tokens.

Per-Model Accuracy

Accuracy across 4 LLMs on 201 data retrieval questions:

gpt-5-nano
β†’ TOON           β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘    88.6% (178/201)
  JSON compact   β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘    88.1% (177/201)
  CSV            β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘    88.0% (88/100)
  YAML           β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘    84.6% (170/201)
  XML            β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘    81.6% (164/201)
  JSON           β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘    80.1% (161/201)

claude-haiku-4-5-20251001
  YAML           β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘    52.2% (105/201)
β†’ TOON           β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘    50.7% (102/201)
  JSON           β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘    50.2% (101/201)
  JSON compact   β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘    49.8% (100/201)
  XML            β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘    49.3% (99/201)
  CSV            β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘    39.0% (39/100)

gemini-2.5-flash
  XML            β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘    86.1% (173/201)
β†’ TOON           β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘    84.1% (169/201)
  CSV            β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘    82.0% (82/100)
  JSON compact   β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘    81.1% (163/201)
  YAML           β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘    81.1% (163/201)
  JSON           β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘    81.1% (163/201)

grok-4-fast-non-reasoning
β†’ TOON           β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘    51.2% (103/201)
  JSON           β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘    51.2% (103/201)
  XML            β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘    50.2% (101/201)
  JSON compact   β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘    49.8% (100/201)
  YAML           β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘    48.8% (98/201)
  CSV            β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘    40.0% (40/100)

Key tradeoff: TOON achieves 68.7% accuracy (vs JSON's 65.7%) while using 39.5% fewer tokens on these datasets.

Performance by dataset and model

Performance by Dataset

Uniform employee records

Format        Accuracy  Tokens  Correct/Total
toon          65.6%     2,483   105/160
csv           62.5%     2,337   100/160
json-compact  66.3%     3,943   106/160
yaml          63.7%     4,969   102/160
xml           67.5%     7,314   108/160
json-pretty   62.5%     6,347   100/160

E-commerce orders with nested structures

Format        Accuracy  Tokens  Correct/Total
toon          75.6%     7,197   121/160
json-compact  70.6%     6,784   113/160
yaml          71.9%     8,334   115/160
json-pretty   68.8%     10,700  110/160
xml           71.9%     12,013  115/160

Time-series analytics data

Format        Accuracy  Tokens  Correct/Total
csv           63.8%     1,391   74/116
toon          66.4%     1,513   77/116
json-compact  61.2%     2,339   71/116
yaml          65.5%     2,936   76/116
json-pretty   64.7%     3,663   75/116
xml           65.5%     4,374   76/116

Top 100 GitHub repositories

Format        Accuracy  Tokens  Correct/Total
toon          63.7%     8,745   79/124
csv           60.5%     8,513   75/124
json-compact  56.5%     11,455  70/124
yaml          53.2%     13,129  66/124
json-pretty   53.2%     15,145  66/124
xml           53.2%     17,095  66/124

Semi-uniform event logs

Format        Accuracy  Tokens  Correct/Total
json-compact  55.0%     4,809   66/120
yaml          52.5%     5,814   63/120
json-pretty   52.5%     6,784   63/120
toon          45.8%     5,764   55/120
xml           50.8%     7,699   61/120

Deeply nested configuration

Format        Accuracy  Tokens  Correct/Total
json-compact  91.9%     564     114/124
toon          92.7%     631     115/124
yaml          91.9%     673     114/124
json-pretty   91.9%     919     114/124
xml           89.5%     1,008   111/124

Performance by Model

gpt-5-nano

Format        Accuracy  Correct/Total
toon          88.6%     178/201
json-compact  88.1%     177/201
csv           88.0%     88/100
yaml          84.6%     170/201
xml           81.6%     164/201
json-pretty   80.1%     161/201

claude-haiku-4-5-20251001

Format        Accuracy  Correct/Total
yaml          52.2%     105/201
toon          50.7%     102/201
json-pretty   50.2%     101/201
json-compact  49.8%     100/201
xml           49.3%     99/201
csv           39.0%     39/100

gemini-2.5-flash

Format        Accuracy  Correct/Total
xml           86.1%     173/201
toon          84.1%     169/201
csv           82.0%     82/100
json-compact  81.1%     163/201
yaml          81.1%     163/201
json-pretty   81.1%     163/201

grok-4-fast-non-reasoning

Format        Accuracy  Correct/Total
toon          51.2%     103/201
json-pretty   51.2%     103/201
xml           50.2%     101/201
json-compact  49.8%     100/201
yaml          48.8%     98/201
csv           40.0%     40/100
How the benchmark works

What's Being Measured

This benchmark tests LLM comprehension and data retrieval accuracy across different input formats. Each LLM receives formatted data and must answer questions about it (this does not test the model's ability to generate TOON output).

Datasets Tested

Six datasets designed to test different structural patterns:

  1. Tabular (100 employee records): Uniform objects with identical fields – optimal for TOON's tabular format.
  2. Nested (50 e-commerce orders): Complex structures with nested customer objects and item arrays.
  3. Analytics (60 days of metrics): Time-series data with dates and numeric values.
  4. GitHub (100 repositories): Real-world data from top GitHub repos by stars.
  5. Event Logs (75 logs): Semi-uniform data with ~50% flat logs and ~50% with nested error objects.
  6. Nested Config (1 configuration): Deeply nested configuration with minimal tabular eligibility.

Question Types

201 questions are generated dynamically across three categories:

  • Field retrieval (36%): Direct value lookups or values that can be read straight off a record (including booleans and simple counts such as array lengths)

    • Example: "What is Alice's salary?" β†’ 75000
    • Example: "How many items are in order ORD-0042?" β†’ 3
    • Example: "What is the customer name for order ORD-0042?" β†’ John Doe
  • Aggregation (37%): Dataset-level totals and averages plus single-condition filters (counts, sums, min/max comparisons)

    • Example: "How many employees work in Engineering?" β†’ 17
    • Example: "What is the total revenue across all orders?" β†’ 45123.50
    • Example: "How many employees have salary > 80000?" β†’ 23
  • Filtering (27%): Multi-condition queries requiring compound logic (AND constraints across fields)

    • Example: "How many employees in Sales have salary > 80000?" β†’ 5
    • Example: "How many active employees have more than 10 years of experience?" β†’ 8

Evaluation Process

  1. Format conversion: Each dataset is converted to all 6 formats (TOON, JSON compact, XML, YAML, JSON, CSV).
  2. Query LLM: Each model receives formatted data + question in a prompt and extracts the answer.
  3. Validate with LLM-as-judge: gpt-5-nano validates if the answer is semantically correct (e.g., 50000 = $50,000, Engineering = engineering, 2025-01-01 = January 1, 2025).

Models & Configuration

  • Models tested: gpt-5-nano, claude-haiku-4-5-20251001, gemini-2.5-flash, grok-4-fast-non-reasoning
  • Token counting: Using gpt-tokenizer with o200k_base encoding (GPT-5 tokenizer)
  • Temperature: Not set (models use their defaults)
  • Total evaluations: 201 questions Γ— 6 formats Γ— 4 models = 4,824 LLM calls

Installation & Quick Start

# npm
npm install @toon-format/toon

# pnpm
pnpm add @toon-format/toon

# yarn
yarn add @toon-format/toon

Example usage:

import { encode } from '@toon-format/toon'

const data = {
  users: [
    { id: 1, name: 'Alice', role: 'admin' },
    { id: 2, name: 'Bob', role: 'user' }
  ]
}

console.log(encode(data))
// users[2]{id,name,role}:
//   1,Alice,admin
//   2,Bob,user

CLI

Command-line tool for converting between JSON and TOON formats.

Usage

npx @toon-format/cli [options] [input]

Standard input: Omit the input argument or use - to read from stdin. This enables piping data directly from other commands.

Auto-detection: The CLI automatically detects the operation based on file extension (.json β†’ encode, .toon β†’ decode). When reading from stdin, use --encode or --decode flags to specify the operation (defaults to encode).

# Encode JSON to TOON (auto-detected)
npx @toon-format/cli input.json -o output.toon

# Decode TOON to JSON (auto-detected)
npx @toon-format/cli data.toon -o output.json

# Output to stdout
npx @toon-format/cli input.json

# Pipe from stdin (no argument needed)
cat data.json | npx @toon-format/cli
echo '{"name": "Ada"}' | npx @toon-format/cli

# Explicit stdin with hyphen (equivalent to above)
cat data.json | npx @toon-format/cli -

# Decode from stdin
cat data.toon | npx @toon-format/cli --decode

Options

Option               Description
-o, --output <file>  Output file path (prints to stdout if omitted)
-e, --encode         Force encode mode (overrides auto-detection)
-d, --decode         Force decode mode (overrides auto-detection)
--delimiter <char>   Array delimiter: , (comma), \t (tab), | (pipe)
--indent <number>    Indentation size (default: 2)
--length-marker      Add # prefix to array lengths (e.g., items[#3])
--stats              Show token count estimates and savings (encode only)
--no-strict          Disable strict validation when decoding

Examples

# Show token savings when encoding
npx @toon-format/cli data.json --stats -o output.toon

# Tab-separated output (often more token-efficient)
npx @toon-format/cli data.json --delimiter "\t" -o output.toon

# Pipe-separated with length markers
npx @toon-format/cli data.json --delimiter "|" --length-marker -o output.toon

# Lenient decoding (skip validation)
npx @toon-format/cli data.toon --no-strict -o output.json

# Stdin workflows
echo '{"name": "Ada", "age": 30}' | npx @toon-format/cli --stats
cat large-dataset.json | npx @toon-format/cli --delimiter "\t" > output.toon

Format Overview

Note

For precise formatting rules and implementation details, see the full specification.

Objects

Simple objects with primitive values:

encode({
  id: 123,
  name: 'Ada',
  active: true
})
id: 123
name: Ada
active: true

Nested objects:

encode({
  user: {
    id: 123,
    name: 'Ada'
  }
})
user:
  id: 123
  name: Ada

Arrays

Tip

TOON includes the array length in brackets (e.g., items[3]). With the default comma delimiter, the delimiter is implicit. With tab or pipe delimiters, the delimiter is shown explicitly in the header after the length (e.g., tags[2|] for pipe, or a literal tab character for tab). This encoding helps LLMs identify the delimiter and track the number of elements, reducing errors when generating or validating structured output.

Primitive Arrays (Inline)

encode({
  tags: ['admin', 'ops', 'dev']
})
tags[3]: admin,ops,dev

Arrays of Objects (Tabular)

When all objects share the same primitive fields, TOON uses an efficient tabular format:

encode({
  items: [
    { sku: 'A1', qty: 2, price: 9.99 },
    { sku: 'B2', qty: 1, price: 14.5 }
  ]
})
items[2]{sku,qty,price}:
  A1,2,9.99
  B2,1,14.5

Tabular formatting applies recursively: nested arrays of objects (whether as object properties or inside list items) also use tabular format if they meet the same requirements.

encode({
  items: [
    {
      users: [
        { id: 1, name: 'Ada' },
        { id: 2, name: 'Bob' }
      ],
      status: 'active'
    }
  ]
})
items[1]:
  - users[2]{id,name}:
    1,Ada
    2,Bob
    status: active

Mixed and Non-Uniform Arrays

Arrays that don't meet the tabular requirements use list format:

items[3]:
  - 1
  - a: 1
  - text

When objects appear in list format, the first field is placed on the hyphen line:

items[2]:
  - id: 1
    name: First
  - id: 2
    name: Second
    extra: true

Note

Nested array indentation: When the first field of a list item is an array (primitive, tabular, or nested), its contents are indented two spaces under the header line, and subsequent fields of the same object appear at that same indentation level. This remains unambiguous because list items begin with "- ", tabular arrays declare a fixed row count in their header, and object fields contain ":".

Arrays of Arrays

When you have arrays containing primitive inner arrays:

encode({
  pairs: [
    [1, 2],
    [3, 4]
  ]
})
pairs[2]:
  - [2]: 1,2
  - [2]: 3,4

Empty Arrays and Objects

Empty containers have special representations:

encode({ items: [] }) // items[0]:
encode([]) // [0]:
encode({}) // (empty output)
encode({ config: {} }) // config:

Quoting Rules

TOON quotes strings only when necessary to maximize token efficiency:

  • Inner spaces are allowed; leading or trailing spaces force quotes.
  • Unicode and emoji are safe unquoted.
  • Quotes and control characters are escaped with backslash.

Note

When using alternative delimiters (tab or pipe), the quoting rules adapt automatically. Strings containing the active delimiter will be quoted, while other delimiters remain safe.

Object Keys and Field Names

Keys are unquoted if they match the identifier pattern: start with a letter or underscore, followed by letters, digits, underscores, or dots (e.g., id, userName, user_name, user.name, _private). All other keys must be quoted (e.g., "user name", "order-id", "123", "order:id", "").

String Values

String values are quoted when any of the following is true:

  • Empty string: ""
  • Leading or trailing spaces: " padded ", " "
  • Contains the active delimiter, colon, quote, backslash, or control characters: "a,b" (comma), "a\tb" (tab), "a|b" (pipe), "a:b", "say \"hi\"", "C:\\Users", "line1\\nline2"
  • Looks like boolean/number/null: "true", "false", "null", "42", "-3.14", "1e-6", "05"
  • Starts with "- " (list-like): "- item"
  • Looks like a structural token: "[5]", "{key}", "[3]: x,y"

Examples of unquoted strings: Unicode and emoji are safe (hello πŸ‘‹ world), as are strings with inner spaces (hello world).
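As a rough sketch, these conditions can be expressed as a single predicate. This is illustrative TypeScript, not the reference implementation – consult the spec for the exact rules.

```typescript
// Illustrative predicate for TOON's string-quoting conditions
// (not the reference implementation — see the spec for exact rules).
function needsQuoting(s: string, delimiter: string = ','): boolean {
  if (s === '') return true                                  // empty string
  if (s !== s.trim()) return true                            // leading/trailing spaces
  for (const ch of [delimiter, ':', '"', '\\']) {
    if (s.includes(ch)) return true                          // active delimiter / structural chars
  }
  if (/[\x00-\x1f]/.test(s)) return true                     // control characters
  if (['true', 'false', 'null'].includes(s)) return true     // looks like a keyword
  if (/^-?\d+(\.\d+)?([eE][+-]?\d+)?$/.test(s)) return true  // looks like a number (incl. "05")
  if (s.startsWith('- ')) return true                        // looks like a list item
  if (/^\[.*\]|^\{.*\}/.test(s)) return true                 // looks like a structural token
  return false
}

console.log(needsQuoting('hello world')) // false — inner spaces are fine
console.log(needsQuoting('a,b'))         // true  — contains the active delimiter
console.log(needsQuoting('a,b', '\t'))   // false — commas are safe under a tab delimiter
console.log(needsQuoting('05'))          // true  — would read as a number
```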

Important

Delimiter-aware quoting: Unquoted strings never contain : or the active delimiter. This makes TOON reliably parseable with simple heuristics: split key/value on first : , and split array values on the delimiter declared in the array header. When using tab or pipe delimiters, commas don't need quoting – only the active delimiter triggers quoting for both array values and object values.
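A minimal sketch of that heuristic for a single comma-delimited tabular block (illustrative only – a real decoder must also handle quoting, nesting, alternative delimiters, and list format):

```typescript
// Illustrative parser for a single comma-delimited tabular block.
// Splits the header on its "key[N]{fields}:" shape, then splits each
// row on the delimiter. Ignores quoting, nesting, and tab/pipe headers.
function parseTabularBlock(block: string): Record<string, Record<string, string>[]> {
  const lines = block.trim().split('\n')
  const m = lines[0].match(/^(\w+)\[(\d+)\]\{([^}]+)\}:$/)
  if (!m) throw new Error('not a tabular header')
  const [, key, count, fieldList] = m
  const fields = fieldList.split(',')
  const rows = lines.slice(1).map((line) => {
    const cells = line.trim().split(',')
    return Object.fromEntries(fields.map((f, i) => [f, cells[i]] as [string, string]))
  })
  if (rows.length !== Number(count)) throw new Error('length mismatch')
  return { [key]: rows }
}

const parsed = parseTabularBlock('users[2]{id,name,role}:\n  1,Alice,admin\n  2,Bob,user')
console.log(parsed.users[1].name) // Bob
```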

Type Conversions

Some non-JSON types are automatically normalized for LLM-safe output:

Input                    Output
Number (finite)          Decimal form, no scientific notation (e.g., -0 β†’ 0, 1e6 β†’ 1000000)
Number (NaN, Β±Infinity)  null
BigInt                   Within safe integer range: converted to number; otherwise: quoted decimal string (e.g., "9007199254740993")
Date                     ISO string in quotes (e.g., "2025-01-01T00:00:00.000Z")
undefined                null
function                 null
symbol                   null
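These conversions can be sketched as a standalone helper. `normalizeValue` is an illustrative name, not a library export, and decimal rendering of numbers (1e6 β†’ 1000000) is a serialization-time detail omitted here.

```typescript
// Illustrative normalization helper mirroring the conversion table.
// (`normalizeValue` is not part of the library's API.)
function normalizeValue(v: unknown): unknown {
  if (typeof v === 'number') {
    if (!Number.isFinite(v)) return null            // NaN, ±Infinity → null
    return Object.is(v, -0) ? 0 : v                 // -0 → 0
  }
  if (typeof v === 'bigint') {
    // Safe-range BigInts become numbers; others a decimal string
    // (which the encoder then quotes).
    const min = BigInt(Number.MIN_SAFE_INTEGER)
    const max = BigInt(Number.MAX_SAFE_INTEGER)
    return v >= min && v <= max ? Number(v) : v.toString()
  }
  if (v instanceof Date) return v.toISOString()     // Date → ISO string (quoted on output)
  if (typeof v === 'undefined' || typeof v === 'function' || typeof v === 'symbol') {
    return null                                     // non-serializable → null
  }
  return v
}

console.log(normalizeValue(NaN))                         // null
console.log(normalizeValue(BigInt('9007199254740993')))  // '9007199254740993' (a string)
console.log(normalizeValue(undefined))                   // null
```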

API

encode(value: unknown, options?: EncodeOptions): string

Converts any JSON-serializable value to TOON format.

Parameters:

  • value – Any JSON-serializable value (object, array, primitive, or nested structure). Non-JSON-serializable values (functions, symbols, undefined, non-finite numbers) are converted to null. Dates are converted to ISO strings, and BigInts are emitted in decimal form (as numbers within the safe integer range, otherwise as quoted strings).
  • options – Optional encoding options:
    • indent?: number – Number of spaces per indentation level (default: 2)
    • delimiter?: ',' | '\t' | '|' – Delimiter for array values and tabular rows (default: ',')
    • lengthMarker?: '#' | false – Optional marker to prefix array lengths (default: false)

Returns:

A TOON-formatted string with no trailing newline or spaces.

Example:

import { encode } from '@toon-format/toon'

const items = [
  { sku: 'A1', qty: 2, price: 9.99 },
  { sku: 'B2', qty: 1, price: 14.5 }
]

encode({ items })

Output:

items[2]{sku,qty,price}:
  A1,2,9.99
  B2,1,14.5

Delimiter Options

The delimiter option allows you to choose between comma (default), tab, or pipe delimiters for array values and tabular rows. Alternative delimiters can provide additional token savings in specific contexts.

Tab Delimiter (\t)

Using tab delimiters instead of commas can reduce token count further, especially for tabular data:

const data = {
  items: [
    { sku: 'A1', name: 'Widget', qty: 2, price: 9.99 },
    { sku: 'B2', name: 'Gadget', qty: 1, price: 14.5 }
  ]
}

encode(data, { delimiter: '\t' })

Output:

items[2	]{sku	name	qty	price}:
  A1	Widget	2	9.99
  B2	Gadget	1	14.5

Benefits:

  • Tabs are single characters and often tokenize more efficiently than commas.
  • Tabs rarely appear in natural text, reducing the need for quote-escaping.
  • The delimiter is explicitly encoded in the array header, making it self-descriptive.

Considerations:

  • Some terminals and editors may collapse or expand tabs visually.
  • String values containing tabs will still require quoting.

Pipe Delimiter (|)

Pipe delimiters offer a middle ground between commas and tabs:

encode(data, { delimiter: '|' })

Output:

items[2|]{sku|name|qty|price}:
  A1|Widget|2|9.99
  B2|Gadget|1|14.5

Length Marker Option

The lengthMarker option adds an optional hash (#) prefix to array lengths to emphasize that the bracketed value represents a count, not an index:

const data = {
  tags: ['reading', 'gaming', 'coding'],
  items: [
    { sku: 'A1', qty: 2, price: 9.99 },
    { sku: 'B2', qty: 1, price: 14.5 },
  ],
}

console.log(
  encode(data, { lengthMarker: '#' })
)
// tags[#3]: reading,gaming,coding
// items[#2]{sku,qty,price}:
//   A1,2,9.99
//   B2,1,14.5

// Custom delimiter with length marker
console.log(
  encode(data, { lengthMarker: '#', delimiter: '|' })
)
// tags[#3|]: reading|gaming|coding
// items[#2|]{sku|qty|price}:
//   A1|2|9.99
//   B2|1|14.5

decode(input: string, options?: DecodeOptions): JsonValue

Converts a TOON-formatted string back to JavaScript values.

Parameters:

  • input – A TOON-formatted string to parse
  • options – Optional decoding options:
    • indent?: number – Expected number of spaces per indentation level (default: 2)
    • strict?: boolean – Enable strict validation (default: true)

Returns:

A JavaScript value (object, array, or primitive) representing the parsed TOON data.

Example:

import { decode } from '@toon-format/toon'

const toon = `
items[2]{sku,qty,price}:
  A1,2,9.99
  B2,1,14.5
`

const data = decode(toon)
// {
//   items: [
//     { sku: 'A1', qty: 2, price: 9.99 },
//     { sku: 'B2', qty: 1, price: 14.5 }
//   ]
// }

Strict Mode:

By default, the decoder validates input strictly:

  • Invalid escape sequences: Throws on "\x", unterminated strings.
  • Syntax errors: Throws on missing colons, malformed headers.
  • Array length mismatches: Throws when declared length doesn't match actual count.
  • Delimiter mismatches: Throws when row delimiters don't match header.
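
The array-length check in particular is easy to picture. A hypothetical version of what strict mode enforces for a tabular array (not the library's code – a sketch of the rule):

```typescript
// Hypothetical strict-mode check: the declared [N] must equal the number
// of data rows under the header, otherwise decoding fails loudly.
function checkDeclaredLength(toon: string): void {
  const lines = toon.trim().split('\n')
  const match = lines[0].match(/\[#?(\d+)/)
  if (!match) return // not an array header; nothing to verify here
  const declared = Number(match[1])
  const actual = lines.length - 1 // every following line is one row
  if (declared !== actual) {
    throw new Error(`length mismatch: declared ${declared}, found ${actual}`)
  }
}
```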

Notes and Limitations

  • Format familiarity and structure matter as much as token count. TOON's tabular format requires arrays of objects with identical keys and primitive values only. When this doesn't hold (due to mixed types, non-uniform objects, or nested structures), TOON switches to list format where JSON can be more efficient at scale.
    • TOON excels at: Uniform arrays of objects (same fields, primitive values), especially large datasets with consistent structure.
    • JSON is better for: Non-uniform data, deeply nested structures, and objects with varying field sets.
    • CSV is more compact for: Flat, uniform tables without nesting. TOON adds structure ([N] length markers, delimiter scoping, deterministic quoting) that improves LLM reliability with minimal token overhead.
  • Token counts vary by tokenizer and model. Benchmarks use a GPT-style tokenizer (cl100k/o200k); actual savings will differ with other models (e.g., SentencePiece).
  • TOON is designed for LLM input where human readability and token efficiency matter. It's not a drop-in replacement for JSON in APIs or storage.

Using TOON in LLM Prompts

TOON works best when you show the format instead of describing it. The structure is self-documenting – models parse it naturally once they see the pattern.

Sending TOON to LLMs (Input)

Wrap your encoded data in a fenced code block (label it ```toon for clarity). The indentation and headers are usually enough – models treat it like familiar YAML or CSV. The explicit length markers ([N]) and field headers ({field1,field2}) help the model track structure, especially for large tables.

Generating TOON from LLMs (Output)

For output, be more explicit. When you want the model to generate TOON:

  • Show the expected header (users[N]{id,name,role}:). The model fills rows instead of repeating keys, reducing generation errors.
  • State the rules: 2-space indent, no trailing spaces, [N] matches row count.

Here's a prompt that works for both reading and generating:

Data is in TOON format (2-space indent, arrays show length and fields).

```toon
users[3]{id,name,role,lastLogin}:
  1,Alice,admin,2025-01-15T10:30:00Z
  2,Bob,user,2025-01-14T15:22:00Z
  3,Charlie,user,2025-01-13T09:45:00Z
```

Task: Return only users with role "user" as TOON. Use the same header. Set [N] to match the row count. Output only the code block.
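
Model output is still worth verifying before you parse it. A small guard along these lines (a hypothetical helper, not part of the library) can reject malformed responses cheaply:

```typescript
// Hypothetical guard for model-generated TOON: confirm the header uses the
// fields we asked for and that [N] matches the number of rows produced.
function validateModelOutput(output: string, expectedFields: string[]): boolean {
  const lines = output.trim().split('\n')
  const header = lines[0].match(/^(\w+)\[(\d+)\]\{([^}]*)\}:$/)
  if (!header) return false
  const [, , declared, fieldList] = header
  return (
    fieldList === expectedFields.join(',') &&
    Number(declared) === lines.length - 1
  )
}

// validateModelOutput('users[2]{id,name,role}:\n  2,Bob,user\n  3,Charlie,user',
//   ['id', 'name', 'role']) → true
```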

Tip

For large uniform tables, use encode(data, { delimiter: '\t' }) and tell the model "fields are tab-separated." Tabs often tokenize better than commas and reduce the need for quote-escaping.

Syntax Cheatsheet

Show format examples
// Object
{ id: 1, name: 'Ada' }          β†’ id: 1
                                  name: Ada

// Nested object
{ user: { id: 1 } }             β†’ user:
                                    id: 1

// Primitive array (inline)
{ tags: ['foo', 'bar'] }        β†’ tags[2]: foo,bar

// Tabular array (uniform objects)
{ items: [                      β†’ items[2]{id,qty}:
  { id: 1, qty: 5 },                1,5
  { id: 2, qty: 3 }                 2,3
]}

// Mixed / non-uniform (list)
{ items: [1, { a: 1 }, 'x'] }   β†’ items[3]:
                                    - 1
                                    - a: 1
                                    - x

// Array of arrays
{ pairs: [[1, 2], [3, 4]] }     β†’ pairs[2]:
                                    - [2]: 1,2
                                    - [2]: 3,4

// Root array
['x', 'y']                      β†’ [2]: x,y

// Empty containers
{}                              β†’ (empty output)
{ items: [] }                   β†’ items[0]:

// Special quoting
{ note: 'hello, world' }        β†’ note: "hello, world"
{ items: ['true', true] }       β†’ items[2]: "true",true
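
The quoting examples above follow one rule of thumb: quote whenever the bare value could be misread. A sketch of that decision (an assumed heuristic – consult the spec for the exact rules):

```typescript
// Assumed heuristic, not the spec's exact algorithm: quote a string when
// emitting it bare would be ambiguous – it contains the active delimiter,
// a quote, or a newline; has leading/trailing whitespace; or would decode
// as a non-string literal like `true` or `9.99`.
function needsQuoting(value: string, delimiter = ','): boolean {
  if (value.includes(delimiter) || value.includes('"') || value.includes('\n')) return true
  if (value !== value.trim()) return true
  return /^(true|false|null|-?\d+(\.\d+)?)$/.test(value)
}

// needsQuoting('hello, world') → true   (contains the delimiter)
// needsQuoting('true')         → true   (would decode as a boolean)
// needsQuoting('Widget')       → false
```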

Other Implementations

Note

When implementing TOON in other languages, please follow the specification (currently v1.4) to ensure compatibility across implementations. The conformance tests provide language-agnostic test fixtures that validate implementations across any language.

Official Implementations

Community Implementations

License

MIT License Β© 2025-PRESENT Johann Schopplich

Footnotes

  1. For flat tabular data, CSV is more compact. TOON adds minimal overhead to provide explicit structure and validation that improves LLM reliability.
