TL;DR: COON empowers Agentic Coding by allowing LLMs to generate compressed code directly, saving 30-70% on output tokens and generation time.
In the era of AI agents, code generation speed and cost are bottlenecks. COON flips the script: instead of just compressing code input, it enables LLMs to output compressed code.
You give the LLM a base prompt, it generates concise COON syntax, and you decompress it locally.
- **Turbocharged Generation** - LLMs write up to 3x faster by generating fewer tokens.
- **Massive Cost Savings** - Pay for up to 70% fewer output tokens.
- **Larger Context Window** - Fit more logic into a single response.
- **Native to Agents** - Designed to be the "machine code" for high-level AI agents.
- Prompt: You provide the LLM with your request + the COON System Prompt (see below).
- Generate: The LLM thinks and outputs code in compressed COON format.
- Decompress: Your agent or script uses the `coon` library to expand it into full source code (see the sketch after this list).
- Save: The full code is written to your file system.
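A minimal sketch of that loop, assuming a placeholder `call_llm` function standing in for whatever model client you use; only `decompress_dart` comes from the `coon` package shown later:

```python
from pathlib import Path

from coon import decompress_dart  # pip install coon

COON_SYSTEM_PROMPT = "..."  # the COON System Prompt shown below


def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for your model client (OpenAI, Anthropic, a local model, etc.)."""
    raise NotImplementedError


def generate_file(user_prompt: str, out_path: str) -> None:
    # 1. Prompt: send the request together with the COON system prompt.
    coon_output = call_llm(COON_SYSTEM_PROMPT, user_prompt)
    # 2-3. Generate + Decompress: expand the compressed COON into full source code.
    full_code = decompress_dart(coon_output)
    # 4. Save: write the expanded code to the file system.
    Path(out_path).write_text(full_code)


# generate_file("Create a login screen", "lib/login_screen.dart")
```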
Standard Generation (Slow & Expensive):

```
User Prompt -> LLM -> [150 Tokens of Dart Code] -> File
```

COON Generation (Fast & Efficient):

```
User Prompt -> LLM -> [45 Tokens of COON] -> Decompressor -> [150 Tokens of Dart Code] -> File
```
Scenario: Generating a Login Screen
Traditional Output (150 tokens):

```dart
class LoginScreen extends StatelessWidget {
  final TextEditingController emailController = TextEditingController();
  // ... verbose boilerplate ...

  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text("Login")),
      // ... more code ...
    );
  }
}
```

COON Output (45 tokens - 70% reduction):

```
c:LoginScreen<StatelessWidget>;f:emailController=X;m:b S{a:B{t:T"Login"}}...
```
Result:
- Speed: roughly 3x faster generation, since output latency scales with the number of generated tokens
- Cost: about 70% fewer output tokens for this request (150 -> 45)
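These numbers follow directly from the token counts above; here is a quick back-of-the-envelope check, where the per-token price is a hypothetical figure for illustration, not a quoted rate:

```python
full_tokens = 150          # tokens of expanded Dart code
coon_tokens = 45           # tokens of COON actually generated
price_per_output_token = 10 / 1_000_000  # hypothetical $10 per 1M output tokens

token_reduction = 1 - coon_tokens / full_tokens  # 0.70 -> 70% fewer output tokens
speedup = full_tokens / coon_tokens              # ~3.3x fewer tokens to generate
cost_saved = (full_tokens - coon_tokens) * price_per_output_token

print(f"{token_reduction:.0%} fewer output tokens, ~{speedup:.1f}x speedup, ${cost_saved:.6f} saved")
```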
To enable your LLM agent (custom script, Cursor, Windsurf, etc.) to speak COON, append this to your system prompt:
```
You are an expert coder. When asked to generate code, output it in COON (Code-Oriented Object Notation) format to save tokens.

COON Rules:
- Class definition: `c:Name<Parent>;`
- Fields: `f:name=Value,name2=Value;`
- Methods: `m:name Body` (default name 'b' is build)
- Widgets: Abbreviate common widgets (S=Scaffold, C=Column, R=Row, T=Text, B=AppBar, etc.)
- Properties: Abbreviate keys (b=body, c=child/children, a=appBar)
- Strings: `T"Content"` for Text widgets.

Example Output:
c:MyWidget<StatelessWidget>;m:b S{b:C{h:[T"Hello",T"World"]}}
```
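The widget and property abbreviations in these rules map naturally onto simple lookup tables. The sketch below is only an illustration of that mapping for agent-side tooling (e.g. sanity-checking model output); it is not the `coon` library's actual decompression logic:

```python
# Widget and property abbreviations as listed in the COON rules above.
WIDGET_ABBREVIATIONS = {
    "S": "Scaffold",
    "C": "Column",
    "R": "Row",
    "T": "Text",
    "B": "AppBar",
}

PROPERTY_ABBREVIATIONS = {
    "b": "body",
    "c": "child/children",
    "a": "appBar",
}


def expand_widget(abbrev: str) -> str:
    """Look up a widget abbreviation, falling back to the token itself."""
    return WIDGET_ABBREVIATIONS.get(abbrev, abbrev)


# expand_widget("S") -> "Scaffold"
```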
```bash
pip install coon
# or
npm install coon-format
```

Use this in your agent's toolchain to handle the LLM's output:
Python:

```python
from coon import decompress_dart

# LLM Output
coon_code = "c:Hello<StatelessWidget>;m:b T'Hi'"

# Decompress to source
full_code = decompress_dart(coon_code)
print(full_code)
# Output:
# class Hello extends StatelessWidget {
#   Widget build(context) => Text('Hi');
# }
```

JavaScript / TypeScript:
```javascript
import { decompressCoon } from 'coon-format';

// LLM Output
const coonCode = "c:Hello<StatelessWidget>;m:b T'Hi'";

// Decompress to source
const fullCode = decompressCoon(coonCode);
console.log(fullCode);
// Output:
// class Hello extends StatelessWidget {
//   Widget build(BuildContext context) {
//     return Text('Hi');
//   }
// }
```

Email: affanshaikhsurabofficial@gmail.com
GitHub: github.com/AffanShaikhsurab/COON
Issues: Create a GitHub issue for support
MIT License - Use COON in any project, commercial or personal.
Ready to save money? Get started now!