An in-process semantic engine that compiles `.view.yml` definitions into dialect-specific SQL — and optionally executes queries against real databases. Built in Rust as both a library and a CLI tool.
Install with:

```sh
bash <(curl -sSfL https://raw.githubusercontent.com/oxy-hq/airlayer/main/install_airlayer.sh)
```

Then initialize a project in an empty directory:

```sh
mkdir my-project && cd my-project
airlayer init
```

This connects to your database, discovers your schema, and generates `config.yml`, `.view.yml` files, and Claude Code sub-agents for querying and building your semantic layer.
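A freshly initialized project might look roughly like this (the exact layout, file names, and directory placement are assumptions for illustration, not guaranteed output of `airlayer init`):

```
my-project/
├── config.yml           # database connection and project settings
├── orders.view.yml      # one generated view definition per discovered table
└── .claude/
    └── agents/          # generated Claude Code sub-agents (location assumed)
```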
Given an `orders.view.yml`:

```yaml
name: orders
table: public.orders
dialect: postgres
dimensions:
  - name: status
    type: string
    expr: status
measures:
  - name: total_revenue
    type: sum
    expr: amount
```

You can query it with the CLI as follows:
```sh
# add -x to execute against the database
airlayer query \
  --dimension orders.status \
  --measure orders.total_revenue \
  --filter orders.status:equals:active \
  --limit 10
```

This compiles to the following SQL, printed to stdout:

```sql
SELECT
  "orders".status AS "orders__status",
  SUM("orders".amount) AS "orders__total_revenue"
FROM public.orders AS "orders"
WHERE ("orders".status = 'active')
GROUP BY 1
LIMIT 10
```
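The `--filter` flag takes a `view.field:operator:value` triple, as in `orders.status:equals:active`. A minimal sketch of how such a spec splits into parts (illustrative only, not airlayer's actual parser):

```python
def parse_filter(spec: str) -> dict:
    """Split a view.field:operator:value triple into its parts.

    Illustrative only; the exact grammar (e.g. how values containing
    ':' are handled) is an assumption, not airlayer's parser.
    """
    ref, op, value = spec.split(":", 2)   # at most two splits, so the value may contain ':'
    view, field = ref.split(".", 1)
    return {"view": view, "field": field, "op": op, "value": value}

print(parse_filter("orders.status:equals:active"))
# → {'view': 'orders', 'field': 'status', 'op': 'equals', 'value': 'active'}
```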
| Database | Dialect flag | Execution |
|---|---|---|
| PostgreSQL | `postgres` | Native (TCP) |
| MySQL | `mysql` | Native (TCP) |
| BigQuery | `bigquery` | REST API (OAuth2) |
| Snowflake | `snowflake` | REST API (session auth) |
| DuckDB | `duckdb` | In-process (libduckdb) |
| ClickHouse | `clickhouse` | HTTP API |
| Databricks | `databricks` | SQL Statement API |
| Redshift | `redshift` | Native (TCP, via Postgres wire protocol) |
| Presto / Trino | `presto` | REST API (polling) |
| SQLite | `sqlite` | In-process (libsqlite3) |
| MotherDuck | `duckdb` | In-process (`md:` protocol) |
| Domo | `domo` | REST API |
All dialects support SQL compilation. Execution requires the corresponding feature flag (e.g., `exec-postgres`). See `docs/dialects.md` for per-dialect SQL behavior.
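Much of the per-dialect variation is mechanical, identifier quoting being the simplest case. As a rough sketch (these are common engine defaults, not a specification of airlayer's output; consult `docs/dialects.md` for the real behavior):

```python
# Typical identifier-quoting characters by SQL dialect. These are common
# engine defaults, not necessarily what airlayer emits for every dialect.
QUOTE_CHARS = {
    "postgres": '"', "redshift": '"', "snowflake": '"', "duckdb": '"',
    "sqlite": '"', "presto": '"',
    "mysql": "`", "bigquery": "`", "clickhouse": "`", "databricks": "`",
}

def quote_ident(dialect: str, name: str) -> str:
    """Wrap an identifier in the dialect's usual quote character."""
    q = QUOTE_CHARS[dialect]
    return f"{q}{name}{q}"

print(quote_ident("postgres", "orders"))   # → "orders"
print(quote_ident("bigquery", "orders"))   # → `orders`
```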
airlayer can query directly from Cube.js, Looker LookML, dbt MetricFlow, and Omni repositories — no conversion step required. Just point airlayer at a project directory and it auto-detects the format.
| Feature | Cube.js | LookML | dbt | Omni |
|---|---|---|---|---|
| Dimensions (string, number, boolean, date) | Y | Y | Y | Y |
| Dimension groups (time / duration) | — | Y | — | Y |
| Measures (count, sum, avg, min, max, count_distinct) | Y | Y | Y | Y |
| Measure filters | Y | Y | Y | Y |
| Joins / relationships | Y | Y | Y | Y |
| SQL reference rewriting | Y | Y | Y | Y |
| Segments / named filters | Y | Y | — | Y |
| Primary keys | Y | Y | Y | Y |
| Labels / descriptions | Y | Y | Y | Y |
| Native loading (no conversion) | Y | Y | Y | Y |
| Directory-level aggregation | Y | Y | Y | Y |
See `docs/foreign-models.md` for the full feature-parity table, format details, and limitations.
airlayer can be used in two ways:
**Project mode (CLI)** — You have a directory with `config.yml` and `.view.yml` files (plus optional `.motif.yml` and `.query.yml` files). The `config.yml` file anchors the project: all CLI commands auto-detect the project root by walking up from the current directory until they find it, so you can run commands from any subdirectory without specifying `--config`:

```sh
cd my-project/                                              # anywhere inside the project
airlayer query -x --measure orders.total_revenue            # just works
airlayer inspect --motifs                                   # just works
airlayer query queries/revenue_investigation.query.yml -x   # just works
```

**Library mode (Python / JS / Rust)** — You embed airlayer as a library and pass view definitions, motifs, and queries programmatically. No `config.yml` or filesystem structure is needed; everything is constructed in code. Available as a Python package and an npm package (WebAssembly).
Python:

```python
import airlayer

result = airlayer.compile(
    views_yaml=[open("orders.view.yml").read()],
    query_json='{"measures": ["orders.total_revenue"], "dimensions": ["orders.status"]}',
    dialect="postgres",
)
print(result["sql"])
```

JavaScript (WASM):

```js
import init, { compile } from 'airlayer';

await init();
const result = compile(
  [ordersViewYaml],
  JSON.stringify({ measures: ['orders.total_revenue'], dimensions: ['orders.status'] }),
  'postgres'
);
console.log(result.sql);
```

This project uses `just` as a task runner. Install it with `cargo install just`, then run `just` to see all available recipes.
```sh
just build                  # core only (no database drivers)
just build-all              # with all database drivers
just build-wasm             # WebAssembly package (output in pkg/)
just build-python           # Python package (dev install into current venv)
just build-python-release   # Python wheel (release)
just test                   # tier 1: unit tests + in-process integration (DuckDB, SQLite)
just test-docker            # tier 2: starts Docker DBs + runs tests
just test-cloud             # tier 3: Snowflake, BigQuery, Databricks, MotherDuck
just test-all               # all tiers
just lint                   # clippy lints
just fmt                    # format code
```

See `docs/testing.md` for the full three-tier testing strategy.
| Document | Description |
|---|---|
| PHILOSOPHY.md | Design principles |
| docs/schema-format.md | .view.yml reference — dimensions, measures, entities, segments |
| docs/query-api.md | Query format, filter operators, time dimensions |
| docs/agent-execution.md | Execution envelope spec, config format |
| docs/architecture.md | Pipeline stages: parse → resolve → plan → generate |
| docs/dialects.md | Per-dialect SQL behavior |
| docs/testing.md | Three-tier testing strategy |
| docs/library-usage.md | Python, JS/WASM, and Rust library API |
| npm package | WebAssembly build for browsers and Node.js |
| PyPI package | Native Python package |
| DEVELOPMENT.md | Contributing and release workflow |