# datashare-tools

Ready-to-use models, queries, and semantic layers for Aampe's shared data tables. Works with BigQuery, Snowflake, and Databricks.

## What's in here

| What | Where | What it does |
| --- | --- | --- |
| dbt package | `dbt/` | Staging models flatten nested structs (copy assignments, propensities, timing). Mart models aggregate reward lift, daily summaries, contact-level views, and label convergence. Cross-dialect macros handle BigQuery/Snowflake/Databricks differences. |
| SQL cookbook | `cookbook/` | 12 copy-pasteable recipes: propensity ranking, reward trends, channel comparison, agent vs. controlled analysis, diversity, timing, label convergence, label trends, co-occurrence, messaging cadence, learning velocity, and event attribution. |
| Semantic layer | `airlayer/` | Airlayer view definitions with dimensions and measures for message events, labels, contact profiles, and propensities. Includes a reward lift motif and a weekly channel review query. |
| Schema & domain docs | `docs/` | Field-level schema reference, domain concepts (agents, labels, Beta posteriors, reward functions), and FAQ. |
| Agent context | `AGENTS.md`, `llms.txt` | Schema and domain knowledge formatted for AI coding agents (Claude Code, Cursor, Copilot, etc.). |
| Hex guides | `hex/guides/` | Schema guide, query patterns, and domain glossary for Hex Context Studio. |
| Snowflake Cortex | `snowflake/` | Cortex Analyst semantic model and sample natural-language questions. |
| Sample data | `sample_data/` | Synthetic parquet files from a fictional food delivery app (FeastFleet). Load into DuckDB or pandas to explore without warehouse credentials. |
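The sample data can be explored directly from DuckDB without warehouse credentials. A minimal sketch — the parquet filename and the `contact_id` column are assumptions for illustration; check `sample_data/` and `docs/schema_reference.md` for the actual names:

```sql
-- DuckDB: query a sample parquet file in place.
-- Filename and column names are illustrative, not confirmed by this repo.
SELECT contact_id, COUNT(*) AS messages
FROM read_parquet('sample_data/aampe_message_events.parquet')
GROUP BY contact_id
ORDER BY messages DESC
LIMIT 10;
```

DuckDB's `read_parquet` reads the file directly, so no load step or schema definition is needed before querying.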

## Quick start

### Option 1: dbt package

1. Clone this repo
2. Edit `dbt/models/staging/_stg_aampe__sources.yml` — replace `YOUR_DATABASE` and `YOUR_SCHEMA` with where the shared tables live in your warehouse
3. `cd dbt && dbt run`
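Step 2 amounts to pointing dbt's source definition at your warehouse. A sketch of what the edited file might look like, assuming the standard dbt `sources:` layout — the `database` and `schema` values shown are placeholders, and the actual file in `dbt/models/staging/` may organize the table list differently:

```yaml
# dbt/models/staging/_stg_aampe__sources.yml (illustrative sketch)
version: 2

sources:
  - name: aampe
    database: analytics_prod   # replace: where your shared tables live
    schema: aampe_share        # replace: the schema holding the tables
    tables:
      - name: AAMPE_CONTACT_PROFILES
      - name: AAMPE_MESSAGE_EVENTS
      - name: AAMPE_MESSAGE_ATTEMPTS
      - name: AAMPE_CONTACT_EVENTS_PARTIAL
```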

This gives you flattened staging tables and pre-built aggregations. See the dbt model docs for what each model produces.

### Option 2: SQL cookbook

Open any recipe in `cookbook/` and run the SQL directly against your shared tables. Set your default schema first:

```sql
-- BigQuery: select project + dataset in the console, or qualify table names
-- Snowflake: USE DATABASE your_db; USE SCHEMA your_schema;
-- Databricks: USE CATALOG your_catalog; USE SCHEMA your_schema;
```

See the cookbook README for the full recipe list.
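To give a feel for the recipes, here is a sketch in the style of the propensity-ranking one. The column names (`contact_id`, `offering_propensity`) are hypothetical placeholders, not confirmed by this repo — the real columns are in `docs/schema_reference.md`:

```sql
-- Top 20 contacts by offering propensity.
-- Column names are illustrative; substitute the real ones
-- from docs/schema_reference.md.
SELECT contact_id, offering_propensity
FROM AAMPE_CONTACT_PROFILES
ORDER BY offering_propensity DESC
LIMIT 20;
```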

### Option 3: Hex

Add the files in `hex/guides/` as workspace context in Hex's Context Studio. You can also add `AGENTS.md` or any of the `docs/` files — Context Studio accepts any markdown. The Hex AI agent will then understand the Aampe schema, common query patterns, and domain terminology when helping you build notebooks.

### Option 4: AI agent context

Copy `AGENTS.md` or `llms.txt` into your agent's context to give it full schema and domain knowledge for writing queries against the shared tables.

## Shared tables

| Table | Grain | Description |
| --- | --- | --- |
| `AAMPE_CONTACT_PROFILES` | One row per contact | Propensity scores across 5 label dimensions (offering, value proposition, tone, timing, channel). Updated daily. |
| `AAMPE_MESSAGE_EVENTS` | One row per delivered message | Copy label assignments (with Beta posterior parameters), reward metrics, timing/channel decisions, message content. |
| `AAMPE_MESSAGE_ATTEMPTS` | One row per queued message | All messages, including delivery failures. Use for delivery-rate analysis. |
| `AAMPE_CONTACT_EVENTS_PARTIAL` | One row per contact event | Aampe-specific events (`aampe_*` prefix) with timestamps and JSON metadata. |

For full column-level documentation, see `docs/schema_reference.md`. For domain concepts (how propensities work, what the Beta parameters mean, how reward is calculated), see `docs/domain_concepts.md`.
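As an illustration of how the two message tables relate, a crude delivery-rate check can be sketched from their row counts alone, since delivered messages are a subset of queued ones. This assumes the grains described above and nothing about either table's columns; the cast syntax varies by dialect (BigQuery uses `FLOAT64`):

```sql
-- Delivery rate = delivered messages / queued messages.
-- Sketch only: a real analysis would join on a message key and
-- group by day or channel, using columns from docs/schema_reference.md.
SELECT
  (SELECT COUNT(*) FROM AAMPE_MESSAGE_EVENTS)
    / CAST((SELECT COUNT(*) FROM AAMPE_MESSAGE_ATTEMPTS) AS FLOAT) AS delivery_rate;
```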

## Contributing

This repo is organized by integration target — each top-level directory is a self-contained module for a specific tool or platform. To add support for a new tool (e.g., Cube, Looker, Metabase):

1. Create a new top-level directory named after the tool (e.g., `cube/`)
2. Use `docs/schema_reference.md` as the source of truth for table and column definitions
3. Add a row to the "What's in here" table above
4. Follow the authoring guidelines: direct, precise, no marketing language, copy-pasteable examples

For new cookbook recipes, follow the numbering convention (`09_descriptive_name.md`) and include Snowflake/Databricks variants where syntax differs. See the cookbook README for the pattern.
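Where dialects diverge, a variant block in a recipe might look like the following sketch. Date truncation is a common divergence point; the `event_timestamp` column is illustrative, not a confirmed schema field:

```sql
-- BigQuery variant (column name is illustrative)
SELECT TIMESTAMP_TRUNC(event_timestamp, DAY) AS day, COUNT(*) AS n
FROM AAMPE_MESSAGE_EVENTS
GROUP BY day;

-- Snowflake / Databricks variant
SELECT DATE_TRUNC('day', event_timestamp) AS day, COUNT(*) AS n
FROM AAMPE_MESSAGE_EVENTS
GROUP BY day;
```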
