CQLAI is a fast, portable interactive terminal for Cassandra (CQL), built in Go. It provides a modern, user-friendly alternative to cqlsh
with an advanced terminal UI, client-side command parsing, and enhanced productivity features.
AI features are completely optional - CQLAI works perfectly as a standalone CQL shell without any AI configuration or API keys.
The original cqlsh command is written in Python, which requires Python to be installed on the system. `cqlai` is compiled to a single executable binary and requires no external dependencies. This project provides binaries for the following platforms:
- Linux x86-64
- macOS x86-64
- Windows x86-64
- Linux aarch64
- macOS arm64
It is built with Bubble Tea, Bubbles, and Lip Gloss for the beautiful terminal UI. A big shout-out to the Cassandra gocql driver team for implementing the latest Cassandra functionality.
CQLAI is production-ready and actively used in development, testing, and production environments with Cassandra clusters. The tool provides a complete, stable alternative to cqlsh
with enhanced features and performance.
- All core CQL operations and queries
- Complete meta-command support (`DESCRIBE`, `SHOW`, `CONSISTENCY`, etc.)
- Client-side command parsing (lightweight, no ANTLR dependency)
- Data import/export with `COPY TO/FROM` (CSV and Parquet formats)
- SSL/TLS connections and authentication
- User-Defined Types (UDTs) and complex data types
- Batch mode for scripting and automation
- Apache Parquet format support for efficient data interchange
- Tab completion for CQL keywords, tables, columns, and keyspaces
- Small binary size (~43MB, 53% smaller than previous versions)
- Optional: AI-powered query generation (OpenAI, Anthropic, Gemini)
- Enhanced AI context awareness
- Cassandra MCP service
- Additional performance optimizations
We encourage you to try CQLAI today and help shape its development! Your feedback and contributions are invaluable in making this the best CQL shell for the Cassandra community. Please report issues or contribute.
- Interactive CQL Shell: Execute any CQL query that your Cassandra cluster supports.
- Rich Terminal UI:
- A multi-layer, full-screen terminal application with alternate screen buffer (preserves terminal history).
- Virtualized, scrollable table for results with automatic data loading, preventing memory overload from large queries.
- Advanced navigation modes with vim-style keyboard shortcuts.
- Full mouse support including wheel scrolling and text selection.
- Sticky footer/status bar showing connection details, query latency, and session status (consistency, tracing).
- Modal overlays for history, help, and command completion.
- Apache Parquet Support:
- High-performance columnar data format for analytics and machine learning workflows.
- Export Cassandra tables to Parquet files with the `COPY TO` command.
- Import Parquet files into Cassandra with automatic schema inference.
- Partitioned datasets with Hive-style directory structures.
- TimeUUID / timestamp virtual columns for intelligent time-based partitioning.
- Support for all Cassandra data types including UDTs, collections, and vectors.
- Optional AI-Powered Query Generation:
- Natural language to CQL conversion using AI providers (OpenAI, Anthropic, Gemini).
- Schema-aware query generation with automatic context.
- Safe preview and confirmation before execution.
- Support for complex operations including DDL and DML.
- Requires API key configuration - not needed for core functionality.
- Configuration:
- Simple configuration via `cqlai.json` in the current directory or `~/.cqlai.json`.
- Support for SSL/TLS connections with certificate authentication.
- Single Binary: Distributed as a single, static binary with no external dependencies. Fast startup and small footprint.
You can install `cqlai` in several ways. For detailed instructions including package managers (APT, YUM) and Docker, see the Installation Guide.
Download the appropriate binary for your OS and architecture from the Releases page.
```bash
go install github.com/axonops/cqlai/cmd/cqlai@latest
```
```bash
git clone https://github.com/axonops/cqlai.git
cd cqlai
go build -o cqlai cmd/cqlai/main.go
```
```bash
# Build the image
docker build -t cqlai .

# Run the container
docker run -it --rm --name cqlai-session cqlai --host your-cassandra-host
```
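In containers it is often cleaner to pass the password through the environment rather than on the command line. A small sketch using the documented `CQLAI_PASSWORD` variable (host and credentials below are placeholders):

```bash
# Keep the password out of the container's command line (and `ps` output)
docker run -it --rm --name cqlai-session \
  -e CQLAI_PASSWORD=cassandra \
  cqlai --host your-cassandra-host -u cassandra
```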
Connect to a Cassandra host:
```bash
# With password on command line (not recommended - visible in ps)
cqlai --host 127.0.0.1 --port 9042 --username cassandra --password cassandra

# With password prompt (secure - password hidden)
cqlai --host 127.0.0.1 --port 9042 -u cassandra
# Password: [hidden input]

# Using environment variable (secure for scripts/containers)
export CQLAI_PASSWORD=cassandra
cqlai --host 127.0.0.1 -u cassandra
```
Or use a configuration file:
```bash
# Create configuration from example
cp cqlai.json.example cqlai.json

# Edit cqlai.json with your settings, then run:
cqlai
```
```bash
cqlai [options]
```
| Option | Short | Description |
|---|---|---|
| `--host <host>` | | Cassandra host (overrides config) |
| `--port <port>` | | Cassandra port (overrides config) |
| `--keyspace <keyspace>` | `-k` | Default keyspace (overrides config) |
| `--username <username>` | `-u` | Username for authentication |
| `--password <password>` | `-p` | Password for authentication* |
| `--no-confirm` | | Disable confirmation prompts |
| `--connect-timeout <seconds>` | | Connection timeout (default: 10) |
| `--request-timeout <seconds>` | | Request timeout (default: 10) |
| `--debug` | | Enable debug logging |
*Note: Password can be provided in three ways:
- Command line with `-p` (not recommended - visible in process list)
- Interactive prompt when `-u` is used without `-p` (recommended)
- Environment variable `CQLAI_PASSWORD` (good for automation)
| Option | Short | Description |
|---|---|---|
| `--execute <statement>` | `-e` | Execute CQL statement and exit |
| `--file <file>` | `-f` | Execute CQL from file and exit |
| `--format <format>` | | Output format: ascii, json, csv, table |
| `--no-header` | | Don't output column headers (CSV) |
| `--field-separator <sep>` | | Field separator for CSV (default: ,) |
| `--page-size <n>` | | Rows per batch (default: 100) |
| Option | Short | Description |
|---|---|---|
| `--help` | `-h` | Show help message |
| `--version` | `-v` | Print version and exit |
Execute CQL statements non-interactively (compatible with cqlsh):
```bash
# Execute a single statement
cqlai -e "SELECT * FROM system_schema.keyspaces;"

# Execute from a file
cqlai -f script.cql

# Pipe input
echo "SELECT * FROM users;" | cqlai

# Control output format
cqlai -e "SELECT * FROM users;" --format json
cqlai -e "SELECT * FROM users;" --format csv --no-header

# Control pagination size
cqlai -e "SELECT * FROM large_table;" --page-size 50
```
- Execute CQL: Type any CQL statement and press Enter.
- Meta-Commands:
  ```sql
  DESCRIBE KEYSPACES;
  USE my_keyspace;
  DESCRIBE TABLES;
  CONSISTENCY QUORUM;
  TRACING ON;
  PAGING 50;
  EXPAND ON;            -- Vertical output mode
  SOURCE 'script.cql';  -- Execute CQL script
  ```
- AI-Powered Query Generation:
  ```
  .ai What keyspaces are there?
  .ai What columns does the users table have?
  .ai create a table for storing product inventory
  .ai delete orders older than 1 year from the orders table
  ```
| Shortcut | Action | macOS Alternative |
|---|---|---|
| `↑` / `↓` | Navigate command history | Same |
| `Ctrl+P` / `Ctrl+N` | Previous/Next in command history | Same |
| `Alt+N` | Move to next line in history | `Option+N` |
| `Tab` | Autocomplete commands and table/keyspace names | Same |
| `Ctrl+C` | Clear input / Cancel pagination / Cancel operation (twice to exit) | `⌘+C` or `Ctrl+C` |
| `Ctrl+D` | Exit application | `⌘+D` or `Ctrl+D` |
| `Ctrl+R` | Search command history | `⌘+R` or `Ctrl+R` |
| `Esc` | Toggle navigation mode / Cancel pagination / Close modals | Same |
| `Enter` | Execute command / Load next page (during pagination) | Same |
| Shortcut | Action | macOS Alternative |
|---|---|---|
| `Ctrl+A` | Jump to beginning of line | Same |
| `Ctrl+E` | Jump to end of line | Same |
| `Ctrl+Left` / `Ctrl+Right` | Jump by word (or 20 chars) | Same |
| `PgUp` / `PgDn` (in input) | Page left/right in long queries | `Fn+↑` / `Fn+↓` |
| `Ctrl+K` | Cut from cursor to end of line | Same |
| `Ctrl+U` | Cut from beginning to cursor | Same |
| `Ctrl+W` | Cut word backward | Same |
| `Alt+D` | Delete word forward | `Option+D` |
| `Ctrl+Y` | Paste previously cut text | Same |
| Shortcut | Action |
|---|---|
| `F2` | Switch to query/history view |
| `F3` | Switch to table view |
| `F4` | Switch to trace view (when tracing enabled) |
| `F5` | Switch to AI conversation view |
| `F6` | Toggle column data types in table headers |
| Shortcut | Action | macOS Alternative |
|---|---|---|
| `PgUp` / `PgDn` | Scroll viewport by page / Load more data when available | `Fn+↑` / `Fn+↓` |
| `Space` | Load next page when more data available | Same |
| `Enter` (empty input) | Load next page when more data available | Same |
| `Alt+↑` / `Alt+↓` | Scroll viewport by single row (respects row boundaries) | `Option+↑` / `Option+↓` |
| `Alt+←` / `Alt+→` | Scroll table horizontally (wide tables) | `Option+←` / `Option+→` |
| `↑` / `↓` | Navigate table rows (when in navigation mode) | Same |
Press `Esc` to toggle navigation mode when viewing tables or traces.
| Shortcut | Action in Navigation Mode |
|---|---|
| `j` / `k` | Scroll down/up by single line |
| `d` / `u` | Scroll down/up by half page |
| `g` / `G` | Jump to top/bottom of results |
| `<` / `>` | Scroll left/right by 10 columns |
| `{` / `}` | Scroll left/right by 50 columns |
| `0` / `$` | Jump to first/last column |
| `Esc` | Exit navigation mode / Cancel pagination if active |
Action | Function |
---|---|
Mouse Wheel | Scroll vertically with automatic data loading |
Alt+Mouse Wheel | Scroll horizontally in tables |
Shift+Mouse Wheel | Scroll horizontally (alternative) |
Ctrl+Mouse Wheel | Scroll horizontally (alternative) |
Shift+Click+Drag | Select text for copying |
Ctrl+Shift+C | Copy selected text to clipboard |
Middle Click | Paste from selection buffer (Linux/Unix) |
Note for macOS Users:
- Most `Ctrl` shortcuts work as-is on macOS, but you can also use the `⌘` (Command) key as an alternative
- The `Alt` key is labeled as `Option` on Mac keyboards
- Function keys (F1-F6) may require holding the `Fn` key depending on your Mac settings
CQLAI provides intelligent, context-aware tab completion to speed up your workflow. Press `Tab` at any point to see available completions.
CQL Keywords & Commands:
- All CQL keywords: `SELECT`, `INSERT`, `CREATE`, `ALTER`, `DROP`, etc.
- Meta-commands: `DESCRIBE`, `CONSISTENCY`, `COPY`, `SHOW`, etc.
- Data types: `TEXT`, `INT`, `UUID`, `TIMESTAMP`, etc.
- Consistency levels: `ONE`, `QUORUM`, `ALL`, `LOCAL_QUORUM`, etc.
Schema Objects:
- Keyspace names
- Table names (within current keyspace)
- Column names (when context allows)
- User-defined type names
- Function and aggregate names
- Index names
Context-Aware Completions:
```
-- After SELECT, suggests column names and keywords
SELECT <Tab>         -- Shows: *, column names, DISTINCT, JSON, etc.

-- After FROM, suggests table names
SELECT * FROM <Tab>  -- Shows: available tables in current keyspace

-- After USE, suggests keyspace names
USE <Tab>            -- Shows: available keyspaces

-- After DESCRIBE, suggests object types
DESCRIBE <Tab>       -- Shows: KEYSPACE, TABLE, TYPE, etc.

-- After consistency command
CONSISTENCY <Tab>    -- Shows: ONE, QUORUM, ALL, etc.
```

File Path Completion:

```
-- For commands that accept file paths
SOURCE '<Tab>        -- Shows: files in current directory
SOURCE '/path/<Tab>  -- Shows: files in /path/
```
- Case Insensitive: Type `sel<Tab>` to get `SELECT`
- Partial Matching: Type part of a word and press Tab
- Multiple Matches: When multiple completions are available:
  - First Tab: Shows inline completion if unique
  - Second Tab: Shows all available options in a modal
- Smart Filtering: Completions are filtered based on current context
- Escape to Cancel: Press `Esc` to close the completion modal
```
-- Complete table name
SELECT * FROM us<Tab>
-- Completes to: SELECT * FROM users

-- Complete consistency level
CONSISTENCY LOC<Tab>
-- Shows: LOCAL_ONE, LOCAL_QUORUM, LOCAL_SERIAL

-- Complete column names after SELECT
SELECT id, na<Tab> FROM users
-- Completes to: SELECT id, name FROM users

-- Complete file paths for SOURCE command
SOURCE 'sche<Tab>
-- Completes to: SOURCE 'schema.cql'

-- Complete COPY command options
COPY users TO 'file.csv' WITH <Tab>
-- Shows: HEADER, DELIMITER, NULLVAL, PAGESIZE, etc.

-- Show all tables when multiple exist
SELECT * FROM <Tab>
-- Shows modal with: users, orders, products, etc.
```
- Use Tab liberally: The completion system is smart and context-aware
- Type minimum characters: Often 2-3 characters are enough to get a unique completion
- Use for discovery: Press Tab on empty input to see what's available
- File paths: Remember to include quotes for file path completion
- Navigate completions: Use arrow keys to select from multiple options
CQLAI supports all standard CQL commands plus additional meta-commands for enhanced functionality.
Execute any valid CQL statement supported by your Cassandra cluster:
- DDL: `CREATE`, `ALTER`, `DROP` (KEYSPACE, TABLE, INDEX, etc.)
- DML: `SELECT`, `INSERT`, `UPDATE`, `DELETE`
- DCL: `GRANT`, `REVOKE`
- Other: `USE`, `TRUNCATE`, `BEGIN BATCH`, etc.
Meta-commands provide additional functionality beyond standard CQL:
- `CONSISTENCY <level>` - Set consistency level (ONE, QUORUM, ALL, etc.)

  ```sql
  CONSISTENCY QUORUM
  CONSISTENCY LOCAL_ONE
  ```
- `PAGING <size> | OFF` - Set result paging size

  ```sql
  PAGING 1000
  PAGING OFF
  ```
- `TRACING ON | OFF` - Enable/disable query tracing

  ```sql
  TRACING ON
  SELECT * FROM users;
  TRACING OFF
  ```
- `OUTPUT [FORMAT]` - Set output format

  ```sql
  OUTPUT         -- Show current format
  OUTPUT TABLE   -- Table format (default)
  OUTPUT JSON    -- JSON format
  OUTPUT EXPAND  -- Expanded vertical format
  OUTPUT ASCII   -- ASCII table format
  ```
- `DESCRIBE` - Show schema information

  ```sql
  DESCRIBE KEYSPACES                 -- List all keyspaces
  DESCRIBE KEYSPACE <name>           -- Show keyspace definition
  DESCRIBE TABLES                    -- List tables in current keyspace
  DESCRIBE TABLE <name>              -- Show table structure
  DESCRIBE TYPES                     -- List user-defined types
  DESCRIBE TYPE <name>               -- Show UDT definition
  DESCRIBE FUNCTIONS                 -- List user functions
  DESCRIBE FUNCTION <name>           -- Show function definition
  DESCRIBE AGGREGATES                -- List user aggregates
  DESCRIBE AGGREGATE <name>          -- Show aggregate definition
  DESCRIBE MATERIALIZED VIEWS        -- List materialized views
  DESCRIBE MATERIALIZED VIEW <name>  -- Show view definition
  DESCRIBE INDEX <name>              -- Show index definition
  DESCRIBE CLUSTER                   -- Show cluster information
  DESC <keyspace>.<table>            -- Shorthand for table description
  ```
- `COPY TO` - Export table data to CSV or Parquet file

  ```sql
  -- Basic export to CSV
  COPY users TO 'users.csv'

  -- Export to Parquet format (auto-detected by extension)
  COPY users TO 'users.parquet'

  -- Export to Parquet with explicit format and compression
  COPY users TO 'data.parquet' WITH FORMAT='PARQUET' AND COMPRESSION='SNAPPY'

  -- Export specific columns
  COPY users (id, name, email) TO 'users_partial.csv'

  -- Export with options
  COPY users TO 'users.csv' WITH HEADER = TRUE AND DELIMITER = '|'

  -- Export to stdout
  COPY users TO STDOUT WITH HEADER = TRUE

  -- Available options:
  -- FORMAT = 'CSV'/'PARQUET'  -- Output format (default: CSV, auto-detected)
  -- HEADER = TRUE/FALSE       -- Include column headers (CSV only)
  -- DELIMITER = ','           -- Field delimiter (CSV only)
  -- NULLVAL = 'NULL'          -- String to use for NULL values
  -- PAGESIZE = 1000           -- Rows per page for large exports
  -- COMPRESSION = 'SNAPPY'    -- For Parquet: SNAPPY, GZIP, ZSTD, LZ4, NONE
  -- CHUNKSIZE = 10000         -- Rows per chunk for Parquet
  ```
- `COPY FROM` - Import CSV or Parquet data into table

  ```sql
  -- Basic import from CSV file
  COPY users FROM 'users.csv'

  -- Import from Parquet file (auto-detected)
  COPY users FROM 'users.parquet'

  -- Import from Parquet with explicit format
  COPY users FROM 'data.parquet' WITH FORMAT='PARQUET'

  -- Import with header row (CSV)
  COPY users FROM 'users.csv' WITH HEADER = TRUE

  -- Import specific columns
  COPY users (id, name, email) FROM 'users_partial.csv'

  -- Import from stdin
  COPY users FROM STDIN

  -- Import with custom options
  COPY users FROM 'users.csv' WITH HEADER = TRUE AND DELIMITER = '|' AND NULLVAL = 'N/A'

  -- Available options:
  -- HEADER = TRUE/FALSE     -- First row contains column names
  -- DELIMITER = ','         -- Field delimiter
  -- NULLVAL = 'NULL'        -- String representing NULL values
  -- MAXROWS = -1            -- Maximum rows to import (-1 = unlimited)
  -- SKIPROWS = 0            -- Number of initial rows to skip
  -- MAXPARSEERRORS = -1     -- Max parsing errors allowed (-1 = unlimited)
  -- MAXINSERTERRORS = 1000  -- Max insert errors allowed
  -- MAXBATCHSIZE = 20       -- Max rows per batch insert
  -- MINBATCHSIZE = 2        -- Min rows per batch insert
  -- CHUNKSIZE = 5000        -- Rows between progress updates
  -- ENCODING = 'UTF8'       -- File encoding
  -- QUOTE = '"'             -- Quote character for strings
  ```
- `CAPTURE` - Capture query output to file (continuous recording)

  ```sql
  CAPTURE 'output.txt'        -- Start capturing to text file
  CAPTURE JSON 'output.json'  -- Capture as JSON
  CAPTURE CSV 'output.csv'    -- Capture as CSV
  SELECT * FROM users;
  CAPTURE OFF                 -- Stop capturing
  ```
- `SAVE` - Save displayed query results to file (without re-executing)

  ```sql
  -- First run a query
  SELECT * FROM users WHERE status = 'active';

  -- Then save the displayed results in various formats:
  SAVE                    -- Interactive dialog (choose format & filename)
  SAVE 'users.csv'        -- Save to CSV (format auto-detected)
  SAVE 'users.json'       -- Save to JSON (format auto-detected)
  SAVE 'users.txt' ASCII  -- Save as ASCII table
  SAVE 'data.csv' CSV     -- Explicitly specify format

  -- Key differences from CAPTURE:
  -- - SAVE exports the currently displayed results
  -- - No need to re-run the query
  -- - Preserves exact data shown in terminal
  -- - Works with paginated results (saves only loaded pages)
  ```
- `SHOW` - Display session information

  ```sql
  SHOW VERSION  -- Show Cassandra version
  SHOW HOST     -- Show current connection details
  SHOW SESSION  -- Show all session settings
  ```
- `EXPAND ON | OFF` - Toggle expanded output mode

  ```sql
  EXPAND ON   -- Vertical output (one field per line)
  SELECT * FROM users WHERE id = 1;
  EXPAND OFF  -- Normal table output
  ```
- `SOURCE` - Execute CQL scripts from file

  ```sql
  SOURCE 'schema.cql'           -- Execute script
  SOURCE '/path/to/script.cql'  -- Absolute path
  ```
- `HELP` - Display command help

  ```sql
  HELP              -- Show all commands
  HELP DESCRIBE     -- Help for specific command
  HELP CONSISTENCY  -- Help for consistency levels
  ```
- `.ai <natural language query>` - Generate CQL from natural language

  ```
  .ai show all users with active status
  .ai create a table for storing user sessions
  .ai find orders placed in the last 30 days
  ```
CQLAI supports multiple configuration methods for maximum flexibility and compatibility with existing Cassandra setups.
Configuration sources are loaded in the following order (later sources override earlier ones):
- CQLSHRC files (for compatibility with existing cqlsh setups)
  - `~/.cassandra/cqlshrc` (standard location)
  - `~/.cqlshrc` (alternative location)
  - `$CQLSH_RC` (if environment variable is set)
- CQLAI JSON configuration files
  - `./cqlai.json` (current directory)
  - `~/.cqlai.json` (user home directory)
  - `~/.config/cqlai/config.json` (XDG config directory)
  - `/etc/cqlai/config.json` (system-wide)
- Environment variables
  - `CQLAI_HOST`, `CQLAI_PORT`, `CQLAI_KEYSPACE`, etc.
  - `CASSANDRA_HOST`, `CASSANDRA_PORT` (for compatibility)
- Command-line flags (highest priority)
  - `--host`, `--port`, `--keyspace`, `--username`, `--password`, etc.
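As a quick illustration of this precedence (host names below are placeholders), a command-line flag overrides both the environment and any config file:

```bash
# CQLAI_HOST is set, but --host wins because flags have the highest priority
export CQLAI_HOST=cassandra-dev.internal
cqlai --host cassandra-prod.internal -u cassandra
```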
CQLAI can read standard CQLSHRC files used by the traditional `cqlsh` tool, making migration seamless.
Supported CQLSHRC sections:
- `[connection]` - hostname, port, ssl settings
- `[authentication]` - keyspace, credentials file path
- `[auth_provider]` - authentication module and username
- `[ssl]` - SSL/TLS certificate configuration
Example CQLSHRC file:
```ini
; ~/.cassandra/cqlshrc
[connection]
hostname = cassandra.example.com
port = 9042
ssl = true

[authentication]
keyspace = my_keyspace
credentials = ~/.cassandra/credentials

[ssl]
certfile = ~/certs/ca.pem
userkey = ~/certs/client-key.pem
usercert = ~/certs/client-cert.pem
validate = true
```
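The `credentials` file referenced above keeps the username and password out of the CQLSHRC itself. A sketch following the usual cqlsh convention (this is an assumption based on cqlsh; the exact section and key names CQLAI reads may differ):

```ini
; ~/.cassandra/credentials (restrict permissions, e.g. chmod 600)
[PlainTextAuthProvider]
username = cassandra
password = cassandra
```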
See CQLSHRC_SUPPORT.md for complete CQLSHRC compatibility details.
For advanced features and AI configuration, CQLAI uses its own JSON format:
Example `cqlai.json`:
```json
{
  "host": "127.0.0.1",
  "port": 9042,
  "keyspace": "",
  "username": "cassandra",
  "password": "cassandra",
  "requireConfirmation": true,
  "consistency": "LOCAL_ONE",
  "pageSize": 100,
  "historyFile": "~/.cqlai/history",
  "aiHistoryFile": "~/.cqlai/ai_history",
  "ssl": {
    "enabled": false,
    "certPath": "/path/to/client-cert.pem",
    "keyPath": "/path/to/client-key.pem",
    "caPath": "/path/to/ca-cert.pem",
    "hostVerification": true,
    "insecureSkipVerify": false
  },
  "ai": {
    "provider": "openai",
    "openai": {
      "apiKey": "sk-...",
      "model": "gpt-4-turbo-preview"
    }
  }
}
```
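Since this file can contain database credentials and AI API keys, it is worth keeping it readable only by your own user, for example:

```bash
# Restrict the configuration file to the owning user
chmod 600 ~/.cqlai.json
```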
Configuration Options:
| Option | Type | Default | Description |
|---|---|---|---|
| `host` | string | `127.0.0.1` | Cassandra host address |
| `port` | number | `9042` | Cassandra port |
| `keyspace` | string | `""` | Default keyspace to use |
| `username` | string | `""` | Authentication username |
| `password` | string | `""` | Authentication password |
| `requireConfirmation` | boolean | `true` | Require confirmation for dangerous commands |
| `consistency` | string | `LOCAL_ONE` | Default consistency level (ANY, ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, LOCAL_ONE) |
| `pageSize` | number | `100` | Number of rows per page |
| `maxMemoryMB` | number | `10` | Maximum memory for query results in MB |
| `connectTimeout` | number | `10` | Connection timeout in seconds |
| `requestTimeout` | number | `10` | Request timeout in seconds |
| `historyFile` | string | `~/.cqlai/history` | Path to CQL command history file (supports ~ expansion) |
| `aiHistoryFile` | string | `~/.cqlai/ai_history` | Path to AI command history file (supports ~ expansion) |
| `debug` | boolean | `false` | Enable debug logging |
CQLAI searches for configuration files in the following locations:
CQLSHRC files:
- `$CQLSH_RC` (if environment variable is set)
- `~/.cassandra/cqlshrc` (standard cqlsh location)
- `~/.cqlshrc` (alternative location)

CQLAI JSON files:
- `./cqlai.json` (current working directory)
- `~/.cqlai.json` (user home directory)
- `~/.config/cqlai/config.json` (XDG config directory on Linux/macOS)
- `/etc/cqlai/config.json` (system-wide configuration)
Common environment variables:
- `CQLAI_HOST` or `CASSANDRA_HOST` - Cassandra host
- `CQLAI_PORT` or `CASSANDRA_PORT` - Cassandra port
- `CQLAI_KEYSPACE` - Default keyspace
- `CQLAI_USERNAME` - Authentication username
- `CQLAI_PASSWORD` - Authentication password
- `CQLAI_PAGE_SIZE` - Batch mode pagination size (default: 100)
- `CQLSH_RC` - Path to custom CQLSHRC file
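These variables make it easy to configure CQLAI in CI jobs or containers without writing a config file. A minimal sketch (values are placeholders):

```bash
# Configure the connection entirely through the environment
export CQLAI_HOST=cassandra.example.com
export CQLAI_PORT=9042
export CQLAI_USERNAME=cassandra
export CQLAI_PASSWORD=cassandra

# Then run statements without any flags
cqlai -e "DESCRIBE KEYSPACES;"
```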
If you're migrating from `cqlsh`, CQLAI will automatically read your existing `~/.cassandra/cqlshrc` file. No changes are needed to start using CQLAI with your existing Cassandra configuration.
CQLAI includes built-in AI capabilities to convert natural language into CQL queries. Simply prefix your request with `.ai`:
```
-- Simple queries
.ai show all users
.ai find products with price less than 100
.ai count orders from last month

-- Complex operations
.ai create a table for storing customer feedback with id, customer_id, rating, and comment
.ai update user status to inactive where last_login is older than 90 days
.ai delete all expired sessions

-- Schema exploration
.ai what tables are in this keyspace
.ai describe the structure of the users table
```
- Natural Language Input: Type `.ai` followed by your request in plain English
- Schema Context: CQLAI automatically extracts your current schema to provide context
- Query Generation: The AI generates a structured query plan
- Preview & Confirm: Review the generated CQL before execution
- Execute or Edit: Choose to execute, edit, or cancel the query
Configure your preferred AI provider in `cqlai.json`:
- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude 3)
- Google Gemini
- Mock (default, for testing without API keys)
- Read-only by default: AI prefers SELECT queries unless explicitly asked to modify
- Dangerous operation warnings: DROP, DELETE, TRUNCATE operations show warnings
- Confirmation required: Destructive operations require additional confirmation
- Schema validation: Queries are validated against your current schema
CQLAI provides comprehensive support for Apache Parquet format, making it ideal for data analytics workflows and integration with modern data ecosystems.
- Efficient Storage: Columnar format with excellent compression (50-80% smaller than CSV)
- Fast Analytics: Optimized for analytical queries in Spark, Presto, and other engines
- Type Preservation: Maintains Cassandra data types including collections and UDTs
- Machine Learning Ready: Direct compatibility with pandas, PyArrow, and ML frameworks
- Streaming Support: Memory-efficient streaming for large datasets
```sql
-- Export to Parquet (auto-detected by extension)
COPY users TO 'users.parquet';

-- Export with compression
COPY events TO 'events.parquet' WITH FORMAT='PARQUET' AND COMPRESSION='ZSTD';

-- Import from Parquet
COPY users FROM 'users.parquet';

-- Capture query results in Parquet format
CAPTURE 'results.parquet' FORMAT='PARQUET';
SELECT * FROM large_table WHERE condition = true;
CAPTURE OFF;
```
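Parquet exports can also be scripted. A sketch of a scheduled export using batch mode, assuming meta-commands such as `COPY` are accepted via `-e` in your build (otherwise run the same statement from the interactive shell); the table name is a placeholder:

```bash
# Nightly export of a table to compressed Parquet, e.g. from cron or CI
cqlai -e "COPY events TO 'events-$(date +%F).parquet' WITH FORMAT='PARQUET' AND COMPRESSION='ZSTD';"
```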
- All Cassandra primitive types (int, text, timestamp, uuid, etc.)
- Collection types (list, set, map)
- User-Defined Types (UDTs)
- Frozen collections
- Vector types for ML workloads (Cassandra 5.0+)
- Multiple compression algorithms (Snappy, GZIP, ZSTD, LZ4)
For detailed documentation, see Parquet Support Guide.
When outputting data as JSON, there are some limitations due to how the underlying gocql driver handles dynamic typing:
- Issue: NULL values in primitive columns (int, boolean, text, etc.) appear as zero values (`0`, `false`, `""`) instead of `null`
- Cause: The gocql driver returns zero values for NULLs when scanning into dynamic types (`interface{}`)
- Workaround: Use `SELECT JSON` queries, which return proper JSON generated server-side by Cassandra

- Issue: UDT columns appear as empty objects `{}` in JSON output
- Cause: The gocql driver cannot properly unmarshal UDTs without compile-time knowledge of their structure
- Workaround: Use `SELECT JSON` queries for proper UDT serialization
```sql
-- Regular SELECT (has limitations)
SELECT * FROM users;
-- Returns: {"id": 1, "age": 0, "active": false}    -- age and active might be NULL

-- Using SELECT JSON (preserves types correctly)
SELECT JSON * FROM users;
-- Returns: {"id": 1, "age": null, "active": null}  -- NULLs properly represented
```
Note: Complex types (lists, sets, maps, vectors) are properly preserved in JSON output.
To work on `cqlai`, you'll need Go (≥ 1.24).
```bash
# Clone the repository
git clone https://github.com/axonops/cqlai.git
cd cqlai

# Install dependencies
go mod download

# Build a standard binary
make build

# Build a development binary with race detection
make build-dev

# Run all tests
make test

# Run tests with coverage report
make test-coverage

# Run the linter
make lint

# Run all checks (format, lint, test)
make check
```
- Language: Go
- TUI Framework: Bubble Tea
- TUI Components: Bubbles
- Styling: Lip Gloss
- Cassandra Driver: gocql
This project is licensed under the Apache 2.0 license. See the LICENSE file for details.
Third-party dependency licenses are available in the THIRD-PARTY-LICENSES directory. To regenerate license attributions, run `make licenses`.