sqlx4k


A high-performance, non-blocking database driver for PostgreSQL, MySQL, and SQLite, written for Kotlin Multiplatform.

📖 Documentation

🏠 Homepage (under construction)

📰 Articles

Short deep‑dive posts covering Kotlin/Native, FFI, and Rust ↔ Kotlin interop used in sqlx4k:

  • Introduction to the Kotlin Native and FFI: [Part 1], [Part 2]
  • Interoperability between Kotlin and Rust, using FFI: [Part 1], (Part 2 soon)

Features

Next Steps (contributions are welcome)

  • Create and publish sqlx4k-gradle-plugin.
  • Pure Kotlin implementation for ConnectionPool.
  • Validate queries at compile time (avoid runtime errors)
    • Syntax checking is already supported (using the @Query annotation). ✅
    • Validate queries by accessing the DB schema ✅
    • Validate query literal types (type check query parameters)
  • Add support for SQLite JVM target.
  • WASM support (?).
  • Pure Kotlin implementation for PostgreSQL.
  • Pure Kotlin implementation for MySQL.
  • Pure Kotlin implementation for SQLite.

Supported Databases

  • MySQL
  • PostgreSQL
  • SQLite

Async I/O

The driver is designed with full support for non-blocking I/O, enabling seamless integration with modern, high-performance applications. By leveraging asynchronous, non-blocking operations, it ensures efficient resource management, reduces latency, and improves scalability.
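Because every query method is a suspend function, independent queries can be overlapped with plain coroutines. A minimal sketch, assuming a `db` instance created as shown in the sections below (the `users` and `items` tables are hypothetical):

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.coroutineScope

// Run two independent queries concurrently; each call suspends instead of
// blocking a thread, and each executes on its own pooled connection.
suspend fun concurrentQueries(db: PostgreSQL) = coroutineScope {
    val users = async { db.fetchAll("select * from users;") }
    val items = async { db.fetchAll("select * from items;") }
    println(users.await().getOrThrow())
    println(items.await().getOrThrow())
}
```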

Connection Pool

Connection Pool Settings

The driver allows you to configure connection pool settings directly from its constructor, giving you fine-grained control over how database connections are managed. These settings are designed to optimize performance and resource utilization for your specific application requirements.

Key Configuration Options:

  • minConnections
    Specifies the minimum number of connections to maintain in the pool at all times. This ensures that a baseline number of connections are always ready to serve requests, reducing the latency for acquiring connections during peak usage.

  • maxConnections
    Defines the maximum number of connections that can be maintained in the pool. This setting helps limit resource usage and ensures the pool does not exceed the available database or system capacity.

  • acquireTimeout
    Sets the maximum duration to wait when attempting to acquire a connection from the pool. If a connection cannot be acquired within this time, an exception is thrown, allowing you to handle connection timeouts gracefully.

  • idleTimeout
    Specifies the maximum duration a connection can remain idle before being closed and removed from the pool. This helps clean up unused connections, freeing up resources.

  • maxLifetime
    Defines the maximum lifetime for individual connections. Once a connection reaches this duration, it is closed and replaced, even if it is active, helping prevent issues related to stale or long-lived connections.

By adjusting these parameters, you can fine-tune the driver's behavior to match the specific needs of your application, whether you're optimizing for low-latency responses, high-throughput workloads, or efficient resource utilization.

// Additionally, you can set minConnections, acquireTimeout, idleTimeout, etc. 
val options = Driver.Pool.Options.builder()
    .maxConnections(10)
    .build()

/**
 * The following urls are supported:
 *  postgresql://
 *  postgresql://localhost
 *  postgresql://localhost:5433
 *  postgresql://localhost/mydb
 *
 * Additionally, you can use the `postgreSQL` function, if you are working in a multiplatform setup.
 */
val db = PostgreSQL(
    url = "postgresql://localhost:15432/test",
    username = "postgres",
    password = "postgres",
    options = options
)

/**
 *  The connection URL should follow the pattern,
 *  as described by [MySQL](https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-jdbc-url-format.html).
 *  The generic format of the connection URL:
 *  mysql://[host][/database][?properties]
 */
val db = MySQL(
    url = "mysql://localhost:13306/test",
    username = "mysql",
    password = "mysql"
)

/**
 * The following urls are supported:
 * `sqlite::memory:`            | Open an in-memory database.
 * `sqlite:data.db`             | Open the file `data.db` in the current directory.
 * `sqlite://data.db`           | Open the file `data.db` in the current directory.
 * `sqlite:///data.db`          | Open the file `data.db` from the root (`/`) directory.
 * `sqlite://data.db?mode=ro`   | Open the file `data.db` for read-only access.
 */
val db = SQLite(
    url = "sqlite://test.db", // If the `test.db` file is not found, a new db will be created.
    options = options
)

Acquiring and using connections

The driver provides two complementary ways to run queries:

  • Directly through the database instance (recommended). Each call acquires a pooled connection, executes the work, and returns it to the pool automatically.
  • Manually acquire a connection from the pool when you need to batch multiple operations on the same connection without starting a transaction.

Notes:

  • When you manually acquire a connection, you must release it to return it to the pool.

Examples (PostgreSQL shown, similar to MySQL/SQLite):

// Manual connection acquisition (remember to release)
val conn: Connection = db.acquire().getOrThrow()
try {
    conn.execute("insert into users(id, name) values (2, 'Bob');").getOrThrow()
    val rs = conn.fetchAll("select * from users;").getOrThrow()
    // ...
} finally {
    conn.release().getOrThrow() // Return to pool
}
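For comparison, the direct style from the first bullet needs no manual lifecycle handling: pool acquisition and release happen inside each call. A sketch, assuming `execute` and `fetchAll` are exposed on the database instance as used elsewhere in this document:

```kotlin
// Direct style (recommended): the pooled connection is acquired and
// released automatically on every call.
db.execute("insert into users(id, name) values (1, 'Alice');").getOrThrow()
val rs = db.fetchAll("select * from users;").getOrThrow()
```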

Prepared Statements

// With named parameters:
val st1 = Statement
    .create("select * from sqlx4k where id = :id")
    .bind("id", 65)

db.fetchAll(st1).getOrThrow().map {
    val id: ResultSet.Row.Column = it.get("id")
    Test(id = id.asInt())
}

// With positional parameters:
val st2 = Statement
    .create("select * from sqlx4k where id = ?")
    .bind(0, 65)

db.fetchAll(st2).getOrThrow().map {
    val id: ResultSet.Row.Column = it.get("id")
    Test(id = id.asInt())
}
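A statement can chain several `bind` calls, one per parameter. A sketch; the `db.execute(Statement)` overload is an assumption mirroring the `db.fetchAll(Statement)` calls above:

```kotlin
// Binding multiple named parameters of different types (sketch):
val st3 = Statement
    .create("insert into sqlx4k (id, test) values (:id, :test)")
    .bind("id", 66)
    .bind("test", "hello")

db.execute(st3).getOrThrow()
```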

RowMapper(s)

object Sqlx4kRowMapper : RowMapper<Sqlx4k> {
    override fun map(row: ResultSet.Row): Sqlx4k {
        val id: ResultSet.Row.Column = row.get(0)
        val test: ResultSet.Row.Column = row.get(1)
        // Use built-in mapping methods to map the values to the corresponding type.
        return Sqlx4k(id = id.asInt(), test = test.asString())
    }
}

val res: List<Sqlx4k> = db.fetchAll("select * from sqlx4k limit 100;", Sqlx4kRowMapper).getOrThrow()

Transactions

val tx1: Transaction = db.begin().getOrThrow()
tx1.execute("delete from sqlx4k;").getOrThrow()
tx1.fetchAll("select * from sqlx4k;").getOrThrow().forEach { println(it) }
tx1.commit().getOrThrow()

You can also execute entire blocks in a transaction scope.

db.transaction {
    execute("delete from sqlx4k;").getOrThrow()
    fetchAll("select * from sqlx4k;").getOrThrow().forEach { println(it) }
    // At the end of the block, the transaction is automatically committed.
    // If any error occurs, the rollback method is triggered automatically.
}
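If you manage the transaction by hand instead, pair `commit` with an explicit `rollback` on failure. A sketch using the `rollback` method mentioned above:

```kotlin
val tx = db.begin().getOrThrow()
try {
    tx.execute("delete from sqlx4k;").getOrThrow()
    tx.commit().getOrThrow()
} catch (e: Exception) {
    tx.rollback().getOrThrow() // Undo all changes made in this transaction.
    throw e
}
```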

TransactionContext (coroutines)

When using coroutines, you can propagate a transaction through the coroutine context using TransactionContext. This allows you to write small, composable suspend functions that either:

  • start a transaction at the boundary of your use case, and
  • inside helper functions call TransactionContext.current() to participate in the same transaction without having to propagate Transaction or Driver parameters everywhere.

val db = PostgreSQL(
    url = "postgresql://localhost:15432/test",
    username = "postgres",
    password = "postgres",
    options = options
)

fun main() = runBlocking {
    TransactionContext.new(db) {
        // `this` is a TransactionContext and also a Transaction (delegation),
        // so you can call query methods directly:
        execute("insert into sqlx4k (id, test) values (66, 'test');").getOrThrow()

        // In deeper code, fetch the same context and keep using the same tx
        doBusinessLogic()
        doMoreBusinessLogic()
        doExtraBusinessLogic()
    }
}

suspend fun doBusinessLogic() {
    // Get the active transaction from the coroutine context
    val tx = TransactionContext.current()
    // Continue operating within the same database transaction
    tx.execute("update sqlx4k set test = 'updated' where id = 66;").getOrThrow()
}

// Alternatively, use the `withCurrent` method to run a block within the ongoing transaction.
suspend fun doMoreBusinessLogic(): Unit = TransactionContext.withCurrent {
    // Continue operating within the same database transaction
}

// You can also pass the db instance to `withCurrent`.
// If a transaction is already active, the block runs within it; otherwise, a new transaction is started for the block.
suspend fun doExtraBusinessLogic(): Unit = TransactionContext.withCurrent(db) {
    // Continue operating within the same database transaction
}

Code-Generation, CRUD and @Repository Implementations

To use code generation, you will need to add the KSP plugin to your project.

plugins {
    alias(libs.plugins.ksp)
}

// Then you need to configure the processor (it will generate the necessary code files).
ksp {
    // Optional: pick SQL dialect for CRUD generation from @Table classes.
    // Currently only "mysql" is special-cased; everything else falls back to a generic ANSI-like dialect.
    // This setting affects the shape of INSERT/UPDATE/DELETE that TableProcessor emits.
    // It does NOT affect @Query validation (see notes below).
    // arg("dialect", "mysql")

    // Required: where to place the generated sources.
    arg("output-package", "io.github.smyrgeorge.sqlx4k.examples.postgres")

    // Compile-time SQL syntax checking for @Query methods (default = true).
    // Set to "false" to turn it off if you use vendor-specific syntax not understood by the parser.
    // arg("validate-sql-syntax", "false")
}

dependencies {
    // Will generate code for macosArm64. Add more targets if you want.
    add("kspMacosArm64", "io.github.smyrgeorge:sqlx4k-codegen:x.y.z")
}

Then create your data class that will be mapped to a table:

@Table("sqlx4k")
data class Sqlx4k(
    @Id(insert = true) // Will be included in the insert query.
    val id: Int,
    val test: String
)

@Repository(mapper = Sqlx4kRowMapper::class)
interface Sqlx4kRepository : CrudRepository<Sqlx4k> {
    // The processor will validate the SQL syntax in the @Query methods.
    // If you want to disable this validation, you can set the "validate-sql-syntax" arg to "false".
    @Query("SELECT * FROM sqlx4k WHERE id = :id")
    suspend fun findOneById(context: QueryExecutor, id: Int): Result<Sqlx4k?>

    @Query("SELECT * FROM sqlx4k")
    suspend fun findAll(context: QueryExecutor): Result<List<Sqlx4k>>

    @Query("SELECT count(*) FROM sqlx4k")
    suspend fun countAll(context: QueryExecutor): Result<Long>
}

Note

Besides your @Query methods, because your interface extends CrudRepository<T>, the generator also adds the CRUD helper methods automatically: insert, update, delete, and save.

Then in your code you can use it like:

// Insert a new record.
val record = Sqlx4k(id = 1, test = "test")
val inserted: Sqlx4k = Sqlx4kRepositoryImpl.insert(db, record).getOrThrow()
// Execute a generated query.
val all: List<Sqlx4k> = Sqlx4kRepositoryImpl.findAll(db).getOrThrow()
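The remaining generated helpers follow the same shape. A sketch; the insert-or-update semantics of `save` are an assumption:

```kotlin
// The other generated CRUD helpers:
val updated: Sqlx4k = Sqlx4kRepositoryImpl.update(db, record.copy(test = "updated")).getOrThrow()
Sqlx4kRepositoryImpl.save(db, updated).getOrThrow()   // Assumed: inserts or updates as needed.
Sqlx4kRepositoryImpl.delete(db, updated).getOrThrow()
```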

For more details take a look at the examples.

Context-Parameters

Optional: Using ContextCrudRepository with context-parameters.

You can opt in to generated repositories that use Kotlin context-parameters instead of passing a QueryExecutor parameter to every method. This switches your repository to ContextCrudRepository and makes all generated CRUD and @Query methods require an ambient QueryExecutor provided via a context-parameter.

To enable this mode:

  • Add KSP argument enable-context-parameters set to true in your ksp configuration.
  • Make your repository interface extend ContextCrudRepository instead of CrudRepository.
  • Declare your @Query methods with a context(context: QueryExecutor) context parameter instead of an explicit QueryExecutor parameter.

ksp configuration example:

ksp {
    // ... other args
    arg("output-package", "io.github.smyrgeorge.sqlx4k.examples.postgres")
    arg("enable-context-parameters", "true") // see RepositoryProcessor
}

Repository interface example with context parameters:

@Repository(mapper = Sqlx4kRowMapper::class)
interface Sqlx4kRepository : ContextCrudRepository<Sqlx4k> {
    @Query("SELECT * FROM sqlx4k WHERE id = :id")
    context(context: QueryExecutor)
    suspend fun findOneById(id: Int): Result<Sqlx4k?>

    @Query("SELECT * FROM sqlx4k")
    context(context: QueryExecutor)
    suspend fun findAll(): Result<List<Sqlx4k>>
}

Usage with a context-parameter (no explicit db parameter on each call):

val record = Sqlx4k(id = 1, test = "test")
with(db) {
    val inserted = Sqlx4kRepositoryImpl.insert(record).getOrThrow()
    val one = Sqlx4kRepositoryImpl.findOneById(1).getOrThrow()
}

If you prefer the explicit-parameter style, keep CrudRepository and do not set enable-context-parameters. In that case, each generated method takes a QueryExecutor (e.g., db or transaction) as the first argument.

SQL syntax validation (compile-time)

  • What it is: during code generation, sqlx4k parses the SQL string in each @Query method using JSqlParser. If the parser detects a syntax error, the build fails early with a clear error message pointing to the offending repository method.
  • What it checks: only SQL syntax. It does not verify that tables/columns exist, parameter names match, or types are compatible.
  • When it runs: at KSP processing time, before your code is compiled/run.
  • Dialect notes: validation is dialect-agnostic and aims for an ANSI/portable subset. Some vendor-specific features (e.g., certain MySQL or PostgreSQL extensions) may not be recognized. If you hit a false positive, you can disable validation per module with ksp arg validate-sql-syntax=false, or disable it per query with @Query(checkSyntax = false).
  • Most reliable with: SELECT, INSERT, UPDATE, DELETE statements. DDL or very advanced constructs may not be fully supported.

Example of a build error you might see if your query is malformed:

> Task :compileKotlin
Invalid SQL in function findAllBy: Encountered "FROMM" at line 1, column 15

Tip: keep it enabled to catch typos early; if you rely heavily on vendor-specific syntax not yet supported by the parser, turn it off either globally or just for a specific method:

  • Globally (module-wide):
ksp { arg("validate-sql-syntax", "false") }
  • Per query:
@Repository(mapper = UserMapper::class)
interface UserRepository {
    @Query("select * from users where id = :id", checkSyntax = false)
    suspend fun findOneById(context: QueryExecutor, id: Int): Result<User?>
}

SQL schema validation (compile-time)

  • What it is: during code generation, sqlx4k can also validate your @Query SQL against a known database schema. It loads your migration files, builds an in-memory schema, and uses Apache Calcite to validate that tables, columns, and basic types referenced by the query exist and are compatible.
  • What it checks:
    • Existence of referenced tables and columns.
    • Basic type compatibility for literals and simple expressions.
    • It does not execute queries or connect to a database.
  • When it runs: at KSP processing time, right after syntax validation.
  • Default: disabled. You must enable it explicitly per module.
  • Requirements: point the processor to your migrations directory so it can reconstruct the schema. The loader supports a pragmatic subset of DDL: CREATE TABLE, ALTER TABLE ADD/DROP COLUMN, and DROP TABLE, processed in migration order.
  • Dialect notes: validation is based on Calcite’s SQL semantics and a simplified schema model derived from your migrations. Some vendor-specific features and advanced DDL may not be fully supported.

Enable module-wide schema validation by adding KSP args in your build.gradle.kts:

ksp {
    arg("validate-sql-schema", "true")
    // Path to your migration .sql files (processed in ascending file version order)
    arg("schema-migrations-path", "./db/migrations")
}

You can also disable schema checks for a specific query:

@Repository(mapper = UserMapper::class)
interface UserRepository {
    @Query("select * from users where id = :id", checkSchema = false)
    suspend fun findOneById(context: QueryExecutor, id: Int): Result<User?>
}

Database Migrations

Run any pending migrations against the database, and validate previously applied migrations against the current migration source to detect accidental changes.

val res = db.migrate(
    path = "./db/migrations",
    table = "_sqlx4k_migrations",
    afterFileMigration = { m, d -> println("Migration of file: $m, took $d") }
).getOrThrow()
println("Migration completed. $res")

This process will create a table named _sqlx4k_migrations to track applied migrations. For more information, take a look at the examples.

Listen/Notify (only for PostgreSQL)

db.listen("chan0") { notification: Postgres.Notification ->
    println(notification)
}

(1..10).forEach {
    db.notify("chan0", "Hello $it")
    delay(1000)
}

SQLDelight

Check here: https://github.com/smyrgeorge/sqlx4k-sqldelight

Supported Targets

  • jvm (only PostgreSQL and MySQL are supported at the moment)
  • iosArm64
  • iosSimulatorArm64
  • androidNativeX64
  • androidNativeArm64
  • macosArm64
  • macosX64
  • linuxArm64
  • linuxX64
  • mingwX64
  • wasmWasi (potential future candidate)

Usage

implementation("io.github.smyrgeorge:sqlx4k-postgres:x.y.z")
// or for MySQL
implementation("io.github.smyrgeorge:sqlx4k-mysql:x.y.z")
// or for SQLite
implementation("io.github.smyrgeorge:sqlx4k-sqlite:x.y.z")

Windows

If you are building your project on Windows, for target mingwX64, and you encounter the following error:

lld-link: error: -exclude-symbols:___chkstk_ms is not allowed in .drectve

Please look at this issue: #18

Compilation

You will need the Rust toolchain to build this project. Check here: https://rustup.rs/

Note

By default, the project will build only for your current platform (architecture and OS, e.g. macosArm64, linuxArm64, etc.)

Also, make sure that you have installed all the necessary targets (only if you want to build for all targets):

rustup target add aarch64-apple-ios
rustup target add aarch64-apple-ios-sim
rustup target add x86_64-linux-android
rustup target add aarch64-linux-android
rustup target add aarch64-apple-darwin
rustup target add x86_64-apple-darwin
rustup target add aarch64-unknown-linux-gnu
rustup target add x86_64-unknown-linux-gnu
rustup target add x86_64-pc-windows-gnu

Then, run the build.

# will build only for the current target
./gradlew build

You can also build for specific targets.

./gradlew build -Ptargets=macosArm64,macosX64

To build for all available targets, run:

./gradlew build -Ptargets=all

Publishing

./gradlew publishAllPublicationsToMavenCentralRepository -Ptargets=all

Run

First, start up the PostgreSQL instance.

docker compose up -d

And then run the examples.

# For macosArm64
./examples/postgres/build/bin/macosArm64/releaseExecutable/postgres.kexe
./examples/mysql/build/bin/macosArm64/releaseExecutable/mysql.kexe
./examples/sqlite/build/bin/macosArm64/releaseExecutable/sqlite.kexe
# If you are on another platform, run the binary for the corresponding target.

Examples

Here are small, self‑contained snippets for the most common tasks. For full runnable apps, see the modules under the `examples` directory.

Checking for memory leaks

macOS (using 'leaks' tool)

Check for memory leaks with the leaks tool. First, sign your binary:

codesign -s - -v -f --entitlements =(echo -n '<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "https://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
    <dict>
        <key>com.apple.security.get-task-allow</key>
        <true/>
    </dict>
</plist>') ./bench/postgres-sqlx4k/build/bin/macosArm64/releaseExecutable/postgres-sqlx4k.kexe

Then run the tool:

leaks -atExit -- ./bench/postgres-sqlx4k/build/bin/macosArm64/releaseExecutable/postgres-sqlx4k.kexe

Acknowledgements

sqlx4k stands on the shoulders of excellent open-source projects, including sqlx (the underlying Rust driver), JSqlParser, and Apache Calcite.

Huge thanks to the maintainers and contributors of these projects.

License

MIT — see LICENSE.