S3 storage backend for datasette-files.
Install this plugin in the same environment as Datasette.

```bash
datasette install datasette-files-s3
```

Configure a datasette-files source to use S3 storage by setting `"storage": "s3"` and providing the required configuration options:
```yaml
plugins:
  datasette-files:
    sources:
      my-s3-files:
        storage: s3
        config:
          bucket: my-bucket-name
          region: us-east-1
          access_key_id: AKIAIOSFODNN7EXAMPLE
          secret_access_key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

Or using Datasette's `-s` flag:
```bash
datasette data.db \
  -s plugins.datasette-files.sources.my-s3-files.storage s3 \
  -s plugins.datasette-files.sources.my-s3-files.config.bucket my-bucket-name \
  -s plugins.datasette-files.sources.my-s3-files.config.region us-east-1
```

You can also use a credentials broker that returns temporary AWS credentials plus the S3 folder to use:
```yaml
plugins:
  datasette-files:
    sources:
      my-s3-files:
        storage: s3
        config:
          credentials_url: https://example.com/api/s3-credentials
          credentials_url_secret: shared-secret
          region: us-east-1
```

When `credentials_url` is configured, the plugin sends a POST request with `secret=...` as form-encoded data and expects a JSON response shaped like this:
```json
{
    "AccessKeyId": "ASIAWXFXAIOZGGVU5O6Y",
    "SecretAccessKey": "...",
    "SessionToken": "...",
    "Expiration": "2026-03-20T11:56:23Z",
    "S3Folder": "s3://datasettecloud-dev-files/team-1/"
}
```

- `bucket` (required unless `credentials_url` is used): The name of the S3 bucket.
- `region` (optional, default `us-east-1`): The AWS region.
- `prefix` (optional): A prefix to add to all S3 object keys, letting you store files under a specific path within the bucket. A trailing slash is added automatically if not provided - `"uploads"` and `"uploads/"` are equivalent.
- `endpoint_url` (optional): A custom S3 endpoint URL, for use with S3-compatible services.
- `access_key_id` (optional): AWS access key ID.
- `secret_access_key` (optional): AWS secret access key.
- `session_token` (optional): AWS session token, for temporary credentials supplied directly in config.
- `credentials_url` (optional): URL to POST to for temporary credentials. The plugin sends `secret=...` as form-encoded data and expects a JSON response containing `AccessKeyId`, `SecretAccessKey`, `SessionToken`, `Expiration`, and `S3Folder`.
- `credentials_url_secret` (required with `credentials_url`): Shared secret sent to the credentials endpoint as the form field `secret`.
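The credentials endpoint itself can be implemented in anything that accepts that POST and returns the JSON shape shown above. A minimal sketch in Python (the function name and hard-coded values here are illustrative, not part of this plugin - a real broker would mint temporary keys with STS):

```python
import json
from urllib.parse import parse_qs

# Must match credentials_url_secret in the plugin configuration
SHARED_SECRET = "shared-secret"


def handle_credentials_post(form_body: str) -> tuple[int, str]:
    """Validate the form-encoded secret and return the JSON shape the plugin expects."""
    form = parse_qs(form_body)
    if form.get("secret") != [SHARED_SECRET]:
        return 403, json.dumps({"error": "invalid secret"})
    # A real broker would call STS (e.g. AssumeRole) here; these
    # placeholder values only demonstrate the expected response shape.
    return 200, json.dumps({
        "AccessKeyId": "ASIA-EXAMPLE",
        "SecretAccessKey": "example-secret-key",
        "SessionToken": "example-session-token",
        "Expiration": "2026-03-20T11:56:23Z",
        "S3Folder": "s3://datasettecloud-dev-files/team-1/",
    })
```

Wire this handler into any web framework's POST route; requests with the wrong secret get a 403.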
The plugin resolves AWS credentials using the following priority:

1. Credentials broker: If `credentials_url` is configured, the plugin fetches temporary credentials by POSTing `secret=...` to that URL. It stores the returned `AccessKeyId`, `SecretAccessKey`, `SessionToken`, `Expiration`, and `S3Folder`, and automatically fetches a fresh set after the expiration time passes. The returned `S3Folder` also sets the active bucket and prefix for the source.
2. Direct configuration: `access_key_id`, `secret_access_key`, and optional `session_token` in the config block.
3. Default AWS credential chain: If no credentials are provided through the above methods, the plugin falls back to the default AWS credential chain (environment variables, IAM roles, etc.).
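That resolution order can be sketched roughly like this - a simplified illustration assuming a `fetch_broker(url, secret)` callable and an externally held cache dict; none of these names are the plugin's actual internals:

```python
from datetime import datetime, timezone


def _expired(creds: dict) -> bool:
    # Expiration arrives as an ISO 8601 string like "2026-03-20T11:56:23Z"
    expires = datetime.fromisoformat(creds["Expiration"].replace("Z", "+00:00"))
    return expires <= datetime.now(timezone.utc)


def resolve_credentials(config: dict, fetch_broker, cache: dict):
    """Return credentials per the documented priority, or None for the default chain."""
    # 1. Credentials broker, re-fetched once the cached set expires
    if "credentials_url" in config:
        if not cache or _expired(cache):
            cache.clear()
            cache.update(fetch_broker(
                config["credentials_url"], config["credentials_url_secret"]
            ))
        return {
            "access_key_id": cache["AccessKeyId"],
            "secret_access_key": cache["SecretAccessKey"],
            "session_token": cache["SessionToken"],
        }
    # 2. Direct configuration in the config block
    if "access_key_id" in config:
        return {
            "access_key_id": config["access_key_id"],
            "secret_access_key": config["secret_access_key"],
            "session_token": config.get("session_token"),
        }
    # 3. None: defer to the default AWS credential chain
    return None
```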
The `prefix` option lets you scope all files to a specific path within the bucket. For example, with `prefix: "uploads/"`, a file uploaded as `photo.jpg` will be stored at the S3 key `uploads/photo.jpg`.
It does not matter whether you include a trailing slash or not - `"uploads"` and `"uploads/"` will both result in files stored under `uploads/`.
When using `credentials_url`, the returned `S3Folder` behaves like a dynamically supplied bucket plus prefix. For example, `s3://datasettecloud-dev-files/team-1/` means the plugin will use bucket `datasettecloud-dev-files` and prefix `team-1/`.
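Splitting an `S3Folder` value into those two parts looks like this (a sketch; the helper name is mine, not the plugin's):

```python
def parse_s3_folder(s3_folder: str) -> tuple[str, str]:
    """Split an s3://bucket/prefix/ URL into (bucket, prefix)."""
    if not s3_folder.startswith("s3://"):
        raise ValueError(f"not an s3:// URL: {s3_folder!r}")
    # Everything up to the first slash is the bucket; the rest is the prefix
    bucket, _, prefix = s3_folder[len("s3://"):].partition("/")
    return bucket, prefix
```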
To set up this plugin locally, first check out the code:

```bash
cd datasette-files-s3
```

Run the tests like this:

```bash
uv run pytest
```

You can use SeaweedFS to run a local development server against a local imitation of the S3 API:
```bash
brew install seaweedfs
./dev-server.sh
```

To run a local development server against a real S3 bucket, create a `dev-s3.sh` script (this file is in `.gitignore`):
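If you would rather point a source at a local S3-compatible server by hand, `endpoint_url` is the relevant option. A sketch, assuming SeaweedFS's S3 gateway on its default port 8333 (the bucket name and credentials here are placeholders):

```yaml
plugins:
  datasette-files:
    sources:
      local-s3:
        storage: s3
        config:
          bucket: dev-bucket
          endpoint_url: http://localhost:8333
          access_key_id: any
          secret_access_key: any
```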
```bash
#!/bin/bash
set -e

BUCKET="your-bucket-name"
REGION="us-east-1"
ACCESS_KEY="your-access-key-id"
SECRET_KEY="your-secret-access-key"

uv run datasette data.db --create --internal internal.db --root --secret 1 --reload \
  -s plugins.datasette-files.sources.s3-live.storage s3 \
  -s plugins.datasette-files.sources.s3-live.config.bucket "$BUCKET" \
  -s plugins.datasette-files.sources.s3-live.config.region "$REGION" \
  -s plugins.datasette-files.sources.s3-live.config.access_key_id "$ACCESS_KEY" \
  -s plugins.datasette-files.sources.s3-live.config.secret_access_key "$SECRET_KEY" \
  -s plugins.datasette-files.sources.s3-live.config.prefix "demo-prefix/" \
  -s permissions.files-browse true \
  -s permissions.files-upload true \
  -s permissions.files-edit true
```

Then run it with `bash dev-s3.sh` and follow the login token URL printed to the console.