The first version of the Alfe AI Cloud Platform https://alfe.sh (beta-2.30).
This initial cloud release includes the image design component of the Alfe AI Platform.
It now defaults to OpenAI's gpt-image-1 model for image generation via the built-in API. You can change the model globally via the new image_gen_model setting, which accepts gptimage1, dalle2, or dalle3.
If the model returns a base64 string instead of a URL, the server automatically decodes and saves the image.
The server also includes an optional color swatch detector that can trim any palette band from the bottom of generated images. This feature is disabled by default and can be enabled via the remove_color_swatches setting.
The software development component is coming soon to the cloud platform; it is available now as a pre-release on GitHub.
Alfe AI beta-2.30+ (Image Design): https://github.com/alfe-ai/alfe-ai-2.0_beta
Alfe AI beta-0.4x+ (Software Development): https://github.com/alfe-ai/Sterling
wget https://raw.githubusercontent.com/alfe-ai/alfe-ai-Aurelix/refs/heads/Aurora/Aurelix/dev/main-rel2/deploy_aurelix.sh && chmod +x deploy_aurelix.sh && ./deploy_aurelix.sh
Set HTTPS_KEY_PATH and HTTPS_CERT_PATH to the SSL key and certificate files to enable HTTPS across the included servers. If the files are missing, the services fall back to HTTP.
You can quickly obtain free certificates from Let's Encrypt by running the setup_certbot.sh script. It installs Certbot and generates the key and certificate files for the domain you specify.
After obtaining the certificates, run setup_ssl_permissions.sh <domain> [user] to grant the specified user (default: admin) read access to the key and certificate so Aurora can run without root privileges.
The Aurora server reads its port from the AURORA_PORT environment variable (default: 3000). Binding directly to port 443 typically requires root privileges. If you prefer to run the server as a regular user, you can forward incoming connections from port 443 to your configured AURORA_PORT.
Run the helper script with sudo to set up the forwarding rule:
sudo ./forward_port_443.sh 3000
Replace 3000 with your chosen AURORA_PORT. After adding the rule, start the server normally; clients can connect using https://your-domain/ on port 443 while the Node.js process continues to run on the higher port.
Set SQL_SERVER_PORT in .env (see Aurora/.env.example) to configure the port for a simple HTTP interface to the SQLite database. Start the server with:
npm run sqlserver --prefix Aurora
Send POST requests to /sql with a JSON body containing a sql string and an optional params array. SELECT queries return rows, while other statements return change information.
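A client call could look like the following sketch. The table name and port are placeholders, and the snippet assumes Node 18+ for the global fetch:

```javascript
// Sketch of a client call to the /sql endpoint. The query is a
// placeholder; set the port to match SQL_SERVER_PORT in your .env.
const payload = {
  sql: "SELECT * FROM tasks WHERE status = ?",
  params: ["open"], // optional positional parameters
};

// SELECT queries resolve to rows; other statements resolve to
// change information (e.g. rows modified).
async function runQuery(port) {
  const res = await fetch(`http://localhost:${port}/sql`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.json();
}
```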
A small command-line script, perplexity-cli.js, lets you query the official Perplexity API and print any citation URLs from the response. A newer interactive version, pplx.js, is also provided.
Supported Perplexity models:
sonar
sonar-pro
sonar-reasoning
sonar-reasoning-pro
sonar-deep-research
r1-1776
npm install axios commander
Set your API key with PERPLEXITY_API_KEY or pass --key when running.
chmod +x pplx.js
./pplx.js "What causes aurora borealis?"
The script outputs the answer and lists cited URLs if available.
The CLI follows Perplexity's official API, which mirrors the OpenAI Chat Completions format. Set your API key in the PERPLEXITY_API_KEY environment variable and send requests to https://api.perplexity.ai/chat/completions.
{
"model": "sonar-pro",
"messages": [
{ "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "Who discovered penicillin?" }
],
"max_tokens": 512,
"temperature": 0.7
}
Responses include optional citations and search_results arrays with the source URLs used. Common errors are a 400 Bad Request when the model name is wrong and 401/403 for missing or invalid API keys. Use plain model names like sonar, sonar-pro, sonar-reasoning, sonar-reasoning-pro, sonar-deep-research, or r1-1776 without any provider prefix.
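Putting the pieces together, a request along these lines would send the JSON body above and surface the error cases. This is a sketch, not the CLI's actual code; it assumes Node 18+ (global fetch) and a valid PERPLEXITY_API_KEY, and the plainModelName helper is a hypothetical guard against prefixed names:

```javascript
// Hypothetical guard: strip an accidental provider prefix
// ("perplexity/sonar-pro" -> "sonar-pro") to avoid a 400 response.
function plainModelName(name) {
  return name.includes("/") ? name.split("/").pop() : name;
}

// Sketch of a call to Perplexity's chat completions endpoint.
async function askPerplexity(question, model = "sonar-pro") {
  const res = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: plainModelName(model),
      messages: [{ role: "user", content: question }],
    }),
  });
  if (!res.ok) {
    // 400: bad model name; 401/403: missing or invalid API key
    throw new Error(`Perplexity API error ${res.status}`);
  }
  const data = await res.json();
  return { answer: data.choices[0].message.content, citations: data.citations ?? [] };
}
```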
The order of models shown in the reasoning tooltip can be customized. Edit Aurora/public/reasoning_tooltip_config.js and reorder the chatModels and reasoningModels arrays to suit your preferences.
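The config file presumably contains two ordered arrays along these lines; the exact export shape and model identifiers are assumptions, and only the ordering matters:

```javascript
// Hypothetical shape of Aurora/public/reasoning_tooltip_config.js:
// the tooltip lists models in array order, so reordering entries
// reorders the menu. Identifiers here are illustrative.
const chatModels = [
  "anthropic/claude-sonnet-4",
  "openai/gpt-5-chat",
  "anthropic/claude-3.5-haiku",
];

const reasoningModels = [
  "anthropic/claude-3.7-sonnet:thinking",
  "perplexity/sonar-reasoning-pro",
  "perplexity/sonar-reasoning",
];
```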
The chat model list now includes Anthropic Claude Sonnet 4 as an ultimate tier option. Pricing is $3 per million input tokens, $15 per million output tokens, and $4.80 per thousand input images.
Anthropic Claude 3.7 Sonnet is available under the pro tier. Created Feb 24, 2025 with a 200,000 token context limit, pricing is $3 per million input tokens and $15 per million output tokens.
Anthropic Claude Opus 4 is also available under the ultimate tier. Created May 22, 2025 with a 200,000 token context limit, pricing is $15 per million input tokens and $75 per million output tokens.
Anthropic Claude 3.7 Sonnet (thinking) has been added to the reasoning menu as an ultimate tier option. Created Feb 24, 2025 with a 200,000 token context limit, pricing is $3 per million input tokens and $15 per million output tokens.
perplexity/sonar-reasoning has been added to the reasoning menu under the pro tier. Created Jan 29, 2025 with a 127,000 token context limit, pricing starts at $1 per million input tokens and $5 per million output tokens.
perplexity/sonar-reasoning-pro has been added to the reasoning menu under the pro tier. Created Mar 7, 2025 with a 128,000 token context limit, pricing starts at $2 per million input tokens and $8 per million output tokens.
Anthropic Claude 3.5 Haiku has been added to the chat model menu under the pro tier. It offers enhanced speed, coding accuracy, and tool use, making it ideal for real-time applications like chat interactions and instant coding suggestions. Created Nov 4, 2024 with a 200,000 token context limit, pricing is $0.80 per million input tokens and $4 per million output tokens.
OpenAI GPT-5 Chat has been added under the pro tier for advanced, natural, multimodal, and context-aware conversations. It supports a 400,000 token context with pricing at $1.25 per million input tokens and $10 per million output tokens.
OpenAI GPT-5 is now available under the pro tier, offering major improvements in reasoning, code quality, and user experience. It also supports a 400,000 token context and is priced at $1.25 per million input tokens and $10 per million output tokens.
OpenAI GPT-5 Mini provides lighter-weight reasoning with reduced latency and cost. Available under the pro tier, it offers a 400,000 token context and costs $0.25 per million input tokens and $2 per million output tokens.
OpenAI GPT-5 Nano is optimized for ultra-low latency environments. Under the pro tier, it features a 400,000 token context and pricing of $0.05 per million input tokens and $0.40 per million output tokens.
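The per-million-token prices above translate to request cost with simple arithmetic; a small helper, not part of the platform:

```javascript
// Cost in USD of a single request given per-million-token prices,
// e.g. GPT-5: $1.25 per million input, $10 per million output.
function requestCostUSD(inputTokens, outputTokens, pricePerMIn, pricePerMOut) {
  return (inputTokens / 1e6) * pricePerMIn + (outputTokens / 1e6) * pricePerMOut;
}
```

For example, a 10,000-token prompt with a 2,000-token reply on GPT-5 costs (10,000 / 1e6) × $1.25 + (2,000 / 1e6) × $10 = $0.0325.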