Starred repositories
Developer-friendly OSS embedded retrieval library for multimodal AI. Search More; Manage Less.
🔥 The Web Data API for AI - Power AI agents with clean web data
Packer is a tool for creating identical machine images for multiple platforms from a single source configuration.
A fast, cross-platform build tool inspired by Make, designed for modern workflows.
Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and…
Build a GUI for your Python program with JavaScript, HTML, and CSS
The Swiss Army knife of lossless video/audio editing
</> htmx - high power tools for HTML
Termux - a terminal emulator application for Android OS, extensible via a variety of packages.
OpenDHT: a C++17 Distributed Hash Table implementation
ts-fsrs is a versatile package written in TypeScript that supports ES modules, CommonJS, and UMD.
CockroachDB — the cloud native, distributed SQL database designed for high availability, effortless scale, and control over data placement.
SeaweedFS is a distributed storage system for object storage (S3), file systems, and Iceberg tables, designed to handle billions of files with O(1) disk access and effortless horizontal scaling.
Source for remoteintech.company — a community-maintained directory of remote-friendly tech companies
DigitalPlat FreeDomain: Free Domain For Everyone
Python SDK and Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing, and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthr…
Securely connect your devices into a private network. Mesh VPN with SOCKS5 proxy server/client.
Open-Source Chrome extension for AI-powered web automation. Run multi-agent workflows using your own LLM API key. Alternative to OpenAI Operator.
Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference; more devices mean faster inference.