llmops
Here are 33 public repositories matching this topic...
Provide visibility, traceability, and cost tracking for AI requests with minimal overhead across multiple AI providers and APIs.
Updated Mar 26, 2026 - Go
A high-performance reverse proxy that intelligently distributes requests across multiple LLM providers (OpenAI, Google Gemini, Anthropic Claude) and API keys. It provides seamless OpenAI API compatibility, advanced load balancing algorithms, real-time cost optimization, and enterprise-grade observability.
Updated Oct 18, 2025 - Go
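The core of such a multi-provider reverse proxy is the backend-selection step. A minimal sketch of round-robin selection across provider base URLs (the URLs and type names here are illustrative assumptions, not taken from the repo):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// balancer rotates requests across upstream LLM provider base URLs.
// A real gateway would layer API-key pools, health checks, and
// cost-aware weighting on top of this.
type balancer struct {
	backends []string
	next     uint64 // atomic counter driving the rotation
}

// pick returns the next backend in round-robin order; the atomic
// counter makes it safe to call from concurrent request handlers.
func (b *balancer) pick() string {
	n := atomic.AddUint64(&b.next, 1)
	return b.backends[(n-1)%uint64(len(b.backends))]
}

func main() {
	b := &balancer{backends: []string{
		"https://api.openai.com",
		"https://generativelanguage.googleapis.com",
		"https://api.anthropic.com",
	}}
	// Four picks cycle through the three backends and wrap around.
	for i := 0; i < 4; i++ {
		fmt.Println(b.pick())
	}
}
```

In a full proxy, `pick` would feed a `net/http/httputil.ReverseProxy` director that rewrites each request to the chosen backend.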
🚀 Build with the Langfuse Go SDK to enhance your applications using Langfuse's open-source LLM engineering platform.
Updated Mar 26, 2026 - Go
When AI makes $10M decisions, hallucinations aren't bugs—they're business risks. We built the verification infrastructure that makes AI agents accountable without slowing them down.
Updated Oct 25, 2025 - Go
🚀 An intelligent, production-ready LLM orchestration platform in Go, engineered for high availability with dynamic real-time routing and seamless failover. Optimizes for cost, latency, and quality across any model from any provider, with a modular design and built-in support for OpenAI, Anthropic, Google, and Mistral.
Updated Sep 17, 2025 - Go
The Control Plane for your LLM API Keys. Manage 30+ providers with one CLI.
Updated Mar 11, 2026 - Go
XScopeHub, an observability suite: bridges exporters, OpenTelemetry, and OpenObserve with ETL pipelines for metrics, logs, and traces.
Updated Mar 19, 2026 - Go
A self-hosted, open-source (Apache 2.0) proxy for LLMs with Prometheus metrics.
Updated Jul 30, 2025 - Go
A proxy-based tool for tracing, recording, and replaying LLM API requests.
Updated Feb 3, 2026 - Go
Enterprise AI governance reference implementation with gateway controls, telemetry, detections, and rollout validation.
Updated Mar 20, 2026 - Go
The reliability layer between your code and LLM providers. Hapax is a production-ready AI infrastructure layer that ensures uninterrupted AI operations through intelligent provider management and automatic failover. It is designed to address common challenges in managing AI infrastructure.
Updated Sep 18, 2025 - Go
The reliability layer between your code and LLM providers.
Updated Jan 6, 2025 - Go
🧯 Kubernetes coverage for fault awareness and recovery; works with any LLMOps, MLOps, or AI workload.
Updated Mar 14, 2026 - Go
One API for 25+ LLMs, OpenAI, Anthropic, Bedrock, Azure. Caching, guardrails & cost controls. Go-native LiteLLM & Kong AI Gateway alternative.
Updated Mar 26, 2026 - Go
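The caching such gateways advertise typically memoizes completions by a hash of the (model, prompt) pair, so repeated identical requests skip the upstream call and its cost. An illustrative stdlib sketch (names and the unbounded in-memory map are assumptions, not any specific gateway's design):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sync"
)

// cache memoizes completions keyed by a hash of model and prompt.
type cache struct {
	mu sync.Mutex
	m  map[string]string
}

// key derives a stable cache key; the NUL separator prevents
// (model, prompt) pairs from colliding across the boundary.
func key(model, prompt string) string {
	sum := sha256.Sum256([]byte(model + "\x00" + prompt))
	return hex.EncodeToString(sum[:])
}

// complete returns a cached answer when available; otherwise it calls
// upstream and stores the result. The bool reports a cache hit.
func (c *cache) complete(model, prompt string, upstream func(model, prompt string) (string, error)) (string, bool, error) {
	k := key(model, prompt)
	c.mu.Lock()
	if v, ok := c.m[k]; ok {
		c.mu.Unlock()
		return v, true, nil // hit: no upstream call, no token cost
	}
	c.mu.Unlock()
	out, err := upstream(model, prompt)
	if err != nil {
		return "", false, err
	}
	c.mu.Lock()
	c.m[k] = out
	c.mu.Unlock()
	return out, false, nil
}

func main() {
	calls := 0
	up := func(model, prompt string) (string, error) {
		calls++ // count real upstream calls
		return "answer", nil
	}
	c := &cache{m: map[string]string{}}
	c.complete("gpt", "hi", up)
	_, hit, _ := c.complete("gpt", "hi", up)
	fmt.Println(calls, hit) // upstream called once; second request hits cache
}
```

A production cache would add TTLs and an eviction policy; exact-match keying also means any prompt variation (or nonzero temperature semantics) bypasses the cache.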
Fine-tune LLMs on Kubernetes using Runbooks.
Updated Aug 28, 2024 - Go
Focused on intelligent operations (AIOps), automated operations, Zabbix, Prometheus, Grafana, Nagios, ELK Stack (Elasticsearch, Logstash, Kibana), Graylog, Ansible, SaltStack, Puppet, Chef, Terraform, Docker, Kubernetes, OpenShift, Jenkins, MySQL, PostgreSQL, MariaDB, Redis, MongoDB, InfluxDB, Ceph, MinIO, RabbitMQ, Kafka, NATS, Apache Pulsar, Nginx, Apache HTTP Server, HAProxy, Traefik, Caddy, OpenStack, OpenLDAP, FreeRDP, and other areas.
Updated Jul 14, 2025 - Go