AutoMQ is a cloud-native, serverless reinvention of Kafka that is easily scalable, manage-less, and cost-effective.
- Cloud Native: Built on cloud services. Every system design decision takes the cloud services' features and billing items into consideration, to offer the best low-latency, scalable, reliable, and cost-effective Kafka service on the cloud.
- High Reliability: Leverages cloud service features to offer an RPO of 0 and an RTO measured in seconds.
- AWS: Uses S3 Express One Zone and S3 to offer AZ-level disaster recovery.
- GCP: Uses regional SSD and Cloud Storage to offer AZ-level disaster recovery.
- Azure: Uses zone-redundant storage and Blob Storage to offer AZ-level disaster recovery.
- Serverless:
- Auto Scaling: Watches key cluster metrics and scales in/out automatically to match your workload, achieving pay-as-you-go.
- Scaling in seconds: The compute layer (brokers) is stateless and can scale in/out in seconds, which makes AutoMQ truly serverless. Learn more
- Infinitely scalable: Uses the cloud's object storage as the main storage, so you never need to worry about storage capacity.
- Manage-less: The built-in auto-balancer component balances partitions and network traffic across brokers automatically, so you never need to worry about partition rebalancing. Learn more
- Cost effective: Using object storage as the main storage, taking billing items into consideration during system design, and fully utilizing cloud services all contribute to making AutoMQ 10x cheaper than Apache Kafka. Refer to this report to see how we cut our Apache Kafka bill by 90% on the cloud.
- High performance:
- Low latency: Uses cloud block storage such as AWS EBS as a durable cache layer to accelerate writes.
- High throughput: Uses pre-fetching, batch processing, and parallelism to achieve high throughput.
Refer to the AutoMQ Performance White Paper to see how we achieve this.
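The pre-fetching idea above can be sketched in a few lines. This is an illustrative toy, not AutoMQ's actual implementation: the class and method names are hypothetical, and a real pre-fetcher would issue storage reads asynchronously rather than inline as done here.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of read pre-fetching: when a reader consumes chunk N, chunk N+1
// is fetched ahead of time, so a sequential scan finds it already cached.
public class Prefetcher {
    private final Map<Long, byte[]> cache = new HashMap<>();
    private long blockingFetches = 0; // reads that had to wait on storage

    // Stand-in for a slow object-storage GET.
    private byte[] fetchFromStorage(long chunkId) {
        return new byte[]{(byte) chunkId};
    }

    public byte[] read(long chunkId) {
        byte[] data = cache.remove(chunkId);
        if (data == null) {
            blockingFetches++;               // cache miss: the reader waits
            data = fetchFromStorage(chunkId);
        }
        // Pre-fetch the next chunk (synchronously here, for simplicity).
        cache.putIfAbsent(chunkId + 1, fetchFromStorage(chunkId + 1));
        return data;
    }

    public long blockingFetches() { return blockingFetches; }

    public static void main(String[] args) {
        Prefetcher p = new Prefetcher();
        for (long i = 0; i < 100; i++) p.read(i); // sequential scan
        // Only the very first read blocks; the other 99 hit the pre-fetch cache.
        System.out.println("blocking fetches: " + p.blockingFetches());
    }
}
```

For a sequential workload like a Kafka consumer catching up on a partition, this turns almost every storage round-trip into a cache hit.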
- A superior alternative to Apache Kafka: 100% compatible with Apache Kafka 0.9.x and later, without losing any of its strengths, but cheaper and better.
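Because AutoMQ speaks the Kafka protocol, existing Kafka clients work unchanged; in principle only the bootstrap address changes. A minimal sketch of a producer configuration, using the standard Apache Kafka config keys (the endpoint below is a placeholder, not a real address):

```java
import java.util.Properties;

// Sketch: a Kafka producer config works against AutoMQ as-is.
// Only the bootstrap address differs from an Apache Kafka setup.
public class AutoMQClientConfig {
    // Placeholder endpoint; replace with your AutoMQ bootstrap address.
    static final String BOOTSTRAP = "localhost:9092";

    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", BOOTSTRAP);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("bootstrap.servers"));
    }
}
```

These properties can be passed straight to an `org.apache.kafka.clients.producer.KafkaProducer`, exactly as with an Apache Kafka cluster.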
AutoMQ uses Apache Kafka's LogSegment as the code aspect into which our features are woven. The architecture includes the following main components:
- S3Stream: A streaming library built on object storage, offered by AutoMQ. It is the core component of AutoMQ and is responsible for reading and writing data to object storage. Learn more
- Stream: An abstraction that maps Apache Kafka's LogSegment. A LogSegment's data, indexes, and other metadata map to different types of streams. Learn more
- Durable Cache Layer: AutoMQ uses a small cloud block storage volume, such as AWS EBS, as a durable cache layer to accelerate writes. Note that this is not tiered storage: an AutoMQ broker can be completely decoupled from the durable cache layer. Learn more
- Stream Object: AutoMQ's data is organized into stream objects. Data is read by stream object ID through an index. One stream has one stream object. Learn more
- Stream Set Object: A stream set object is a collection of small stream objects, aimed at reducing the number of API invocations and the metadata size. Learn more
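The rationale behind stream set objects can be sketched with a small simulation: many small stream objects are grouped into larger set objects, so one object-storage PUT replaces many. This is an illustrative model only; the class names and the size cap below are hypothetical, not AutoMQ's actual implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: batching small stream objects into stream set objects
// to cut down object-storage API calls and metadata entries.
public class StreamSetBatching {
    record StreamObject(long streamId, int sizeBytes) {}

    // Group stream objects into set objects capped at maxSetBytes each.
    // Each resulting set object costs one PUT instead of one PUT per object.
    static List<List<StreamObject>> batch(List<StreamObject> objects, int maxSetBytes) {
        List<List<StreamObject>> sets = new ArrayList<>();
        List<StreamObject> current = new ArrayList<>();
        int currentSize = 0;
        for (StreamObject obj : objects) {
            if (!current.isEmpty() && currentSize + obj.sizeBytes() > maxSetBytes) {
                sets.add(current);           // close the full set object
                current = new ArrayList<>();
                currentSize = 0;
            }
            current.add(obj);
            currentSize += obj.sizeBytes();
        }
        if (!current.isEmpty()) sets.add(current);
        return sets;
    }

    public static void main(String[] args) {
        List<StreamObject> small = new ArrayList<>();
        for (int i = 0; i < 100; i++) small.add(new StreamObject(i, 64 * 1024)); // 100 x 64 KiB
        List<List<StreamObject>> sets = batch(small, 16 * 1024 * 1024); // 16 MiB cap (hypothetical)
        // 100 individual PUTs collapse into a single set-object PUT.
        System.out.println("API calls: " + sets.size());
    }
}
```

Since object storage typically bills per request, collapsing many small writes into a few large ones directly reduces both cost and metadata overhead.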
The easiest way to run AutoMQ. You can experience features such as fast partition movement and network traffic auto-balancing. Learn more
Attention: Local mode mocks object storage locally and is not a production-ready deployment. It is for demo and testing purposes only.
Deploy AutoMQ manually with the released tgz files on the cloud, currently compatible with AWS, Aliyun Cloud, Tencent Cloud, Huawei Cloud, and Baidu Cloud. Learn more
You can join the following groups or channels to discuss or ask questions about AutoMQ:
- Ask questions or report bugs via GitHub Issues
- Discuss AutoMQ or Kafka via Slack or the WeChat group
If you've found a problem with AutoMQ, please open a GitHub issue. To contribute to AutoMQ, please see the Code of Conduct and Contributing Guide. We have a list of good first issues to help you get started, gain experience, and get familiar with our contribution process.
Coming soon...
AutoMQ is released under the Business Source License 1.1. When contributing to AutoMQ, you can find the relevant license header in each file.