This project compares the speed of different programming languages.
We don't really care about getting a precise value of pi; we only want to see how fast each programming language can compute it.
It uses an implementation of the Leibniz formula for π to do the comparison.
Here is a video which explains how it works: Calculating π by hand
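For reference, here is a minimal Python sketch of the kind of loop involved. It is only an illustration of the algorithm, not one of the benchmarked programs, and the `rounds.txt` file name is an assumption made for this example.

```python
def leibniz_pi(rounds: int) -> float:
    """Approximate pi with the Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    total = 0.0
    sign = 1.0
    for k in range(rounds):
        total += sign / (2.0 * k + 1.0)
        sign = -sign
    return 4.0 * total


if __name__ == "__main__":
    # The benchmarked programs also read the iteration count from a file and
    # print the result; "rounds.txt" is an assumed file name for this sketch.
    with open("rounds.txt") as fh:
        rounds = int(fh.read().strip())
    print(leibniz_pi(rounds))
```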
You can find the results here: https://niklas-heer.github.io/speed-comparison/
I'm no expert in all these languages, so take my results with a grain of salt.
Also, the results only show how well each language handles floating-point operations, which is just one aspect of a programming language.
You are also welcome to contribute and help me fix my possibly horrible code in some languages.
The benchmark measures single-threaded computational performance. To keep comparisons fair:
- No concurrency/parallelism: Implementations must be single-threaded. No multi-threading, async, or parallel processing.
- SIMD is allowed but separate: SIMD optimizations (using wider registers) are permitted, but they should be separate targets (e.g., `swift-simd`, `cpp-avx2`) rather than replacing the standard implementation.
- Standard language features: Use idiomatic code for the language. Compiler optimization flags are fine.
- Same algorithm: All implementations must use the Leibniz formula, as shown in the existing implementations.
Why no concurrency? Concurrency results depend heavily on core count (4-core vs 64-core gives vastly different results), making comparisons meaningless. SIMD stays single-threaded - it just processes more data per instruction.
The benchmarks run on Ubicloud standard-4 runners:
- CPU: 4 vCPUs (2 physical cores) on AMD EPYC 9454P processors
- RAM: 16 GB
- Storage: NVMe SSDs
- OS: Ubuntu 24.04
See Ubicloud Runner Types for more details.
Everything is run in a Docker container, and a bash script invokes the programs.
To measure the execution time, a Python package is used.
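The measurement tooling itself lives in the repository; purely as an illustration of the idea, timing an external program from Python could look like the sketch below. The `./leibniz` binary name is a placeholder, not one of the repository's targets.

```python
import subprocess
import time

# Illustration only: time a single run of an external program from Python.
start = time.perf_counter()
subprocess.run(["./leibniz"], check=True, capture_output=True)
elapsed = time.perf_counter() - start
print(f"elapsed: {elapsed:.6f} s")
```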
The project is orchestrated with Docker and Earthly.
Earthly allows you to run everything with a single command:

`earthly +all`

This will run all tasks to collect all measurements and then run the analysis.
To collect data for all languages, run:

`earthly +collect-data`

To collect data for a single language, run:
`earthly +rust # or any other language target`

Language targets are auto-discovered from the Earthfile. You can list them with:

`./scripts/discover-languages.sh`
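For a rough idea of how such discovery can work, here is a Python sketch that scans the Earthfile for target declarations. The real `./scripts/discover-languages.sh` may work differently, and the excluded (non-language) target names below are assumptions.

```python
import re
from pathlib import Path

# Earthly declares targets at column 0 as "name:", so a regex over the
# Earthfile can list them. Which targets are infrastructure rather than
# languages is an assumption in this sketch.
NON_LANGUAGE_TARGETS = {"all", "collect-data", "analysis", "fast-check"}


def discover_language_targets(earthfile: str = "Earthfile") -> list[str]:
    text = Path(earthfile).read_text()
    targets = re.findall(r"^([a-z0-9_-]+):\s*$", text, flags=re.MULTILINE)
    return [t for t in targets if t not in NON_LANGUAGE_TARGETS]


if __name__ == "__main__":
    print("\n".join(discover_language_targets()))
```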
To generate the combined CSV and chart from all results:

`earthly +analysis`

For quick testing, run only a subset of fast languages:

`earthly +fast-check # runs: c, go, rust, cpython`

The project uses GitHub Actions with a parallel matrix build:
- Auto-discovery: Language targets are automatically detected from the Earthfile
- Parallel execution: All 43+ languages run simultaneously in separate jobs
- Isolation: Each language gets a fresh runner environment
- Results collection: All results are merged and analyzed together
- Auto-publish: Results are published to GitHub Pages
Repository maintainers can trigger benchmarks on PRs using comments:
- `/bench rust go c`: Run specific languages
- `enable-ci`: Trigger the full benchmark suite on a PR
- `skip-ci`: Skip the fast-check on a PR
This project uses an AI-powered CI workflow to keep all programming languages up to date automatically.
- Weekly Check: A scheduled workflow runs every Monday at 6 AM UTC
- Version Detection: Checks for new versions via the sources below (a sketch of such a check follows after this list):
  - Docker Hub Registry API (for official language images)
  - GitHub Releases API (for languages like Zig, Nim, Gleam)
  - Alpine package index (for Alpine-based packages)
- Automated Updates: Claude Code (via OpenRouter) updates the Earthfile with new versions
- Validation: Runs a quick benchmark to verify the update compiles and runs correctly
- Breaking Changes: If the build fails, Claude Code (Opus) researches and fixes breaking changes (up to 3 attempts)
- PR Creation: Creates a PR for review if successful, or an issue describing the failure if not
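As an illustration of the kind of check the Version Detection step performs, here is a minimal Python sketch that queries the GitHub Releases API for the latest tag of a repository. The repository name used below is only an example; which repos and tag formats the workflow actually checks are defined in `scripts/version-sources.json`.

```python
import json
import urllib.request


def latest_github_release(repo: str) -> str:
    """Return the tag name of the latest published release of a GitHub repo."""
    url = f"https://api.github.com/repos/{repo}/releases/latest"
    request = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(request) as response:
        data = json.load(response)
    return data["tag_name"]


if __name__ == "__main__":
    # "ziglang/zig" is just an example repository for this sketch.
    print(latest_github_release("ziglang/zig"))
```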
You can manually trigger a version check:
- Go to Actions → Version Check → Run workflow
- Optionally specify a single language name to check only that one
- Enable "Dry run" to check versions without creating PRs
Version sources are defined in scripts/version-sources.json. Each language maps to:
- `source`: Where to check for updates (docker, github, alpine, apt)
- `image` or `repo`: The Docker image or GitHub repository
- `earthfile_pattern`: Regex to extract the current version from the Earthfile
- `source_file`: The source code file for this language
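For illustration, a single entry might look like the sketch below. Only the keys come from the description above; the concrete values (image name, regex, file path) are hypothetical.

```python
import json

# Hypothetical example of one entry in scripts/version-sources.json.
example_entry = {
    "rust": {
        "source": "docker",                            # docker, github, alpine or apt
        "image": "rust",                               # or "repo" for GitHub-based sources
        "earthfile_pattern": r"rust:(\d+\.\d+\.\d+)",  # regex for the pinned version
        "source_file": "src/leibniz.rs",               # assumed path for this example
    }
}

# Loading the real file would look roughly like this:
with open("scripts/version-sources.json") as fh:
    version_sources = json.load(fh)

for language, config in version_sources.items():
    print(language, "->", config["source"])
```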
Why do you also count reading a file and printing the output?
Because I think this is a more realistic scenario to compare speeds.
Are the compile times included in the measurements?
No, they are not included, because in a real-world scenario the program would be compiled ahead of time anyway.
See all contributors on the Contributors page.
For creating hyperfine, which is used for the fundamental benchmarking.
This project takes inspiration from Thomas, who did a similar comparison on his blog.