Speed comparison of programming languages

This project compares the speed of different programming languages. It does not aim for a precise calculation of pi; the goal is simply to see how fast each language performs the same computation.
It uses an implementation of the Leibniz formula for π to do the comparison.
Here is a video which explains how it works: Calculating π by hand
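For reference, here is a minimal Python sketch of the idea (the file name and exact structure are illustrative, not the repository's reference implementation):

# leibniz_pi.py - minimal sketch, not the repo's reference implementation.
# Reads the number of iterations from a file and prints the approximation,
# mirroring the "read a file, compute, print" shape described in the FAQ below.

def leibniz_pi(rounds: int) -> float:
    # pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    total = 0.0
    for k in range(rounds):
        total += (-1.0) ** k / (2 * k + 1)
    return 4.0 * total

if __name__ == "__main__":
    with open("rounds.txt") as f:  # illustrative input file holding the iteration count
        rounds = int(f.read().strip())
    print(leibniz_pi(rounds))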

You can find the results here: https://niklas-heer.github.io/speed-comparison/

Disclaimer

I'm no expert in all these languages, so take my results with a grain of salt.
Also, the findings only show how well a language handles floating-point operations, which is just one aspect of a programming language.

You are also welcome to contribute and help me fix my possibly horrible code in some languages. 😄

Rules

The benchmark measures single-threaded computational performance. To keep comparisons fair:

  1. No concurrency/parallelism: Implementations must be single-threaded. No multi-threading, async, or parallel processing.

  2. SIMD is allowed but separate: SIMD optimizations (using wider registers) are permitted but should be separate targets (e.g., swift-simd, cpp-avx2) rather than replacing the standard implementation.

  3. Standard language features: Use idiomatic code for the language. Compiler optimization flags are fine.

  4. Same algorithm: All implementations must use the Leibniz formula as shown in the existing implementations.

Why no concurrency? Concurrency results depend heavily on core count (a 4-core machine and a 64-core machine give vastly different results), making comparisons meaningless. SIMD stays single-threaded; it just processes more data per instruction.

Used hardware

The benchmarks run on Ubicloud standard-4 runners:

  • CPU: 4 vCPUs (2 physical cores) on AMD EPYC 9454P processors
  • RAM: 16 GB
  • Storage: NVMe SSDs
  • OS: Ubuntu 24.04

See Ubicloud Runner Types for more details.

Run it yourself

Everything is run by a Docker container and a bash script which invokes the programs.

To measure the execution time, a Python package is used.
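As a rough illustration only (this is not the project's actual measurement code), timing a single run from Python could look like this, assuming a hypothetical compiled binary ./pi:

# time_run.py - illustrative sketch; the repository's real tooling differs.
import subprocess
import time

start = time.perf_counter()
subprocess.run(["./pi"], check=True)  # hypothetical benchmark binary
elapsed = time.perf_counter() - start
print(f"wall-clock time: {elapsed:.3f} s")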

Requirements

  • Docker
  • Earthly

Run everything

Earthly allows you to run everything with a single command:

earthly +all

This will run all tasks to collect all measurements and then run the analysis.

Collect data

To collect data for all languages run:

earthly +collect-data

To collect data for a single language run:

earthly +rust    # or any other language target

Available language targets

Language targets are auto-discovered from the Earthfile. You can list them with:

./scripts/discover-languages.sh
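As a rough Python illustration of what such discovery could look like (assuming language targets are declared as name: at the start of a line in the Earthfile; the real script may filter targets differently):

# discover_targets.py - illustrative sketch, not the project's actual script.
import re

with open("Earthfile") as f:
    content = f.read()

# Earthly targets are declared as "name:" at the start of a line.
# Note: this also matches non-language targets such as analysis or collect-data.
targets = re.findall(r"^([a-z0-9-]+):", content, flags=re.MULTILINE)
print("\n".join(sorted(targets)))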

Analyse results

To generate the combined CSV and chart from all results:

earthly +analysis

Fast check (subset)

For quick testing, run only a subset of fast languages:

earthly +fast-check   # runs: c, go, rust, cpython

CI/CD

The project uses GitHub Actions with a parallel matrix build:

  1. Auto-discovery: Language targets are automatically detected from the Earthfile
  2. Parallel execution: All 43+ languages run simultaneously in separate jobs
  3. Isolation: Each language gets a fresh runner environment
  4. Results collection: All results are merged and analyzed together
  5. Auto-publish: Results are published to GitHub Pages

PR Commands

Repository maintainers can trigger benchmarks on PRs using comments:

/bench rust go c     # Run specific languages

Labels

  • enable-ci: Trigger full benchmark suite on a PR
  • skip-ci: Skip the fast-check on a PR

Automated Version Updates

This project uses an AI-powered CI workflow to keep all programming languages up to date automatically.

How It Works

  1. Weekly Check: A scheduled workflow runs every Monday at 6 AM UTC
  2. Version Detection: Checks for new versions via:
    • Docker Hub Registry API (for official language images)
    • GitHub Releases API (for languages like Zig, Nim, Gleam; see the sketch after this list)
    • Alpine package index (for Alpine-based packages)
  3. Automated Updates: Claude Code (via OpenRouter) updates the Earthfile with new versions
  4. Validation: Runs a quick benchmark to verify the update compiles and runs correctly
  5. Breaking Changes: If the build fails, Claude Code (Opus) researches and fixes breaking changes (up to 3 attempts)
  6. PR Creation: Creates a PR for review if successful, or an issue describing the failure if not
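To illustrate the GitHub Releases check in step 2, querying the latest release tag of a tracked repository could look roughly like this (authentication, rate limiting, and the mapping to languages are omitted):

# latest_release.py - illustrative sketch of a GitHub Releases version check.
import json
import urllib.request

def latest_release_tag(repo: str) -> str:
    url = f"https://api.github.com/repos/{repo}/releases/latest"
    with urllib.request.urlopen(url) as resp:  # public GitHub REST API endpoint
        return json.load(resp)["tag_name"]

if __name__ == "__main__":
    # Zig is one of the languages listed above as tracked via GitHub Releases.
    print(latest_release_tag("ziglang/zig"))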

Manual Trigger

You can manually trigger a version check:

  1. Go to Actions β†’ Version Check β†’ Run workflow
  2. Optionally specify a single language name to check only that one
  3. Enable "Dry run" to check versions without creating PRs

Configuration

Version sources are defined in scripts/version-sources.json. Each language maps to the following fields (a usage sketch follows the list):

  • source: Where to check for updates (docker, github, alpine, apt)
  • image or repo: The Docker image or GitHub repository
  • earthfile_pattern: Regex to extract current version from Earthfile
  • source_file: The source code file for this language
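As an illustration of how these fields might fit together (the real workflow scripts may differ; this assumes the JSON maps each language name to an object with the keys above, and that earthfile_pattern contains one capture group for the version):

# check_current_versions.py - illustrative sketch, not the project's actual tooling.
import json
import re

with open("scripts/version-sources.json") as f:
    sources = json.load(f)

with open("Earthfile") as f:
    earthfile = f.read()

for language, cfg in sources.items():
    # earthfile_pattern is assumed to capture the current version in group 1.
    match = re.search(cfg["earthfile_pattern"], earthfile)
    current = match.group(1) if match else "unknown"
    print(f"{language}: {current} (checked via {cfg['source']})")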

FAQ

Why do you also count reading a file and printing the output?

Because I think this is a more realistic scenario to compare speeds.

Are the compile times included in the measurements?

No, they are not included, because in real-world use the program would already be compiled before it is run.

Thanks

Contributors

See all contributors on the Contributors page.

Special thanks

sharkdp

For creating hyperfine, which is used for the underlying benchmarking.

Thomas

This project takes inspiration from Thomas, who did a similar comparison on his blog.
