cluster_linearize_tests fails with GCC 16.1 (fixed in GCC 16.2) #35282

Description

Is there an existing issue for this?

I searched for existing issues and found #32276 / #32325, which look related because they also involve cluster_linearize, MultiIntBitSet, GCC, and array-bounds diagnostics.

However, this report is about cluster_linearize_tests failing in a normal Bitcoin Core v31.0 test build with GCC 16 on Arch Linux, not the Valgrind fuzz CI warning-as-error case addressed there.

Current behaviour

When building Bitcoin Core v31.0 on Arch Linux with the current GCC toolchain, the unit test cluster_linearize_tests fails.

This was found while preparing an Arch Linux packaging merge request that contains unrelated packaging fixes:

https://gitlab.archlinux.org/archlinux/packaging/packages/bitcoin/-/merge_requests/4

  • backporting the upstream Boost >= 1.91 compatibility patch
  • adding missing python to checkdepends

The Arch package maintainer suggested that an upstream commit would be preferred if this test issue needs to be addressed in the package.

Expected behaviour

cluster_linearize_tests should pass with the current GCC toolchain, as it does with GCC 15.2.1 in the same clean Arch Linux chroot setup.

Steps to reproduce

On Arch Linux, build Bitcoin Core v31.0 in a clean chroot with the current GCC toolchain and run the test suite.

In the Arch packaging setup this is done via pkgctl's clean-chroot build, with tests run by the package's check() function:

ctest --test-dir build

The relevant test is:

cluster_linearize_tests
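
To reproduce just this test in an existing build tree, CTest's standard -R name filter and --output-on-failure options can be used (my suggestion, not part of the packaging setup):

ctest --test-dir build -R cluster_linearize_tests --output-on-failure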

Relevant log output

With the older GCC 15.2.1 toolchain, the same package build was tested without the local test patch. In that clean chroot run, cluster_linearize_tests passed and the full test suite completed successfully.

With the current GCC 16 toolchain, cluster_linearize_tests fails. I tested the patch below locally; with it applied, the failing test passes again under the current toolchain.

I do not know whether this is the correct upstream fix; I am including it only as a data point and possible direction.

  --- a/src/test/cluster_linearize_tests.cpp
  +++ b/src/test/cluster_linearize_tests.cpp
  @@ -56,7 +56,7 @@

   void TestOptimalLinearization(std::span<const uint8_t> enc, std::initializer_list<DepGraphIndex> optimal_linearization)
   {
  -    DepGraphIndex tx_count = 0;
  +    DepGraphIndex position_range = 0;
       FastRandomContext rng;

       auto test_fn = [&]<typename SetType>() {
  @@ -100,19 +100,22 @@
               SanityCheck(depgraph, lin);
               BOOST_CHECK(std::ranges::equal(lin, optimal_linearization));
           }
  -        tx_count = depgraph.PositionRange();
  +        position_range = depgraph.PositionRange();
       };

  -    // Always run with 64-bit set types
  -    // - The native one that will be used on this platform.
  -    test_fn.template operator()<BitSet<64>>();
  -    // - The one used on 32-bit platforms.
  -    test_fn.template operator()<bitset_detail::MultiIntBitSet<uint32_t, 2>>();
  -    // - An 8-bit one, which is maximally different in terms of bitset behavior.
  -    test_fn.template operator()<bitset_detail::MultiIntBitSet<uint8_t, 8>>();
  +    // Always run with a set type that can hold all encoded test clusters.
  +    test_fn.template operator()<BitSet<256>>();
  +
  +    // Also run with 64-bit set types if the cluster doesn't use indexes above 63.
  +    if (position_range <= 64) {
  +        // - The native one that will be used on this platform.
  +        test_fn.template operator()<BitSet<64>>();
  +        // - The one used on 32-bit platforms.
  +        test_fn.template operator()<bitset_detail::MultiIntBitSet<uint32_t, 2>>();
  +    }

       // Also run with 32-bit set types if the cluster doesn't use indexes above 31.
  -    if (tx_count <= 32) {
  +    if (position_range <= 32) {
           // - The native one that will be used on this platform.
           test_fn.template operator()<BitSet<32>>();
           // - An 8-bit one, which is maximally different in terms of bitset behavior.
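
For context on what the patch changes, as I read it: each encoded test cluster reports a PositionRange(), and a fixed-capacity set type can only represent positions below its capacity, so the patched test only exercises a set type when the cluster's positions fit into it. Below is a minimal standalone C++ sketch of that guard pattern; TinyBitSet and MaybeRunTest are hypothetical names for illustration, not Bitcoin Core code, and whether the real failure is genuine out-of-range use or a GCC 16.1 compiler issue is not established by this report.

  #include <cstdint>

  // Hypothetical stand-in for a fixed-capacity bitset such as BitSet<64>.
  // Positions must be < BITS; a large enough position indexes past words[],
  // which is undefined behaviour a compiler may diagnose or miscompile.
  template <unsigned BITS>
  struct TinyBitSet {
      uint64_t words[(BITS + 63) / 64]{};
      static constexpr unsigned Size() { return BITS; }
      void Set(unsigned pos) { words[pos / 64] |= uint64_t{1} << (pos % 64); }
  };

  // The guard pattern the patch introduces: only run the test body with a
  // given set type if it can hold every position the decoded cluster uses.
  template <typename SetType>
  void MaybeRunTest(unsigned position_range)
  {
      if (position_range > 0 && position_range <= SetType::Size()) {
          SetType s;
          s.Set(position_range - 1); // highest used position; in range per the check
          // ... the actual test body would run here ...
      }
  }

  int main()
  {
      MaybeRunTest<TinyBitSet<64>>(70);  // skipped: positions up to 69 don't fit in 64 bits
      MaybeRunTest<TinyBitSet<256>>(70); // runs: a 256-bit set holds positions 0..255
  }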

How did you obtain Bitcoin Core

Arch Linux package build: https://gitlab.archlinux.org/archlinux/packaging/packages/bitcoin/-/merge_requests/4

What version of Bitcoin Core are you using?

Bitcoin Core v31.0.

Operating system and version

Arch Linux.

The failing setup uses the current Arch GCC 16 toolchain.

The comparison run used GCC 15.2.1 in the same clean chroot setup, where the unpatched test suite passed:

gcc 15.2.1+r604+g0b99615a8aef-1
gcc-libs 15.2.1+r604+g0b99615a8aef-1
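
The installed toolchain package versions can be listed with pacman, which prints name/version pairs in the format shown above:

pacman -Q gcc gcc-libs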

Additional context

I am not a C++ developer and cannot fully judge whether the local patch above is semantically correct. It was generated with help from an LLM and then tested locally.

The important observations are that the unpatched test suite passes with GCC 15.2.1, that cluster_linearize_tests fails with the GCC 16 toolchain, and that it passes again under GCC 16 once the patch above is applied.

Assisted-by: OpenAI Codex (GPT-5.5, reasoning: high)
