
Releases: confluentinc/librdkafka

v2.12.1

21 Oct 15:59
e1db7ea


librdkafka v2.12.1 is a maintenance release:

  • Restored compatibility of the macOS binaries with macOS 13 and 14 (#5219).

Fixes

General fixes

  • Fix to restore macOS 13 and 14 compatibility in the prebuilt binaries shipped in librdkafka.redist.
    Happening since 2.12.0 (#5219).

Checksums

Release asset checksums:

  • v2.12.1.zip SHA256 da7571a0c1dc374aabb18af6ca01411d4bc597d321977980c8d3211ec5adf696
  • v2.12.1.tar.gz SHA256 ec103fa05cb0f251e375f6ea0b6112cfc9d0acd977dc5b69fdc54242ba38a16f

v2.12.0

08 Oct 18:36
2f22088


librdkafka v2.12.0 is a feature release:

KIP-848 – General Availability

Starting with librdkafka 2.12.0, the next generation consumer group rebalance protocol defined in KIP-848 is production-ready. Please refer to the migration guide for moving from the classic protocol to the consumer protocol.

Note: The new consumer group protocol defined in KIP-848 is not enabled by default. There are a few contract changes associated with the new protocol that might be breaking. The group.protocol configuration property dictates whether to use the new consumer protocol or the older classic protocol; it defaults to classic if not set.
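
For illustration, a minimal sketch of opting in to the new protocol through the group.protocol property (the broker address and group name are placeholders, not part of the release notes):

    /* Build a consumer that uses the KIP-848 consumer group protocol. */
    #include <librdkafka/rdkafka.h>
    #include <stdio.h>

    int main(void) {
            char errstr[512];
            rd_kafka_conf_t *conf = rd_kafka_conf_new();

            if (rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
                rd_kafka_conf_set(conf, "group.id", "my-group",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
                /* Opt in to KIP-848; omit or set to "classic" for the default. */
                rd_kafka_conf_set(conf, "group.protocol", "consumer",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
                    fprintf(stderr, "%s\n", errstr);
                    rd_kafka_conf_destroy(conf);
                    return 1;
            }

            rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_CONSUMER, conf, errstr, sizeof(errstr));
            if (!rk) {
                    fprintf(stderr, "%s\n", errstr);
                    return 1;
            }

            /* ... subscribe and poll here ... */
            rd_kafka_destroy(rk);
            return 0;
    }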

Enhancements and Fixes

  • Support for OAUTHBEARER metadata-based authentication types,
    starting with Azure IMDS; an introduction is available (#5155).
  • Fix for an issue reading the accepted compression types in the
    GetTelemetrySubscriptions response on big-endian architectures (#5183, @paravoid).
  • Fix for the KIP-1102 time-based re-bootstrap condition (#5177).
  • Fix for discarding the member epoch in a consumer group heartbeat response when leaving with an in-flight heartbeat (#4672).
  • Fix for an error being raised after a commit due to an existing error in the topic partition (#4672).
  • Fix for a double free of headers in the rd_kafka_produceva method (@blindspotbounty, #4628).
  • Fix to ensure rd_kafka_query_watermark_offsets enforces the specified timeout and does not continue beyond timeout expiry (#5201).
  • New walkthrough in the Wiki about configuring Kafka cross-realm authentication between Windows SSPI and MIT Kerberos.

Fixes

General fixes

  • Issues: #5178.
    Fix for the KIP-1102 time-based re-bootstrap condition.
    Re-bootstrap is now triggered only after metadata.recovery.rebootstrap.trigger.ms
    has passed since the first metadata refresh request following the last successful
    metadata response. Previously the interval was measured from the last successful
    metadata response, so it could overlap with the periodic topic.metadata.refresh.interval.ms
    and trigger a re-bootstrap even when not needed.
    Happening since 2.11.0 (#5177).
  • Issues: #4878.
    Fix to ensure rd_kafka_query_watermark_offsets enforces the specified timeout and does not continue beyond timeout expiry.
    Happening since 2.3.0 (#5201).
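
For reference, a minimal sketch of the timeout-bound watermark query fixed above (topic name, partition and timeout value are illustrative; rk is assumed to be an existing rd_kafka_t handle):

    #include <inttypes.h>
    #include <stdio.h>
    #include <librdkafka/rdkafka.h>

    /* Query the low/high watermarks for one partition, bounded by a 5 s timeout. */
    static void query_watermarks(rd_kafka_t *rk) {
            int64_t low = 0, high = 0;
            rd_kafka_resp_err_t err = rd_kafka_query_watermark_offsets(
                    rk, "my_topic", 0 /* partition */, &low, &high, 5000 /* timeout_ms */);
            if (err)
                    fprintf(stderr, "query_watermark_offsets failed: %s\n",
                            rd_kafka_err2str(err));
            else
                    printf("low=%" PRId64 " high=%" PRId64 "\n", low, high);
    }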

Telemetry fixes

  • Issues: #5179.
    Fix for an issue in GetTelemetrySubscriptions on big-endian
    architectures where wrong values were read as the
    accepted compression types, causing the metrics to be sent uncompressed.
    Happening since 2.5.0. Since 2.10.1, unit tests were failing when run on
    big-endian architectures (#5183, @paravoid).

Consumer fixes

  • Issues: #5199.
    Fixed an issue where topic partition errors were not cleared after a successful
    commit. Previously, a partition could retain a stale error state even though the
    most recent commit succeeded, causing misleading error reporting. Now, successful
    commits correctly clear the error state for the affected partitions.
    Happening since 2.4.0 (#4672).
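
For context, a minimal synchronous commit sketch (topic, partition and offset are placeholders; rk is assumed to be an existing consumer handle). With this fix, a successful commit also leaves no stale error state on the affected partitions:

    #include <stdio.h>
    #include <librdkafka/rdkafka.h>

    /* Commit a specific offset for one partition synchronously and check the result. */
    static void commit_offset(rd_kafka_t *rk) {
            rd_kafka_topic_partition_list_t *offsets =
                    rd_kafka_topic_partition_list_new(1);
            rd_kafka_topic_partition_list_add(offsets, "my_topic", 0)->offset = 1234;

            rd_kafka_resp_err_t err = rd_kafka_commit(rk, offsets, 0 /* sync */);
            if (err)
                    fprintf(stderr, "commit failed: %s\n", rd_kafka_err2str(err));

            rd_kafka_topic_partition_list_destroy(offsets);
    }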

Producer fixes

  • Issues: #4627.
    Fix for a double free of headers in the rd_kafka_produceva method in cases where the partition doesn't exist.
    Happening since 1.x (@blindspotbounty, #4628).
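
For illustration, a sketch of producing a message with a header through rd_kafka_produceva (topic, payload and header values are placeholders; the rd_kafka_vu_t union member names are assumed to match rdkafka.h and should be checked against the installed header):

    #include <stdio.h>
    #include <librdkafka/rdkafka.h>

    /* Produce one message with a single header via rd_kafka_produceva(). */
    static void produce_with_header(rd_kafka_t *rk) {
            /* Static storage: the payload is not copied here (no RD_KAFKA_MSG_F_COPY),
             * so it must remain valid until delivery. */
            static const char payload[] = "hello";
            rd_kafka_vu_t vus[3];

            vus[0].vtype = RD_KAFKA_VTYPE_TOPIC;
            vus[0].u.cstr = "my_topic";

            vus[1].vtype = RD_KAFKA_VTYPE_VALUE;
            vus[1].u.mem.ptr = (void *)payload;
            vus[1].u.mem.size = sizeof(payload) - 1;

            vus[2].vtype = RD_KAFKA_VTYPE_HEADER;
            vus[2].u.header.name = "trace-id";
            vus[2].u.header.val = "abc123";
            vus[2].u.header.size = -1; /* NUL-terminated value */

            rd_kafka_error_t *error = rd_kafka_produceva(rk, vus, 3);
            if (error) {
                    fprintf(stderr, "produceva failed: %s\n",
                            rd_kafka_error_string(error));
                    rd_kafka_error_destroy(error);
            }
    }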

Checksums

Release asset checksums:

  • v2.12.0.zip SHA256 9b2f373e03f3d5d87c2075b3ce07ee9ea3802eea00cea41b99d8351a68d8a062
  • v2.12.0.tar.gz SHA256 1355d81091d13643aed140ba0fe62437c02d9434b44e90975aaefab84c2bf237

v2.11.1

18 Aug 20:51
69b1865


librdkafka v2.11.1 is a maintenance release:

  • Made the conditions for enabling features future-proof (#5130).
  • Avoid returning an all brokers down error on planned disconnections (#5126).
  • An "all brokers down" error isn't returned when the client hasn't tried to connect
    to all brokers since the last successful connection (#5126).

Fixes

General fixes

  • Issues: #4948, #4956.
    Made the conditions for enabling features future-proof, allowing
    RPC versions to be removed in a subsequent Apache Kafka version without disabling
    features. The existing checks matched a single version instead of
    a range and failed if the older version was removed.
    Happening since 1.x (#5130).

  • Issues: #5142.
    Avoid returning an all brokers down error on planned disconnections.
    This is done by not counting planned disconnections, such as idle
    disconnections, broker host changes and similar, as events that can bring
    the client to the "all brokers down" state, which returns an error and,
    since 2.10.0, can start a re-bootstrap sequence.
    Happening since 1.x (#5126).

  • Issues: #5142.
    An "all brokers down" error isn't returned when the client hasn't tried to connect
    to all brokers since the last successful connection. This happened because the down
    state is cached and can be stale when a connection to that
    particular broker isn't needed. Solved by resetting the cached broker down state when any
    broker connects successfully, so that broker has to be tried again.
    Happening since 1.x (#5126).

Checksums

Release asset checksums:

  • v2.11.1.zip SHA256 4a63e4422e5f5bbbb47f0ac1200e2ebd1f91b7b23f0de1bc625810c943fb870e
  • v2.11.1.tar.gz SHA256 a2c87186b081e2705bb7d5338d5a01bc88d43273619b372ccb7bb0d264d0ca9f

v2.11.0

03 Jul 16:26
c56a3e6


librdkafka v2.11.0 is a feature release:

  • KIP-1102 Enable clients to rebootstrap based on timeout or error code (#4981);
    see the configuration sketch after this list.
  • KIP-1139 Add support for OAuth jwt-bearer grant type (#4978).
  • Fix for poll ratio calculation in case the queues are forwarded (#5017).
  • Fix for a data race when buffer queues are reset: the atomic counters are now
    reset instead of reinitialized (#4718).
  • Features BROKER_BALANCED_CONSUMER and SASL_GSSAPI don't depend on
    JoinGroup v0 anymore, which is missing in AK 4.0 and CP 8.0 (#5131).
  • Improve HTTPS CA certificate configuration by probing several paths
    when OpenSSL is statically linked and by providing a way to customize their location
    or value (#5133).
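
For the KIP-1102 item above, a minimal configuration sketch (the broker address and the 30000 ms value are placeholders; metadata.recovery.rebootstrap.trigger.ms is the property referenced in the 2.12.0 fix notes):

    #include <stdio.h>
    #include <librdkafka/rdkafka.h>

    /* Tune how long the client waits, after a metadata refresh request that follows
     * the last successful metadata response, before re-bootstrapping. */
    static rd_kafka_conf_t *rebootstrap_conf(void) {
            char errstr[512];
            rd_kafka_conf_t *conf = rd_kafka_conf_new();

            if (rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
                rd_kafka_conf_set(conf, "metadata.recovery.rebootstrap.trigger.ms",
                                  "30000", errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
                    fprintf(stderr, "%s\n", errstr);
                    rd_kafka_conf_destroy(conf);
                    return NULL;
            }
            return conf;
    }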

Fixes

General fixes

  • Issues: #4522.
    A data race happened when emptying the buffers of a failing broker, in its thread,
    while the statistics callback in the main thread was gathering the buffer counts.
    Solved by resetting the atomic counters instead of reinitializing them.
    Happening since 1.x (#4718).
  • Issues: #4948.
    Features BROKER_BALANCED_CONSUMER and SASL_GSSAPI don't depend on
    JoinGroup v0 anymore, which is missing in AK 4.0 and CP 8.0. This PR partially
    fixes the linked issue; a complete fix for all features will follow.
    The remaining fixes are necessary only for a subsequent Apache Kafka major
    version (e.g. AK 5.x).
    Happening since 1.x (#5131).

Telemetry fixes

  • Issues: #5109.
    Fix for the poll ratio calculation in case the queues are forwarded.
    The poll ratio is now calculated per queue instead of per instance,
    avoiding calculation problems caused by sharing the same
    field.
    Happens since 2.6.0 (#5017).

Checksums

Release asset checksums:

  • v2.11.0.zip SHA256 9e76a408f0ed346f21be5e2df58b672d07ff9c561a5027f16780d1b26ef24683
  • v2.11.0.tar.gz SHA256 592a823dc7c09ad4ded1bc8f700da6d4e0c88ffaf267815c6f25e7450b9395ca

v2.10.1

11 Jun 11:39
2af6749


librdkafka v2.10.1 is a maintenance release:

  • Fix to add locks when updating the metadata cache for the consumer
    when no broker connection is available (@marcin-krystianc, #5066).
  • Fix to the re-bootstrap case when bootstrap.servers is NULL and
    brokers were added manually through rd_kafka_brokers_add (#5067).
  • Fix an issue where the first message to any topic produced via producev or
    produceva was delivered late (by up to 1 second) (#5032).
  • Fix for a loop of re-bootstrap sequences in case the client reaches the
    all brokers down state (#5086).
  • Fix for frequent disconnections on push telemetry requests
    with particular metric configurations (#4912).
  • Avoid copying outside buffer boundaries when reading metric names in a telemetry
    subscription (#5105).
  • Metrics aren't duplicated when multiple prefixes match them (#5104).

Fixes

General fixes

  • Issues: #5088.
    Fix for a loop of re-bootstrap sequences when the client reaches the
    all brokers down state. The client kept selecting the
    bootstrap brokers, given they had no connection attempt, and didn't
    re-connect to the learned ones. When this happens, a broker restart
    can break the loop for clients using the affected version.
    Fixed by giving a higher chance to connect to the learned brokers
    even if there are new ones that were never tried.
    Happens since 2.10.0 (#5086).
  • Issues: #5057.
    Fix for the re-bootstrap case when bootstrap.servers is NULL and
    brokers were added manually through rd_kafka_brokers_add.
    Avoids a segmentation fault in this case.
    Happens since 2.10.0 (#5067).
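
For reference, a minimal sketch of the manual broker-add path mentioned above (broker addresses are placeholders):

    #include <stdio.h>
    #include <librdkafka/rdkafka.h>

    /* Create a handle without bootstrap.servers and add brokers manually. */
    static rd_kafka_t *create_with_manual_brokers(void) {
            char errstr[512];
            rd_kafka_conf_t *conf = rd_kafka_conf_new(); /* no bootstrap.servers set */

            rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
            if (!rk) {
                    fprintf(stderr, "%s\n", errstr);
                    return NULL;
            }
            if (rd_kafka_brokers_add(rk, "broker1:9092,broker2:9092") == 0)
                    fprintf(stderr, "no valid brokers could be added\n");
            return rk;
    }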

Producer fixes

  • With producev or produceva, the producer did not enqueue a leader
    query metadata request immediately, but rather waited for the 1 second
    timer to kick in. This could delay the sending of the first message to a topic
    by up to 1 second.
    Happens since 1.x (#5032).
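
A minimal produce-and-flush sketch for the first-message scenario above (topic and payload are placeholders; rk is assumed to be an existing producer handle):

    #include <stdio.h>
    #include <string.h>
    #include <librdkafka/rdkafka.h>

    /* Produce the first message to a topic and wait for delivery. */
    static void produce_first_message(rd_kafka_t *rk) {
            const char *payload = "hello";
            rd_kafka_resp_err_t err = rd_kafka_producev(
                    rk,
                    RD_KAFKA_V_TOPIC("my_topic"),
                    RD_KAFKA_V_VALUE((void *)payload, strlen(payload)),
                    RD_KAFKA_V_MSGFLAGS(RD_KAFKA_MSG_F_COPY), /* copy the payload */
                    RD_KAFKA_V_END);
            if (err)
                    fprintf(stderr, "producev failed: %s\n", rd_kafka_err2str(err));

            /* With the fix, the leader metadata query is enqueued immediately,
             * so this flush no longer incurs the extra ~1 s wait. */
            rd_kafka_flush(rk, 10000 /* timeout_ms */);
    }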

Consumer fixes

  • Issues: #5051.
    Fix to add locks when updating the metadata cache for the consumer.
    Without them, memory corruption or use-after-free could occur when
    there's no broker connection and the consumer
    group metadata needs to be updated.
    Happens since 2.10.0 (#5066).

Telemetry fixes

  • Issues: #5106.
    Fix for frequent disconnections on push telemetry requests
    with particular metric configurations.
    A NULL payload was sent in a push telemetry request when
    an empty one was needed. This caused a disconnection every time the
    push was sent, but only when metrics were requested and
    some metrics matched the producer but none the consumer,
    or the other way around.
    Happens since 2.5.0 (#4912).
  • Issues: #5102.
    Avoid copying outside buffer boundaries when reading metric names in a telemetry
    subscription, which could cause some metrics not to be matched.
    Happens since 2.5.0 (#5105).
  • Issues: #5103.
    Telemetry metrics aren't duplicated when multiple prefixes match them.
    Fixed by keeping track of the metrics that have already matched.
    Happens since 2.5.0 (#5104).

Checksums

Release asset checksums:

  • v2.10.1.zip SHA256 7cb72c4f3d162f50d30d81fd7f7ba0f3d9e8ecd09d9b4c5af7933314e24dd0ba
  • v2.10.1.tar.gz SHA256 75f59a2d948276504afb25bcb5713a943785a413b84f9099d324d26b2021f758

v2.10.0

17 Apr 19:28
c024ac1


librdkafka v2.10.0 is a feature release:

KIP-848 – Now in Preview

  • KIP-848 has transitioned from Early Access to Preview.
  • Added support for regex-based subscriptions.
  • Implemented client-side member ID generation as per KIP-1082.
  • rd_kafka_DescribeConsumerGroups() now supports KIP-848-style consumer groups. Two new fields have been added:
    • Group type – Indicates whether the group is classic or consumer.
    • Target assignment – Applicable only to consumer protocol groups (defaults to NULL).
  • Group configuration is now supported in AlterConfigs, IncrementalAlterConfigs, and DescribeConfigs. (#4939)
  • Added Topic Authorization Error support in the ConsumerGroupHeartbeat response.
  • Removed usage of the partition.assignment.strategy property for the consumer group protocol. An error will be raised if this is set with group.protocol=consumer.
  • Deprecated and disallowed the following properties for the consumer group protocol:
    • session.timeout.ms
    • heartbeat.interval.ms
    • group.protocol.type
      Attempting to set any of these will result in an error.
  • Enhanced handling for subscribe() and unsubscribe() edge cases.

Note

The KIP-848 consumer is currently in Preview and should not be used in production environments. The implementation is feature complete, but the contract could see minor changes before General Availability.
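
For illustration, a minimal sketch of a regex-based subscription under the new protocol (broker address, group id and topic pattern are placeholders; topic names starting with "^" are treated as regular expressions):

    #include <stdio.h>
    #include <librdkafka/rdkafka.h>

    /* Subscribe to every topic matching a pattern using group.protocol=consumer. */
    static rd_kafka_t *subscribe_regex(void) {
            char errstr[512];
            rd_kafka_conf_t *conf = rd_kafka_conf_new();
            rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                              errstr, sizeof(errstr));
            rd_kafka_conf_set(conf, "group.id", "my-group", errstr, sizeof(errstr));
            rd_kafka_conf_set(conf, "group.protocol", "consumer", errstr, sizeof(errstr));

            rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_CONSUMER, conf, errstr, sizeof(errstr));
            if (!rk) {
                    fprintf(stderr, "%s\n", errstr);
                    return NULL;
            }
            rd_kafka_poll_set_consumer(rk);

            rd_kafka_topic_partition_list_t *topics = rd_kafka_topic_partition_list_new(1);
            rd_kafka_topic_partition_list_add(topics, "^my_prefix.*", RD_KAFKA_PARTITION_UA);
            rd_kafka_resp_err_t err = rd_kafka_subscribe(rk, topics);
            if (err)
                    fprintf(stderr, "subscribe failed: %s\n", rd_kafka_err2str(err));
            rd_kafka_topic_partition_list_destroy(topics);
            return rk;
    }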

Enhancements and Fixes

  • Identify brokers only by broker id (#4557, @mfleming)
  • Remove unavailable brokers and their thread (#4557, @mfleming)
  • Commits during a cooperative incremental rebalance no longer cause
    a lost assignment if the generation id was bumped in between (#4908).
  • Fix for librdkafka yielding before timeouts had been reached (#4970)
  • Removed a 500ms latency when a consumer partition switches to a different
    leader (#4970)
  • The mock cluster implementation removes brokers from Metadata response
    when they're not available, this simulates better the actual behavior of
    a cluster that is using KRaft (#4970).
  • Doesn't remove topics from the cache on temporary Metadata errors but only
    on metadata cache expiry (#4970).
  • Doesn't mark the topic as unknown if it had been marked as existent earlier
    and topic.metadata.propagation.max.ms hasn't passed yet (@marcin-krystianc, #4970).
  • Doesn't update partition leaders if the topic in metadata
    response has errors (#4970).
  • Only topic authorization errors in a metadata response are considered
    permanent and are returned to the user (#4970).
  • The function rd_kafka_offsets_for_times refreshes leader information
    if the error requires it, allowing it to succeed on
    subsequent manual retries (#4970).
  • Deprecated api.version.request, api.version.fallback.ms and
    broker.version.fallback configuration properties (#4970).
  • When the consumer is closed before destroying the client, the operations queue
    isn't purged anymore, as it contains operations
    unrelated to the consumer group (#4970).
  • When making multiple changes to the consumer subscription in a short time,
    no unknown topic error is returned for topics that are in the new subscription but weren't in the previous one (#4970).
  • Prevent metadata cache corruption when topic id changes
    (@kwdubuc, @marcin-krystianc, @GerKr, #4970).
  • Fix for the case where a metadata refresh enqueued on an unreachable broker
    prevents refreshing the controller or the coordinator until that broker
    becomes reachable again (#4970).
  • Remove a one second wait after a partition fetch is restarted following a
    leader change and offset validation (#4970).
  • The Nagle algorithm (TCP_NODELAY) on broker sockets is now disabled
    by default (#4986).

Fixes

General fixes

  • Issues: #4212.
    Identify brokers only by broker id, as the Java client does, avoiding
    the lookup of a broker with the same hostname to reuse its thread
    and connection.
    Happens since 1.x (#4557, @mfleming).
  • Issues: #4557.
    Remove brokers not reported in a metadata call, along with their thread.
    This avoids selecting unavailable brokers for a new connection when
    no broker is available. We cannot tell whether a broker was removed
    temporarily or permanently, so it is always removed and will be added back when
    it becomes available again.
    Happens since 1.x (#4557, @mfleming).
  • Issues: #4970
    librdkafka code using cnd_timedwait was yielding before a timeout occurred
    without the condition being fulfilled because of spurious wake-ups.
    Solved by verifying with a monotonic clock that the expected point in time
    was reached and calling the function again if needed.
    Happens since 1.x (#4970).
  • Issues: #4970.
    Doesn't remove topics from the cache on temporary Metadata errors but only
    on metadata cache expiry. This allows the client to continue working
    during temporary problems with the Kafka metadata plane.
    Happens since 1.x (#4970).
  • Issues: #4970.
    Doesn't mark the topic as unknown if it had been marked as existent earlier
    and topic.metadata.propagation.max.ms hasn't passed yet. This achieves the
    property's expected effect even if a different broker had
    previously reported the topic as existent.
    Happens since 1.x (@marcin-krystianc, #4970).
  • Issues: #4907.
    Doesn't update partition leaders if the topic in the metadata
    response has errors. This is in line with what the Java client does and
    avoids segmentation faults for unknown partitions.
    Happens since 1.x (#4970).
  • Issues: #4970.
    Only topic authorization errors in a metadata response are considered
    permanent and are returned to the user. This is in line with what the Java client
    does and avoids returning to the user an error that wasn't meant to be
    permanent.
    Happens since 1.x (#4970).
  • Issues: #4964, #4778
    Prevent metadata cache corruption when topic id for the same topic name
    changes. Solved by correctly removing the entry with the old topic id from metadata cache
    to prevent subsequent use-after-free.
    Happens since 2.4.0 (@kwdubuc, @marcin-krystianc, @GerKr, #4970).
  • Issues: #4970.
    Fix for the case where a metadata refresh enqueued on an unreachable broker
    prevents refreshing the controller or the coordinator until that broker
    becomes reachable again. Given that the request continues to be retried on that
    broker, the counter for refreshing complete broker metadata doesn't reach
    zero and prevents the client from obtaining the new controller, group coordinator or transactional coordinator.
    This causes a series of debug messages like
    "Skipping metadata request: ... full request already in-transit", until
    the broker the request is enqueued on is up again.
    Solved by not retrying these kinds of metadata requests.
    Happens since 1.x (#4970).
  • The Nagle algorithm (TCP_NODELAY) is now disabled by default. It caused a
    large increase in latency for some use cases, for example when using an
    SSL connection.
    For efficient batching, the application should use linger.ms,
    batch.size, etc.
    Happens since: 0.x (#4986).
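
As a sketch of the batching guidance above (all values are examples; socket.nagle.disable is assumed to be the configuration property controlling TCP_NODELAY on broker sockets):

    #include <stdio.h>
    #include <librdkafka/rdkafka.h>

    /* Rely on application-level batching instead of the Nagle algorithm. */
    static rd_kafka_conf_t *batching_conf(void) {
            char errstr[512];
            rd_kafka_conf_t *conf = rd_kafka_conf_new();

            if (rd_kafka_conf_set(conf, "socket.nagle.disable", "true",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
                rd_kafka_conf_set(conf, "linger.ms", "5",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
                rd_kafka_conf_set(conf, "batch.size", "65536",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
                    fprintf(stderr, "%s\n", errstr);
                    rd_kafka_conf_destroy(conf);
                    return NULL;
            }
            return conf;
    }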

Consumer fixes

  • Issues: #4059.
    Commits during a cooperative incremental rebalance could cause a lost
    assignment if the generation id was bumped by a second join
    group request.
    Solved by not rejoining the group when an illegal generation error happens
    during a rebalance.
    Happening since v1.6.0 (#4908).
  • Issues: #4970.
    When switching to a different leader, a consumer could wait 500ms
    (fetch.error.backoff.ms) before starting to fetch again, because the fetch backoff wasn't reset when moving to the new broker.
    Solved by resetting it, given there's no need to back off
    the first fetch on a different node. This allows faster leader
    switches.
    Happens since 1.x (#4970).
  • Issues: #4970.
    The function rd_kafka_offsets_for_times refreshes leader information
    if the error requires it, allowing it to succeed on
    subsequent manual retries (see the sketch after this list). Similar to the fix
    done in 2.3.0 for rd_kafka_query_watermark_offsets. Additionally, the partition's
    current leader epoch is taken from the metadata cache instead of
    from the passed partitions.
    Happens since 1.x (#4970).
  • Issues: #4970.
    When the consumer is closed before destroying the client, the operations queue
    isn't purged anymore, as it contains operations
    unrelated to the consumer group.
    Happens since 1.x (#4970).
  • Issues: #4970.
    When making multiple changes to the consumer subscription in a short time,
    no unknown topic error is returned for topics that are in the new subscription
    but weren't in the previous one. This was due to the metadata request for the
    previous subscription.
    Happens since 1.x (#4970).
  • Issues: #4970
    Remove a one second wait after a partition fetch is restarted following a
    leader change and offset validation. This is done by resetting the fetch
    error backoff and waking up the delegated broker if present.
    Happens since 2.1.0 (#4970).
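
The sketch referenced from the rd_kafka_offsets_for_times item above (topic, partition and timestamp are placeholders; rk is assumed to be an existing handle):

    #include <stdio.h>
    #include <librdkafka/rdkafka.h>

    /* Look up the earliest offsets at or after a given timestamp (milliseconds). */
    static void offsets_for_time(rd_kafka_t *rk) {
            rd_kafka_topic_partition_list_t *parts =
                    rd_kafka_topic_partition_list_new(1);
            /* On input the offset field carries the timestamp to query. */
            rd_kafka_topic_partition_list_add(parts, "my_topic", 0)->offset =
                    1700000000000LL;

            rd_kafka_resp_err_t err = rd_kafka_offsets_for_times(rk, parts, 10000);
            if (err)
                    fprintf(stderr, "offsets_for_times failed: %s\n",
                            rd_kafka_err2str(err));
            else
                    /* On success each offset field holds the resolved offset. */
                    printf("resolved offset: %lld\n",
                           (long long)parts->elems[0].offset);

            rd_kafka_topic_partition_list_destroy(parts);
    }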

Note: there was no v2.9.0 librdkafka release;
it was a dependent clients release only.

Checksums

Release asset checksums:

  • v2.10.0.zip SHA256 e30944f39b353ee06e70861348011abfc32d9ab6ac850225b0666e9d97b9090d
  • v2.10.0.tar.gz ...

v2.8.0

07 Jan 21:23
b4c6085


librdkafka v2.8.0 is a maintenance release:

  • Socket options are now all set before connection (#4893).
  • Client certificate chain is now sent when using ssl.certificate.pem
    or ssl_certificate or ssl.keystore.location (#4894).
  • Avoid sending client certificates whose chain doesn't match the broker's
    trusted root certificates (#4900).
  • Fixes to allow migrating partitions to leaders with the same leader epoch,
    or a NULL leader epoch (#4901).
  • Support versions of OpenSSL without the ENGINE component (Chris Novakovic, #3535
    and @remicollet, #4911).

Fixes

General fixes

  • Socket options are now all set before connection, as the documentation
    says this is needed for socket buffers to take effect, even if in some
    cases they could take effect even after connection.
    Happening since v0.9.0 (#4893).
  • Issues: #3225.
    The client certificate chain is now sent when using ssl.certificate.pem
    or ssl_certificate or ssl.keystore.location.
    Without it, the broker must explicitly add any intermediate certificate
    authority certificates to its truststore to be able to accept the client
    certificate.
    Happens since: 1.x (#4894).
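
A minimal TLS configuration sketch related to the certificate chain fix above (file paths are placeholders; the certificate file is assumed to contain the leaf certificate followed by its intermediates, and ssl.certificate.pem or ssl.keystore.location are the alternative ways to provide it that are covered by the fix):

    #include <stdio.h>
    #include <librdkafka/rdkafka.h>

    /* Client TLS configuration sending the full certificate chain. */
    static rd_kafka_conf_t *tls_conf(void) {
            char errstr[512];
            rd_kafka_conf_t *conf = rd_kafka_conf_new();

            if (rd_kafka_conf_set(conf, "security.protocol", "ssl",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
                rd_kafka_conf_set(conf, "ssl.certificate.location",
                                  "/etc/kafka/client-chain.pem", /* leaf + intermediates */
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
                rd_kafka_conf_set(conf, "ssl.key.location", "/etc/kafka/client-key.pem",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
                rd_kafka_conf_set(conf, "ssl.ca.location", "/etc/kafka/ca.pem",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
                    fprintf(stderr, "%s\n", errstr);
                    rd_kafka_conf_destroy(conf);
                    return NULL;
            }
            return conf;
    }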

Consumer fixes

  • Issues: #4796.
    Fix to allow migrating partitions to leaders with a NULL leader epoch.
    A NULL leader epoch can happen during a cluster roll with an upgrade to a
    version supporting KIP-320.
    Happening since v2.1.0 (#4901).
  • Issues: #4804.
    Fix to allow migrating partitions to leaders with the same leader epoch.
    The same leader epoch can happen when a partition is
    temporarily migrated to the internal broker (#4804), or if the broker implementation
    never bumps it, as it's not needed to validate the offsets.
    Happening since v2.4.0 (#4901).

Note: there was no v2.7.0 librdkafka release

Checksums

Release asset checksums:

  • v2.8.0.zip SHA256 5525efaad154e277e6ce30ab78bb00dbd882b5eeda6c69c9eeee69b7abee11a4
  • v2.8.0.tar.gz SHA256 5bd1c46f63265f31c6bfcedcde78703f77d28238eadf23821c2b43fc30be3e25

v2.2.1

13 Jan 13:45
2f3d0e6


Note: given that this patch version contains only a single fix, it's suggested to upgrade to the latest backward-compatible release instead, as it contains all the issued fixes.
Following semver 2.0, all our patch and minor releases are backward compatible, and our minor releases may also contain fixes.
Please note that 2.x versions of librdkafka are also backward compatible with 1.x, as the major version bump was only for the upgrade to OpenSSL 3.x.

librdkafka v2.2.1 is a maintenance release backporting:

  • Fix for idempotent producer fatal errors, triggered after a possibly persisted message state (#4438).
  • Update bundled lz4 (used when ./configure --disable-lz4-ext) to
    v1.9.4, which contains
    bugfixes and performance improvements (#4726).
  • Upgrade OpenSSL to v3.0.13 (when building from source), which includes various security fixes;
    check the OpenSSL release notes
    (@janjwerner-confluent, #4690).
  • Upgrade zstd to v1.5.6, zlib to v1.3.1, and curl to v8.8.0 (@janjwerner-confluent, #4690).
  • Upgrade Linux dependencies: OpenSSL 3.0.15, CURL 8.10.1 (#4875).

Checksums

Release asset checksums:

  • v2.2.1.zip SHA256 2d7fdb54b17be8442b61649916b94eda1744c21d2325795d92f9ad6dec4e5621
  • v2.2.1.tar.gz SHA256 c6f0ccea730ce8f67333e75cc785cce28a8941d5abf041d7a9b8fef91d4778e8

v2.6.1

18 Nov 17:17
cb8c19c


librdkafka v2.6.1 is a maintenance release:

  • Fix for a Fetch regression when connecting to Apache Kafka < 2.7 (#4871).
  • Fix for an infinite loop happening with cooperative-sticky assignor
    under some particular conditions (#4800).
  • Fix for retrieving offset commit metadata when it contains
    zeros and librdkafka is configured with strndup (#4876).
  • Fix for a loop of ListOffset requests, happening in a Fetch From Follower
    scenario, if such request is made to the follower (#4616, #4754, @kphelps).
  • Fix to remove fetch queue messages that blocked the destroy of rdkafka
    instances (#4724).
  • Upgrade Linux dependencies: OpenSSL 3.0.15, CURL 8.10.1 (#4875).
  • Upgrade Windows dependencies: MSVC runtime to 14.40.338160.0,
    zstd 1.5.6, zlib 1.3.1, OpenSSL 3.3.2, CURL 8.10.1 (#4872).
  • SASL/SCRAM authentication fix: avoid concatenating the
    client-side nonce once more, as it's already prepended to the server-sent nonce (#4895).
  • Allow retrying for status code 429 ('Too Many Requests') in HTTP requests for
    OAUTHBEARER OIDC (#4902).
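
Related to the OAUTHBEARER OIDC retry item above, a minimal configuration sketch (token endpoint URL, client id and client secret are placeholders):

    #include <stdio.h>
    #include <librdkafka/rdkafka.h>

    /* OAUTHBEARER OIDC client-credentials configuration. */
    static rd_kafka_conf_t *oidc_conf(void) {
            char errstr[512];
            rd_kafka_conf_t *conf = rd_kafka_conf_new();

            if (rd_kafka_conf_set(conf, "security.protocol", "SASL_SSL",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
                rd_kafka_conf_set(conf, "sasl.mechanism", "OAUTHBEARER",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
                rd_kafka_conf_set(conf, "sasl.oauthbearer.method", "oidc",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
                rd_kafka_conf_set(conf, "sasl.oauthbearer.token.endpoint.url",
                                  "https://login.example.com/oauth2/token",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
                rd_kafka_conf_set(conf, "sasl.oauthbearer.client.id", "my-client-id",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
                rd_kafka_conf_set(conf, "sasl.oauthbearer.client.secret", "my-secret",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
                    fprintf(stderr, "%s\n", errstr);
                    rd_kafka_conf_destroy(conf);
                    return NULL;
            }
            return conf;
    }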

Fixes

General fixes

  • SASL/SCRAM authentication fix: avoid concatenating the
    client-side nonce once more, as it's already prepended to the
    server-sent nonce.
    librdkafka was incorrectly concatenating the client-side nonce again, which led to a corresponding fix on the AK side, released with 3.8.1, using endsWith instead of equals.
    Happening since v0.0.99 (#4895).

Consumer fixes

  • Issues: #4870.
    Fix for a Fetch regression when connecting to Apache Kafka < 2.7, causing
    fetches to fail.
    Happening since v2.6.0 (#4871).
  • Issues: #4783.
    A consumer configured with the cooperative-sticky partition assignment
    strategy could get stuck in an infinite loop, with corresponding spike of
    main thread CPU usage.
    That happened with some particular orders of members and potentially
    assignable partitions.
    Solved by removing the cause of the infinite loop.
    Happening since: 1.6.0 (#4800).
  • Issues: #4649.
    When retrieving offset metadata, if the binary value contained zeros
    and librdkafka was configured with strndup, the part of
    the buffer after the first zero contained uninitialized data
    instead of the rest of the metadata. Solved by not using
    strndup for copying the metadata.
    Happening since: 0.9.0 (#4876).
  • Issues: #4616.
    When an out-of-range error on a follower caused an offset reset, the corresponding
    ListOffsets request was made to the follower, causing a repeated
    "Not leader for partition" error. Fixed by always sending the request
    to the leader.
    Happening since 1.5.0 (tested version) or earlier (#4616, #4754, @kphelps).
  • Issues:
    Fix to remove fetch queue messages that blocked the destroy of rdkafka
    instances. Circular dependencies from a partition fetch queue message to
    the same partition blocked the destroy of an instance; this happened
    when the partition was removed from the cluster while it was being
    consumed. Solved by purging the internal partition queue, after being stopped
    and removed, to allow the reference count to reach zero and trigger a destroy.
    Happening since 2.0.2 (#4724).

Checksums

Release asset checksums:

  • v2.6.1.zip SHA256 b575811865d9c0439040ccb2972ae6af963bc58ca39d433243900dddfdda79cf
  • v2.6.1.tar.gz SHA256 0ddf205ad8d36af0bc72a2fec20639ea02e1d583e353163bf7f4683d949e901b

v2.6.0

10 Oct 22:55
7fc5a59


librdkafka v2.6.0 is a feature release:

  • KIP-460 Admin Leader Election RPC (#4845)
  • [KIP-714] Complete consumer metrics support (#4808).
  • [KIP-714] Produce latency average and maximum metrics support for parity with Java client (#4847).
  • [KIP-848] ListConsumerGroups Admin API now has an optional filter to return only groups
    of given types.
  • Added Transactional id resource type for ACL operations (@JohnPreston, #4856).
  • Fix for permanent fetch errors when using a newer Fetch RPC version with an older
    inter broker protocol (#4806).

Fixes

Consumer fixes

  • Issues: #4806.
    Fix for permanent fetch errors when brokers support a Fetch RPC version greater than 12
    but the cluster is configured to use an inter-broker protocol lower than 2.8.
    In this case the returned topic ids are zero-valued and Fetch has to fall back
    to version 12, using topic names.
    Happening since v2.5.0 (#4806).

Checksums

Release asset checksums:

  • v2.6.0.zip SHA256 e9eb7faedb24da3a19d5f056e08630fc2dae112d958f9b714ec6e35cd87c032e
  • v2.6.0.tar.gz SHA256 abe0212ecd3e7ed3c4818a4f2baf7bf916e845e902bb15ae48834ca2d36ac745