Can't produce msg when kill broker #513
Comments
Please reproduce with
If your replication factor is 1 and broker 1 is your leader, then when broker 1 goes down no other broker will be able to take over the topic+partition, since it has not been replicated.
I thought rdkafka would automatically send messages to broker 2 or 3.
It will only send to the current leader broker, and your leader broker went down without any replicas to take over.
I used rd_kafka_topic_partition_available to test the partition led by broker 1, but the result is 1.
…tions (issue #513) Regression since 0.8.6
Update to latest master and see if that fixes it.
I tested rd_kafka_topic_partition_available, but the result is still 1.
Can you paste the metadata for your topic when the broker is down?
Sorry, I rebuilt librdkafka and used rd_kafka_topic_partition_available to test the partition led by broker 1; the result is 0.
But after I restart broker 1, there is some probability that rd_kafka_topic_partition_available returns 0 for all partitions across brokers 1, 2, and 3, even though the output of "rdkafka_example -b broker -L -t yourtopic" looks fine.
So you are saying partition_available() returns 0 for all partitions when broker 1 is down, even for partitions whose leader is not broker 1? Can you reproduce this with debug=topic,metadata,broker enabled?
There is some probability that rd_kafka_topic_partition_available returns 0 for all partitions on brokers 1, 2, and 3 after broker 1 is restarted.
Running with the above debug would let us troubleshoot and see why. |
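For reference, the stock example program can typically enable librdkafka's debug contexts via its -d flag (flag behavior per examples/rdkafka_example.c); the broker address and topic name below are placeholders:

```shell
# Run the example producer with debug contexts enabled so the
# topic/metadata/broker state transitions show up in the log.
# localhost:9092 and test_topic_1 are placeholder values.
./rdkafka_example -b localhost:9092 -t test_topic_1 -P \
    -d topic,metadata,broker
```

This is an illustrative invocation only; it requires a running broker to produce any output.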
Could you do some printouts from your partitioner too, so we can match the debug log to what you are experiencing? Also, in the above log the producer is destroyed almost immediately. Is this intentional?
That while (true) loop will busy-loop forever if there are no leaders for a topic's partitions; you should give up after
I need to see the partitioner printouts mixed with the rdkafka debug log, otherwise it is impossible to sync up when it happens. |
see log |
I don't see any log lines from your partitioner ("current invailed partition..").
For your convenience, I output the rdkafka log to a separate file.
This is the problem: When broker 2 comes up again and librdkafka queries it for metadata it responds with an empty topic information and error "Leader not available". This makes librdkafka remove leaders for all partitions and that's why partition_available() will return false. I'll try to figure out what's best to do in this case. |
This should be fixed now, can you give it a try? |
I can't reproduce it for now; I will keep watching it.
Perfect, let me know how it goes. |
@edenhill can you guide me on how to specify the replication factor when creating a topic using librdkafka? I'm using the example program rdkafka_example.c.
@mmanoj That's not possible, you can only do that using the kafka-topics.sh script in the kafka distro. |
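For reference, topic creation with a chosen replication factor is done with the script shipped in the Kafka distribution; the ZooKeeper address, partition count, and replica count below are placeholder values:

```shell
# Create a topic whose partitions are each replicated to 3 brokers.
# localhost:2181 and the counts are example values.
bin/kafka-topics.sh --create \
    --zookeeper localhost:2181 \
    --topic test_topic_1 \
    --partitions 3 \
    --replication-factor 3

# Verify the resulting leader/replica assignment:
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test_topic_1
```

With a replication factor above 1, another broker's replica can be elected leader when the current leader is killed, which avoids the situation described in this issue.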
Can I create it using the script and then use it from my program? What happens if I create it from the program and it is not replicated?
1. There are three brokers: 1, 2, 3;
2. Topic test_topic_1 has only one replica;
3. Run the producer;
4. Kill broker 1;
5. The producer still sends messages to broker 1.