Producer queue not getting emptied #1224
> The producer will retry temporary failures up to …

> And there is no message loss on queue full; your application will need to handle QueueFull, usually by calling `poll()` and then retrying.

Thanks for responding @edenhill. Please suggest how to handle such a scenario.

> You either buffer data at the source (through backpressure when you get QueueFull), or you use large producer buffers.
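The backpressure approach described above can be sketched as a small retry helper. This is a minimal sketch, not part of confluent-kafka's API: the helper name, its parameters, and the retry count are assumptions. What it relies on is real library behavior: `Producer.produce()` raises `BufferError` when the local queue is full, and `Producer.poll()` serves delivery callbacks, which frees queue slots.

```python
def produce_with_backpressure(produce, poll, topic, value, max_retries=3):
    """Retry a produce call that raises BufferError on a full local queue.

    `produce` and `poll` stand in for confluent_kafka's
    Producer.produce() and Producer.poll(); poll() serves delivery
    callbacks, which drains the queue and frees space for retries.
    """
    for _attempt in range(max_retries + 1):
        try:
            produce(topic, value)
            return True
        except BufferError:
            # Queue full: block briefly so in-flight deliveries can
            # complete, then retry -- this is the backpressure.
            poll(1.0)
    # Queue stayed full; let the caller decide (buffer, drop, alert).
    return False
```

With a real producer this would be called as `produce_with_backpressure(producer.produce, producer.poll, "my_topic", b"payload")`.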
Description
In the producer config, the default value for message.timeout.ms is 300000 (5 minutes). After this time, timed-out messages should automatically be removed from the producer queue, I believe. Please correct me if I am wrong.
```python
producer = confluent_kafka.Producer(**prod_config)
```
This leads to message loss when the local producer queue gets full.
For more info, please refer: #341 (comment)
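One detail worth noting: timed-out messages are purged from the local queue only when the application services the producer via `poll()` or `flush()`, and each expired message is then reported to the delivery callback with an error rather than disappearing silently. Below is a sketch of such a callback; the `(err, msg)` signature matches confluent-kafka's `on_delivery` contract, but the names and the `failed` list are assumptions for illustration.

```python
failed = []

def delivery_report(err, msg):
    """on_delivery callback, invoked once per message from poll()/flush().

    When message.timeout.ms expires, the message is removed from the
    local queue and this callback receives a non-None error -- so the
    "loss" is observable here instead of being silent.
    """
    if err is not None:
        failed.append(str(err))

# Assumed usage with a confluent_kafka.Producer:
# producer.produce("my_topic", b"payload", on_delivery=delivery_report)
# producer.poll(0)   # callbacks fire only from poll()/flush()
# producer.flush()   # drain the queue before shutdown
```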