Caused by: java.lang.Exception: Failed to send data to Kafka: Expiring
Flink writing to Kafka fails with the following error and the job crashes:
- Caused by: java.lang.Exception: Failed to send data to Kafka: Expiring 89 record(s) for system_online_learning_test-1: 30001 ms has passed since last append
Kafka version: 0.10.2.1
Kafka producer error: org.apache.kafka.common.errors.TimeoutException: Expiring 10 record(s) for TOPIC:XXXXXX: 6686 ms has passed since batch creation plus linger time
Any clue will be appreciated.
You get this error when the producer can't send data to the broker that it thinks is responsible for the messages, according to the metadata it has. Did the Kafka broker die, or did your producer have connection issues at that time? – Sönke Liebau Oct 15 '17 at 8:13
I am also getting this error intermittently throughout the day. Searching for an answer. – Shades88 Nov 8 '17 at 9:34
It stopped occurring when I changed my Kafka producer config: max.request.size=4713360, acks=all, timeout.ms=18000, batch.size=100000 (size in bytes), linger.ms=100, retries=5, min.insync.replicas=2, buffer.memory=66554432, request.timeout.ms=90000, block.on.buffer.full=true. Basically linger.ms, batch.size, and block.on.buffer.full play the major role here. – Raju Nov 29 '17 at 23:30
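Roughly, the commenter's settings look like this as producer Properties — a sketch of those exact values, not a recommendation, with a placeholder broker address. Note that min.insync.replicas is actually a broker/topic-level setting, and block.on.buffer.full only existed on pre-0.9 producers:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class TunedProducerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");                 // wait for all in-sync replicas
        props.put("batch.size", "100000");        // bytes per batch
        props.put("linger.ms", "100");            // wait up to 100 ms to fill a batch
        props.put("retries", "5");
        props.put("max.request.size", "4713360");
        props.put("buffer.memory", "66554432");   // total bytes for the send buffer
        props.put("request.timeout.ms", "90000"); // batch-expiry / request timeout
        // min.insync.replicas=2 belongs on the broker/topic, not the producer.
        // block.on.buffer.full was removed from newer producer clients.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send records here
        }
    }
}
```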
This exception occurs because you are queuing records much faster than they can be sent.
When you call the send method, the ProducerRecord is stored in an internal buffer for sending to the broker. The method returns immediately once the record has been buffered, regardless of whether it has actually been sent.
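A minimal sketch of that asynchronous behavior (topic name and broker address are placeholder assumptions): send() hands back a Future<RecordMetadata> as soon as the record is buffered, and only Future.get() blocks until the broker has acknowledged it.

```java
import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class AsyncSendDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Returns as soon as the record is placed in the internal buffer,
            // not when it reaches the broker.
            Future<RecordMetadata> future =
                producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
            // Only this blocking call waits for the broker acknowledgement.
            RecordMetadata meta = future.get();
            System.out.printf("sent to %s-%d@%d%n", meta.topic(), meta.partition(), meta.offset());
        }
    }
}
```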
Records are grouped into batches for sending to the broker, to reduce the per-message transport overhead and increase throughput.
Once a record is added to a batch, there is a time limit for sending that batch, so that it is delivered within a bounded duration. This is controlled by the producer configuration parameter request.timeout.ms, which defaults to 30 seconds.
If a batch has been queued longer than that limit, the exception is thrown and the records in that batch are removed from the send queue.
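When a batch expires, each record in it is completed exceptionally with org.apache.kafka.common.errors.TimeoutException, which you can observe by passing a Callback to send(). A minimal sketch, with placeholder topic and broker:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.TimeoutException;

public class ExpiryCallbackDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "value"), (metadata, exception) -> {
                if (exception instanceof TimeoutException) {
                    // The batch holding this record expired before it could be sent.
                    System.err.println("Record expired: " + exception.getMessage());
                } else if (exception != null) {
                    System.err.println("Send failed: " + exception);
                }
            });
            producer.flush(); // wait for outstanding sends to resolve
        }
    }
}
```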
Producer configs block.on.buffer.full, metadata.fetch.timeout.ms and timeout.ms have been removed. They were initially deprecated in Kafka 0.9.0.0.
Therefore, try increasing request.timeout.ms.
Still, if you have any problems related to throughput, you can also refer to the following blog.
- request.timeout.ms defaults to 30 s; after raising it to 120 s, writes became slower, but the exception no longer reproduced.
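For reference, a minimal sketch of that fix in a Flink job using the Kafka 0.10 connector (package paths follow the older Flink 1.x connector; the source stream and broker address are placeholder assumptions, and the topic name is taken from the error message above):

```java
import java.util.Properties;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KafkaSinkJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> stream = env.fromElements("a", "b", "c"); // placeholder source

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        // Raise the batch-expiry timeout from the 30 s default to 120 s.
        props.put("request.timeout.ms", "120000");

        stream.addSink(new FlinkKafkaProducer010<>(
                "system_online_learning_test",   // topic from the error message
                new SimpleStringSchema(),
                props));
        env.execute("kafka-sink-job");
    }
}
```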
澄轶: suanec - http://www.cnblogs.com/suanec/