In Kafka, the default maximum message size is about 1 MB. Real-world workloads sometimes need larger messages, so this article shows how to raise that limit and, more importantly, why each change is necessary.
Raising the Kafka message size limit touches the following settings:
The per-topic limit (max.message.bytes): every topic's maximum message size can be configured individually. For example, the following script shipped with Kafka raises a topic's limit to roughly 3 MB:
bin/kafka-topics.sh --zookeeper XXX --alter --topic XXX --config max.message.bytes=3000000
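To confirm the override took effect, the topic can be inspected with the same script (this assumes the same placeholder ZooKeeper connection string as above; on newer Kafka versions the --bootstrap-server flag replaces --zookeeper):

```shell
# Describe the topic; the max.message.bytes override appears under Configs
bin/kafka-topics.sh --zookeeper XXX --describe --topic XXX
```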
The producer request size (max.request.size): this setting caps the size of a produce request sent by the producer, limiting both the largest single message and how many messages the producer can batch into one request. With the default maximum request size of 1 MB, for example, the largest message you can send is 1 MB, or the producer can batch 1,024 messages of 1 KB each into one request. The broker has its own limit on the largest message it will accept (message.max.bytes), and the two settings should normally be kept in agreement.
Reference:
This setting controls the size of a produce request sent by the producer. It caps both the size of the largest message that can be sent and the number of messages that the producer can send in one request. For example, with a default maximum request size of 1 MB, the largest message you can send is 1 MB or the producer can batch 1,024 messages of size 1 KB each into one request. In addition, the broker has its own limit on the size of the largest message it will accept (message.max.bytes). It is usually a good idea to have these configurations match, so the producer will not attempt to send messages of a size that will be rejected by the broker.
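In client code this maps to the producer configuration. Below is a minimal sketch: the property names are standard Kafka client settings, but the broker address is a placeholder, and the actual KafkaProducer construction is only indicated in a comment because it requires a running cluster.

```java
import java.util.Properties;

public class ProducerSizeConfig {
    public static Properties largeMessageProducerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Allow produce requests (and therefore single messages) up to ~3 MB,
        // matching the topic-level max.message.bytes override
        props.put("max.request.size", "3000000");
        return props;
    }

    public static void main(String[] args) {
        Properties props = largeMessageProducerProps();
        // Against a live cluster you would now create the client:
        // new org.apache.kafka.clients.producer.KafkaProducer<String, String>(props)
        System.out.println(props.getProperty("max.request.size"));
    }
}
```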
The consumer fetch size (max.partition.fetch.bytes): it determines the maximum number of bytes the server will return to the consumer per partition. Clearly, this parameter must be larger than message.max.bytes, or the consumer may never be able to fetch an oversized message.
Reference:
This property controls the maximum number of bytes the server will return per partition. The default is 1 MB, which means that when KafkaConsumer.poll() returns ConsumerRecords, the record object will use at most max.partition.fetch.bytes per partition assigned to the consumer. So if a topic has 20 partitions, and you have 5 consumers, each consumer will need to have 4 MB of memory available for ConsumerRecords. In practice, you will want to allocate more memory as each consumer will need to handle more partitions if other consumers in the group fail. max.partition.fetch.bytes must be larger than the largest message a broker will accept (determined by the max.message.bytes property in the broker configuration)
Finally, any application that integrates with Kafka must adjust the corresponding produce or consume size settings in its client API. For a consumer, for example, this means changing the fetch.message.max.bytes property.
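On the consumer side, a corresponding sketch looks like the following (again with a placeholder broker address and a hypothetical group id; max.partition.fetch.bytes is the setting for the modern Java consumer, while fetch.message.max.bytes was its old-consumer equivalent):

```java
import java.util.Properties;

public class ConsumerSizeConfig {
    public static Properties largeMessageConsumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("group.id", "large-message-group");     // hypothetical group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Must be at least as large as the broker's message.max.bytes,
        // otherwise an oversized message can never be fetched
        props.put("max.partition.fetch.bytes", "3000000");
        return props;
    }

    public static void main(String[] args) {
        Properties props = largeMessageConsumerProps();
        // Against a live cluster you would now create the client:
        // new org.apache.kafka.clients.consumer.KafkaConsumer<String, String>(props)
        System.out.println(props.getProperty("max.partition.fetch.bytes"));
    }
}
```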