I am creating a Kafka topic as follows:

kafka-topics --create --zookeeper xx.xxx.xx:2181 --replication-factor 2 --partitions 200 --topic test6 --config retention.ms=900000 

and then I produce messages in Go using the following library:

 "gopkg.in/confluentinc/confluent-kafka-go.v1/kafka" 

The produce loop looks like this:

 for _, message := range bigslice {
     topic := "test6"
     p.Produce(&kafka.Message{
         TopicPartition: kafka.TopicPartition{Topic: &topic},
         Value:          []byte(message),
     }, nil)
 }

The problem is that I've sent more than 200K messages, but they all land in partition 0.

What could be wrong in this situation?

2 Answers

Messages with the same key are added to the same partition. If that is not the case here, try including Partition: kafka.PartitionAny:

 for _, message := range bigslice {
     topic := "test6"
     p.Produce(&kafka.Message{
         TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
         Value:          []byte(message),
     }, nil)
 }
You provide no key when producing, so everything goes to the same partition. I suggest you read at least this: https://medium.com/event-driven-utopia/understanding-kafka-topic-partitions-ae40f80552e8
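To see why keys matter, here is a simplified model of hash-based key partitioning. The real partitioners use murmur2 (Java client) or crc32 (librdkafka's default); FNV-1a stands in here purely for illustration, and partitionFor is a hypothetical helper, not a library function. The point is that the partition is a deterministic function of the key, so messages sharing a key always land together while different keys spread across the 200 partitions:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// partitionFor models hash-based key partitioning: hash the key, then
// take it modulo the partition count. Real Kafka partitioners use
// murmur2 or crc32; FNV-1a is used here only for illustration.
func partitionFor(key string, numPartitions int32) int32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int32(h.Sum32() % uint32(numPartitions))
}

func main() {
	// The same key always maps to the same partition.
	fmt.Println(partitionFor("order-42", 200) == partitionFor("order-42", 200)) // true

	// Different keys spread across the 200 partitions.
	fmt.Println(partitionFor("order-1", 200), partitionFor("order-2", 200))
}
```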
