I am running a standalone Kafka broker on an EC2 instance with 4GB of RAM. In its default settings, Kafka is configured to use 1GB of heap memory: -Xmx1G -Xms1G.

Since the VM has only 4GB of memory, is it possible to configure the JVM settings so Kafka uses 512MB? How should I do that? Will Kafka run properly with a 512MB heap, or is 1GB the minimum required?

  • 6-8GB is actually best for production-grade performance. Not sure why you'd want less than 4, given that all consumer requests for the latest offsets are served directly off the heap. You should be able to safely use 3GB if no other process is on those machines Commented Jul 28, 2018 at 22:52
  • I understand. But it's a test server that also has MongoDB and some Node services running, so I want to limit Kafka's memory consumption, which is consistently about 900-1000MB Commented Jul 30, 2018 at 8:18
  • Alright, well being a Java server process, I'm fairly sure most of that memory is required. Are you also running Zookeeper on the same machine? Commented Jul 30, 2018 at 13:28
  • Yes. Kafka, ZooKeeper, MongoDB, Redis, Node.js (4-5 apps). It's a 4GB EC2 Ubuntu VM. We'll be moving to an 8GB one for production and separating MongoDB onto its own VM. I want to limit memory usage in the development environment Commented Aug 1, 2018 at 8:47
  • Hmm. I wouldn't really consider that production. Lose one server and you lose more than half your app stack Commented Aug 1, 2018 at 12:10

1 Answer


To set your own JVM heap settings, you just have to export the KAFKA_HEAP_OPTS environment variable before starting the broker, and Kafka's start script will pick it up.

For example, to set the heap to 512MB, run:

export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"
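As a fuller sketch, here is how this might look when launching the broker; the config path is the usual default from the Kafka distribution, and the verification step is optional:

```shell
#!/bin/sh
# Set a 512MB heap; the start script only applies its 1GB default
# (-Xmx1G -Xms1G) when KAFKA_HEAP_OPTS is unset, so exporting it
# before startup overrides the default.
export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"

# Start the broker (run from the Kafka installation directory):
# bin/kafka-server-start.sh config/server.properties

# Sanity-check what will be passed to the JVM:
echo "$KAFKA_HEAP_OPTS"
# → -Xmx512M -Xms512M
```

Note that this only limits the JVM heap; the broker process will still use some additional off-heap and native memory on top of the 512MB.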