
I'm trying to set up Kafka in SSL (1-way) mode. I've gone through the official documentation and successfully generated the certificates. I'll note down the behavior for two different cases. This setup has only one broker and one ZooKeeper.

Case-1: Inter-broker communication - Plaintext

Relevant entries in my server.properties file are as follows:

listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
ssl.keystore.location=/Users/xyz/home/ssl/server.keystore.jks
ssl.keystore.password=****
ssl.key.password=****

I've added a client-ssl.properties in kafka config dir with following entries:

security.protocol=SSL
ssl.truststore.location=/Users/xyz/home/ssl/client.truststore.jks
ssl.truststore.password=****

If I put bootstrap.servers=localhost:9093 or bootstrap.servers=localhost:9092 in my config/producer.properties file, my console producers/consumers work fine in both cases. Is that the intended behavior? If so, why? I'm specifically trying to connect to localhost:9093 from the producer/consumer in SSL mode.

Case-2: Inter-broker communication - SSL

Relevant entries in my server.properties file are as follows:

security.inter.broker.protocol=SSL
listeners=SSL://localhost:9093
ssl.keystore.location=/Users/xyz/home/ssl/server.keystore.jks
ssl.keystore.password=****
ssl.key.password=****

My client-ssl.properties file remains the same. I put bootstrap.servers=localhost:9093 in the producer.properties file. Now none of my producers/consumers can connect to Kafka. I get the following message:

WARN Error while fetching metadata with correlation id 0 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient) 

What am I doing wrong?

In all these cases I'm using the following commands to start producers/consumers:

./kafka-console-producer.sh --broker-list localhost:9093 --topic test --producer.config ../config/client-ssl.properties
./kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config ../config/client-ssl.properties
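For reference, the certificate (and the CN it carries) that the SSL listener actually presents can be inspected with a plain OpenSSL handshake against the SSL port, for example:

openssl s_client -connect localhost:9093 </dev/null 2>/dev/null | openssl x509 -noout -subject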

3 Answers


One important piece of information regarding this: the requirement that the CN be equal to the hostname can be deactivated by adding the following line to server.properties:

 ssl.endpoint.identification.algorithm= 

The default value for this setting is https, which enables the hostname-to-CN verification. This has been the default since Kafka 2.0.
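Since Kafka 2.0 the same https default applies to clients as well, so the blank override can also be placed in the client properties. A minimal sketch, assuming a client-ssl.properties like the one in the question:

# client side: an empty value disables hostname verification
ssl.endpoint.identification.algorithm=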

I've successfully tested an SSL setup (just on the broker side, though) with the following properties:

############################ SSL Config #################################
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=TrustStorePassword
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=KeyStorePassword
ssl.key.password=PrivateKeyPassword
security.inter.broker.protocol=SSL
listeners=SSL://localhost:9093
advertised.listeners=SSL://127.0.0.1:9093
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
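Note that with ssl.client.auth=required the broker also expects the client to present a certificate (2-way SSL), so the client properties need a keystore in addition to the truststore. A sketch with placeholder paths and passwords:

security.protocol=SSL
ssl.truststore.location=/path/to/kafka.client.truststore.jks
ssl.truststore.password=TrustStorePassword
# needed because the broker sets ssl.client.auth=required
ssl.keystore.location=/path/to/kafka.client.keystore.jks
ssl.keystore.password=KeyStorePassword
ssl.key.password=PrivateKeyPassword
ssl.endpoint.identification.algorithm=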

You can also find a shell script to generate SSL certificates (with key- and truststores), along with some documentation, in this GitHub project: https://github.com/confluentinc/confluent-platform-security-tools




Make sure that the common name (CN) in your certificate matches your hostname. The SSL protocol verifies the CN against the hostname. I guess here you should have CN=localhost. I had a similar issue and that's how I fixed it.
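To confirm which CN an existing keystore actually carries, something like the following keytool listing prints the certificate subject (keystore path taken from the question, password is a placeholder):

keytool -list -v -keystore /Users/xyz/home/ssl/server.keystore.jks -storepass <keystore-password> | grep "Owner:"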

3 Comments

If the CN were wrong, I shouldn't have been able to connect in SSL mode with security.inter.broker.protocol set to PLAINTEXT, right?
Well, I could connect in SSL mode with a plaintext inter-broker protocol. It looks like the CN is checked against the hostname only when security.inter.broker.protocol=SSL.
Recently I had to repeat the same steps for my current project, and I found out from the answer by @Venkateswararao T that when it asks you to provide the first name and last name, it has to be localhost. I used the ssl-generator from here

Well, both of the given answers point in the right direction, but some more details need to be added to end this confusion.

I generated the certs using this bash script from Confluent, and when I looked inside the file, it made sense. I'm pasting the relevant section here:

echo " NOTE: currently in Kafka, the Common Name (CN) does not need to be the FQDN of" echo " this host. However, at some point, this may change. As such, make the CN" echo " the FQDN. Some operating systems call the CN prompt 'first / last name'" 

There you go. When you're generating the certs, make sure to enter localhost (or the FQDN) when it asks for the first / last name. Also remember that you need to expose the broker on that same endpoint.
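If you'd rather skip the interactive prompt entirely, the CN can also be supplied directly on the command line; a sketch with keytool (alias, key size, validity, and passwords here are illustrative placeholders):

keytool -genkeypair -keyalg RSA -keysize 2048 -validity 365 \
  -alias localhost -dname "CN=localhost" \
  -keystore server.keystore.jks -storepass <keystore-password> -keypass <key-password>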

