Kafka with Spring Cloud Stream on Docker – part 2

This is a continuation of part 1 on Kafka and Spring Cloud Stream on Docker. In this post, we will talk about setting up a Spring Boot project, configuring the binder for Kafka, and producing messages. Before we delve deeper, it is important to understand a few concepts in Spring Cloud Stream and how it works.

SOURCE
When a microservice is ready to publish a message, it publishes it using a source. A source identifies the contract between the message producer and the message destination.

CHANNEL
A channel is an abstraction over the actual queue or topic. A channel name is always associated with a target queue name in configuration, which makes it easy to switch queues by changing configuration.

BINDER
It is Spring's code that talks to a specific messaging platform, such as RabbitMQ or Kafka.

To get started, add the @EnableBinding annotation to the bootstrap class of the Spring Boot project you created in part 1. This turns the Spring Boot project into a Spring Cloud Stream project.
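As a minimal sketch (the class and package names are placeholders, not from the original post), the bootstrap class could look like this, binding the framework's Source interface so that its default output channel becomes available:

```java
package com.example.kafkademo; // hypothetical package name

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;

// @EnableBinding(Source.class) registers the default "output" channel,
// which the application.yml below maps to a Kafka topic.
@SpringBootApplication
@EnableBinding(Source.class)
public class KafkaDemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkaDemoApplication.class, args);
    }
}
```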

Next, configure your application.yml as below.
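The original configuration is not reproduced here, but based on the description that follows (an output channel, a KafkaDemoTopic topic, and Kafka/ZooKeeper hosts without explicit ports), a sketch of application.yml might look like this. The host name dmira comes from part 1, and the content-type line is an assumption tied to the note about message converters below:

```yaml
spring:
  cloud:
    stream:
      bindings:
        output:                            # channel name exposed by Source
          destination: KafkaDemoTopic
          content-type: application/json   # assumption: a converter-friendly content type
      kafka:
        binder:
          brokers: dmira                   # Kafka host; default port assumed by Spring
          zkNodes: dmira                   # ZooKeeper host; default port assumed by Spring
```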


The above binding maps the channel called output to a Kafka topic called KafkaDemoTopic and specifies Kafka as the underlying messaging platform. You can switch this to RabbitMQ, ActiveMQ, etc., depending on your actual implementation. It also specifies where Kafka and ZooKeeper are running. A couple of things to watch out for:

You may get a connection error if the port specifications are not correct. I didn't specify ports, as Spring already knows the default ports for ZooKeeper and Kafka.

Another common error shows up if you forget to add message converters.

Once you have the above set up, create a message producer to post messages to the topic, as below.
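A minimal producer sketch, assuming the bootstrap class above binds Source.class (the class name and String payload here are placeholders, not from the original post):

```java
package com.example.kafkademo; // hypothetical package name

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Component;

// Sends a payload to the "output" channel, which the binding above maps to KafkaDemoTopic.
@Component
public class KafkaMessageProducer {

    @Autowired
    private Source source;

    public void send(String payload) {
        source.output().send(MessageBuilder.withPayload(payload).build());
    }
}
```

Calling send(...) from any bean, for example a REST controller or a CommandLineRunner, should publish the message to KafkaDemoTopic.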

That’s all you have to do. Check your topic for messages.

Please watch the video below for additional details.

Download the working source code from the kafkaspringboot folder.


Kafka with Spring Cloud Stream on Docker – part 1

Kafka with Spring Cloud Stream gives you the power of Kafka with the familiarity and added abstraction of the Spring framework. An additional advantage of using Spring Cloud Stream is that you can swap in other middleware, switching from Kafka to RabbitMQ or another supported implementation, very easily.

To get going, create a Spring Boot project from the Spring Initializr website and add Cloud Stream and Kafka as dependencies. This will bring in the following dependencies.
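The exact dependency list isn't reproduced here, but selecting those two options on Spring Initializr typically adds something like the following to the Maven pom (the version is managed by the Spring Cloud BOM, so none is shown):

```xml
<!-- Spring Cloud Stream with the Kafka binder -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
```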

As soon as Spring Cloud Stream detects the above Kafka binder on its classpath, it uses it and knows Kafka is the middleware.

Kafka Docker Image Setup

While there are a few options for the image, I found the Spotify Kafka image easy to use, primarily because it bundles ZooKeeper and Kafka together in a single image. Run the following command from your Docker machine to install and run it.
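The original command isn't shown here; a typical invocation of the spotify/kafka image looks like the sketch below. ADVERTISED_HOST uses the server name from this post (dmira), and the --add-host flag discussed further down is included; adjust both for your environment:

```sh
# Run ZooKeeper + Kafka in a single container (sketch; ports are the ZooKeeper/Kafka defaults)
docker run -d --name kafka \
  -p 2181:2181 -p 9092:9092 \
  --add-host dmira:127.0.0.1 \
  --env ADVERTISED_HOST=dmira \
  --env ADVERTISED_PORT=9092 \
  spotify/kafka
```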

Note that if you are running Docker on a remote machine, add an entry to your hosts file or modify it as below. ‘dmira’ is the name of the server.
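For example, an entry like this on the client machine (the IP address here is only a placeholder for your Docker host's actual address):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts) on the client machine
192.168.99.100   dmira
```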

If you do not, you will get WARN Error while fetching metadata with correlation id 0 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient). By adding --add-host to the Docker run command, you are resolving the machine name to localhost inside the container. This is important because ADVERTISED_HOST is used as the hostname published to ZooKeeper for clients to use.

Now test to see if you can connect to Kafka on the Docker server from your local machine by creating a testTopic.
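The command from the post isn't reproduced; using the scripts that ship with a local Kafka download, creating the topic would look something like this (the script path and replication settings are assumptions):

```sh
bin/kafka-topics.sh --create --zookeeper dmira:2181 \
  --replication-factor 1 --partitions 1 --topic testTopic
```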

You should get a confirmation message. You can verify the list of topics with:
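Again a sketch, using the same assumed script path and host:

```sh
bin/kafka-topics.sh --list --zookeeper dmira:2181
```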

In the next part, we will discuss how to set up Spring Cloud Stream with Kafka and post messages.

The following is the video version of this post.