Develop Java programs to produce and consume messages to and from Apache Kafka
Apache Kafka is an event streaming platform that helps developers implement an event-driven architecture. Rather than the point-to-point communication of REST APIs, Kafka's model is one of applications producing messages (events) to a pipeline; those messages (events) can then be consumed by consumers. Producers are unaware of who is consuming their data and how. Similarly, consumers can start consuming messages from any point in a topic and are not tied to producers. This decoupling between producers and consumers is what event-driven architecture relies on.
The quickstart provided on the Kafka website does an excellent job of explaining how the different components of Kafka work by interacting with it manually by running shell scripts in the command line. In this tutorial, I give an overview of how to interact with Kafka programmatically using the Kafka producer and consumer APIs.
Learning objectives
The objective of this tutorial is to demonstrate how to write Java programs to produce and consume messages to and from Apache Kafka. Because creating and maintaining a Kafka cluster can require quite an investment of time and computational power, I'll use IBM Event Streams on IBM Cloud, which is a fully managed Kafka service.
After completing this tutorial, you will understand:
What Apache Kafka is
How to produce messages to Kafka programmatically
How to consume messages from Kafka programmatically
What IBM Event Streams is
How to set up a Kafka cluster using IBM Event Streams
Before we begin, let's review some of the key Kafka concepts.
Events are stored in topics, and topics are further broken down into partitions.
Although logically speaking a topic can be seen as a stream of records, in practice a topic is composed of a number of partitions. The records in a topic are distributed across its partitions in order to increase throughput, which means that consumers can read from multiple partitions in parallel.
Records in a partition are referenced by a unique ID called an offset. A consumer can consume records beginning from any offset. Also, a tuple of (topic, partition, offset) can be used to reference any record in the Kafka cluster.
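To make this concrete, the sketch below shows how a record can be addressed by its (topic, partition, offset) tuple using the Java consumer client that is introduced later in this tutorial; the topic name, partition number, and offset are purely illustrative.
import java.util.Collections;
import org.apache.kafka.common.TopicPartition;

// Assuming "consumer" is an already-configured KafkaConsumer<String, String>
// (configuration is covered in the steps below), a record can be located by
// its topic, partition, and offset:
TopicPartition partition = new TopicPartition("getting-started", 0); // topic name, partition number
consumer.assign(Collections.singletonList(partition)); // read from this partition only
consumer.seek(partition, 42L); // start consuming at offset 42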
In Kafka, producers are applications that write messages to a topic and consumers are applications that read records from a topic.
Kafka provides two APIs to communicate with your Kafka cluster through your code:
The producer API, which lets applications publish records to a topic
The consumer API, which lets applications subscribe to topics and read the records in them
The producer and consumer APIs were originally written for Java applications, but since then APIs for many more languages have been made available, including (but not limited to) C/C++, Go, and Python.
In this tutorial, we cover the simplest case of a Kafka implementation with a single producer and a single consumer writing messages to and reading messages from a single topic. In a production environment, you will likely have multiple Kafka brokers, producers, and consumer groups. This is what makes Kafka a powerful technology for implementing an event-driven architecture.
Steps
This tutorial is broadly segmented into three main steps. First, you'll create a Kafka cluster. As mentioned earlier, we will be using the Event Streams service on IBM Cloud for this. Next, you'll write a Java program that can produce messages to our Kafka cluster. Finally, you'll write a consumer application that can read those same messages.
Both the producing and consuming applications are written in Java, so they can be run from within an IDE. I will be using Eclipse, but any IDE should be fine.
Step 1: Deploy a basic Kafka instance with IBM Event Streams on IBM Cloud
While it is easy to get Kafka running on your machine for experimentation using the Apache Kafka quickstart, managing a Kafka cluster with multiple servers in production can be quite cumbersome. IBM Event Streams on IBM Cloud is a managed Kafka service that lets developers create Kafka clusters without having to worry about provisioning and maintaining the underlying infrastructure.
To allow your Java applications to access your topic, you'll need the credentials and API key for this service. Make sure to note these values; you will use them later in this tutorial.
Step 2: Creating a producer application using the Kafka Producer API
First, you need to create a Java project in your preferred IDE. Then, download the latest version of the Apache Kafka clients library (org.apache.kafka:kafka-clients) from the Maven repository and add it as a dependency to your Maven project.
Next, create a Java properties object (producerProps in this case) and store all the properties of the producer in that object. These properties include our Kafka brokers, the security parameters to connect to Event Streams, and the key and value serializers for serializing our messages before sending them to Kafka.
The list of Kafka brokers can be found in the service credentials you noted while creating your Event Streams cluster in step 1.
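The snippet below is a minimal sketch of this step; BOOTSTRAP_ENDPOINTS stands in for the comma-separated broker list from your service credentials.
import java.util.Properties;

Properties producerProps = new Properties();
// Replace BOOTSTRAP_ENDPOINTS with the comma-separated broker list
// from your Event Streams service credentials.
producerProps.put("bootstrap.servers", "BOOTSTRAP_ENDPOINTS");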
Next, provide the SASL credentials to be able to connect to Event Streams. Make sure to replace USERNAME and PASSWORD with the values you noted for your service credentials in step 1.
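Here is a sketch of the security-related properties, assuming the SASL PLAIN mechanism over TLS that Event Streams uses; the property names are the standard Kafka client configuration keys.
// Security settings for connecting to Event Streams.
// Replace USERNAME and PASSWORD with the values from your service credentials.
producerProps.put("security.protocol", "SASL_SSL");
producerProps.put("sasl.mechanism", "PLAIN");
producerProps.put("sasl.jaas.config",
    "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"USERNAME\" password=\"PASSWORD\";");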
Finally, specify a key and value serializer for serializing the messages before sending them to Kafka. The "acks" parameter specifies when a request is considered complete. Setting it to "all" results in blocking on the full commit of a record.
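As a sketch, the remaining producer properties and a simple send loop might look like the following; the topic name getting-started matches the topic the consumer subscribes to later in this tutorial, and the message contents are illustrative.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Serialize both keys and values as strings, and wait for the full commit of each record.
producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
producerProps.put("acks", "all");

KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps);
for (int i = 0; i < 10; i++) {
    // The key and value here are illustrative; any strings will do.
    producer.send(new ProducerRecord<>("getting-started", Integer.toString(i), "message " + i));
}
producer.close();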
Step 3: Creating a consumer application using the Kafka Consumer API
We will use a KafkaConsumer to consume messages, where each message is represented by a ConsumerRecord. Every consumer belongs to a consumer group; we will place our consumer in a group called G1. Once that is done, we can subscribe to a list of topics. Then we call poll() in a loop, each call returning a batch of records to process.
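As with the producer, the consumer needs connection properties before it can be created. The sketch below assumes the same broker list and SASL credentials from step 1, with string deserializers for keys and values.
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties consumerProps = new Properties();
consumerProps.put("bootstrap.servers", "BOOTSTRAP_ENDPOINTS"); // same broker list as the producer
consumerProps.put("security.protocol", "SASL_SSL");
consumerProps.put("sasl.mechanism", "PLAIN");
consumerProps.put("sasl.jaas.config",
    "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"USERNAME\" password=\"PASSWORD\";");
consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");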
consumerProps.put("group.id", "G1");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
consumer.subscribe(Arrays.asList("getting-started"));

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records)
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
}
This consumer gets messages from the “getting-started” topic in our Kafka cluster and prints them to the console, one line per record showing its offset, key, and value. You can run this code from within the IDE, similar to how we ran the producer code.
Summary and next steps
In this tutorial, you provisioned a managed Kafka cluster using IBM Event Streams on IBM Cloud. Then, you used that cluster to produce and consume records using the Java producer and consumer APIs.
Besides the producer and consumer APIs, you might find Kafka's other APIs useful, such as the Streams API for processing data within Kafka and the Connect API for integrating Kafka with external systems.