Tools for managing Kafka topics in CI/CD pipelines

SPOUD
8 min read · Mar 28, 2024

Kafka can become the central nervous system for your organization. Multiple teams and departments can share and process events using the same Kafka cluster.

How should this central resource be managed?

Neither should developers be blocked by an almighty gatekeeper who needs to check every single change, nor should everyone be allowed to administer all cluster resources as they wish.

In this post we address a small aspect of this question — namely, ways to automate topic and schema management with available tools.

General considerations

Before you design and implement the topic management process in your company, you need to make a fundamental decision: should the process be organized in a more centralized or a more distributed fashion?

Centralized approach

With a centralized approach, you establish a single repository for the entire organization that holds all the topic and schema configurations as code. Every development team needs to open a pull request in this repository to create, change or delete Kafka topics and schemas. Company-wide rules apply to all topics and can be enforced in a central place. Only the CI pipeline linked to the repository should be allowed to make changes to topic resources.

The benefits of the central approach are:

  • a high degree of control: it is easy to enforce certain naming conventions and policies (with manual reviews or automated rule checking)
  • visibility and discoverability — the repository serves as a very basic catalogue of available topics and the data that can be consumed
  • inexperienced developer teams can rely on the knowledge of central repository maintainers

The downsides are:

  • requires coordination — topics and applications are tied together, and topic changes must be deployed in the right order relative to application deployments/upgrades, since the application does not have permission to create the needed resources itself
  • manual reviews only work for a low volume of pull requests, automation of rule/policy checking is required to scale this approach
  • being too strict slows down development — e.g., consider making exceptions for Kafka Connect so that connectors can auto-create some topics (such as reporter topics)

Distributed approach

With a distributed approach, you delegate responsibilities to the development teams to give them greater autonomy. Instead of a central repository, you will likely have topic configurations spread across multiple repositories.

The benefits are:

  • high degree of autonomy for developer teams
  • scales well to support a large number of teams

The downsides are:

  • enforcing policies and naming conventions for topics is more difficult
  • an additional tool (Confluent Control Center, AKHQ, Agoora, …) is recommended to get an overview of all topics and schemas and to increase discoverability
  • more autonomy for teams also requires more knowledge within these teams about topic partitioning, replication, etc.

Overview of topic management tools

The tools are not listed in any particular order. Graphical tools are not part of this list; only configuration-based tooling (infrastructure as code) is considered.

kdef

Kdef is a command line tool that can configure topics, ACLs, broker and cluster configs. It does not support registering schemas with a schema registry.

Topic definitions can be written in YAML or JSON and look similar to Kubernetes resource descriptors:

apiVersion: v1
kind: topic
metadata:
  name: tutorial_topic2
spec:
  configs:
    retention.ms: "86400000"
  partitions: 3
  replicationFactor: 2

Definitions can be applied with the kdef command line tool by invoking kdef apply "*.yml".

When using this tool, it is beneficial to be familiar with Go and willing to contribute pull requests for missing features, as you may discover that certain things are not currently possible with it (such as automating the deletion of topics).

topicctl

Topicctl is a command line tool and REPL with support for Kafka topic management. The tool is written in Go. Topicctl also lets you specify placement strategies to control how partitions are distributed in the cluster.

Topicctl does not support schema management with a schema registry.

A topic definition looks like this:

meta:
  name: topic-default
  cluster: local-cluster
  environment: local-env
  region: local-region
  description: |
    Topic that uses the in-rack strategy for assigning partition brokers.

spec:
  partitions: 3
  replicationFactor: 2
  retentionMinutes: 100
  placement:
    strategy: in-rack
  settings:
    cleanup.policy: delete
    max.message.bytes: 5542880

Jikkou

Jikkou is a command line tool, implemented in Java. Its support for templating with Jinja expressions and extensibility with custom validations, actions and transformations make it a good tooling choice.

It uses Kubernetes-style descriptors in YAML format to configure topics and schemas.

Example for a topic definition:

# file:./kafka-topics.yaml
apiVersion: 'kafka.jikkou.io/v1beta2'
kind: 'KafkaTopic'
metadata:
  name: 'my-first-topic-with-jikkou'
  labels: { }
  annotations: { }
spec:
  partitions: 12
  replicas: 3
  configs:
    min.insync.replicas: 2

Configurations can then be applied with the command jikkou apply --files ./kafka-topics.yaml.

Schemas can be managed with a SchemaRegistrySubject resource:

---
apiVersion: "schemaregistry.jikkou.io/v1beta2"
kind: "SchemaRegistrySubject"
metadata:
  name: "PersonProto"
  labels: { }
  annotations:
    schemaregistry.jikkou.io/normalize-schema: true
spec:
  compatibilityLevel: "FULL_TRANSITIVE"
  schemaType: "PROTOBUF"
  schema: |
    syntax = "proto3";

    package example;
    message Person {
      // The person's unique ID (required)
      int32 id = 1;
      // The person's legal firstname (required)
      string firstname = 2;
      // The person's legal lastname (required)
      string lastname = 3;
      // The person's age (optional)
      int32 age = 4;
      // The person's height measured in centimeters (optional)
      int32 height = 5;
    }

julie

Julie is a command line tool, implemented in Java. The project is currently in hibernation, so you should be prepared to adapt it yourself if changes are needed.

Multiple topics and schemas can be configured together in a single YAML file. This enforces certain hierarchical naming patterns (which are recommended anyway when using prefixed ACLs/RBAC rules). Schemas can be registered, but the schema references feature is not supported. Julie can be configured to auto-delete topics (and other resources) from your cluster — but be careful to restrict this to certain prefixes if you intend to use it.

A configuration for two topics, context.foo.baz with Avro schemas for the key and value, and context.bar.bar without a schema, looks like this:

context: "context"
projects:
  - name: "foo"
    topics:
      - name: "baz" # => topicName: context.foo.baz
        dataType: "avro"
        schemas:
          - key.schema.file: "schemas/baz-key.avsc"
            value.schema.file: "schemas/baz-value.avsc"
  - name: "bar"
    topics:
      - name: "bar" # => topicName: context.bar.bar
        config:
          replication.factor: "2"
          num.partitions: "3"

Let applications (or init containers) create topics

This option can be suitable for setups where only a few teams share a cluster and dependencies between applications and topics are not complex.

You can think of this approach as Liquibase or Flyway for Kafka. However, Kafka topics are usually accessed by multiple applications, so more coordination is needed. It must be clear which application acts as the “owner” of a topic.

For example, with Spring Boot you can let your application automatically create the required topics by providing beans of type NewTopic, as shown below.

@Bean
public NewTopic topic1() {
    return TopicBuilder.name("thing1")
            .partitions(10)
            .replicas(3)
            .compact()
            .build();
}

You can learn more about topic configuration in Spring in the official spring-kafka reference documentation.

The same can be done in any other framework by using the AdminClient directly on application start. If you are deploying your application on Kubernetes, a separate init container can be used to achieve the same behavior. Such an init container could use the kafka-topics CLI tool or any of the other tools listed here.
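
As a minimal sketch of the AdminClient variant (the topic name, settings, and bootstrap address are illustrative placeholders, not taken from a specific setup):

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicInitializer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder address; point this at your cluster
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("thing1", 10, (short) 3)
                    .configs(Map.of("cleanup.policy", "compact"));
            // Fails with a TopicExistsException (wrapped in an ExecutionException)
            // if the topic already exists, so handle that case for idempotency
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}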

For schema registration, you could use the auto.register.schemas option or use an HTTP client to call the Schema Registry REST API on application start.
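
For the REST variant, a hedged sketch using Java's built-in HTTP client could look like this (the registry URL, subject name, and schema are assumptions for illustration):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SchemaRegistration {
    public static void main(String[] args) throws Exception {
        // Assumed registry URL and subject; adapt to your environment
        String url = "http://localhost:8081/subjects/thing1-value/versions";
        // The Avro schema is embedded as an escaped JSON string
        String body = "{\"schema\": \"{\\\"type\\\": \\\"record\\\", \\\"name\\\": \\\"Thing\\\","
                + " \\\"fields\\\": [{\\\"name\\\": \\\"id\\\", \\\"type\\\": \\\"string\\\"}]}\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/vnd.schemaregistry.v1+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // On success the registry responds with the assigned schema id, e.g. {"id":1}
        System.out.println(response.body());
    }
}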

Benefits:

  • easy to use for developer teams
  • topic configuration is versioned together with the application code and won’t be out of sync

The downsides to this approach are:

  • enforcing policies and conventions (e.g. topic naming) in such a distributed setting can be difficult
  • increases complexity if multiple frameworks and programming languages are used
  • you need to establish clear rules for topic ownership (e.g. producers create and own the topics)

Terraform

Confluent’s Terraform provider lets you configure Confluent Cloud resources in a Terraform project. The provider cannot be used to manage clusters on Kubernetes, on bare metal, or from other Kafka SaaS providers.

A topic definition for an “orders” topic looks like this:

resource "confluent_kafka_topic" "orders" {
kafka_cluster {
id = confluent_kafka_cluster.basic-cluster.id
}
topic_name = "orders"
rest_endpoint = confluent_kafka_cluster.basic-cluster.rest_endpoint
credentials {
key = confluent_api_key.app-manager-kafka-api-key.id
secret = confluent_api_key.app-manager-kafka-api-key.secret
}

lifecycle {
prevent_destroy = true
}
}

To register a schema with terraform, you can use the confluent_schema resource:

resource "confluent_schema" "avro-purchase" {
  subject_name = "avro-purchase-value"
  format       = "AVRO"
  schema       = file("./schemas/avro/purchase.avsc")

  lifecycle {
    prevent_destroy = true
  }
}

To ensure that certain policies and rules are enforced, you can combine this with a tool like Checkov or TFLint and custom rules.

Strimzi

Strimzi is a Kubernetes operator that manages Kafka clusters and various other Kafka-related resources, such as topics, via k8s descriptor files.

Example topic configuration as a YAML descriptor:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824

At present, there is no resource kind for schema registry schemas. If you use Strimzi to manage topics, keep in mind that you need to manage schemas in a different way.

One approach could be to delegate schema registration to the applications, for example with the Schema Registry Maven Plugin, the auto.register.schemas option of a producer, or mechanisms provided by an application framework.
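
The producer-side variant boils down to serializer configuration. A minimal sketch (the broker and registry addresses are placeholders) could look like this:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ProducerSchemaConfig {
    public static Properties producerProps() {
        Properties props = new Properties();
        // Placeholder addresses; adapt to your environment
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");
        // The Avro serializer registers the record's schema on first use
        props.put("auto.register.schemas", true);
        return props;
    }
}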

Confluent for Kubernetes (CFK)

CFK is a Kubernetes operator to manage Confluent clusters and, of course, also Kafka topics and various other resources with k8s descriptor files.

CFK is not limited to clusters deployed on Kubernetes; it can also manage resources in Confluent Cloud.

The configuration for a “payment” topic could look like this:

---
apiVersion: platform.confluent.io/v1beta1
kind: KafkaTopic
metadata:
  name: payment
  namespace: confluent
spec:
  replicas: 1
  partitionCount: 1
  configs:
    cleanup.policy: "delete"

Then provide a ConfigMap with the Avro schema and a Schema resource to configure a schema for the values in this topic:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: schema-config
  namespace: confluent
data:
  schema: |
    {
      "namespace": "io.confluent.examples.clients.basicavro",
      "type": "record",
      "name": "Payment",
      "fields": [
        {"name": "id", "type": "string"},
        {"name": "amount", "type": "double"},
        {"name": "email", "type": "string"}
      ]
    }
---
apiVersion: platform.confluent.io/v1beta1
kind: Schema
metadata:
  name: payment-value
  namespace: confluent
spec:
  data:
    configRef: schema-config
    format: avro

If you already use CFK to manage your cluster, this is a simple way to implement topic management, either centralized or decentralized. The tricky part is managing and delegating the right permissions within the organization.

Summary

  • You should decide which management approach matches your company’s culture — centralized with strict policies and conventions, or a distributed approach with high autonomy for developer teams.
  • Think about ways to enforce policies or configure alerts when misconfigured topics are detected.
  • Consider custom tooling if none of the tools match your needs.
  • If you already use Kafka with Strimzi or CFK, then these operators could be used without any additional tooling.
  • Out of the listed command line tools, Jikkou seems to be the most extensible one.

At SPOUD, we offer data-streaming related services and consulting to our clients. Feel free to contact us if you need support on your event streaming journey!
