Countless organizations today are developing event-driven, cloud-native applications to solve problems across many domains. Typically, these applications are composed of microservices that use the Kafka Client or Kafka Streams APIs and are deployed in Kubernetes or other cloud container services such as Azure Container Instances or Amazon Elastic Container Service. With today’s technology, we are finally on the verge of solving the age-old problem of code that works in one environment, such as the developer’s local machine, but not in another. To get there, though, we need to handle our application configuration in a sufficiently flexible way. This article presents a set of requirements and an approach for handling application configuration for Kafka Client and Kafka Streams services in the Java ecosystem (Java, Kotlin, Scala).
Java developers typically use Spring Boot to meet their application-configuration needs. Spring Boot offers an excellent system for configuring cloud-native microservices. Spring for Apache Kafka also offers a convenience layer above the normal Kafka client APIs, with abstractions such as KafkaTemplate and KafkaListener. This is often an enticing way to set up an application quickly and is most appropriate when adding Kafka interaction to an application whose core functionality does not involve Kafka. For example, if you’re developing a backend for a single-page application and need to occasionally send a Kafka message with some metrics, adding Spring for Apache Kafka to a Spring WebFlux application is an excellent combination.
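To make that concrete, here is a minimal sketch of what that convenience layer looks like. The class, topic, and group names are illustrative, and the producer and consumer settings are assumed to come from Spring Boot auto-configuration.

```java
// Hypothetical service showing the Spring for Apache Kafka abstractions;
// names are illustrative, and serializers/bootstrap servers are assumed to
// come from Spring Boot auto-configuration.
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class MetricsPublisher {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public MetricsPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Fire-and-forget publish of a metrics event.
    public void publishPageView(String page) {
        kafkaTemplate.send("page-view-metrics", page);
    }

    // Spring creates and manages the underlying Kafka consumer for this method.
    @KafkaListener(topics = "page-view-metrics", groupId = "metrics-logger")
    public void onPageView(String page) {
        System.out.println("Page viewed: " + page);
    }
}
```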
But in event-driven microservices, a Kafka-interacting process may be very lightweight. Such a microservice may have no core functionality other than reading data from Kafka, transforming it, and writing it back. The Kafka Streams API is a powerful abstraction for implementing such a service. Other common features of the Spring ecosystem (such as the web framework and dependency injection) may not be needed at all.
The real downside of Spring for Apache Kafka is that it obscures the interaction between your application and Kafka. It is one more thing for the developer to learn and be responsible for, and it can make the application harder to reason about and to tune for performance and delivery guarantees. Furthermore, the Spring Boot application-configuration system doesn’t work well with the Kafka Client and Streams APIs. The Kafka APIs expect their configuration in a Java Properties object, as is customary in the Kafka ecosystem, but Spring Boot doesn’t provide a way to pull all configured properties under a specified prefix into a separate properties object. Setting up a Properties object therefore requires code that plucks each configuration property by name out of Spring configuration and copies it into your Kafka Properties. So, if you want to use the Kafka Client and Kafka Streams APIs directly, how do you handle application configuration?
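To illustrate the point, here is a hedged sketch of the per-property copying just described; the class and the property names are hypothetical.

```java
// Hypothetical bridging code: each Kafka setting must be plucked from Spring
// configuration by name and copied into the Properties object the clients expect.
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaPropertiesBridge {

    @Value("${app.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Value("${app.kafka.application-id}")
    private String applicationId;

    @Value("${app.kafka.auto-offset-reset:earliest}")
    private String autoOffsetReset;

    @Bean
    public Properties kafkaStreamsProperties() {
        Properties props = new Properties();
        // One line of glue per property, and the list grows with every tuning knob.
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
        return props;
    }
}
```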
Fortunately, there’s a simple way to manage Kafka Client and Streams configuration with Spring-like flexibility without writing a lot of boilerplate code. Before we show you how, let’s establish some requirements for an application configuration approach that will give us a system compatible with Kafka Client and Streams APIs as well as with container orchestration systems—and which prioritizes developer productivity.
- Application configuration data must be stored in Java properties files or be easily converted to a Properties object.
- It must be easy to create environment-specific configuration overrides without recompiling the code and rebuilding container images.
- Local development must be easy and convenient.
Ideally, a developer can simply run the application locally and have it use reasonable defaults for development (such as connecting to localhost for the Kafka bootstrap servers). If the developer needs to override something for local testing purposes, she should be able to do so without modifying the code and risking the commit of temporary changes to source control. For cloud deployment, one or more alternative locations for application configuration must be available that override any built-in default configuration values, and the application should not be built separately for every environment it needs to run in. Spring Boot readily meets requirements 2 and 3 but not 1.
An alternative to Spring Boot for application configuration is Apache Commons Configuration. It provides file-location strategies that read configuration from both the Java classpath and filesystem locations. The Java classpath can be used to store built-in configuration with local-development-friendly defaults. Environment overrides come from external files mounted via Docker volumes, Kubernetes ConfigMaps, etc. Apache Commons Configuration also offers a CompositeConfiguration that can set up a priority order for where to look for configuration.
We are now ready to see it in action (the complete code can be found on GitHub):
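Below is a hedged sketch of such a loadConfiguration() helper built with Apache Commons Configuration 2.x; the enclosing class name (AppConfigLoader) and the choice to silently skip missing files are assumptions here, and the complete version linked above is authoritative.

```java
// A sketch of a reusable configuration loader based on Apache Commons Configuration 2.x.
import java.util.Properties;

import org.apache.commons.configuration2.CompositeConfiguration;
import org.apache.commons.configuration2.ConfigurationConverter;
import org.apache.commons.configuration2.builder.fluent.Configurations;
import org.apache.commons.configuration2.ex.ConfigurationException;

public final class AppConfigLoader {

    private AppConfigLoader() {
    }

    /**
     * Loads the given properties files in priority order (earlier arguments
     * override later ones). Bare file names are resolved from the classpath;
     * absolute paths are resolved from the filesystem. Files that cannot be
     * found (for example, environment overrides that are absent during local
     * development) are simply skipped.
     */
    public static Properties loadConfiguration(String... configFileNames) {
        Configurations configs = new Configurations();
        CompositeConfiguration composite = new CompositeConfiguration();
        for (String configFileName : configFileNames) {
            try {
                // CompositeConfiguration searches its children in the order
                // they are added, so earlier files take priority.
                composite.addConfiguration(configs.properties(configFileName));
            } catch (ConfigurationException e) {
                // Missing file: fall through to lower-priority sources.
            }
        }
        return ConfigurationConverter.getProperties(composite);
    }
}
```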
Here, we’ve written a simple static method called loadConfiguration() that takes a variable number of configuration file names/paths and returns a Properties object that can be provided to a KafkaStreams, KafkaConsumer, or KafkaProducer constructor. The configFileNames argument can contain just a file name (e.g., default-app-config.properties), in which case it will be found on the application’s classpath, or a fully specified path (e.g., /app-config/env-app-config.properties), which is useful for specifying a well-known location at which to mount a Docker volume or Kubernetes ConfigMap. The loadConfiguration() method could easily be placed in a common library and used across all your Kafka-enabled microservices should standardization of the approach across the organization be necessary.
The loadConfiguration() method can then simply be called with a hierarchy of configuration file sources in priority order (high-priority sources override lower-priority sources). For example, let’s suppose we’re setting up for a Kubernetes deployment and want to allow for Kubernetes Secrets, Kubernetes ConfigMaps, and default built-in configuration, in that order of priority:
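Under those assumptions, the call site might look like this (the AppConfigLoader class name is hypothetical; the file names and the streamsConfig variable follow the article):

```java
// Highest-priority sources come first: Secret, then ConfigMap, then built-in defaults.
Properties streamsConfig = AppConfigLoader.loadConfiguration(
        "/app-config/env-secrets-app-config.properties",  // Kubernetes Secret
        "/app-config/env-app-config.properties",          // Kubernetes ConfigMap
        "default-app-config.properties");                 // defaults from the classpath
```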
The Kubernetes Secret would be mounted at /app-config/env-secrets-app-config.properties, the Kubernetes ConfigMap would be mounted at /app-config/env-app-config.properties, and default-app-config.properties would be part of the source code and built into the jar file (normally in the src/main/resources directory for standard Maven and Gradle builds).
This setup would work equally well when running the application in any container orchestration system that allows mounting a directory of configuration files as well as running directly in VMs or on bare metal.
Finally, we have the streamsConfig object that we can simply pass into our KafkaStreams object:
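A minimal sketch of that last step, continuing from the snippet above; the pass-through topology and topic names are placeholders for the service’s real stream logic, and default serdes are assumed to be set in the configuration files.

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;

// Build a trivial placeholder topology and hand the loaded Properties to KafkaStreams.
StreamsBuilder builder = new StreamsBuilder();
builder.stream("input-topic").to("output-topic");  // hypothetical topic names

KafkaStreams streams = new KafkaStreams(builder.build(), streamsConfig);
streams.start();
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
```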
This configuration system makes no assumptions about which configuration values should be provided. Values are simply read from the properties files, overrides are applied in priority order, and the result is passed on to the Kafka API. So, when a Kafka configuration property needs to be tweaked, there is no code update, only a configuration-file update. The system could readily be extended with additional files for non-Kafka configuration if the service needed it.
We have therefore succeeded in meeting all our application-configuration requirements without a large amount of boilerplate code. Spring Boot and Spring for Apache Kafka offer one way of making a configurable Kafka service. Directly using Kafka APIs with Apache Commons Configuration offers another. There is no single correct way to implement a Kafka service, but it’s essential to know what tools are available so you can make an informed decision. Finally, if you need help developing event-driven applications or services with Apache Kafka, please reach out with any questions. We’d love to hear from you!
Jason Lenthe
Consultant
Jason Lenthe is a Solutions Architect at Anexinet and a certified developer and administrator for Apache Kafka with over 20 years of software development experience.