Which statements are correct about partitions? (Choose two.)
B, C
Which secure communication is supported between the REST proxy and REST clients?
A
Which valid security protocols are included for broker listeners? (Choose three.)
A, B, D
By default, what do Kafka broker network connections have?
A
Explanation:
By default, Kafka brokers use the PLAINTEXT protocol for network communication. This means:
● No encryption – data is sent in plain text.
● No authentication – any client can connect without verifying identity.
● No authorization – there are no access control checks by default.
Security features like TLS, SASL, and ACLs must be explicitly configured.
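As a minimal sketch of what that explicit configuration can look like on the client side, the snippet below builds producer/consumer properties for a SASL_SSL listener. The broker address, SASL mechanism, credentials, and truststore path are all placeholder assumptions, not values from the question.

```java
import java.util.Properties;

public class SecureClientConfig {
    public static Properties build() {
        Properties props = new Properties();
        // Hypothetical broker address; with no security config, the client
        // would fall back to the PLAINTEXT default described above.
        props.put("bootstrap.servers", "broker1:9093");
        // Encrypt traffic (TLS) and authenticate (SASL) instead of PLAINTEXT.
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.scram.ScramLoginModule required "
            + "username=\"app-user\" password=\"app-secret\";"); // placeholder credentials
        // Truststore so the client can verify the broker's TLS certificate.
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        return props;
    }
}
```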
Which of the following are Kafka Connect internal topics? (Choose three.)
A, C, E
Explanation:
● connect-configs stores connector configurations.
● connect-status tracks the status of connectors and tasks (e.g., RUNNING, FAILED).
● connect-offsets stores source connector offsets for reading from external systems.
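These names are the common defaults; the actual names are set by the worker configs config.storage.topic, status.storage.topic, and offset.storage.topic. As a quick sketch, assuming default names and a placeholder bootstrap address, the AdminClient can confirm the internal topics exist:

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ListConnectTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            // Print topic names matching the default Connect internal-topic prefix.
            admin.listTopics().names().get().stream()
                 .filter(name -> name.startsWith("connect-"))
                 .forEach(System.out::println);
        }
    }
}
```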
You are using Confluent Schema Registry to provide a RESTful interface for storing and retrieving
schemas.
Which types of schemas are supported? (Choose three.)
A, C, E
Explanation:
Avro is the original and most commonly used schema format supported by Schema Registry.
Confluent Schema Registry supports JSON Schema for validation and compatibility checks.
Protocol Buffers (Protobuf) are supported for schema management in Schema Registry.
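As a sketch of the RESTful interface mentioned above, the snippet below registers an Avro schema over HTTP; the registry URL and subject name (users-value) are placeholder assumptions. The same endpoint accepts schemaType values of JSON and PROTOBUF for the other two supported formats.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterSchema {
    public static void main(String[] args) throws Exception {
        // The Avro schema is itself JSON, so it is escaped inside the request body.
        String body = "{\"schemaType\":\"AVRO\",\"schema\":"
            + "\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"User\\\","
            + "\\\"fields\\\":[{\\\"name\\\":\\\"id\\\",\\\"type\\\":\\\"long\\\"}]}\"}";
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8081/subjects/users-value/versions")) // placeholders
            .header("Content-Type", "application/vnd.schemaregistry.v1+json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // e.g. {"id":1}
    }
}
```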
Multiple clients are sharing a Kafka cluster.
As an administrator, how would you ensure that Kafka resources are distributed fairly to all clients?
A
Explanation:
Kafka quotas allow administrators to control and limit the rate of data production and consumption
per client (producer/consumer), ensuring fair use of broker resources among multiple clients.
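As a sketch of what configuring such a quota might look like (assuming Kafka 2.6+ and a hypothetical client id, reporting-app), the AdminClient can set per-client byte-rate limits:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;

public class SetClientQuota {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            // Quota entity keyed by client.id; "reporting-app" is hypothetical.
            ClientQuotaEntity entity = new ClientQuotaEntity(
                Map.of(ClientQuotaEntity.CLIENT_ID, "reporting-app"));
            // Cap produce and fetch throughput at 1 MB/s each for that client.
            ClientQuotaAlteration alteration = new ClientQuotaAlteration(entity, List.of(
                new ClientQuotaAlteration.Op("producer_byte_rate", 1048576.0),
                new ClientQuotaAlteration.Op("consumer_byte_rate", 1048576.0)));
            admin.alterClientQuotas(List.of(alteration)).all().get();
        }
    }
}
```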
A customer has a use case for a ksqlDB persistent query. You need to make sure that duplicate
messages are not processed and messages are not skipped.
Which property should you use?
A
Explanation:
processing.guarantee=exactly_once ensures that messages are processed exactly once by ksqlDB,
preventing both duplicates and message loss.
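ksqlDB runs on Kafka Streams, so the same guarantee in a plain Streams application is a single config entry. This sketch uses a hypothetical application id and a placeholder bootstrap address:

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOnceConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "dedup-app");           // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
        // The setting named in the question; ksqlDB forwards it to its
        // embedded Kafka Streams runtime.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        return props;
    }
}
```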
If a broker's JVM garbage collection takes too long, what can occur?
B
Explanation:
If the broker's JVM garbage collection (GC) pause is too long, it may fail to send heartbeats to
ZooKeeper within the expected interval. As a result, ZooKeeper considers the broker dead, and the
broker may be removed from the cluster, triggering leader elections and partition reassignments.
You are managing a Kafka cluster with five brokers (broker ids '0', '1', '2', '3', '4') and three
ZooKeeper nodes. There are 100 topics, five partitions per topic, and a replication factor of three on
the cluster. Broker id '0' is currently the Controller, and this broker suddenly fails.
Which statements are correct? (Choose three.)
A, B, C
Explanation:
Kafka relies on ZooKeeper’s ephemeral nodes to detect if a broker (controller) goes down and to
elect a new controller.
The controller manages partition leadership assignments and handles leader election when a broker
fails.
The epoch number ensures coordination and avoids outdated controllers acting on stale data.
When a broker goes down, what will the Controller do?
B
Explanation:
When a broker goes down, the Controller detects the failure and triggers a leader election for all
partitions that had their leader on the failed broker. The leader is chosen from the in-sync replicas
(ISRs) of each partition.
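A sketch of how an administrator might inspect the current leader and ISR per partition, assuming a Kafka 3.1+ client (for allTopicNames()) and a placeholder bootstrap address:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class ShowLeadersAndIsr {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            TopicDescription description =
                admin.describeTopics(List.of("t1")).allTopicNames().get().get("t1");
            // Each partition reports its current leader and the ISR
            // from which the Controller elects a replacement leader.
            description.partitions().forEach(p ->
                System.out.printf("partition %d leader=%s isr=%s%n",
                    p.partition(), p.leader(), p.isr()));
        }
    }
}
```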
Which technologies can be used to perform event stream processing? (Choose two.)
B, C
Explanation:
Kafka Streams is a client library for building real-time applications that process and analyze data
stored in Kafka.
ksqlDB enables event stream processing using SQL-like queries, allowing real-time transformation
and analysis of Kafka topics.
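As a minimal Kafka Streams sketch of the first option, the topology below reads from a hypothetical input-events topic, uppercases each value, and writes the result to output-events:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

public class UppercaseStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app");      // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        StreamsBuilder builder = new StreamsBuilder();
        // Read each event, transform it in real time, and write it back to Kafka.
        builder.stream("input-events", Consumed.with(Serdes.String(), Serdes.String()))
               .mapValues(value -> value.toUpperCase())
               .to("output-events", Produced.with(Serdes.String(), Serdes.String()));
        new KafkaStreams(builder.build(), props).start();
    }
}
```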
How can load balancing of Kafka clients across multiple brokers be accomplished?
A
Explanation:
Partitions are the primary mechanism for achieving load balancing in Kafka. When a topic has
multiple partitions, Kafka clients (producers and consumers) can distribute the load across brokers
hosting these partitions.
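A short producer sketch illustrating this: given a multi-partition topic (here a hypothetical events topic), keyed records hash across the partitions, and therefore across the brokers leading them:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PartitionedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 10; i++) {
                // Different keys hash to different partitions, spreading
                // writes across the brokers that lead those partitions.
                producer.send(new ProducerRecord<>("events", "key-" + i, "payload-" + i));
            }
        }
    }
}
```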
A company is setting up a log ingestion use case where they will consume logs from numerous
systems. The company wants to tune Kafka for the utmost throughput.
In this scenario, what acknowledgment setting makes the most sense?
A
Explanation:
acks=0 provides the highest throughput because the producer does not wait for any
acknowledgment from the broker. This minimizes latency and maximizes performance.
However, it comes at the cost of no durability guarantees — messages may be lost if the broker fails
before writing them. This setting is suitable when throughput is critical and occasional data loss is
acceptable, such as in some log ingestion use cases where logs are also stored elsewhere.
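A hedged sketch of a throughput-first producer configuration along these lines; the batching and compression values are illustrative assumptions, not tuned recommendations:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ThroughputProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Fire-and-forget: no broker acknowledgment, maximum throughput,
        // no durability guarantee.
        props.put(ProducerConfig.ACKS_CONFIG, "0");
        // Batching and compression settings that typically accompany a
        // throughput-first producer (example values only).
        props.put(ProducerConfig.LINGER_MS_CONFIG, "20");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "65536");
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        return props;
    }
}
```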
Your Kafka cluster has four brokers. The topic t1 on the cluster has two partitions and a replication
factor of three. You create a Consumer Group with four consumers that subscribes to t1.
In the scenario above, how many Controllers are in the Kafka cluster?
A
Explanation:
In a Kafka cluster, only one broker acts as the Controller at any given time. The Controller is
responsible for managing cluster metadata, such as partition leadership and broker status. Even if the
cluster has multiple brokers (in this case, four), only one is elected as the Controller, and the others
serve as regular brokers. If the current Controller fails, another broker is automatically elected to take
its place.
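As a sketch, the AdminClient can show which single broker currently holds the Controller role (the bootstrap address is a placeholder):

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class FindController {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            // describeCluster() reports exactly one controller node at a time.
            Node controller = admin.describeCluster().controller().get();
            System.out.println("Current controller: broker id " + controller.id());
        }
    }
}
```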