Confluent CCAAK Practice Test

Confluent Certified Administrator for Apache Kafka

Last exam update: Nov 18, 2025
Page 1 out of 4. Viewing questions 1-15 out of 54

Question 1

Which statements are correct about partitions? (Choose two.)

  • A. A partition in Kafka will be represented by a single segment on a disk.
  • B. A partition is comprised of one or more segments on a disk.
  • C. All partition segments reside in a single directory on a broker disk.
  • D. A partition's size is determined by the largest segment on a disk.
Answer: B, C


Explanation:
A partition's log is stored on disk as one or more segment files, and all of a partition's segments
reside in a single directory (named after the topic and partition) on the broker. A partition is
therefore not a single segment, and its size is the sum of its segments, not the size of the largest
one.


Question 2

Which secure communication is supported between the REST proxy and REST clients?

  • A. TLS (HTTPS)
  • B. MD5
  • C. SCRAM
  • D. Kerberos
Answer: A


Explanation:
The REST Proxy exposes an HTTP interface, so traffic between REST clients and the proxy is secured
with TLS (HTTPS). MD5 is a hash function, and SCRAM and Kerberos are SASL authentication mechanisms
used between Kafka clients and brokers, not between HTTP clients and the REST Proxy.
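
For illustration, a REST client talks to a TLS-enabled REST Proxy like any other HTTPS endpoint.
A minimal sketch using Java's built-in HTTP client, assuming a proxy at a placeholder host and port
whose certificate the JVM already trusts:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestProxyTlsClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical HTTPS endpoint of a TLS-enabled REST Proxy.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://rest-proxy.example.com:8082/topics"))
            .header("Accept", "application/vnd.kafka.v2+json")
            .GET()
            .build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON array of topic names
    }
}
```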


Question 3

Which valid security protocols are included for broker listeners? (Choose three.)

  • A. PLAINTEXT
  • B. SSL
  • C. SASL
  • D. SASL_SSL
  • E. GSSAPI
Answer: A, B, D


Explanation:
Broker listeners accept four security protocols: PLAINTEXT, SSL, SASL_PLAINTEXT, and SASL_SSL. SASL
on its own is not a listener security protocol, and GSSAPI is a SASL mechanism (Kerberos), not a
security protocol.
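
As a sketch of one of these protocols in use, the following configures a Java Admin client for a
listener secured with SASL_SSL; the bootstrap address, the PLAIN mechanism, and the credentials are
all assumptions for illustration:

```java
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.config.SaslConfigs;

public class SaslSslClient {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder address of a broker listener configured with SASL_SSL.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1.example.com:9093");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"alice\" password=\"alice-secret\";");
        try (Admin admin = Admin.create(props)) {
            System.out.println("Cluster id: " + admin.describeCluster().clusterId().get());
        }
    }
}
```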


Question 4

By default, what do Kafka broker network connections have?

  • A. No encryption, no authentication and no authorization
  • B. Encryption, but no authentication or authorization
  • C. No encryption, no authorization, but have authentication
  • D. Encryption and authentication, but no authorization
Answer: A


Explanation:
By default, Kafka brokers use the PLAINTEXT protocol for network communication. This means:
● No encryption – data is sent in plain text.
● No authentication – any client can connect without verifying identity.
● No authorization – there are no access control checks by default.
Security features like TLS, SASL, and ACLs must be explicitly configured.
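
A minimal sketch of the default in action: a Java client given nothing but a bootstrap address
connects over PLAINTEXT (the address is a placeholder):

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class DefaultPlaintextClient {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // No security.protocol is set, so the client uses PLAINTEXT:
        // unencrypted, unauthenticated, and subject to no ACL checks by default.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            System.out.println("Brokers: " + admin.describeCluster().nodes().get());
        }
    }
}
```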


Question 5

Which of the following are Kafka Connect internal topics? (Choose three.)

  • A. connect-configs
  • B. connect-distributed
  • C. connect-status
  • D. connect-standalone
  • E. connect-offsets
Answer: A, C, E


Explanation:
connect-configs stores connector configurations.
connect-status tracks the status of connectors and tasks (e.g., RUNNING, FAILED).
connect-offsets stores source connector offsets for reading from external systems.
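
As a quick check, an Admin client can list these topics on a running cluster. A sketch assuming the
default "connect-" naming prefix (each worker's internal topic names are configurable):

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ListConnectTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            admin.listTopics().names().get().stream()
                .filter(name -> name.startsWith("connect-")) // default prefix
                .sorted()
                .forEach(System.out::println); // connect-configs, connect-offsets, connect-status
        }
    }
}
```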


Question 6

You are using Confluent Schema Registry to provide a RESTful interface for storing and retrieving
schemas.
Which types of schemas are supported? (Choose three.)

  • A. Avro
  • B. gRPC
  • C. JSON
  • D. Thrift
  • E. Protobuf
Answer: A, C, E


Explanation:
Avro is the original and most commonly used schema format supported by Schema Registry.
Confluent Schema Registry supports JSON Schema for validation and compatibility checks.
Protocol Buffers (Protobuf) are supported for schema management in Schema Registry.
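
A sketch of producer settings that register Avro schemas automatically, assuming the Confluent Avro
serializer dependency on the classpath and a Schema Registry at a placeholder URL; the JSON Schema
and Protobuf serializers are configured the same way:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class AvroProducerConfig {
    public static Properties props() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringSerializer");
        // Registers the record's Avro schema with Schema Registry on first send.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
            "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081"); // placeholder URL
        return props;
    }
}
```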


Question 7

Multiple clients are sharing a Kafka cluster.
As an administrator, how would you ensure that Kafka resources are distributed fairly to all clients?

  • A. Quotas
  • B. Consumer Groups
  • C. Rebalancing
  • D. ACLs
Answer: A


Explanation:
Kafka quotas allow administrators to control and limit the rate of data production and consumption
per client (producer/consumer), ensuring fair use of broker resources among multiple clients.
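
Quotas can be set with the kafka-configs tool or programmatically. A sketch using the Admin API,
where the client id and byte rates are placeholder values:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;

public class SetClientQuota {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Limit the client with client.id "reporting-app" (a placeholder)
            // to 1 MB/s produce and 2 MB/s fetch throughput.
            ClientQuotaEntity entity = new ClientQuotaEntity(
                Map.of(ClientQuotaEntity.CLIENT_ID, "reporting-app"));
            ClientQuotaAlteration alteration = new ClientQuotaAlteration(entity, List.of(
                new ClientQuotaAlteration.Op("producer_byte_rate", 1_048_576.0),
                new ClientQuotaAlteration.Op("consumer_byte_rate", 2_097_152.0)));
            admin.alterClientQuotas(List.of(alteration)).all().get();
        }
    }
}
```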


Question 8

A customer has a use case for a ksqlDB persistent query. You need to make sure that duplicate
messages are not processed and messages are not skipped.
Which property should you use?

  • A. processing.guarantee=exactly_once
  • B. ksql.streams.auto.offset.reset=earliest
  • C. ksql.streams.auto.offset.reset=latest
  • D. ksql.fail.on.production.error=false
Answer: A


Explanation:
processing.guarantee=exactly_once ensures that messages are processed exactly once by ksqlDB,
preventing both duplicates and message loss.
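
A ksqlDB persistent query runs as a Kafka Streams application under the hood, so the property
corresponds to the Streams processing.guarantee setting. A minimal sketch of the equivalent
configuration (the application id is a placeholder; newer Kafka versions also offer
exactly_once_v2):

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOnceProps {
    public static Properties props() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-persistent-query"); // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Each input record affects the output exactly once:
        // no duplicates on producer retry, no skipped messages.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, "exactly_once");
        return props;
    }
}
```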


Question 9

If a broker's JVM garbage collection takes too long, what can occur?

  • A. There will be a trigger of the broker's log cleaner thread.
  • B. ZooKeeper believes the broker to be dead.
  • C. There is backpressure to, and pausing of, Kafka clients.
  • D. Log files written to disk are loaded into the page cache.
Answer: B


Explanation:
If the broker's JVM garbage collection (GC) pause is too long, it may fail to send heartbeats to
ZooKeeper within the expected interval. As a result, ZooKeeper considers the broker dead, and the
broker may be removed from the cluster, triggering leader elections and partition reassignments.


Question 10

You are managing a Kafka cluster with five brokers (broker ids '0', '1', '2', '3', '4') and three
ZooKeeper nodes. There are 100 topics, five partitions for each topic, and replication factor three
on the cluster. Broker id '0' is currently the Controller, and this broker suddenly fails.
Which statements are correct? (Choose three.)

  • A. Kafka uses ZooKeeper's ephemeral node feature to elect a controller.
  • B. The Controller is responsible for electing Leaders among the partitions and replicas.
  • C. The Controller uses the epoch number to prevent a split brain scenario.
  • D. The broker id is used as the epoch number to prevent a split brain scenario.
  • E. The number of Controllers should always be equal to the number of brokers alive in the cluster.
  • F. The Controller is responsible for reassigning partitions to the consumers in a Consumer Group.
Answer: A, B, C


Explanation:
Kafka relies on ZooKeeper’s ephemeral nodes to detect if a broker (controller) goes down and to
elect a new controller.
The controller manages partition leadership assignments and handles leader election when a broker
fails.
The epoch number ensures coordination and avoids outdated controllers acting on stale data.


Question 11

When a broker goes down, what will the Controller do?

  • A. Wait for a follower to take the lead.
  • B. Trigger a leader election among the remaining followers to distribute leadership.
  • C. Become the leader for the topic/partition that needs a leader, pending the broker return in the cluster.
  • D. Automatically elect the least loaded broker to become the leader for every orphaned partition.
Answer: B


Explanation:
When a broker goes down, the Controller detects the failure and triggers a leader election for all
partitions that had their leader on the failed broker. The leader is chosen from the in-sync replicas
(ISRs) of each partition.


Question 12

Which technologies can be used to perform event stream processing? (Choose two.)

  • A. Confluent Schema Registry
  • B. Apache Kafka Streams
  • C. Confluent ksqlDB
  • D. Confluent Replicator
Answer: B, C


Explanation:
Kafka Streams is a client library for building real-time applications that process and analyze data
stored in Kafka.
ksqlDB enables event stream processing using SQL-like queries, allowing real-time transformation
and analysis of Kafka topics.
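
A minimal Kafka Streams sketch of event stream processing, with placeholder topic names: it reads
records, transforms each value, and writes the result to another topic. The same logic could be
expressed as a one-line ksqlDB query.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

public class UppercaseStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-events", Consumed.with(Serdes.String(), Serdes.String()))
               .mapValues(value -> value.toUpperCase())   // per-record transformation
               .to("output-events", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```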


Question 13

How can load balancing of Kafka clients across multiple brokers be accomplished?

  • A. Partitions
  • B. Replicas
  • C. Offsets
  • D. Connectors
Answer: A


Explanation:
Partitions are the primary mechanism for achieving load balancing in Kafka. When a topic has
multiple partitions, Kafka clients (producers and consumers) can distribute the load across brokers
hosting these partitions.
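
A sketch of creating a topic whose partitions spread leadership across brokers; the topic name and
counts are illustrative:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreatePartitionedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Six partitions spread leadership across brokers, so producers and
            // the consumers in a group share the load.
            admin.createTopics(List.of(new NewTopic("web-logs", 6, (short) 3))).all().get();
        }
    }
}
```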


Question 14

A company is setting up a log ingestion use case where they will consume logs from numerous
systems. The company wants to tune Kafka for the utmost throughput.
In this scenario, what acknowledgment setting makes the most sense?

  • A. acks=0
  • B. acks=1
  • C. acks=all
  • D. acks=undefined
Answer: A


Explanation:
acks=0 provides the highest throughput because the producer does not wait for any
acknowledgment from the broker. This minimizes latency and maximizes performance.
However, it comes at the cost of no durability guarantees — messages may be lost if the broker fails
before writing them. This setting is suitable when throughput is critical and occasional data loss is
acceptable, such as in some log ingestion use cases where logs are also stored elsewhere.
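
A sketch of a throughput-oriented producer using acks=0; the topic name and tuning values are
illustrative, and the batching and compression settings are common companions rather than
requirements:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FireAndForgetLogProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "0");        // fire-and-forget: no broker ack
        props.put(ProducerConfig.LINGER_MS_CONFIG, "20");  // batch records for throughput
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("app-logs", "host-1", "GET /index.html 200"));
        }
    }
}
```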


Question 15

Your Kafka cluster has four brokers. The topic t1 on the cluster has two partitions, and it has a
replication factor of three. You create a Consumer Group with four consumers, which subscribes to
t1.
In the scenario above, how many Controllers are in the Kafka cluster?

  • A. One
  • B. Two
  • C. Three
  • D. Four
Answer: A


Explanation:
In a Kafka cluster, only one broker acts as the Controller at any given time. The Controller is
responsible for managing cluster metadata, such as partition leadership and broker status. Even if the
cluster has multiple brokers (in this case, four), only one is elected as the Controller, and others serve
as regular brokers. If the current Controller fails, another broker is automatically elected to take its
place.
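
This can be verified at runtime: the Admin API reports the single current controller. A sketch with
a placeholder bootstrap address:

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class WhoIsController {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // describeCluster() reports exactly one controller node.
            Node controller = admin.describeCluster().controller().get();
            System.out.println("Controller is broker " + controller.id());
        }
    }
}
```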
