Confluent CCDAK Practice Test

Confluent Certified Developer for Apache Kafka

Last exam update: Dec 02, 2025
Questions 1-15 of 150

Question 1

Where are the ACLs stored in a Kafka cluster by default?

  • A. Inside the broker's data directory
  • B. Under Zookeeper node /kafka-acl/
  • C. In Kafka topic __kafka_acls
  • D. Inside the Zookeeper's data directory
Answer: B


Explanation:
By default, ACLs are stored in the ZooKeeper node /kafka-acl/.


Question 2

Is KSQL ANSI SQL compliant?

  • A. Yes
  • B. No
Answer: B


Explanation:
KSQL is not ANSI SQL compliant; for now, there are no defined standards for streaming SQL languages.


Question 3

Which information is not stored inside ZooKeeper? (select two)

  • A. Schema Registry schemas
  • B. Consumer offset
  • C. ACL information
  • D. Controller registration
  • E. Broker registration info
Answer: A, B


Explanation:
Consumer offsets are stored in the Kafka topic __consumer_offsets, and the Schema Registry stores its schemas in the _schemas topic.


Question 4

Which KSQL queries write to Kafka?

  • A. COUNT and JOIN
  • B. SHOW STREAMS and EXPLAIN <query> statements
  • C. CREATE STREAM WITH <topic> and CREATE TABLE WITH <topic>
  • D. CREATE STREAM AS SELECT and CREATE TABLE AS SELECT
Answer: C, D


Explanation:
SHOW STREAMS and EXPLAIN <query> statements run against the KSQL server that the KSQL client is
connected to. They don't communicate directly with Kafka. CREATE STREAM WITH <topic> and
CREATE TABLE WITH <topic> write metadata to the KSQL command topic. Persistent queries based
on CREATE STREAM AS SELECT and CREATE TABLE AS SELECT read and write to Kafka topics. Non-
persistent queries based on SELECT that are stateless only read from Kafka topics, for example
SELECT … FROM foo WHERE …. Non-persistent queries that are stateful read and write to Kafka,
for example, COUNT and JOIN. The data in Kafka is deleted automatically when you terminate the
query with CTRL-C.


Question 5

Two consumers, C1 and C2, belong to the same group G and are subscribed to topics T1 and T2.
Each of the topics has 3 partitions. How will the partitions be assigned to the consumers when the
partition assignment strategy is the round robin assignor?

  • A. C1 will be assigned partitions 0 and 2 from T1 and partition 1 from T2. C2 will have partition 1 from T1 and partitions 0 and 2 from T2.
  • B. Two consumers cannot read from two topics at the same time
  • C. C1 will be assigned partitions 0 and 1 from T1 and T2, C2 will be assigned partition 2 from T1 and T2.
  • D. All consumers will read from all partitions
Answer: A


Explanation:
The correct option is the only one where the two consumers share an equal number of partitions
amongst the two topics of three partitions. An interesting article to read
ishttps://medium.com/@anyili0928/what-i-have-learned-from-kafka-partition-assignment-strategy-
799fdf15d3ab
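
As a rough sketch of how a consumer in group G could opt into this strategy (the bootstrap address is a placeholder and error handling is omitted), the round robin assignor is selected through partition.assignment.strategy:

    import java.time.Duration;
    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.RoundRobinAssignor;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class RoundRobinConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "G");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // Use the round robin strategy instead of the default range assignor.
            props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, RoundRobinAssignor.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Arrays.asList("T1", "T2"));
                // With two such consumers in group G, the assignor alternates over
                // T1-0, T1-1, T1-2, T2-0, T2-1, T2-2, so each consumer gets three partitions.
                consumer.poll(Duration.ofSeconds(1));
            }
        }
    }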


Question 6

A client connects to a broker in the cluster and sends a fetch request for a partition in a topic. It gets
a NotLeaderForPartitionException in the response. How does the client handle this situation?

  • A. Get the Broker id from Zookeeper that is hosting the leader replica and send request to it
  • B. Send metadata request to the same broker for the topic and select the broker hosting the leader replica
  • C. Send metadata request to Zookeeper for the topic and select the broker hosting the leader replica
  • D. Send fetch request to each Broker in the cluster
Answer: B


Explanation:
In case the consumer has the wrong leader of a partition, it will issue a metadata request. The
Metadata request can be handled by any node, so clients know afterwards which broker are the
designated leader for the topic partitions. Produce and consume requests can only be sent to the
node hosting partition leader.


Question 7

What is the risk of increasing max.in.flight.requests.per.connection while also enabling retries in a
producer?

  • A. At least once delivery is not guaranteed
  • B. Message order not preserved
  • C. Reduce throughput
  • D. Less resilient
Answer: B


Explanation:
Some messages may require multiple retries. If there are more than 1 requests in flight, it may result
in messages received out of order. Note an exception to this rule is if you enable the producer
settingenable.idempotence=true which takes care of the out of ordering case on its own.
Seehttps://issues.apache.org/jira/browse/KAFKA-5494
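
A minimal producer sketch of this configuration (the bootstrap address is a placeholder), where retries and several in-flight requests are made safe for ordering by enabling idempotence:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class OrderSafeProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            // Retries combined with several in-flight requests can reorder messages...
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
            props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
            // ...unless idempotence is enabled, which preserves per-partition ordering
            // (idempotence requires max.in.flight.requests.per.connection <= 5).
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // send records as usual
            }
        }
    }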


Question 8

A Kafka producer application wants to send log messages to a topic without including any key.
Which properties are mandatory in the producer configuration? (select three)

  • A. bootstrap.servers
  • B. partition
  • C. key.serializer
  • D. value.serializer
  • E. key
  • F. value
Answer: A, C, D


Explanation:
bootstrap.servers is needed to connect to the cluster, and both the key and value serializers are mandatory, even when the records carry no key.
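
A minimal sketch of such a producer, assuming String values and a placeholder bootstrap address and topic name:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class KeylessLogProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // The three mandatory settings:
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // No key is provided, so the record is sent with a null key.
                producer.send(new ProducerRecord<>("logs", "application started")); // "logs" is a placeholder topic
            }
        }
    }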


Question 9

To import data from external databases, I should use

  • A. Confluent REST Proxy
  • B. Kafka Connect Sink
  • C. Kafka Streams
  • D. Kafka Connect Source
Answer: D


Explanation:
A Kafka Connect Sink is used to export data from Kafka to external databases, and a Kafka Connect
Source is used to import data from external databases into Kafka.
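
As an illustrative sketch only (the connector name, database URL, column name, topic prefix and the assumed Connect worker address localhost:8083 are all placeholders), a JDBC source connector could be registered through the Kafka Connect REST API like this:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RegisterJdbcSource {
        public static void main(String[] args) throws Exception {
            // Hypothetical JDBC source connector definition.
            String connector = """
                {
                  "name": "inventory-jdbc-source",
                  "config": {
                    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
                    "connection.url": "jdbc:postgresql://db-host:5432/inventory",
                    "mode": "incrementing",
                    "incrementing.column.name": "id",
                    "topic.prefix": "db-"
                  }
                }""";

            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors")) // assumed Connect worker REST address
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(connector))
                .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }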


Question 10

You are running a Kafka Streams application in a Docker container managed by Kubernetes, and upon
application restart it takes a long time for the container to replicate the state and get back to
processing the data. How can you dramatically improve the application restart time?

  • A. Mount a persistent volume for your RocksDB
  • B. Increase the number of partitions in your inputs topic
  • C. Reduce the Streams caching property
  • D. Increase the number of Streams threads
Answer: A


Explanation:
Although any Kafka Streams application is stateless as the state is stored in Kafka, it can take a while
and lots of resources to recover the state from Kafka. In order to speed up recovery, it is advised to
store the Kafka Streams state on a persistent volume, so that only the missing part of the state needs
to be recovered.
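
A hedged sketch of pointing the Streams state directory at a path backed by a persistent volume (the application id, topic and mount path /var/lib/kafka-streams are placeholders):

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class PersistentStateApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-aggregator");  // placeholder id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder address
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            // Keep the RocksDB state on a volume that survives container restarts,
            // so only the missing part of the state is restored from the changelog topics.
            props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");  // assumed mount path

            StreamsBuilder builder = new StreamsBuilder();
            builder.table("input-topic"); // placeholder stateful topology
            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
        }
    }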


Question 11

Which client protocols are supported by the Schema Registry? (select two)

  • A. HTTP
  • B. HTTPS
  • C. JDBC
  • D. Websocket
  • E. SASL
Answer: A, B


Explanation:
Clients interact with the Schema Registry through its HTTP or HTTPS (REST) interface.
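
For illustration (assuming the Confluent Avro serializer is on the classpath and using a placeholder registry URL), a producer references the Schema Registry through an http:// or https:// endpoint:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class SchemaRegistryClientConfig {
        public static Properties producerProps() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
            // The Avro serializers talk to the Schema Registry over its REST interface.
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                      "io.confluent.kafka.serializers.KafkaAvroSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                      "io.confluent.kafka.serializers.KafkaAvroSerializer");
            props.put("schema.registry.url", "https://schema-registry.example.com:8081"); // placeholder URL
            return props;
        }
    }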


Question 12

If I produce to a topic that does not exist, and the broker setting auto.create.topics.enable=true, what
will happen?

  • A. Kafka will automatically create the topic with 1 partition and 1 replication factor
  • B. Kafka will automatically create the topic with the indicated producer settings num.partitions and default.replication.factor
  • C. Kafka will automatically create the topic with the broker settings num.partitions and default.replication.factor
  • D. Kafka will automatically create the topic with num.partitions=#of brokers and replication.factor=3
Answer: C


Explanation:
The broker settings num.partitions and default.replication.factor come into play when a topic is auto-created.
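
A sketch of reading those broker defaults with the AdminClient (broker id 0 and the bootstrap address are placeholders):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public class BrokerDefaultsCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address

            try (AdminClient admin = AdminClient.create(props)) {
                ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
                Config config = admin.describeConfigs(Collections.singleton(broker)).all().get().get(broker);
                // Auto-created topics are created with these broker-side defaults.
                System.out.println("num.partitions = " + config.get("num.partitions").value());
                System.out.println("default.replication.factor = " + config.get("default.replication.factor").value());
            }
        }
    }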


Question 13

You want to perform table lookups against a KTable every time a new record is received from the
KStream. What is the output of a KStream-KTable join?

  • A. KTable
  • B. GlobalKTable
  • C. You choose between KStream or KTable
  • D. KStream
Answer: D


Explanation:
Here the KStream is enriched by looking up the KTable for each incoming record, and the result of the join is another KStream.
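
A minimal Streams DSL sketch (topic names and the join logic are placeholders) showing that joining a KStream with a KTable yields another KStream:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;

    public class StreamTableJoinExample {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();

            KStream<String, String> orders =
                builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()));
            KTable<String, String> customers =
                builder.table("customers", Consumed.with(Serdes.String(), Serdes.String()));

            // For every new order record, the customers table is looked up by key;
            // the result of a KStream-KTable join is again a KStream.
            KStream<String, String> enriched =
                orders.join(customers, (order, customer) -> order + " placed by " + customer);

            enriched.to("enriched-orders");
        }
    }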


Question 14

Using the Confluent Schema Registry, where are Avro schemas stored?

  • A. In the Schema Registry embedded SQL database
  • B. In the Zookeeper node /schemas
  • C. In the message bytes themselves
  • D. In the _schemas topic
Answer: D


Explanation:
The Schema Registry stores all the schemas in the _schemas Kafka topic


Question 15

Which of the following settings increases the chance of batching for a Kafka producer?

  • A. Increase batch.size
  • B. Increase message.max.bytes
  • C. Increase the number of producer threads
  • D. Increase linger.ms
Answer: D


Explanation:
linger.ms forces the producer to wait to send messages, hence increasing the chance of creating
batches
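
A producer configuration sketch with illustrative values (the bootstrap address is a placeholder) showing linger.ms alongside batch.size:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class BatchingProducerConfig {
        public static Properties props() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            // Wait up to 20 ms for more records before sending, so batches can fill up.
            props.put(ProducerConfig.LINGER_MS_CONFIG, 20);
            // batch.size only caps how large a batch may grow; it does not force the producer to wait.
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);
            return props;
        }
    }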
