Confluent CCDAK Practice Test

Certified Developer for Apache Kafka Exam


Question 1

In Kafka, every broker... (select three)

  • A. contains all the topics and all the partitions
  • B. knows all the metadata for all topics and partitions
  • C. is a controller
  • D. knows the metadata for the topics and partitions it has on its disk
  • E. is a bootstrap broker
  • F. contains only a subset of the topics and the partitions
Answer:

B, E, F

Explanation:
Kafka topics are divided into partitions that are spread across the brokers, so each broker holds only a subset of them. Every broker knows the metadata for all topics and partitions, and every broker can serve as a bootstrap broker, but only one of them is elected controller.


Question 2

To continuously export data from Kafka into a target database, I should use

  • A. Kafka Producer
  • B. Kafka Streams
  • C. Kafka Connect Sink
  • D. Kafka Connect Source
Answer:

C

Explanation:
Kafka Connect Sink connectors export data from Kafka into external systems such as databases, while Kafka Connect Source connectors import data from external systems into Kafka.
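
As an illustration, a sink connector is just a set of configuration properties handed to the Connect runtime. The sketch below shows what a JDBC sink connector might look like in standalone .properties form (the connector class is Confluent's JDBC sink; the topic name, connection URL, and credentials are made-up examples, not from the question):

```
# Hypothetical JDBC sink: continuously exports the Kafka topic "orders"
# into a PostgreSQL table
name=jdbc-sink-example
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=orders
connection.url=jdbc:postgresql://localhost:5432/exampledb
connection.user=example
connection.password=example
auto.create=true
```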


Question 3

A Zookeeper configuration has tickTime of 2000, initLimit of 20 and syncLimit of 5. What's the
timeout value for followers to connect to Zookeeper?

  • A. 20 sec
  • B. 10 sec
  • C. 2000 ms
  • D. 40 sec
Answer:

D

Explanation:
tickTime is 2000 ms, and initLimit is the setting that governs how long followers have to connect and sync to the leader, so the timeout is 2000 * 20 = 40000 ms = 40 s
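
The arithmetic can be checked directly; the values below are the ones given in the question:

```shell
# zoo.cfg values from the question
tickTime=2000    # length of one tick, in milliseconds
initLimit=20     # max ticks followers may take to connect and sync to the leader

# follower connection/sync timeout = tickTime * initLimit
echo $(( tickTime * initLimit ))   # prints 40000 (ms) = 40 s
```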


Question 4

In Avro, adding an element to an enum without a default is a __ schema evolution

  • A. breaking
  • B. full
  • C. backward
  • D. forward
Answer:

A

Explanation:
Since Confluent Platform 5.4.0, Avro 1.9.1 is used. Because a default value was added to the enum complex type, the schema resolution rule changed:

(< 1.9.1) if both are enums: if the writer's symbol is not present in the reader's enum, an error is signalled.

(>= 1.9.1) if both are enums: if the writer's symbol is not present in the reader's enum and the reader has a default value, that value is used; otherwise an error is signalled.
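
For illustration, a reader-side enum that tolerates unknown symbols declares a default. The schema below is a hypothetical example (name and symbols are invented, not from the exam):

```
{
  "type": "enum",
  "name": "OrderStatus",
  "symbols": ["PENDING", "SHIPPED", "DELIVERED"],
  "default": "PENDING"
}
```

With this reader schema, a writer symbol missing from the list resolves to "PENDING" under Avro >= 1.9.1; without the "default" field, the same situation signals an error, which is why adding a symbol without a default is breaking.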


Question 5

There are five brokers in a cluster, a topic with 10 partitions and replication factor of 3, and a quota of
producer_bytes_rate of 1 MB/sec has been specified for the client. What is the maximum throughput
allowed for the client?

  • A. 10 MB/s
  • B. 0.33 MB/s
  • C. 1 MB/s
  • D. 5 MB/s
Answer:

D

Explanation:
The quota applies per client per broker: the client may produce at 1 MB/s to each broker, so with 5 brokers the maximum aggregate throughput is 5 * 1 MB/s = 5 MB/s.
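
For reference, a per-client producer quota of this kind would be set with kafka-configs; note the config key is spelled producer_byte_rate. The bootstrap address and client id below are assumptions, and the command needs a live cluster, so this is a sketch rather than a runnable snippet:

```shell
# Limit the client "my-client" to 1 MB/s (1048576 bytes/s) per broker
kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type clients --entity-name my-client \
  --add-config producer_byte_rate=1048576
```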


Question 6

A topic "sales" is being produced to in the Americas region. You are mirroring this topic using Mirror
Maker to the European region. From there, you are only reading the topic for analytics purposes.
What kind of mirroring is this?

  • A. Passive-Passive
  • B. Active-Active
  • C. Active-Passive
Answer:

C

Explanation:
This is active-passive, as the replicated topic is used for read-only purposes only.


Question 7

What is true about replicas?

  • A. Produce requests can be done to the replicas that are followers
  • B. Produce and consume requests are load-balanced between Leader and Follower replicas
  • C. Leader replica handles all produce and consume requests
  • D. Follower replica handles all consume requests
Answer:

C

Explanation:
Follower replicas are passive: they do not serve produce or consume requests. Both kinds of requests are sent to the broker hosting the partition leader.


Question 8

If I want to send binary data through the REST proxy, it needs to be base64 encoded. Which
component needs to encode the binary data into base 64?

  • A. The Producer
  • B. The Kafka Broker
  • C. Zookeeper
  • D. The REST Proxy
Answer:

A

Explanation:
The REST Proxy expects to receive data over REST that is already base64 encoded; hence encoding is the responsibility of the producer.
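
The encoding step itself is plain base64. A minimal sketch of what the producing side does before building the JSON body for the REST Proxy (the payload here is a tiny stand-in for real binary data):

```shell
# Base64-encode the bytes on the producer side before they are POSTed
# to the REST Proxy inside a JSON request body.
payload=$(printf 'hi' | base64)
echo "$payload"   # prints aGk=
```

The encoded string would then be placed in the JSON body of the request, e.g. as the value field of a record.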


Question 9

What is true about Kafka brokers and clients from version 0.10.2 onwards?

  • A. Clients and brokers must have the exact same version to be able to communicate
  • B. A newer client can talk to a newer broker, but an older client cannot talk to a newer broker
  • C. A newer client can talk to a newer broker, and an older client can talk to a newer broker
  • D. A newer client can't talk to a newer broker, but an older client can talk to a newer broker
Answer:

C

Explanation:
Kafka's bidirectional client compatibility, introduced in 0.10.2, allows this. Read more here: https://www.confluent.io/blog/upgrading-apache-kafka-clients-just-got-easier/


Question 10

How will you set the retention for the topic named “my-topic” to 1 hour?

  • A. Set the broker config log.retention.ms to 3600000
  • B. Set the consumer config retention.ms to 3600000
  • C. Set the topic config retention.ms to 3600000
  • D. Set the producer config retention.ms to 3600000
Answer:

C

Explanation:
retention.ms can be configured at the topic level, either when creating the topic or by altering it afterwards. It shouldn't be set at the broker level (log.retention.ms), as that would affect every topic in the cluster, not just the one we are interested in.
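
Concretely, the topic-level override can be applied with kafka-configs (the bootstrap address is an assumption, and the command requires a running cluster, so this is a sketch):

```shell
# 1 hour = 3600000 ms
kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name my-topic \
  --add-config retention.ms=3600000
```

The same override can also be supplied at creation time via kafka-topics.sh with --config retention.ms=3600000.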
