
Pass the Confluent Certified Administrator CCAAK Questions and answers with ValidTests


Viewing page 1 out of 2 pages
Viewing questions 1-10
Question #1:

Which valid security protocols are included for broker listeners? (Choose three.)

Options:

A. PLAINTEXT

B. SSL

C. SASL

D. SASL_SSL

E. GSSAPI
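
For context on the question above, a broker's listeners might be declared as below. This is an illustrative sketch: the ports, listener names, and the `listeners.properties` file name are assumptions, not part of the exam question.

```shell
# Illustrative server.properties fragment (hypothetical ports): a broker can
# expose several listeners, each bound to one of Kafka's security protocols.
cat > listeners.properties <<'EOF'
listeners=PLAINTEXT://:9092,SSL://:9093,SASL_SSL://:9094
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_SSL:SASL_SSL
EOF
```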

Question #2:

A developer is working for a company with internal best practices that dictate that there is no single point of failure for all data stored.

What is the best approach to make sure the developer is complying with this best practice when creating Kafka topics?

Options:

A. Set ‘min.insync.replicas’ to 1.

B. Use the parameter --partitions=3 when creating the topic.

C. Make sure the topics are created with linger.ms=0 so data is written immediately and not held in a batch.

D. Set the topic replication factor to 3.
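
As a sketch of how a replication factor is set at topic-creation time (the broker address and topic name here are assumptions; this requires a live cluster to actually run):

```shell
# Create a topic whose partitions are each replicated to three brokers,
# so no single broker holds the only copy of any data.
bin/kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic orders \
  --replication-factor 3 \
  --partitions 3
```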

Question #3:

You want to increase Producer throughput for the messages it sends to your Kafka cluster by tuning the batch size (‘batch.size’) and the time the Producer waits before sending a batch (‘linger.ms’).

According to best practices, what should you do?

Options:

A. Decrease ‘batch.size’ and decrease ‘linger.ms’

B. Decrease ‘batch.size’ and increase ‘linger.ms’

C. Increase ‘batch.size’ and decrease ‘linger.ms’

D. Increase ‘batch.size’ and increase ‘linger.ms’
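
A minimal sketch of what such throughput tuning looks like in a producer properties file. The specific values are illustrative assumptions, not recommendations: a larger batch.size plus a non-zero linger.ms lets batches fill before they are sent, trading a little latency for throughput.

```shell
# Hypothetical producer config fragment for higher throughput.
cat > producer-tuning.properties <<'EOF'
batch.size=65536
linger.ms=20
EOF
```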

Question #4:

You are managing a cluster with a large number of topics, and each topic has a lot of partitions. A team wants to significantly increase the number of partitions for some topics.

Which parameters should you check before increasing the partitions?

Options:

A. Check the producer batch size and buffer size.

B. Check if compression is being used.

C. Check the max open file count on brokers.

D. Check if acks=all is being used.
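
Background for the question above: each partition keeps log segment and index files open, so the broker host's file-descriptor limit bounds how many partitions it can serve. A quick way to check the current limit on a broker host:

```shell
# Print the shell's open-file (file descriptor) soft limit.
open_file_limit=$(ulimit -n)
echo "max open files: ${open_file_limit}"
```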

Question #5:

Your organization has a mission-critical Kafka cluster that must be highly available. A Disaster Recovery (DR) cluster has been set up using Replicator, and data is continuously replicated from the source cluster to the DR cluster. However, you notice that the message at offset 1002 on the source cluster does not match the message at offset 1002 on the destination DR cluster.

Which statement is correct?

Options:

A. The DR cluster is lagging behind on updates; once the DR cluster catches up, the messages will match.

B. The message on the DR cluster was accidentally overwritten by another application.

C. The offsets for the messages on the source and destination clusters may not match.

D. The message was updated on the source cluster, but the update did not flow into the destination DR cluster and errored.

Question #6:

Which use cases would benefit most from continuous event stream processing? (Choose three.)

Options:

A. Fraud detection

B. Context-aware product recommendations for e-commerce

C. End-of-day financial settlement processing

D. Log monitoring/application fault detection

E. Historical dashboards

Question #7:

An employee in the reporting department needs assistance because their data feed is slowing down. You start by quickly checking the consumer lag for the clients on the data stream.

Which command will allow you to quickly check for lag on the consumers?

Options:

A. bin/kafka-consumer-lag.sh

B. bin/kafka-consumer-groups.sh

C. bin/kafka-consumer-group-throughput.sh

D. bin/kafka-reassign-partitions.sh
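
A sketch of how such a lag check would look (the broker address and the group name "reporting" are assumptions; this needs a live cluster to run). The `--describe` output lists current offset, log-end offset, and LAG per partition for the group.

```shell
# Describe a consumer group to see per-partition lag.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group reporting
```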

Question #8:

A Kafka cluster with three brokers has a topic with 10 partitions and a replication factor set to three. Each partition stores 25 GB data per day and data retention is set to 24 hours.

How much storage will be consumed by the topic on each broker?

Options:

A. 75 GB

B. 250 GB

C. 300 GB

D. 750 GB
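
The arithmetic behind the question above can be worked through directly (assuming the replicas are spread evenly across the three brokers):

```shell
# 10 partitions x 25 GB/day x replication factor 3, retained for 24 hours,
# spread evenly across 3 brokers.
partitions=10
gb_per_partition_per_day=25
replication_factor=3
brokers=3
total_gb=$((partitions * gb_per_partition_per_day * replication_factor))
per_broker_gb=$((total_gb / brokers))
echo "topic total: ${total_gb} GB, per broker: ${per_broker_gb} GB"
# prints: topic total: 750 GB, per broker: 250 GB
```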

Question #9:

A customer has a use case for a ksqlDB persistent query. You need to make sure that duplicate messages are not processed and messages are not skipped.

Which property should you use?

Options:

A. processing.guarantee=exactly_once

B. ksql.streams.auto.offset.reset=earliest

C. ksql.streams.auto.offset.reset=latest

D. ksql.fail.on.production.error=false
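
For context, a minimal sketch of how such a property would appear in a ksqlDB server config (the file name is an assumption): `ksql.streams.*` properties are passed through to the underlying Kafka Streams runtime, so this setting controls the processing guarantee of persistent queries.

```shell
# Hypothetical ksqlDB server config fragment enabling exactly-once processing.
cat > ksql-server.properties <<'EOF'
ksql.streams.processing.guarantee=exactly_once
EOF
```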

Question #10:

Kafka Connect is running on a two-node cluster in distributed mode. The connector is a source connector that pulls data from Postgres tables (users/payment/orders) and writes to topics with two partitions and replication factor two. The development team notices that the data is lagging behind.

What should be done to reduce the data lag?

The Connector definition is listed below:

{
  "name": "confluent-postgresql-source",
  "connector.class": "PostgresSource",
  "topic.prefix": "postgresql_",
  …
  "db.name": "postgres",
  "table.whitelist": "users,payment,orders",
  "timestamp.column.name": "created_at",
  "output.data.format": "JSON",
  "db.timezone": "UTC",
  "tasks.max": "1"
}

Options:

A. Increase the number of Connect nodes.

B. Increase the number of Connect tasks (the tasks.max value).

C. Increase the number of partitions.

D. Increase the replication factor and increase the number of Connect tasks.
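
As a sketch, a connector's config can be updated through Connect's REST API. The host/port is an assumption, the config body is abridged to the fields shown in the definition above, and the tasks.max value is illustrative; this requires a running Connect cluster.

```shell
# PUT the full connector config with a higher tasks.max so the source
# connector can split work across parallel tasks.
curl -X PUT -H "Content-Type: application/json" \
  http://localhost:8083/connectors/confluent-postgresql-source/config \
  -d '{"connector.class": "PostgresSource", "table.whitelist": "users,payment,orders", "tasks.max": "3"}'
```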
