Microsoft DP-420 Practice Test

Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB (beta)

Note: This exam has case studies

Question 1 Topic 3, Mixed Questions

You have an Azure Cosmos DB Core (SQL) API account that is used by 10 web apps.
You need to analyze the data stored in the account by using Apache Spark to create machine learning models. The solution
must NOT affect the performance of the web apps.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.olap as the data source.
  • B. Create a private endpoint connection to the account.
  • C. In an Azure Synapse Analytics serverless SQL pool, create a view that uses OPENROWSET and the CosmosDB provider.
  • D. Enable Azure Synapse Link for the account and Analytical store on the container.
  • E. In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.oltp as the data source.
Answer:

A D

Explanation:
Explore analytical store with Apache Spark
1. Navigate to the Data hub.
2. Select the Linked tab (1), expand the Azure Cosmos DB group (if you don't see this, select the Refresh button above), then expand the WoodgroveCosmosDb account (2). Right-click the transactions container (3), select New notebook (4), then select Load to DataFrame (5).
3. In the generated code within Cell 1 (3), notice that the spark.read format is set to cosmos.olap. This instructs Synapse Link to use the container's analytical store. If we wanted to connect to the transactional store, for example to read from the change feed or write to the container, we would use cosmos.oltp instead.
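
As a hedged sketch (assuming the WoodgroveCosmosDb linked service from the lab and a Synapse Spark pool, where the spark session is predefined in the notebook), the generated cell looks roughly like this:

    # Read the transactions container's analytical store (cosmos.olap) into a Spark DataFrame.
    df = spark.read\
        .format("cosmos.olap")\
        .option("spark.synapse.linkedService", "WoodgroveCosmosDb")\
        .option("spark.cosmos.container", "transactions")\
        .load()

    display(df.limit(10))

Because the analytical store is a separate, column-oriented copy of the data, running this from a Spark pool does not consume request units from the transactional store and therefore does not affect the web apps.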

Reference: https://github.com/microsoft/MCW-Cosmos-DB-Real-Time-Advanced-Analytics/blob/main/Hands-on%20lab/HOL%20step-by%20step%20-%20Cosmos%20DB%20real-time%20advanced%20analytics.md


Question 2 Topic 3, Mixed Questions

HOTSPOT
You have an Azure Cosmos DB Core (SQL) API account named account1.
In account1, you run the following query in a container that contains 100GB of data.
SELECT *
FROM c
WHERE LOWER(c.categoryid) = "hockey"
You view the following metrics while performing the query.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:

Answer:

Explanation:
Box 1: No
Each physical partition has its own index, but since no index is used, the query is not cross-partition.
Box 2: No
Index utilization is 0%, and the index lookup time is also zero.
Box 3: Yes
A partition key index will be created, and the query will be executed across the partitions.
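
As a hedged illustration only (placeholder account, database, and container names), the same query can be issued cross-partition from the azure-cosmos Python SDK and its request charge inspected; because the filter wraps categoryid in LOWER(), the index is not utilized:

    from azure.cosmos import CosmosClient

    container = (
        CosmosClient(url="https://<account1>.documents.azure.com:443/", credential="<key>")
        .get_database_client("<database>")
        .get_container_client("<container>")
    )

    # The LOWER() call on the filtered property prevents the index from being used,
    # so every item has to be loaded and evaluated.
    items = list(container.query_items(
        query='SELECT * FROM c WHERE LOWER(c.categoryid) = "hockey"',
        enable_cross_partition_query=True,
    ))

    # Request charge (RUs) of the last page, taken from the response headers.
    print(container.client_connection.last_response_headers["x-ms-request-charge"])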
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-query-container


Question 3 Topic 3, Mixed Questions

HOTSPOT
You plan to deploy two Azure Cosmos DB Core (SQL) API accounts that will each contain a single database. The accounts
will be configured as shown in the following table.

How should you provision the containers within each account to minimize costs? To answer, select the appropriate options in
the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Answer:

Explanation:
Box 1: Serverless capacity mode
Azure Cosmos DB serverless best fits scenarios where you expect intermittent and unpredictable traffic with long idle times. Because provisioning capacity in such situations isn't required and may be cost-prohibitive, Azure Cosmos DB serverless should be considered in the following use cases:
Getting started with Azure Cosmos DB
Running applications with bursty, intermittent traffic that is hard to forecast, or a low (<10%) average-to-peak traffic ratio
Developing, testing, prototyping, and running new applications in production where the traffic pattern is unknown
Integrating with serverless compute services like Azure Functions
Box 2: Provisioned throughput capacity mode and autoscale throughput
The use cases of autoscale include:
Variable or unpredictable workloads: When your workloads have variable or unpredictable spikes in usage, autoscale helps by automatically scaling up and down based on usage. Examples include retail websites that have different traffic patterns depending on seasonality; IoT workloads that have spikes at various times during the day; and line-of-business applications that see peak usage a few times a month or year. With autoscale, you no longer need to manually provision for peak or average capacity.
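
As a hedged sketch (placeholder account, key, database, and container names, a partition key of /id assumed, and azure-cosmos 4.3+ for ThroughputProperties), the two containers could be provisioned as follows:

    from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

    # Account in serverless capacity mode: containers are created without any throughput setting.
    serverless_db = CosmosClient(
        url="https://<serverless-account>.documents.azure.com:443/", credential="<key1>"
    ).get_database_client("<database1>")
    serverless_db.create_container(id="container1", partition_key=PartitionKey(path="/id"))

    # Account in provisioned throughput capacity mode: use autoscale with a maximum RU/s.
    provisioned_db = CosmosClient(
        url="https://<provisioned-account>.documents.azure.com:443/", credential="<key2>"
    ).get_database_client("<database2>")
    provisioned_db.create_container(
        id="container2",
        partition_key=PartitionKey(path="/id"),
        offer_throughput=ThroughputProperties(auto_scale_max_throughput=4000),
    )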
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/serverless
https://docs.microsoft.com/en-us/azure/cosmos-db/provision-throughput-autoscale#use-cases-of-autoscale


Question 4 Topic 3, Mixed Questions

You have a database in an Azure Cosmos DB Core (SQL) API account.
You need to create an Azure function that will access the database to retrieve records based on a variable named
accountnumber. The solution must protect against SQL injection attacks.
How should you define the command statement in the function?

  • A. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = 'accountnumber'"
  • B. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = LIKE @accountnumber"
  • C. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = @accountnumber"
  • D. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = '" + accountnumber + "'"
Answer:

C

Explanation:
Azure Cosmos DB supports queries with parameters expressed by the familiar @ notation. Parameterized SQL provides
robust handling and escaping of user input, and prevents accidental exposure of data through SQL injection.
For example, you can write a query that takes lastName and address.state as parameters, and execute it for various values
of lastName and address.state based on user input.
SELECT *
FROM Families f
WHERE f.lastName = @lastName AND f.address.state = @addressState
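A minimal sketch of the same pattern from an Azure Function written in Python, using the azure-cosmos SDK; the container reference and the accountnumber value are placeholders:

    from azure.cosmos import CosmosClient

    container = (
        CosmosClient(url="https://<account>.documents.azure.com:443/", credential="<key>")
        .get_database_client("<database>")
        .get_container_client("Persons")
    )

    accountnumber = "<value from the function's trigger or binding>"

    # The value is passed as a parameter and never concatenated into the query text,
    # which protects against SQL injection.
    items = list(container.query_items(
        query="SELECT * FROM Persons p WHERE p.accountnumber = @accountnumber",
        parameters=[{"name": "@accountnumber", "value": accountnumber}],
        enable_cross_partition_query=True,
    ))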
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/sql-query-parameterized-queries


Question 5 Topic 3, Mixed Questions

HOTSPOT
You have an Azure Cosmos DB Core (SQL) API account used by an application named App1.
You open the Insights pane for the account and see the following chart.

Use the drop-down menus to select the answer choice that answers each question based on the information presented in
the graphic.
NOTE: Each correct selection is worth one point.
Hot Area:

Answer:

Explanation:
Box 1: incorrect connection URLs
400 Bad Request: Returned when there is an error in the request URI, headers, or body. The response body will contain an
error message explaining what the specific problem is.
The HyperText Transfer Protocol (HTTP) 400 Bad Request response status code indicates that the server cannot or will not
process the request due to something that is perceived to be a client error (for example, malformed request syntax, invalid
request message framing, or deceptive request routing).
Box 2: 6 thousand
201 Created: Success on PUT or POST. Object created or updated successfully.
Note:
200 OK: Success on GET, PUT, or POST. Returned for a successful response.
404 Not Found: Returned when a resource does not exist on the server. If you are managing or querying an index, check the
syntax and verify the index name is specified correctly.
Reference:
https://docs.microsoft.com/en-us/rest/api/searchservice/http-status-codes


Question 6 Topic 3, Mixed Questions

You have a database in an Azure Cosmos DB Core (SQL) API account. The database is backed up every two hours.
You need to implement a solution that supports point-in-time restore.
What should you do first?

  • A. Enable Continuous Backup for the account.
  • B. Configure the Backup & Restore settings for the account.
  • C. Create a new account that has a periodic backup policy.
  • D. Configure the Point In Time Restore settings for the account.
Answer:

A

Explanation:
When creating a new Azure Cosmos DB account, in the Backup policy tab, choose Continuous mode to enable the point-in-time restore functionality for the new account. With point-in-time restore, data is restored to a new account; currently you can't restore to an existing account.
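
A minimal sketch, assuming the azure-mgmt-cosmosdb Python package and hypothetical subscription, resource group, account, and region values, of creating an account with the continuous backup policy:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.cosmosdb import CosmosDBManagementClient
    from azure.mgmt.cosmosdb.models import (
        ContinuousModeBackupPolicy,
        DatabaseAccountCreateUpdateParameters,
        Location,
    )

    client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

    poller = client.database_accounts.begin_create_or_update(
        "<resource-group>",                  # hypothetical resource group
        "<new-account-name>",                # hypothetical account name
        DatabaseAccountCreateUpdateParameters(
            locations=[Location(location_name="eastus")],
            # Continuous mode enables point-in-time restore for the account.
            backup_policy=ContinuousModeBackupPolicy(),
        ),
    )
    account = poller.result()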

Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/provision-account-continuous-backup


Question 7 Topic 3, Mixed Questions

You have an Azure Cosmos DB Core (SQL) API account.
You configure the diagnostic settings to send all log information to a Log Analytics workspace.
You need to identify when the provisioned request units per second (RU/s) for resources within the account were modified.
You write the following query.
AzureDiagnostics
| where Category == "ControlPlaneRequests"
What should you include in the query?

  • A. | where OperationName startswith "AccountUpdateStart"
  • B. | where OperationName startswith "SqlContainersDelete"
  • C. | where OperationName startswith "MongoCollectionsThroughputUpdate"
  • D. | where OperationName startswith "SqlContainersThroughputUpdate"
Answer:

A

Explanation:
The following are the operation names in diagnostic logs for different operations:
RegionAddStart, RegionAddComplete
RegionRemoveStart, RegionRemoveComplete
AccountDeleteStart, AccountDeleteComplete
RegionFailoverStart, RegionFailoverComplete
AccountCreateStart, AccountCreateComplete
*AccountUpdateStart*, AccountUpdateComplete
VirtualNetworkDeleteStart, VirtualNetworkDeleteComplete
DiagnosticLogUpdateStart, DiagnosticLogUpdateComplete
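
As a hedged illustration, the completed query could be run against the workspace with the azure-monitor-query Python package; the workspace ID and the seven-day timespan are placeholders:

    from datetime import timedelta
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    client = LogsQueryClient(DefaultAzureCredential())

    # Control-plane changes to provisioned RU/s surface as AccountUpdateStart/AccountUpdateComplete events.
    kql = """
    AzureDiagnostics
    | where Category == "ControlPlaneRequests"
    | where OperationName startswith "AccountUpdateStart"
    """

    response = client.query_workspace(
        workspace_id="<log-analytics-workspace-id>",
        query=kql,
        timespan=timedelta(days=7),
    )
    for table in response.tables:
        for row in table.rows:
            print(row)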
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/audit-control-plane-logs


Question 8 Topic 3, Mixed Questions

You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to provide a user named User1 with the ability to insert items into container1 by using role-based access control
(RBAC). The solution must use the principle of least privilege.
Which roles should you assign to User1?

  • A. CosmosDB Operator only
  • B. DocumentDB Account Contributor and Cosmos DB Built-in Data Contributor
  • C. DocumentDB Account Contributor only
  • D. Cosmos DB Built-in Data Contributor only
Answer:

D

Explanation:
Cosmos DB Operator: Can provision Azure Cosmos accounts, databases, and containers. Cannot access any data or use
Data Explorer.
Incorrect Answers:
B: DocumentDB Account Contributor can manage Azure Cosmos DB accounts. Azure Cosmos DB is formerly known as
DocumentDB.
C: DocumentDB Account Contributor: Can manage Azure Cosmos DB accounts.
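
A hedged sketch of the corresponding role assignment with the azure-mgmt-cosmosdb Python package; the subscription, resource group, account, database, and User1 object ID are placeholders:

    import uuid
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.cosmosdb import CosmosDBManagementClient
    from azure.mgmt.cosmosdb.models import SqlRoleAssignmentCreateUpdateParameters

    client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

    account_id = (
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
        "/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>"
    )

    client.sql_resources.begin_create_update_sql_role_assignment(
        str(uuid.uuid4()),      # new role assignment ID
        "<resource-group>",
        "<account-name>",
        SqlRoleAssignmentCreateUpdateParameters(
            # 00000000-0000-0000-0000-000000000002 is the Cosmos DB Built-in Data Contributor role.
            role_definition_id=f"{account_id}/sqlRoleDefinitions/00000000-0000-0000-0000-000000000002",
            principal_id="<user1-object-id>",
            # Scoping the assignment to container1 keeps the grant to least privilege.
            scope=f"{account_id}/dbs/<database>/colls/container1",
        ),
    ).result()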
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/role-based-access-control


Question 9 Topic 3, Mixed Questions

You are implementing an Azure Data Factory data flow that will use an Azure Cosmos DB (SQL API) sink to write a dataset.
The data flow will use 2,000 Apache Spark partitions. You need to ensure that the ingestion from each Spark partition is
balanced to optimize throughput.
Which sink setting should you configure?

  • A. Throughput
  • B. Write throughput budget
  • C. Batch size
  • D. Collection action
Answer:

C

Explanation:
Batch size: An integer that represents how many objects are written to the Cosmos DB collection in each batch. Usually, starting with the default batch size is sufficient. To further tune this value, note:
Cosmos DB limits a single request's size to 2 MB. The formula is "Request Size = Single Document Size * Batch Size". If you hit an error saying "Request size is too large", reduce the batch size value. The larger the batch size, the better the throughput the service can achieve, but make sure you allocate enough RUs to support your workload.
Incorrect Answers:
A: Throughput: Set an optional value for the number of RUs you'd like to apply to your CosmosDB collection for each
execution of this data flow. Minimum is 400.
B: Write throughput budget: An integer that represents the RUs you want to allocate for this Data Flow write operation, out of
the total throughput allocated to the collection.
D: Collection action: Determines whether to recreate the destination collection prior to writing.
None: No action will be done to the collection.
Recreate: The collection will get dropped and recreated
Reference: https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db


Question 10 Topic 3, Mixed Questions

You need to configure an Apache Kafka instance to ingest data from an Azure Cosmos DB Core (SQL) API account. The
data from a container named telemetry must be added to a Kafka topic named iot. The solution must store the data in a
compact binary format.
Which three configuration items should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector"
  • B. "key.converter": "org.apache.kafka.connect.json.JsonConverter"
  • C. "key.converter": "io.confluent.connect.avro.AvroConverter"
  • D. "connect.cosmos.containers.topicmap": "iot#telemetry"
  • E. "connect.cosmos.containers.topicmap": "iot"
  • F. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSinkConnector"
Answer:

C D F

Explanation:
C: Avro is a binary format, while JSON is text.
F: Kafka Connect for Azure Cosmos DB is a connector to read data from and write data to Azure Cosmos DB. The Azure Cosmos DB sink connector allows you to export data from Apache Kafka topics to an Azure Cosmos DB database. The connector polls data from Kafka to write to containers in the database based on the topics subscription.
D: Create the Azure Cosmos DB sink connector in Kafka Connect. The following JSON body defines the config for the sink connector. Extract:
"connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
"key.converter": "io.confluent.connect.avro.AvroConverter",
"connect.cosmos.containers.topicmap": "hotels#kafka"
Incorrect Answers:
B: JSON is plain text.
Note, full example:
{
  "name": "cosmosdb-sink-connector",
  "config": {
    "connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
    "tasks.max": "1",
    "topics": [
      "hotels"
    ],
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schemas.enable": "false",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schemas.enable": "false",
    "connect.cosmos.connection.endpoint": "https://.documents.azure.com:443/",
    "connect.cosmos.master.key": "",
    "connect.cosmos.databasename": "kafkaconnect",
    "connect.cosmos.containers.topicmap": "hotels#kafka"
  }
}
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/sql/kafka-connector-sink
https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/
