Microsoft DP-420 Practice Test

Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB (beta)

Note: Test Case questions are at the end of the exam
Last exam update: Apr 19, 2024
Page 1 out of 4. Viewing questions 1-15 out of 51

Question 1 Topic 3, Mixed Questions

You have an Azure Cosmos DB Core (SQL) API account that is used by 10 web apps.
You need to analyze the data stored in the account by using Apache Spark to create machine learning models. The solution
must NOT affect the performance of the web apps.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.olap as the data source.
  • B. Create a private endpoint connection to the account.
  • C. In an Azure Synapse Analytics serverless SQL pool, create a view that uses OPENROWSET and the CosmosDB provider.
  • D. Enable Azure Synapse Link for the account and Analytical store on the container.
  • E. In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.oltp as the data source.
Answer:

A D


Explanation:
Explore the analytical store with Apache Spark:
1. Navigate to the Data hub.
2. Select the Linked tab (1), expand the Azure Cosmos DB group (if you don't see this, select the Refresh button above), then expand the WoodgroveCosmosDb account (2). Right-click the transactions container (3), select New notebook (4), then select Load to DataFrame (5).
3. In the generated code within Cell 1 (3), notice that the spark.read format is set to cosmos.olap. This instructs Synapse Link to use the container's analytical store. If we wanted to connect to the transactional store, such as to read from the change feed or write to the container, we'd use cosmos.oltp instead.
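
For context, a minimal PySpark sketch of the analytical-store read. This is a hedged example: it assumes it runs in a Synapse notebook where spark is predefined, and the linked service name is a placeholder.

# Read the container's analytical store (cosmos.olap) into a DataFrame.
# "<CosmosDbLinkedService>" is a placeholder linked service name.
df = spark.read.format("cosmos.olap") \
    .option("spark.synapse.linkedService", "<CosmosDbLinkedService>") \
    .option("spark.cosmos.container", "transactions") \
    .load()
display(df.limit(10))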

Reference: https://github.com/microsoft/MCW-Cosmos-DB-Real-Time-Advanced-Analytics/blob/main/Hands-on%20lab/HOL%20step-by%20step%20-%20Cosmos%20DB%20real-time%20advanced%20analytics.md


Question 2 Topic 3, Mixed Questions

HOTSPOT
You have an Azure Cosmos DB Core (SQL) API account named account1.
In account1, you run the following query in a container that contains 100 GB of data.
SELECT *
FROM c
WHERE LOWER(c.categoryid) = "hockey"
You view the following metrics while performing the query.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:

Answer:


Explanation:
Box 1: No
Each physical partition has its own index, but since no index is used here, the query is not an indexed cross-partition lookup.
Box 2: No
Index utilization is 0% and the index lookup time is also zero; wrapping the filter in the LOWER system function prevents the index from being used.
Box 3: Yes
The query runs across all partitions.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-query-container


Question 3 Topic 3, Mixed Questions

HOTSPOT
You plan to deploy two Azure Cosmos DB Core (SQL) API accounts that will each contain a single database. The accounts
will be configured as shown in the following table.

How should you provision the containers within each account to minimize costs? To answer, select the appropriate options in
the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Answer:


Explanation:
Box 1: Serverless capacity mode
Azure Cosmos DB serverless best fits scenarios where you expect intermittent and unpredictable traffic with long idle times. Because provisioning capacity in such situations isn't required and may be cost-prohibitive, Azure Cosmos DB serverless should be considered in the following use cases:
• Getting started with Azure Cosmos DB
• Running applications with bursty, intermittent traffic that is hard to forecast, or a low (<10%) average-to-peak traffic ratio
• Developing, testing, prototyping, and running new applications in production where the traffic pattern is unknown
• Integrating with serverless compute services such as Azure Functions
Box 2: Provisioned throughput capacity mode and autoscale throughput
The use cases of autoscale include:
• Variable or unpredictable workloads: when workloads have variable or unpredictable spikes in usage, autoscale helps by automatically scaling up and down based on usage. Examples include retail websites that have different traffic patterns depending on seasonality, IoT workloads that have spikes at various times during the day, and line-of-business applications that see peak usage a few times a month or year. With autoscale, you no longer need to manually provision for peak or average capacity.
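
To illustrate Box 2, a minimal Python sketch (azure-cosmos SDK) that provisions a container in autoscale mode; the endpoint, key, names, and partition key are placeholders and assumptions, not from the question.

from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

client = CosmosClient("https://<account>.documents.azure.com:443/", "<key>")
database = client.get_database_client("<database>")

# Autoscale scales between 10% of the maximum and the configured
# 4,000 RU/s ceiling, based on usage.
container = database.create_container(
    id="<container>",
    partition_key=PartitionKey(path="/id"),
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=4000),
)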
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/serverless
https://docs.microsoft.com/en-us/azure/cosmos-db/provision-throughput-autoscale#use-cases-of-autoscale


Question 4 Topic 3, Mixed Questions

You have a database in an Azure Cosmos DB Core (SQL) API account.
You need to create an Azure function that will access the database to retrieve records based on a variable named
accountnumber. The solution must protect against SQL injection attacks.
How should you define the command statement in the function?

  • A. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = 'accountnumber'"
  • B. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = LIKE @accountnumber"
  • C. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = @accountnumber"
  • D. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = '" + accountnumber + "'"
Answer:

C


Explanation:
Azure Cosmos DB supports queries with parameters expressed by the familiar @ notation. Parameterized SQL provides
robust handling and escaping of user input, and prevents accidental exposure of data through SQL injection.
For example, you can write a query that takes lastName and address.state as parameters, and execute it for various values
of lastName and address.state based on user input.
SELECT *
FROM Families f
WHERE f.lastName = @lastName AND f.address.state = @addressState
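
A minimal Python sketch (azure-cosmos SDK) of how option C executes safely; the endpoint, key, and database name are placeholders, and accountnumber is the variable from the question.

from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", "<key>")
container = client.get_database_client("<database>").get_container_client("Persons")

# The user-supplied value is bound as a parameter rather than
# concatenated into the query text, so it cannot change the query shape.
items = container.query_items(
    query="SELECT * FROM Persons p WHERE p.accountnumber = @accountnumber",
    parameters=[{"name": "@accountnumber", "value": accountnumber}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item)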
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/sql-query-parameterized-queries


Question 5 Topic 3, Mixed Questions

HOTSPOT
You have an Azure Cosmos DB Core (SQL) API account used by an application named App1.
You open the Insights pane for the account and see the following chart.

Use the drop-down menus to select the answer choice that answers each question based on the information presented in
the graphic.
NOTE: Each correct selection is worth one point.
Hot Area:

Answer:


Explanation:
Box 1: incorrect connection URLs
400 Bad Request: Returned when there is an error in the request URI, headers, or body. The response body will contain an
error message explaining what the specific problem is.
The HyperText Transfer Protocol (HTTP) 400 Bad Request response status code indicates that the server cannot or will not
process the request due to something that is perceived to be a client error (for example, malformed request syntax, invalid
request message framing, or deceptive request routing).
Box 2: 6 thousand
201 Created: Success on PUT or POST. Object created or updated successfully.
Note:
200 OK: Success on GET, PUT, or POST. Returned for a successful response.
404 Not Found: Returned when a resource does not exist on the server. If you are managing or querying an index, check the
syntax and verify the index name is specified correctly.
Reference:
https://docs.microsoft.com/en-us/rest/api/searchservice/http-status-codes


Question 6 Topic 3, Mixed Questions

You have a database in an Azure Cosmos DB Core (SQL) API account. The database is backed up every two hours.
You need to implement a solution that supports point-in-time restore.
What should you do first?

  • A. Enable Continuous Backup for the account.
  • B. Configure the Backup & Restore settings for the account.
  • C. Create a new account that has a periodic backup policy.
  • D. Configure the Point In Time Restore settings for the account.
Answer:

A


Explanation:
When creating a new Azure Cosmos DB account, in the Backup policy tab, choose continuous mode to enable the point-in-time restore functionality for the new account. With point-in-time restore, data is restored to a new account; currently you can't restore to an existing account.
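
If the account already exists with periodic backups, it can be migrated to continuous mode programmatically. A hedged sketch using the azure-mgmt-cosmosdb package; the subscription, resource group, and account name are placeholders.

from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient
from azure.mgmt.cosmosdb.models import (
    ContinuousModeBackupPolicy,
    DatabaseAccountUpdateParameters,
)

client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Switch the backup policy to continuous mode, the prerequisite
# for point-in-time restore. This is a one-way migration.
poller = client.database_accounts.begin_update(
    "<resource-group>",
    "<account-name>",
    DatabaseAccountUpdateParameters(backup_policy=ContinuousModeBackupPolicy()),
)
poller.result()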

Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/provision-account-continuous-backup


Question 7 Topic 3, Mixed Questions

You have an Azure Cosmos DB Core (SQL) API account.
You configure the diagnostic settings to send all log information to a Log Analytics workspace.
You need to identify when the provisioned request units per second (RU/s) for resources within the account were modified.
You write the following query.
AzureDiagnostics
| where Category == "ControlPlaneRequests"
What should you include in the query?

  • A. | where OperationName startswith "AccountUpdateStart"
  • B. | where OperationName startswith "SqlContainersDelete"
  • C. | where OperationName startswith "MongoCollectionsThroughputUpdate"
  • D. | where OperationName startswith "SqlContainersThroughputUpdate"
Answer:

A


Explanation:
The following are the operation names in diagnostic logs for different operations:
RegionAddStart, RegionAddComplete
RegionRemoveStart, RegionRemoveComplete
AccountDeleteStart, AccountDeleteComplete
RegionFailoverStart, RegionFailoverComplete
AccountCreateStart, AccountCreateComplete
AccountUpdateStart, AccountUpdateComplete
VirtualNetworkDeleteStart, VirtualNetworkDeleteComplete
DiagnosticLogUpdateStart, DiagnosticLogUpdateComplete
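Putting it together, the completed query is:
AzureDiagnostics
| where Category == "ControlPlaneRequests"
| where OperationName startswith "AccountUpdateStart"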
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/audit-control-plane-logs


Question 8 Topic 3, Mixed Questions

You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to provide a user named User1 with the ability to insert items into container1 by using role-based access control
(RBAC). The solution must use the principle of least privilege.
Which roles should you assign to User1?

  • A. CosmosDB Operator only
  • B. DocumentDB Account Contributor and Cosmos DB Built-in Data Contributor
  • C. DocumentDB Account Contributor only
  • D. Cosmos DB Built-in Data Contributor only
Answer:

D


Explanation:
Cosmos DB Built-in Data Contributor is a data-plane role that can perform read and write operations on the data in a container, which is exactly what User1 needs to insert items into container1, and nothing more.
Incorrect Answers:
A: Cosmos DB Operator can provision Azure Cosmos DB accounts, databases, and containers, but cannot access any data or use Data Explorer, so it cannot insert items.
B, C: DocumentDB Account Contributor can manage Azure Cosmos DB accounts (Azure Cosmos DB was formerly known as DocumentDB); it is a management-plane role that exceeds least privilege and is not required for inserting items.
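
For illustration, a hedged Python sketch (azure-mgmt-cosmosdb; every identifier in angle brackets is a placeholder) that assigns the built-in Data Contributor role, whose well-known definition ID is 00000000-0000-0000-0000-000000000002, scoped to container1.

import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient
from azure.mgmt.cosmosdb.models import SqlRoleAssignmentCreateUpdateParameters

client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")
account_scope = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>"
)

# Scope the assignment down to container1 for least privilege.
client.sql_resources.begin_create_update_sql_role_assignment(
    role_assignment_id=str(uuid.uuid4()),
    resource_group_name="<resource-group>",
    account_name="<account-name>",
    create_update_sql_role_assignment_parameters=SqlRoleAssignmentCreateUpdateParameters(
        role_definition_id=account_scope + "/sqlRoleDefinitions/00000000-0000-0000-0000-000000000002",
        principal_id="<user1-object-id>",
        scope=account_scope + "/dbs/<database>/colls/container1",
    ),
).result()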
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/role-based-access-control


Question 9 Topic 3, Mixed Questions

You are implementing an Azure Data Factory data flow that will use an Azure Cosmos DB (SQL API) sink to write a dataset.
The data flow will use 2,000 Apache Spark partitions. You need to ensure that the ingestion from each Spark partition is
balanced to optimize throughput.
Which sink setting should you configure?

  • A. Throughput
  • B. Write throughput budget
  • C. Batch size
  • D. Collection action
Answer:

C


Explanation:
Batch size: An integer that represents how many objects are being written to Cosmos DB collection in each batch. Usually,
starting with the default batch size is sufficient. To further tune this value, note:
Cosmos DB limits single request's size to 2MB. The formula is "Request Size = Single Document Size * Batch Size". If you
hit error saying "Request size is too large", reduce the batch size value. The larger the batch size, the better throughput the
service can achieve, while make sure you allocate enough RUs to empower your workload.
Incorrect Answers:
A: Throughput: Set an optional value for the number of RUs you'd like to apply to your CosmosDB collection for each
execution of this data flow. Minimum is 400.
B: Write throughput budget: An integer that represents the RUs you want to allocate for this Data Flow write operation, out of
the total throughput allocated to the collection.
D: Collection action: Determines whether to recreate the destination collection prior to writing.
None: No action will be done to the collection.
Recreate: The collection will get dropped and recreated
Reference: https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db


Question 10 Topic 3, Mixed Questions

You need to configure an Apache Kafka instance to ingest data from an Azure Cosmos DB Core (SQL) API account. The
data from a container named telemetry must be added to a Kafka topic named iot. The solution must store the data in a
compact binary format.
Which three configuration items should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector"
  • B. "key.converter": "org.apache.kafka.connect.json.JsonConverter"
  • C. "key.converter": "io.confluent.connect.avro.AvroConverter"
  • D. "connect.cosmos.containers.topicmap": "iot#telemetry"
  • E. "connect.cosmos.containers.topicmap": "iot"
  • F. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSinkConnector"
Answer:

A C D


Explanation:
A: The data flows from Azure Cosmos DB into Kafka, so the source connector is required. Kafka Connect for Azure Cosmos DB is a connector to read data from and write data to Azure Cosmos DB; the source connector reads changes from a container and publishes them to Kafka topics.
C: Avro is a compact binary format, while JSON is plain text.
D: The topic map uses the topic#container format, so "iot#telemetry" maps the telemetry container to the iot topic.
Incorrect Answers:
B: JSON is plain text, not a compact binary format.
E: The topic map entry must pair a topic with a container.
F: This mixes the source package name with the sink connector class; the sink connector exports data from Kafka topics into Azure Cosmos DB, which is the opposite of what is needed here.
Note, a full example configuration might look like the following (values in angle brackets are placeholders; the Avro converter additionally requires a schema registry):
{
  "name": "cosmosdb-source-connector",
  "config": {
    "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector",
    "tasks.max": "1",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "<schema-registry-url>",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "<schema-registry-url>",
    "connect.cosmos.connection.endpoint": "https://<account>.documents.azure.com:443/",
    "connect.cosmos.master.key": "<primary-key>",
    "connect.cosmos.databasename": "<database>",
    "connect.cosmos.containers.topicmap": "iot#telemetry"
  }
}
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/sql/kafka-connector-source
https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/


Question 11 Topic 3, Mixed Questions

You plan to create an Azure Cosmos DB Core (SQL) API account that will use customer-managed keys stored in Azure Key
Vault.
You need to configure an access policy in Key Vault to allow Azure Cosmos DB access to the keys.
Which three permissions should you enable in the access policy? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. Wrap Key
  • B. Get
  • C. List
  • D. Update
  • E. Sign
  • F. Verify
  • G. Unwrap Key
Answer:

A B G


Explanation:
To Configure customer-managed keys for your Azure Cosmos account with Azure Key Vault: Add an access policy to your
Azure Key Vault instance:
1. From the Azure portal, go to the Azure Key Vault instance that you plan to use to host your encryption keys. Select
Access Policies from the left menu:

2. Select + Add Access Policy.
3. Under the Key permissions drop-down menu, select Get, Unwrap Key, and Wrap Key permissions:

Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-setup-cmk


Question 12 Topic 3, Mixed Questions

HOTSPOT
You have an Azure Cosmos DB Core (SQL) API account named account1.
You have the Azure virtual networks and subnets shown in the following table.

The vnet1 and vnet2 networks are connected by using a virtual network peer.
The Firewall and virtual network settings for account1 are configured as shown in the exhibit.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:

Answer:


Explanation:
Box 1: Yes
VM1 is on vnet1.subnet1, which has the Endpoint Status enabled.
Box 2: No
Only virtual networks and subnets that have been added to the Azure Cosmos DB account have access. Peered VNets cannot access the account until their subnets are added to the account.
Box 3: No
Only virtual networks and subnets that have been added to the Azure Cosmos DB account have access.
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-configure-vnet-service-endpoint


Question 13 Topic 3, Mixed Questions

HOTSPOT
You have a database in an Azure Cosmos DB Core (SQL) API account that is used for development.
The database is modified once per day in a batch process.
You need to ensure that you can restore the database if the last batch process fails. The solution must minimize costs.
How should you configure the backup settings? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Answer:



Question 14 Topic 3, Mixed Questions

You have the following query.
SELECT * FROM c
WHERE c.sensor = "TEMP1"
AND c.value < 22
AND c.timestamp >= 1619146031231
You need to recommend a composite index strategy that will minimize the request units (RUs) consumed by the query.
What should you recommend?

  • A. a composite index for (sensor ASC, value ASC) and a composite index for (sensor ASC, timestamp ASC)
  • B. a composite index for (sensor ASC, value ASC, timestamp ASC) and a composite index for (sensor DESC, value DESC, timestamp DESC)
  • C. a composite index for (value ASC, sensor ASC) and a composite index for (timestamp ASC, sensor ASC)
  • D. a composite index for (sensor ASC, value ASC, timestamp ASC)
Answer:

A


Explanation:
If a query has a filter with two or more properties, adding a composite index will improve performance.
Consider the following query:
SELECT * FROM c WHERE c.name = Tim and c.age > 18
In the absence of a composite index on (name ASC, and age ASC), we will utilize a range index for this query. We can
improve the efficiency of this query by creating a composite index for name and age.
Queries with multiple equality filters and a maximum of one range filter (such as >,<, <=, >=, !=) will utilize the composite
index.
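
A hedged Python sketch of option A's indexing policy (azure-cosmos SDK; the container name, partition key, and connection values are assumptions for illustration):

from azure.cosmos import CosmosClient, PartitionKey

indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],
    # One composite index per range filter: the equality property
    # (sensor) leads, followed by a single range property.
    "compositeIndexes": [
        [
            {"path": "/sensor", "order": "ascending"},
            {"path": "/value", "order": "ascending"},
        ],
        [
            {"path": "/sensor", "order": "ascending"},
            {"path": "/timestamp", "order": "ascending"},
        ],
    ],
}

client = CosmosClient("https://<account>.documents.azure.com:443/", "<key>")
container = client.get_database_client("<database>").create_container(
    id="<container>",
    partition_key=PartitionKey(path="/sensor"),
    indexing_policy=indexing_policy,
)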
Reference: https://azure.microsoft.com/en-us/blog/three-ways-to-leverage-composite-indexes-in-azure-cosmos-db/


Question 15 Topic 3, Mixed Questions

HOTSPOT
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
The following is a sample of a document in container1.
{
  "studentId": "631282",
  "firstName": "James",
  "lastName": "Smith",
  "enrollmentYear": 1990,
  "isActivelyEnrolled": true,
  "address": {
    "street": "",
    "city": "",
    "stateProvince": "",
    "postal": ""
  }
}
The container1 container has the following indexing policy.
{
  "indexingMode": "consistent",
  "includedPaths": [
    {
      "path": "/*"
    },
    {
      "path": "/address/city/?"
    }
  ],
  "excludedPaths": [
    {
      "path": "/address/*"
    },
    {
      "path": "/firstName/?"
    }
  ]
}
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:

Answer:


Explanation:
Box 1: Yes
"path": "/*" is in includedPaths.
Include the root path to selectively exclude paths that don't need to be indexed. This is the recommended approach, as it lets Azure Cosmos DB proactively index any new property that may be added to your model.
Box 2: No
"path": "/firstName/?" is in excludedPaths; the /? suffix means the scalar value of firstName is excluded.
Box 3: Yes
"path": "/address/city/?" is in includedPaths, which overrides the broader "/address/*" exclusion.
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy
