HOTSPOT
You are designing a recovery strategy for your Azure SQL Databases.
The recovery strategy must use the default automated backup settings. The solution must include a point-in-time restore
recovery strategy.
You need to recommend which backups to use and the order in which to restore backups.
What should you recommend? To answer, select the appropriate configuration in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Explanation:
All Basic, Standard, and Premium databases are protected by automatic backups. Full backups are taken every week,
differential backups every day, and log backups every 5 minutes. To restore to a point in time, restore the most recent full
backup first, then the most recent differential backup taken after that full backup, and finally the chain of transaction log
backups up to the target time.
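To make the ordering concrete, the following sketch (plain Python, using the cadence above with illustrative timestamps) computes which backups a point-in-time restore would apply for a given target time:

```python
from datetime import datetime, timedelta

# Assumed cadence from the explanation above: weekly fulls, daily
# differentials, log backups every 5 minutes. All timestamps are illustrative.
full_backups = [datetime(2023, 1, 1), datetime(2023, 1, 8)]
diff_backups = [datetime(2023, 1, 8) + timedelta(days=d) for d in range(1, 7)]
log_backups = [datetime(2023, 1, 8) + timedelta(minutes=5 * m)
               for m in range(0, 7 * 24 * 12)]

target = datetime(2023, 1, 10, 14, 37)  # desired point in time

# 1. The latest full backup at or before the target.
full = max(b for b in full_backups if b <= target)
# 2. The latest differential taken after that full backup, at or before the target.
diffs = [b for b in diff_backups if full < b <= target]
diff = max(diffs) if diffs else None
# 3. Every log backup after the differential (or the full), up to the target.
start = diff or full
logs = [b for b in log_backups if start < b <= target]

print("restore order:", full, "->", diff, "->", f"{len(logs)} log backups")
```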
Reference:
https://azure.microsoft.com/sv-se/blog/azure-sql-database-point-in-time-restore/
You are designing a highly available Azure Data Lake Storage solution that will include geo-zone-redundant storage (GZRS).
You need to monitor for replication delays that can affect the recovery point objective (RPO).
What should you include in the monitoring solution?
D
Explanation:
It's important to note that account failover often results in some data loss, because geo-replication always involves latency.
The secondary endpoint is typically behind the primary endpoint. So, when you initiate a failover, any data that has not yet
been replicated to the secondary region will be lost.
We [Microsoft] recommend that you always check the Last Sync Time property before initiating a failover to evaluate how far
the secondary is behind the primary.
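As a sketch of such a check, assuming the azure-storage-blob Python package and an account with read access to the secondary region (the account URL, credential, and 15-minute threshold below are illustrative), the Blob service statistics expose the geo-replication status and Last Sync Time:

```python
from datetime import datetime, timezone
from azure.storage.blob import BlobServiceClient

# Placeholder account details; get_service_stats() is served only from the
# secondary endpoint, so this assumes read access to the secondary (RA-GZRS).
client = BlobServiceClient(
    account_url="https://myaccount-secondary.blob.core.windows.net",
    credential="<account-key>",
)

stats = client.get_service_stats()
geo = stats["geo_replication"]
last_sync = geo["last_sync_time"]          # assumed to be an aware UTC datetime
lag = datetime.now(timezone.utc) - last_sync

print(f"status={geo['status']} last_sync={last_sync} lag={lag}")
if lag.total_seconds() > 15 * 60:          # illustrative RPO threshold
    print("WARNING: replication lag exceeds the 15-minute RPO target")
```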
Incorrect Answers:
B: Success E2E Latency: The end-to-end latency of successful requests made to a storage service or the specified API
operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send
the response, and receive acknowledgment of the response.
Reference:
https://azure.microsoft.com/en-us/blog/account-failover-now-in-public-preview-for-azure-storage/
https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-supported
HOTSPOT
You have an Azure Data Lake Storage Gen2 account named account1 that stores logs as shown in the following table.
You do not expect that the logs will be accessed during the retention periods.
You need to recommend a solution for account1 that meets the following requirements:
Automatically deletes the logs at the end of each retention period
Minimizes storage costs
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Explanation:
Box 1: Store the infrastructure logs in the Cool access tier and the application logs in the Archive access tier.
Cool - Optimized for storing data that is infrequently accessed and stored for at least 30 days.
Archive - Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency
requirements, on the order of hours.
Box 2: Azure Blob storage lifecycle management rules
Blob storage lifecycle management offers a rich, rule-based policy that you can use to transition your data to the best access
tier and to expire data at the end of its lifecycle.
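A lifecycle management policy is a JSON document; the sketch below shows its shape as a Python dict. The rule name, prefix filter, and 180-day retention are placeholders, since the retention table itself is not reproduced here:

```python
import json

# Illustrative lifecycle policy: the rule name, prefix filter, and 180-day
# retention are placeholders; the real values come from the retention table.
policy = {
    "rules": [
        {
            "name": "delete-application-logs",
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["logs/application"],
                },
                "actions": {
                    "baseBlob": {
                        # Delete at the end of the retention period; the blob
                        # is never read, only expired, so no rehydration cost.
                        "delete": {"daysAfterModificationGreaterThan": 180}
                    }
                },
            },
        }
    ]
}

print(json.dumps(policy, indent=2))
```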
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers
HOTSPOT
You are designing a solution that uses Azure Cosmos DB to store and serve data.
You need to design the Azure Cosmos DB storage to meet the following requirements:
Provide high availability.
Provide a recovery point objective (RPO) of less than 15 minutes.
Provide a recovery time objective (RTO) of less than two minutes.
Minimize data loss in the event of a disaster.
What should you include in the design? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Explanation:
Box 1: Multiple
For higher write availability, configure your Azure Cosmos DB account to have multiple write regions. With multiple write
regions, the RTO is effectively zero, which satisfies the two-minute requirement.
Box 2: Bounded staleness
With bounded staleness, reads lag behind writes by at most K versions of an item or a time interval T, so the RPO is
bounded by the configured staleness window (K and T) and can be kept well under 15 minutes.
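In ARM terms, the staleness window is set in the account's consistencyPolicy; the K and T values below are illustrative, chosen to keep the window under the 15-minute RPO target:

```python
# Illustrative consistencyPolicy fragment for a Cosmos DB account (ARM
# schema); K and T are placeholders chosen to keep the RPO under 15 minutes.
consistency_policy = {
    "defaultConsistencyLevel": "BoundedStaleness",
    "maxStalenessPrefix": 100000,   # K: max versions a read may lag behind
    "maxIntervalInSeconds": 300,    # T: max time lag (5 minutes)
}
```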
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/high-availability
https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels#consistency-levels-and-throughput
You plan to implement an Azure Data Lake Storage Gen2 account.
You need to ensure that the data lake will remain available if a data center fails in the primary Azure region. The solution
must minimize costs.
Which type of replication should you use for the storage account?
A
Explanation:
Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary
region using LRS. It then copies your data asynchronously to a single physical location in the secondary region.
Incorrect Answers:
B: Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary
region. For applications requiring high availability, Microsoft recommends using ZRS in the primary region, and also
replicating to a secondary region.
C: Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the
primary region. LRS is the least expensive replication option, but is not recommended for applications requiring high
availability.
D: GZRS is more expensive than GRS.
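For context, each replication option corresponds to a storage account SKU chosen at provisioning time; a quick sketch of that mapping, with comments summarizing the descriptions above:

```python
# Replication option -> storage account SKU name used at provisioning time.
redundancy_skus = {
    "LRS":  "Standard_LRS",   # single data center, lowest cost
    "ZRS":  "Standard_ZRS",   # three availability zones in the primary region
    "GRS":  "Standard_GRS",   # LRS in primary + async copy to secondary region
    "GZRS": "Standard_GZRS",  # ZRS in primary + async copy to secondary region
}
```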
Reference: https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a
unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while
others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in
the review screen.
A company is developing a solution to manage inventory data for a group of automotive repair shops. The solution will use
Azure Synapse Analytics as the data store.
Shops will upload data every 10 days.
Data corruption checks must run each time data is uploaded. If corruption is detected, the corrupted data must be removed.
You need to ensure that upload processes and data corruption checks do not impact reporting and analytics processes that
use the data warehouse.
Proposed solution: Configure database-level auditing in Azure Synapse Analytics and set retention to 10 days.
Does the solution meet the goal?
B
Explanation:
Database-level auditing records database events; it neither isolates reporting and analytics workloads from the upload nor
provides a way to remove corrupted data. Instead, create a user-defined restore point before data is uploaded, and delete
the restore point after the data corruption checks complete.
Reference: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/backup-and-restore
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a
unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while
others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in
the review screen.
A company is developing a solution to manage inventory data for a group of automotive repair shops. The solution will use
Azure Synapse Analytics as the data store.
Shops will upload data every 10 days.
Data corruption checks must run each time data is uploaded. If corruption is detected, the corrupted data must be removed.
You need to ensure that upload processes and data corruption checks do not impact reporting and analytics processes that
use the data warehouse.
Proposed solution: Create a user-defined restore point before data is uploaded. Delete the restore point after data corruption
checks complete.
Does the solution meet the goal?
A
Explanation:
User-Defined Restore Points
This feature enables you to manually trigger snapshots to create restore points of your data warehouse before and after
large modifications. Because the snapshots are taken at a known point in the workload, the restore points are logically
consistent, which provides additional data protection and quick recovery in case of workload interruptions or user errors.
Note: A data warehouse restore is a new data warehouse that is created from a restore point of an existing or deleted data
warehouse. Restoring your data warehouse is an essential part of any business continuity and disaster recovery strategy
because it re-creates your data after accidental corruption or deletion.
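A minimal sketch of that load pattern follows; create_restore_point, checks_passed, restore_to_point, and delete_restore_point are hypothetical helpers standing in for whichever management interface (PowerShell, REST, or an SDK) you use:

```python
# Hypothetical helpers: these stand in for the Synapse/SQL DW management API
# of your choice; the ordering of the steps is the point of this sketch.
def create_restore_point(pool: str, label: str) -> str: ...
def checks_passed(pool: str) -> bool: ...
def restore_to_point(pool: str, restore_point: str) -> None: ...
def delete_restore_point(pool: str, restore_point: str) -> None: ...

def load_shop_data(pool: str, upload) -> None:
    # 1. Snapshot a logically consistent state before touching the data.
    rp = create_restore_point(pool, label="pre-upload")
    upload()                              # 2. Ingest the 10-day batch.
    if checks_passed(pool):               # 3. Run the corruption checks.
        delete_restore_point(pool, rp)    # clean: drop the snapshot
    else:
        restore_to_point(pool, rp)        # corrupted: roll back to the snapshot
```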
Reference: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/backup-and-restore
You manage a solution that uses Azure HDInsight clusters.
You need to implement a solution to monitor cluster performance and status.
Which technology should you use?
E
Explanation:
Ambari is the recommended tool for monitoring utilization across the whole cluster. The Ambari dashboard shows
at-a-glance widgets that display metrics such as CPU, network, YARN memory, and HDFS disk usage; the specific metrics
shown depend on the cluster type. The "Hosts" tab shows metrics for individual nodes so you can verify that the load on the
cluster is evenly distributed. The Apache Ambari project aims to make Hadoop management simpler by developing
software for provisioning, managing, and monitoring Apache Hadoop clusters. Ambari provides an intuitive, easy-to-use
Hadoop management web UI backed by its RESTful APIs.
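The same information is reachable programmatically through Ambari's REST API, which HDInsight exposes at a well-known URL; a minimal sketch using the requests package (cluster name and credentials are placeholders):

```python
import requests

# Placeholders: an HDInsight cluster exposes Ambari at this URL; authenticate
# with the cluster login (admin) credentials.
cluster = "mycluster"
base = f"https://{cluster}.azurehdinsight.net/api/v1/clusters/{cluster}"
auth = ("admin", "<cluster-login-password>")

# Overall cluster health, as summarized on the Ambari dashboard.
resp = requests.get(f"{base}?fields=Clusters/health_report", auth=auth)
resp.raise_for_status()
print(resp.json()["Clusters"]["health_report"])

# Per-host listing, as on the "Hosts" tab.
hosts = requests.get(f"{base}/hosts", auth=auth).json()
for item in hosts["items"]:
    print(item["Hosts"]["host_name"])
```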
Reference:
https://azure.microsoft.com/en-us/blog/monitoring-on-hdinsight-part-1-an-overview/
https://ambari.apache.org/
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a
unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while
others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in
the review screen.
A company is developing a solution to manage inventory data for a group of automotive repair shops. The solution will use
Azure Synapse Analytics as the data store.
Shops will upload data every 10 days.
Data corruption checks must run each time data is uploaded. If corruption is detected, the corrupted data must be removed.
You need to ensure that upload processes and data corruption checks do not impact reporting and analytics processes that
use the data warehouse.
Proposed solution: Insert data from shops and perform the data corruption check in a transaction.
Roll back the transaction if corruption is detected.
Does the solution meet the goal?
B
Explanation:
Running the load and the corruption check inside a single long-running transaction does not isolate reporting and analytics
workloads from the upload, and rolling back a large load is slow. Instead, create a user-defined restore point before data is
uploaded, and delete the restore point after the data corruption checks complete.
Reference: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/backup-and-restore
HOTSPOT
You are planning the deployment of two separate Azure Cosmos DB databases named db1 and db2.
You need to recommend a deployment strategy that meets the following requirements:
Costs for both databases must be minimized.
Db1 must meet an availability SLA of 99.99% for both reads and writes.
Db2 must meet an availability SLA of 99.99% for writes and 99.999% for reads.
Which deployment strategy should you recommend for each database? To answer, select the appropriate options in the
answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Explanation:
Db1: A single read/write region
Db2: A single write region and multiple read regions
A single-region account carries a 99.99% availability SLA for both reads and writes, which is all db1 requires. Adding read
regions raises the read availability SLA to 99.999% while writes remain at 99.99%, and it costs less than enabling multiple
write regions, which db2 does not need.
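In ARM terms (region names are placeholders), the difference between the two recommendations is only the length of the locations list; neither account needs the multi-write flag:

```python
# Illustrative ARM fragments; region names are placeholders.
db1_account = {                              # single read/write region: 99.99% SLA
    "enableMultipleWriteLocations": False,
    "locations": [
        {"locationName": "East US", "failoverPriority": 0},
    ],
}
db2_account = {                              # one write + extra read regions:
    "enableMultipleWriteLocations": False,   # writes stay at 99.99%,
    "locations": [                           # reads rise to 99.999%
        {"locationName": "East US", "failoverPriority": 0},
        {"locationName": "West US", "failoverPriority": 1},
    ],
}
```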
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/high-availability