Case study
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete
each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure
that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study.
Case studies might contain exhibits and other resources that provide more information about the scenario that is described
in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make
changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of
the case study before you answer the questions. Clicking these buttons displays information such as business requirements,
existing environment, and problem statements. If the case study has an All Information tab, note that the information
displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click
the Question button to return to the question.
Overview
XYZ is an online training provider.
Current Environment
The company currently has Microsoft SQL databases that are split into different categories, or tiers. Some of the databases are used by internal users, some by external partners and external distributors. Below is the list of applications, tiers, and their individual requirements:
Below are the current requirements of the company:
For Tier 4 and Tier 5 databases, the backup strategy must include the following (a minimal T-SQL sketch follows this list):
- Transaction log backup every hour
- Differential backup every day
- Full backup every week
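The schedule above maps onto the three native SQL Server backup types. A minimal T-SQL sketch, assuming a hypothetical Tier 4 database named Tier4Db and local backup paths (names and paths are assumptions; in practice each statement would be scheduled as a SQL Server Agent job at the stated frequency):

-- Assumes the database uses the FULL recovery model, which log backups require.
BACKUP DATABASE [Tier4Db] TO DISK = N'E:\Backups\Tier4Db_full.bak';                   -- weekly full backup
BACKUP DATABASE [Tier4Db] TO DISK = N'E:\Backups\Tier4Db_diff.bak' WITH DIFFERENTIAL; -- daily differential backup
BACKUP LOG [Tier4Db] TO DISK = N'E:\Backups\Tier4Db_log.trn';                         -- hourly transaction log backup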
Backup strategies must be in place for all standalone Azure SQL databases, using the methods available with Azure SQL Database.
The Tier 1 database must implement the following data masking logic (a masking sketch follows this list):
- For data type XYZ-A: mask four or fewer characters of string data
- For data type XYZ-B: expose the first letter and mask the domain
- For data type XYZ-C: mask everything except the characters at the beginning and the end
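These three rules line up with the built-in default(), email(), and partial() functions of SQL Database dynamic data masking. A minimal sketch against a hypothetical dbo.Students table (table and column names are assumptions):

ALTER TABLE dbo.Students
    ALTER COLUMN LastName ADD MASKED WITH (FUNCTION = 'default()');  -- XYZ-A: full mask, rendered as four or fewer X characters for strings
ALTER TABLE dbo.Students
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');       -- XYZ-B: first letter exposed, domain masked
ALTER TABLE dbo.Students
    ALTER COLUMN CardNumber ADD MASKED WITH (FUNCTION = 'partial(2, "XXXX", 2)');  -- XYZ-C: only the leading and trailing characters exposed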
All certificates and keys are internally managed in on-premises data stores.
For Tier 2 databases, if there are any conflicts during the data transfer from on-premises, preference should be given to the on-premises data.
Monitoring must be set up on every database.
Applications in Tiers 6 through 8 must ensure that unexpected resource storage usage is immediately reported to the IT data engineers.
Azure SQL Data Warehouse will be used to gather data from multiple internal and external databases. The Azure SQL Data Warehouse must be optimized to use data from its cache.
The following metrics must be available for the cache:
- Metric XYZ-A: low cache hit %, high cache usage %
- Metric XYZ-B: low cache hit %, low cache usage %
- Metric XYZ-C: high cache hit %, high cache usage %
The reporting data for external partners must be stored in Azure storage. The data should be made available during regular
business hours in connecting regions.
The reporting for Tier 9 needs to be moved to Event Hubs.
The reporting for Tier 10 needs to be moved to Azure Blobs.
The following issues have been identified in the setup:
- The external partners have control over the data formats, types, and schemas.
- For external clients, the queries cannot be changed or optimized.
- The database development staff are familiar with the T-SQL language.
- Because of the size and amount of data, some applications and reporting features are not performing at SLA levels.
The data for the external applications needs to be encrypted at rest. You decide to implement the following steps:
- Use the Always Encrypted Wizard in SQL Server Management Studio
- Select the column that needs to be encrypted
- Set the encryption type to Deterministic
- Configure the master key to be used from Azure Key Vault
- Confirm the configuration and deploy the solution
Would these steps fulfill the requirement?
Answer: B (No)
Explanation:
As per the case study, all keys and certificates need to be managed in on-premises data stores, but these steps place the column master key in Azure Key Vault.
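For illustration, the wizard in these steps would register a column master key backed by Azure Key Vault, roughly like the metadata definition below (the key name and vault URL are assumptions). This is precisely what the on-premises key requirement rules out:

CREATE COLUMN MASTER KEY [CMK_KeyVault]
WITH (
    KEY_STORE_PROVIDER_NAME = 'AZURE_KEY_VAULT',  -- key material lives in Azure Key Vault, not on-premises
    KEY_PATH = 'https://myvault.vault.azure.net/keys/AlwaysEncryptedCMK'
);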
Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted
Case study
(This question is based on the same XYZ case study described at the start of this section; the scenario is unchanged.)
The data for the external applications needs to be encrypted at rest. You decide to implement the following steps:
- Use the Always Encrypted Wizard in SQL Server Management Studio
- Select the column that needs to be encrypted
- Set the encryption type to Deterministic
- Configure the master key to be used from the Windows Certificate Store
- Confirm the configuration and deploy the solution
Would these steps fulfill the requirement?
Answer: A (Yes)
Explanation:
The Windows Certificate Store keeps the column master key in an internally managed, on-premises key store, which satisfies the case study requirement that all certificates and keys be managed in on-premises data stores.
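For illustration, the column master key registered by these steps would point at the local certificate store, roughly like the metadata definition below (the thumbprint path is a placeholder):

CREATE COLUMN MASTER KEY [CMK_CertStore]
WITH (
    KEY_STORE_PROVIDER_NAME = 'MSSQL_CERTIFICATE_STORE',  -- key material stays in the internally managed certificate store
    KEY_PATH = 'CurrentUser/My/<certificate thumbprint>'
);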
Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted
Case study
(This question is based on the same XYZ case study described at the start of this section; the scenario is unchanged.)
The data for the external applications needs to be encrypted at rest. You decide to implement the following steps:
- Use the Always Encrypted Wizard in SQL Server Management Studio
- Select the column that needs to be encrypted
- Set the encryption type to Randomized
- Configure the master key to be used from the Windows Certificate Store
- Confirm the configuration and deploy the solution
Would these steps fulfill the requirement?
Answer: B (No)
Explanation:
As per the documentation, the encryption type needs to be set to Deterministic when enabling Always Encrypted:
Column Selection
Click Next on the Introduction page to open the Column Selection page. On this page, you will select which columns you
want to encrypt, the type of encryption, and what column encryption key (CEK) to use.
Encrypt SSN and BirthDate information for each patient. The SSN column will use deterministic encryption, which supports equality lookups, joins, and group by. The BirthDate column will use randomized encryption, which does not support these operations.
Set the Encryption Type for the SSN column to Deterministic and the BirthDate column to Randomized. Click Next.
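In T-SQL terms, the wizard's choice corresponds to the ENCRYPTION_TYPE option in the column definition. A minimal sketch, assuming a column encryption key named CEK1 already exists (all names are assumptions):

CREATE TABLE dbo.Patients (
    PatientId int IDENTITY(1,1) PRIMARY KEY,
    SSN char(11) COLLATE Latin1_General_BIN2  -- deterministic encryption requires a BIN2 collation
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
                        ENCRYPTION_TYPE = DETERMINISTIC,  -- supports equality lookups, joins, GROUP BY
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    BirthDate date
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
                        ENCRYPTION_TYPE = RANDOMIZED,     -- stronger, but supports no operations on the column
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
);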
Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted
Case study
(This question is based on the same XYZ case study described at the start of this section; the scenario is unchanged.)
Which of the following can be used to process and query the ingested Tier 9 data?
Answer: D (Azure Stream Analytics)
Explanation:
One way is to use Azure Stream Analytics. The Microsoft documentation mentions the following:
Process data from your event hub using Azure Stream Analytics
The Azure Stream Analytics service makes it easy to ingest, process, and analyze streaming data from Azure Event Hubs,
enabling powerful insights to drive real-time actions. This integration allows you to quickly create a hot-path analytics
pipeline. You can use the Azure portal to visualize incoming data and write a Stream Analytics query. Once your query is
ready, you can move it into production in only a few clicks.
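For illustration, a simple Stream Analytics query over the Tier 9 event hub input might look like the following (the input and output alias names are assumptions configured on the job):

-- Count Tier 9 reporting events in five-minute tumbling windows.
SELECT
    System.Timestamp() AS WindowEnd,
    COUNT(*) AS EventCount
INTO [tier9-output]
FROM [tier9-eventhub-input] TIMESTAMP BY EventEnqueuedUtcTime
GROUP BY TumblingWindow(minute, 5)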
Option Azure Notification Hubs is incorrect, since this is a push notification service.
Option Azure Cache for Redis is incorrect, since this is a caching service.
Option Azure Functions is incorrect, since this is a serverless compute service.
Reference:
https://docs.microsoft.com/en-us/azure/event-hubs/process-data-azure-stream-analytics
A company wants to make use of an Azure Data Lake Storage Gen2 account. This would be used to store big data related to an application. The company wants to implement logging.
They decide to create an Azure Automation runbook that would be used to copy events.
Would this fulfill the requirement?
Answer: B (No)
Explanation:
You need to make use of Azure Data Lake Storage diagnostic logs for this purpose.
Reference:
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-diagnostic-logs
A company has currently set up an Azure HDInsight cluster. The cluster is used to process log files generated from 100 web servers. Generally, a few GB of log data is generated each day. All of the log files are stored in a single folder in Azure Data Lake Storage Gen2.
Which of the following changes can be carried out to improve the performance of the process? (Choose two.)
Answer: A, B
Explanation:
There is an article in the Microsoft documentation on improving the performance of Azure Data Lake Storage Gen2. One recommendation is with regard to file size, and the other is with regard to organizing the data into folders.
Structure your data set
When data is stored in Data Lake Storage Gen2, the file size, number of files, and folder structure have an impact on
performance. The following section describes best practices in these areas.
File size
Typically, analytics engines such as HDInsight and Azure Data Lake Analytics have a per-file overhead. If you store your
data as many small files, this can negatively affect performance. In general, organize your data into larger sized files for
better performance (256MB to 100GB in size). Some engines and applications might have trouble efficiently processing files
that are greater than 100GB in size.
Sometimes, data pipelines have limited control over the raw data which has lots of small files. It is recommended to have a
"cooking" process that generates larger files to use for downstream applications.
Organizing time series data in folders
For Hive workloads, partition pruning of time-series data can help some queries read only a subset of the data which
improves performance.
Pipelines that ingest time-series data often place their files with a structured naming convention for files and folders, for example: \DataSet\YYYY\MM\DD\datafile_YYYY_MM_DD.tsv.
Since this is clear from the Microsoft documentation, all other options are incorrect.
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-performance-tuning-guidance
A company has a set of Azure SQL databases. They want to ensure that their IT Security team is informed when any security-related operation occurs on a database. You need to configure Azure Monitor while ensuring administrative effort is reduced.
Which of the following actions would you perform for this requirement? (Choose three.)
Answer: A, B, D
Explanation:
You can set up alerts based on all the security conditions in Azure Monitor. When any security operation is performed, an alert can be sent to the IT Security team.
The option to query audit log entries as the alert condition is incorrect, since you need to monitor all security-related events.
Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-insights-alerts-portal
A company needs to configure synchronization of data between their on-premises Microsoft SQL Server database and an Azure SQL database. The synchronization process must include the following:
- Be able to perform an initial data synchronization to the Azure SQL database with minimal downtime
- Be able to perform bi-directional synchronization after the initial synchronization is complete
Which of the following would you consider as the synchronization solution?
Answer: D (Azure SQL Data Sync)
Explanation:
Azure SQL Data Sync can be used to synchronize data between the on-premises SQL Server and the Azure SQL database.
The Microsoft documentation mentions the following:
When to use Data Sync
Data Sync is useful in cases where data needs to be kept updated across several Azure SQL databases or SQL Server
databases. Here are the main use cases for Data Sync:
Hybrid Data Synchronization: With Data Sync, you can keep data synchronized between your on-premises databases and
Azure SQL databases to enable hybrid applications. This capability may appeal to customers who are considering moving to
the cloud and would like to put some of their application in Azure.
Distributed Applications: In many cases, it's beneficial to separate different workloads across different databases. For
example, if you have a large production database, but you also need to run a reporting or analytics workload on this data, it's
helpful to have a second database for this additional workload. This approach minimizes the performance impact on your
production workload. You can use Data Sync to keep these two databases synchronized.
Globally Distributed Applications: Many businesses span several regions and even several countries/regions. To minimize
network latency, it's best to have your data in a region close to you. With Data Sync, you can easily keep databases in
regions around the world synchronized.
Option Data Migration Assistant is incorrect, since this is just used to assess databases for the migration process.
Option Backup and restore is incorrect, since this would just be the initial setup activity.
Option SQL Server Agent job is incorrect, since this is used to run administrative tasks on on-premises SQL Server databases.
Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-sync-data
A company wants to migrate a set of on-premises Microsoft SQL Server databases to Azure. They want to migrate the databases as a simple lift-and-shift process by using backup and restore.
Which of the following would they use in Azure to host the SQL databases?
Answer: D (Azure SQL Database managed instance)
Explanation:
For easy migration of on-premises databases, consider migrating to Azure SQL Database managed instance. The Microsoft
documentation mentions the following:
What is Azure SQL Database managed instance?
Managed instance is a new deployment option of Azure SQL Database, providing near 100% compatibility with the latest
SQL Server on-premises (Enterprise Edition) Database Engine, providing a native virtual network (VNet) implementation that
addresses common security concerns, and a business model favorable for on-premises SQL Server customers. The
managed instance deployment model allows existing SQL Server customers to lift and shift their on-premises applications to
the cloud with minimal application and database changes. At the same time, the managed instance deployment option
preserves all PaaS capabilities (automatic patching and version updates, automated backups, high-availability), that
drastically reduces management overhead and TCO.
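For illustration, a lift-and-shift restore onto a managed instance can use a native .bak file uploaded to Azure Blob storage. A minimal sketch, with hypothetical storage account, container, and database names (the SAS token is a placeholder):

-- Credential that lets the managed instance read the backup container.
CREATE CREDENTIAL [https://mystorage.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token>';

-- Native restore of the on-premises backup onto the managed instance.
RESTORE DATABASE [SalesDb]
FROM URL = 'https://mystorage.blob.core.windows.net/backups/SalesDb.bak';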
Option Azure SQL Database single database is incorrect, since this is a better option if you just want to host a single database on the Azure platform.
Option Azure SQL Data Warehouse is incorrect, since this is a data warehousing solution available on the Azure platform.
Option Azure Cosmos DB is incorrect, since this is a NoSQL-based database solution.
Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance
A company wants to integrate their on-premises Microsoft SQL Server data with an Azure SQL database. Here the data must be transformed incrementally.
Which of the following can you use to configure a pipeline to copy the data?
Answer: C (Azure Data Factory)
Explanation:
You can make use of Azure Data Factory, which stages the data via Azure Blob storage. An example of this is also given in the Microsoft documentation:
Move data from an on-premises SQL server to SQL Azure with Azure Data Factory.
This article shows how to move data from an on-premises SQL Server Database to a SQL Azure Database via Azure Blob
Storage using the Azure Data Factory (ADF). For a table that summarizes various options for moving data to an Azure SQL
Database, see Move data to an Azure SQL Database for Azure Machine Learning.
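For the incremental requirement, Data Factory copy pipelines commonly drive the copy with a watermark pattern on the source. A minimal sketch of the source-side T-SQL, assuming hypothetical dbo.SourceTable and dbo.WatermarkTable objects (all names are assumptions):

-- Capture the new high-water mark before reading.
DECLARE @NewWatermark datetime2 = SYSUTCDATETIME();

-- Copy only the rows changed since the last pipeline run.
SELECT *
FROM dbo.SourceTable
WHERE LastModifiedDate > (SELECT WatermarkValue
                          FROM dbo.WatermarkTable
                          WHERE TableName = 'SourceTable');

-- Advance the watermark for the next run.
UPDATE dbo.WatermarkTable
SET WatermarkValue = @NewWatermark
WHERE TableName = 'SourceTable';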
All other options are incorrect, since you need to use the Azure Data Factory UI tool to develop the pipeline.
Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/move-sql-azure-adf