Download Microsoft.DP-201.TestKing.2019-04-27.31q.vcex


File Info

Exam: Designing an Azure Data Solution
Number: DP-201
File Name: Microsoft.DP-201.TestKing.2019-04-27.31q.vcex
Size: 986 KB
Posted: Apr 27, 2019

How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.

Demo Questions

Question 1

You need to recommend a solution for storing the image tagging data. 
What should you recommend? 


  1. Azure File Storage
  2. Azure Cosmos DB
  3. Azure Blob Storage
  4. Azure SQL Database
  5. Azure SQL Data Warehouse
Correct answer: C
Explanation:
Image data must be stored in a single data store at minimum cost. 
Note: Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data.
Unstructured data is data that does not adhere to a particular data model or definition, such as text or binary data. 
Blob storage is designed for:
  • Serving images or documents directly to a browser. 
  • Storing files for distributed access. 
  • Streaming video and audio. 
  • Writing to log files. 
  • Storing data for backup and restore, disaster recovery, and archiving. 
  • Storing data for analysis by an on-premises or Azure-hosted service. 
References:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction
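
A minimal sketch of the recommended approach, assuming the azure-storage-blob (v12) Python package; the connection string, container name, and file names are placeholders:

```python
# Store image tagging data in Blob storage (a sketch; the container is
# assumed to already exist).
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("image-tags")

# Upload an image and its tagging data side by side as blobs.
with open("photo-001.jpg", "rb") as image:
    container.upload_blob(name="photo-001.jpg", data=image, overwrite=True)

container.upload_blob(
    name="photo-001.tags.json",
    data='{"tags": ["outdoor", "car"]}',
    overwrite=True,
)
```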



Question 2

You need to design the solution for analyzing customer data. 
What should you recommend?


  1. Azure Databricks
  2. Azure Data Lake Storage
  3. Azure SQL Data Warehouse
  4. Azure Cognitive Services
  5. Azure Batch
Correct answer: A
Explanation:
Customer data must be analyzed using managed Spark clusters. 
You create Spark clusters through Azure Databricks. 
References:
https://docs.microsoft.com/en-us/azure/azure-databricks/quickstart-create-databricks-workspace-portal
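
As an illustration, a minimal Spark analysis as it might run in an Azure Databricks notebook, where the managed cluster already provides the `spark` session; the storage path and column names are assumptions:

```python
# Read customer data from a hypothetical Blob storage path.
df = spark.read.option("header", "true").csv(
    "wasbs://data@contosostore.blob.core.windows.net/customers/"
)

# A simple Spark analysis: rank customer regions by total spend.
from pyspark.sql import functions as F

(df.groupBy("region")
   .agg(F.sum("total_spend").alias("spend"))
   .orderBy(F.desc("spend"))
   .show())
```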



Question 3

You need to design a solution to meet the SQL Server storage requirements for CONT_SQL3. 
Which type of disk should you recommend?


  1. Standard SSD Managed Disk
  2. Premium SSD Managed Disk
  3. Ultra SSD Managed Disk
Correct answer: C
Explanation:
CONT_SQL3 requires an initial scale of 35,000 IOPS. Premium SSD managed disks top out at 20,000 IOPS per disk, so only an Ultra SSD managed disk can deliver this on a single disk. 
Note: The referenced documentation includes a table comparing ultra SSD (preview), premium SSD, standard SSD, and standard HDD managed disks to help you decide what to use. 
References:
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disks-types
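
A sketch (not a definitive implementation) of provisioning such a disk with the azure-mgmt-compute (track 2) Python SDK; the resource names are placeholders, and property names such as disk_iops_read_write should be verified against the SDK version you use:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = compute.disks.begin_create_or_update(
    "cont-sql-rg",          # hypothetical resource group
    "cont-sql3-data-disk",  # hypothetical disk name
    {
        "location": "eastus2",
        "zones": ["1"],  # ultra disks are zonal in most regions
        "sku": {"name": "UltraSSD_LRS"},
        "creation_data": {"create_option": "Empty"},
        "disk_size_gb": 1024,
        # Ultra disks let you set IOPS independently of size; 35,000
        # exceeds the 20,000 IOPS ceiling of any premium SSD disk.
        "disk_iops_read_write": 35000,
    },
)
print(poller.result().provisioning_state)
```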



Question 4

You need to recommend an Azure SQL Database service tier. 
What should you recommend?


  1. Business Critical
  2. General Purpose
  3. Premium
  4. Standard
  5. Basic
Correct answer: C
Explanation:
The data engineers must set the SQL Data Warehouse compute resources to consume 300 DWUs. 
Note: There are three architectural models that are used in Azure SQL Database:
  • General Purpose/Standard 
  • Business Critical/Premium 
  • Hyperscale 
Incorrect Answers:
A: The Business Critical service tier is designed for applications that require low-latency responses from the underlying SSD storage (1-2 ms on average), fast recovery if the underlying infrastructure fails, or off-loading of reports, analytics, and read-only queries to the free-of-charge readable secondary replica of the primary database. 
References:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-service-tier-business-critical
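
For illustration, a minimal sketch of setting a Premium service objective from Python with pyodbc; the server, database, credentials, and the 'P2' objective are placeholder assumptions:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=contoso-sql.database.windows.net;"
    "DATABASE=master;UID=<admin>;PWD=<password>",
    autocommit=True,  # ALTER DATABASE cannot run inside a transaction
)
# 'P2' is one of the Premium service objectives (P1, P2, P4, ...).
conn.cursor().execute(
    "ALTER DATABASE [CustomerDb] MODIFY (SERVICE_OBJECTIVE = 'P2')"
)
```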



Question 5

You need to recommend the appropriate storage and processing solution. 
What should you recommend?


  1. Enable auto-shrink on the database.
  2. Flush the blob cache using Windows PowerShell.
  3. Enable Apache Spark RDD caching.
  4. Enable Databricks IO (DBIO) caching.
  5. Configure the reading speed using Azure Data Studio.
Correct answer: D
Explanation:
Scenario: You must be able to use a file system view of data stored in a blob. You must build an architecture that will allow Contoso to use the DBFS filesystem layer over a blob store. 
Databricks File System (DBFS) is a distributed file system installed on Azure Databricks clusters. Files in DBFS persist to Azure Blob storage, so you won’t lose data even after you terminate a cluster. 
The Databricks Delta cache, previously named Databricks IO (DBIO) caching, accelerates data reads by creating copies of remote files in nodes’ local storage using a fast intermediate data format. The data is cached automatically whenever a file has to be fetched from a remote location. Successive reads of the same data are then performed locally, which results in significantly improved reading speed. 
References:
https://docs.databricks.com/delta/delta-cache.html#delta-cache
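
A small sketch of turning the cache on from a Databricks notebook, where `spark` is the session the cluster provides; the DBFS mount path is a placeholder:

```python
# Enable the Delta (formerly DBIO) cache for this cluster session.
spark.conf.set("spark.databricks.io.cache.enabled", "true")

df = spark.read.parquet("dbfs:/mnt/contoso/telemetry")
df.count()  # first read: fetched from Blob storage and cached on local SSDs
df.count()  # repeat read: served from the nodes' local cache
```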



Question 6

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. 
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. 
You are designing an HDInsight/Hadoop cluster solution that uses Azure Data Lake Storage Gen1. 
The solution requires POSIX permissions and enables diagnostics logging for auditing. 
You need to recommend solutions that optimize storage. 
Proposed Solution: Implement compaction jobs to combine small files into larger files.
Does the solution meet the goal?


  1. Yes
  2. No
Correct answer: A
Explanation:
Depending on what services and workloads are using the data, a good size to consider for files is 256 MB or greater. If the file sizes cannot be batched when landing in Data Lake Storage Gen1, you can have a separate compaction job that combines these files into larger ones. 
Note: POSIX permissions and auditing in Data Lake Storage Gen1 come with an overhead that becomes apparent when working with numerous small files. As a best practice, you must batch your data into larger files rather than writing thousands or millions of small files to Data Lake Storage Gen1. Avoiding small file sizes can have multiple benefits, such as:
  • Lowering the authentication checks across multiple files 
  • Reduced open file connections 
  • Faster copying/replication 
  • Fewer files to process when updating Data Lake Storage Gen1 POSIX permissions 
References:
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-best-practices
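
A hedged sketch of such a compaction job in PySpark; the adl:// paths and the target file count are illustrative assumptions:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compact-small-files").getOrCreate()

# Read a folder of many small files from Data Lake Storage Gen1.
df = spark.read.json("adl://contosodls.azuredatalakestore.net/logs/raw/")

# Pick a partition count that yields roughly 256 MB per output file;
# coalesce avoids a full shuffle when reducing partitions.
target_files = 8  # e.g. ~2 GB of data / 256 MB per file
df.coalesce(target_files).write.mode("overwrite").json(
    "adl://contosodls.azuredatalakestore.net/logs/compacted/"
)
```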



Question 7

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. 
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. 
You are designing an Azure SQL Database that will use elastic pools. You plan to store data about customers in a table. Each record uses a value for CustomerID. 
You need to recommend a strategy to partition data based on values in CustomerID. 
Proposed Solution: Separate data into customer regions by using vertical partitioning.
Does the solution meet the goal?


  1. Yes
  2. No
Correct answer: B
Explanation:
Vertical partitioning is used for cross-database queries. Instead, we should use horizontal partitioning, which is also called sharding. 
References:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-query-overview



Question 8

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. 
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. 
You are designing an Azure SQL Database that will use elastic pools. You plan to store data about customers in a table. Each record uses a value for CustomerID. 
You need to recommend a strategy to partition data based on values in CustomerID. 
Proposed Solution: Separate data into customer regions by using horizontal partitioning.
Does the solution meet the goal? 


  1. Yes
  2. No
Correct answer: B
Explanation:
We should partition horizontally into shards, not by customer region. 
Note: Horizontal Partitioning - Sharding: Data is partitioned horizontally to distribute rows across a scaled-out data tier. With this approach, the schema is identical on all participating databases. This approach is also called “sharding”. Sharding can be performed and managed using (1) the elastic database tools libraries or (2) self-sharding. An elastic query is used to query or compile reports across many shards. 
References:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-query-overview



Question 9

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. 
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. 
You are designing an Azure SQL Database that will use elastic pools. You plan to store data about customers in a table. Each record uses a value for CustomerID. 
You need to recommend a strategy to partition data based on values in CustomerID. 
Proposed Solution: Separate data into shards by using horizontal partitioning.
Does the solution meet the goal?


  1. Yes 
  2. No
Correct answer: A
Explanation:
Horizontal Partitioning - Sharding: Data is partitioned horizontally to distribute rows across a scaled-out data tier. With this approach, the schema is identical on all participating databases. This approach is also called “sharding”. Sharding can be performed and managed using (1) the elastic database tools libraries or (2) self-sharding. An elastic query is used to query or compile reports across many shards. 
References:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-query-overview
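
An illustrative Python sketch of the idea (the elastic database client library itself is .NET-only): a shard map routes each CustomerID to the database holding that customer's rows, while every shard shares the same schema. The shard names and hash scheme are assumptions:

```python
NUM_SHARDS = 4
SHARD_DATABASES = [f"customers_shard_{i}" for i in range(NUM_SHARDS)]  # hypothetical names

def shard_for(customer_id: int) -> str:
    """Hash-style routing: the same CustomerID always lands on the same shard."""
    return SHARD_DATABASES[customer_id % NUM_SHARDS]

# Rows for customers 1001 and 1003 live in different databases,
# but both databases have an identical Customers schema.
print(shard_for(1001))  # customers_shard_1
print(shard_for(1003))  # customers_shard_3
```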



Question 10

You are evaluating data storage solutions to support a new application. 
You need to recommend a data storage solution that represents data by using nodes and relationships in graph structures. 
Which data storage solution should you recommend?


  1. Blob Storage
  2. Cosmos DB
  3. Data Lake Store
  4. HDInsight
Correct answer: B
Explanation:
For large graphs with lots of entities and relationships, you can perform very complex analyses very quickly. Many graph databases provide a query language that you can use to traverse a network of relationships efficiently. 
Relevant Azure service: Cosmos DB (Gremlin API)
References:
https://docs.microsoft.com/en-us/azure/architecture/guide/technology-choices/data-store-overview
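
A hedged sketch using the open-source gremlinpython driver against a Cosmos DB Gremlin API account; the account, database, graph, key, and the 'pk' partition-key property are placeholders:

```python
from gremlin_python.driver import client, serializer

# Cosmos DB's Gremlin endpoint expects the GraphSON v2 serializer.
gremlin = client.Client(
    "wss://<account>.gremlin.cosmos.azure.com:443/",
    "g",
    username="/dbs/<database>/colls/<graph>",
    password="<account-key>",
    message_serializer=serializer.GraphSONSerializersV2d0(),
)

# Nodes (vertices) and relationships (edges) in graph structures:
gremlin.submit("g.addV('person').property('id', 'ana').property('pk', 'ana')").all().result()
gremlin.submit("g.addV('person').property('id', 'ben').property('pk', 'ben')").all().result()
gremlin.submit("g.V('ana').addE('knows').to(g.V('ben'))").all().result()

# Traverse the relationship: whom does ana know?
print(gremlin.submit("g.V('ana').out('knows').values('id')").all().result())
```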








