Download Microsoft.DP-420.CertDumps.2024-04-08.55q.vcex


File Info

Exam: Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB (beta)
Number: DP-420
File Name: Microsoft.DP-420.CertDumps.2024-04-08.55q.vcex
Size: 2 MB
Posted: Apr 08, 2024

How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.

Purchase

Coupon: MASTEREXAM (20% discount)






Demo Questions

Question 1

You have a container named container1 in an Azure Cosmos DB Core (SQL) API account. You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.  
Solution: You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the input and Azure Blob Storage as the output.  
Does this meet the goal?


  1. Yes
  2. No
Correct answer: B
Explanation:
An Azure Data Factory pipeline performs a one-time batch copy. The approach described in the linked article is instead to use the Azure Cosmos DB change feed (for example, from an Azure Function) to continuously copy the contents of container1 to Azure Blob Storage, which the Stream Analytics job can then read as reference data.
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/changefeed-ecommerce-solution
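
Below is a minimal Python sketch of that change-feed pattern. The account credentials, database name, and the "refdata" blob container are all illustrative placeholders, not values from the question:

import json
from azure.cosmos import CosmosClient
from azure.storage.blob import BlobServiceClient

# Connect to the source container (all names here are placeholders).
cosmos = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = cosmos.get_database_client("db1").get_container_client("container1")

# Connect to the Blob Storage container that Stream Analytics reads as reference data.
blobs = BlobServiceClient.from_connection_string("<storage-connection-string>")
refdata = blobs.get_container_client("refdata")

# Read the change feed from the beginning and land each changed item as a JSON blob.
for item in container.query_items_change_feed(is_start_from_beginning=True):
    refdata.upload_blob(name=f"{item['id']}.json", data=json.dumps(item), overwrite=True)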



Question 2

You plan to create an Azure Cosmos DB Core (SQL) API account that will use customer-managed keys stored in Azure Key Vault. You need to configure an access policy in Key Vault to allow Azure Cosmos DB access to the keys. Which three permissions should you enable in the access policy?  
(Each correct answer presents part of the solution. Choose three.)


  1. Wrap Key
  2. Get
  3. List
  4. Update
  5. Sign
  6. Verify
  7. Unwrap Key
Correct answer: ABG
Explanation:
To use customer-managed keys, Azure Cosmos DB must be granted the Get, Wrap Key, and Unwrap Key permissions on the Key Vault keys.
https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-setup-cmk



Question 3

You are troubleshooting the current issues caused by the application updates. Which action can address the application updates issue without affecting the functionality of the application?


  1. Enable time to live for the con-product container.
  2. Set the default consistency level of account1 to strong.
  3. Set the default consistency level of account1 to bounded staleness.
  4. Add a custom indexing policy to the con-product container.
Correct answer: C
Explanation:
https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels



Question 4

You need to implement a trigger in Azure Cosmos DB Core (SQL) API that will run before an item is inserted into a container. Which two actions should you perform to ensure that the trigger runs?  
(Each correct answer presents part of the solution. Choose two.)


  1. Append pre to the name of the JavaScript function trigger.
  2. For each create request, set the access condition in RequestOptions.
  3. Register the trigger as a pre-trigger.
  4. For each create request, set the consistency level to session in RequestOptions.
  5. For each create request, set the trigger name in RequestOptions.
Correct answer: CE
Explanation:
C: When triggers are registered, you can specify the operations that they can run with.
E: When executing, pre-triggers are passed in the RequestOptions object by specifying PreTriggerInclude and then passing the name of the trigger in a List object.
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-use-stored-procedures-triggers-udfs
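
A minimal Python SDK sketch of both actions, with hypothetical trigger, database, and container names:

from azure.cosmos import CosmosClient
from azure.cosmos.documents import TriggerType, TriggerOperation

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("db1").get_container_client("container1")

# Action 1 (answer C): register the JavaScript function as a pre-trigger
# scoped to create operations.
container.scripts.create_trigger({
    "id": "validateItem",
    "body": "function validateItem() { /* runs before the insert */ }",
    "triggerType": TriggerType.Pre,
    "triggerOperation": TriggerOperation.Create,
})

# Action 2 (answer E): name the trigger in the options of every create
# request; triggers never run automatically.
container.create_item(
    body={"id": "1", "pk": "a"},
    pre_trigger_include="validateItem",
)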



Question 5

You have an application named App1 that reads the data in an Azure Cosmos DB Core (SQL) API account. App1 runs the same read queries every minute. The default consistency level for the account is set to eventual. You discover that every query consumes request units (RUs) instead of using the cache. You verify the IntegratedCacheItemHitRate metric and the IntegratedCacheQueryHitRate metric. Both metrics have values of 0. You verify that the dedicated gateway cluster is provisioned and used in the connection string. You need to ensure that App1 uses the Azure Cosmos DB integrated cache. What should you configure?


  1. the indexing policy of the Azure Cosmos DB container
  2. the consistency level of the requests from App1
  3. the connectivity mode of the App1 CosmosClient
  4. the default consistency level of the Azure Cosmos DB account
Correct answer: C
Explanation:
Because the integrated cache is specific to your Azure Cosmos DB account and requires significant CPU and memory, it requires a dedicated gateway node. Connect to Azure Cosmos DB using gateway mode.  
https://docs.microsoft.com/en-us/azure/cosmos-db/integrated-cache-faq
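
As a sketch of the fix in the Python SDK, which is gateway-only, using the cache comes down to pointing the client at the dedicated gateway endpoint (the .sqlx host below is illustrative); in SDKs that default to direct mode, such as .NET, you would additionally switch the client's connectivity mode to gateway:

from azure.cosmos import CosmosClient

# The dedicated gateway endpoint, not the standard *.documents.azure.com one.
client = CosmosClient("https://<account>.sqlx.cosmos.azure.com/", credential="<key>")
container = client.get_database_client("db1").get_container_client("container1")

# Repeated reads like this can now be served from the integrated cache
# (session or eventual consistency is required; the account already uses eventual).
items = list(container.query_items("SELECT * FROM c", enable_cross_partition_query=True))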



Question 6

You need to select the partition key for con-iot1. The solution must meet the IoT telemetry requirements. What should you select?


  1. the timestamp
  2. the humidity
  3. the temperature
  4. the device ID
Correct answer: D
Explanation:
https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/iot-using-cosmos-db
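
A minimal Python sketch of answer D, with an assumed database name and telemetry property name; a high-cardinality key such as the device ID spreads telemetry writes evenly across partitions:

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
database = client.get_database_client("db1")

# Partition con-iot1 on the device ID so each device's telemetry
# lands in its own logical partition.
container = database.create_container_if_not_exists(
    id="con-iot1",
    partition_key=PartitionKey(path="/deviceId"),
)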



Question 7

You have an Azure Cosmos DB Core (SQL) API account that is used by 10 web apps. You need to analyze the data stored in the account by using Apache Spark to create machine learning models.  
The solution must NOT affect the performance of the web apps. Which two actions should you perform? (Each correct answer presents part of the solution. Choose two.)


  1. In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.olap as the data source.
  2. Create a private endpoint connection to the account.
  3. In an Azure Synapse Analytics serverless SQL pool, create a view that uses OPENROWSET and the CosmosDB provider.
  4. Enable Azure Synapse Link for the account and Analytical store on the container.
  5. In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.oltp as the data source.
Correct answer: AD
Explanation:
Enabling Azure Synapse Link and the analytical store (D) gives Spark a separate columnar copy of the data, and a cosmos.olap table (A) queries that copy, so the analysis consumes no request units from the transactional store used by the web apps.
https://github.com/microsoft/MCW-Cosmos-DB-Real-Time-Advanced-Analytics/blob/main/Hands-on%20lab/HOL%20step-by%20step%20-%20Cosmos%20DB%20real-time%20advanced%20analytics.md
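
A hedged Synapse Spark (PySpark) sketch of answers D and A, assuming a notebook where spark is the built-in session; the linked-service name is a placeholder:

# Create a Spark table over the analytical store (cosmos.olap), which is
# populated automatically once Synapse Link and the analytical store are enabled.
spark.sql("""
    CREATE TABLE IF NOT EXISTS products
    USING cosmos.olap
    OPTIONS (
        spark.synapse.linkedService 'CosmosDbLinkedService',
        spark.cosmos.container 'con-product'
    )
""")

# Model-training queries hit the column store, not the web apps' transactional store.
spark.sql("SELECT * FROM products").show(10)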



Question 8

You are implementing an Azure Data Factory data flow that will use an Azure Cosmos DB (SQL API) sink to write a dataset. The data flow will use 2,000 Apache Spark partitions. You need to ensure that the ingestion from each Spark partition is balanced to optimize throughput. Which sink setting should you configure?


  1. Throughput
  2. Write throughput budget
  3. Batch size
  4. Collection action
Correct answer: C
Explanation:
Batch size: an integer that sets how many objects are written to the Cosmos DB collection in each batch. Usually, starting with the default batch size is sufficient. Note that Cosmos DB limits a single request's size to 2 MB; the formula is "Request Size = Single Document Size * Batch Size", so if you hit a "Request size is too large" error, reduce the batch size. The larger the batch size, the better the throughput the service can achieve, provided you allocate enough RUs for the workload.
Incorrect:
Not A: Throughput: an optional value for the number of RUs to apply to the Cosmos DB collection for each execution of this data flow. The minimum is 400.
Not B: Write throughput budget: an integer that represents the RUs you want to allocate to this data flow write operation, out of the total throughput allocated to the collection.
Not D: Collection action: determines whether to recreate the destination collection prior to writing.
  • None: no action is taken on the collection.
  • Recreate: the collection is dropped and recreated.
https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db



Question 9

You need to identify which connectivity mode to use when implementing App2. The solution must support the planned changes and meet the business requirements. Which connectivity mode should you identify?


  1. Direct mode over HTTPS
  2. Gateway mode (using HTTPS)
  3. Direct mode over TCP
Correct answer: C
Explanation:
Direct mode over TCP connects the SDK directly to the backend replicas, skipping the gateway hop, and therefore offers the lowest latency and highest throughput.
https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-configure-private-endpoints



Question 10

You configure multi-region writes for account1. You need to ensure that App1 supports the new configuration for account1. The solution must meet the business requirements and the product catalog requirements. What should you do?


  1. Set the default consistency level of account1 to bounded staleness.
  2. Create a private endpoint connection.
  3. Modify the connection policy of App1.
  4. Increase the number of request units per second (RU/s) allocated to the con-product and con-productVendor containers.
Correct answer: D
Explanation:
With multi-region writes enabled, every write is replicated to, and consumes request units in, each region, so the containers need additional RU/s to sustain the same workload.
https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels
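
A minimal Python sketch of answer D, with placeholder names and an illustrative RU/s value:

from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
database = client.get_database_client("db1")

# Raise the provisioned throughput on both containers to absorb the
# replicated write traffic that multi-region writes add.
for name in ("con-product", "con-productVendor"):
    database.get_container_client(name).replace_throughput(4000)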








