Download Microsoft.AI-100.PrepAway.2021-05-20.156q.vcex

Download Exam

File Info

Exam Designing and Implementing an Azure AI Solution
Number AI-100
File Name Microsoft.AI-100.PrepAway.2021-05-20.156q.vcex
Size 5 MB
Posted May 20, 2021
Download Microsoft.AI-100.PrepAway.2021-05-20.156q.vcex

How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.

Purchase

Coupon: MASTEREXAM
With discount: 20%






Demo Questions

Question 1

You are designing an AI solution that will analyze millions of pictures by using an Azure HDInsight Hadoop cluster.
You need to recommend a solution for storing the pictures. The solution must minimize costs.
Which storage solution should you recommend?


  1. an Azure Data Lake Storage Gen1
  2. Azure File Storage
  3. Azure Blob storage
  4. Azure Table storage
Correct answer: C
Explanation:
Data Lake Storage is slightly more expensive, although the two services are in a close price range. Blob storage has more pricing options, such as access tiers (hot vs. cool) priced according to how frequently you need to access your data.
Reference: 
http://blog.pragmaticworks.com/azure-data-lake-vs-azure-blob-storage-in-data-warehousing
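To make the tier trade-off concrete, here is a minimal cost sketch. The per-GB and per-transaction prices are invented round numbers for illustration only (real Azure rates vary by region and change over time), and `monthly_cost` is a hypothetical helper, not an Azure API:

```python
# Illustrative comparison of Azure Blob storage access tiers.
# All prices below are made-up round numbers, not real Azure rates.

def monthly_cost(gb_stored, read_ops, price_per_gb, price_per_10k_reads):
    """Simplified monthly bill: storage cost plus read-transaction cost."""
    return gb_stored * price_per_gb + (read_ops / 10_000) * price_per_10k_reads

# Hot tier: higher storage price, cheaper reads (assumed figures).
hot = monthly_cost(10_000, 1_000_000, price_per_gb=0.018, price_per_10k_reads=0.004)
# Cool tier: cheaper storage, pricier reads (assumed figures).
cool = monthly_cost(10_000, 1_000_000, price_per_gb=0.010, price_per_10k_reads=0.010)

print(f"hot:  ${hot:,.2f}/month")   # storage-dominated workloads
print(f"cool: ${cool:,.2f}/month")  # favor the cool tier
```

For picture archives that are written once and read rarely, storage cost dominates the bill, which is why the cheaper-at-rest tier wins under these assumptions.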



Question 2

You are configuring data persistence for a Microsoft Bot Framework application. The application requires a NoSQL cloud data store.  
You need to identify a storage solution for the application. The solution must minimize costs.  
What should you identify?


  1. Azure Blob storage
  2. Azure Cosmos DB
  3. Azure HDInsight
  4. Azure Table storage
Correct answer: D
Explanation:
Table storage is a NoSQL key-value store for rapid development using massive semi-structured datasets. You can develop applications on Cosmos DB using popular NoSQL APIs.
The two services target different scenarios and pricing models.
While Azure Table storage is aimed at high capacity in a single region (an optional secondary read-only region, but no failover), indexing by PartitionKey/RowKey, and storage-optimized pricing, the Azure Cosmos DB Table API aims for high throughput (single-digit-millisecond latency), global distribution (multiple failover regions), SLA-backed predictable performance with automatic indexing of every attribute/property, and a pricing model focused on throughput.
References: 
https://db-engines.com/en/system/Microsoft+Azure+Cosmos+DB%3BMicrosoft+Azure+Table+Storage
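The key-value model the explanation describes can be sketched in a few lines. This is an in-memory illustration of the (PartitionKey, RowKey) addressing scheme, not the real `azure-data-tables` SDK; `upsert` and `point_query` are hypothetical names:

```python
# In-memory sketch of the Azure Table storage data model (not the real SDK):
# every entity is addressed by a (PartitionKey, RowKey) pair, and a point
# lookup on that full pair is the cheapest, fastest query the service offers.

table = {}

def upsert(partition_key, row_key, **properties):
    """Store a schemaless entity under its composite key."""
    table[(partition_key, row_key)] = {"PartitionKey": partition_key,
                                       "RowKey": row_key, **properties}

def point_query(partition_key, row_key):
    """Retrieve one entity by its full key; returns None if absent."""
    return table.get((partition_key, row_key))

# A bot might persist conversation state keyed by user and conversation.
upsert("user-42", "conversation-001", state="awaiting_reply", turns=3)
entity = point_query("user-42", "conversation-001")
print(entity["state"])  # awaiting_reply
```

Bot state fits this shape well: each lookup is a single point read by full key, which is exactly the access pattern Table storage prices cheapest.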



Question 3

Your company recently deployed several hardware devices that contain sensors.  
The sensors generate new data on an hourly basis. The data generated is stored on-premises and retained for several years.  
During the past two months, the sensors generated 300 GB of data.  
You plan to move the data to Azure and then perform advanced analytics on the data.  
You need to recommend an Azure storage solution for the data.  
Which storage solution should you recommend?


  1. Azure Queue storage
  2. Azure Cosmos DB
  3. Azure Blob storage
  4. Azure SQL Database
Correct answer: C
Explanation:
References: 
https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/data-storage



Question 4

You plan to design an application that will use data from Azure Data Lake and perform sentiment analysis by using Azure Machine Learning algorithms.  
The developers of the application use a mix of Windows- and Linux-based environments. The developers contribute to shared GitHub repositories.  
You need all the developers to use the same tool to develop the application.  
What is the best tool to use? More than one answer choice may achieve the goal.


  1. Microsoft Visual Studio Code
  2. Azure Notebooks
  3. Azure Machine Learning Studio
  4. Microsoft Visual Studio
Correct answer: C
Explanation:
References: 
https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/machine-learning/studio/algorithm-choice.md



Question 5

You have several AI applications that use an Azure Kubernetes Service (AKS) cluster. The cluster supports a maximum of 32 nodes.  
You discover that occasionally and unpredictably, the application requires more than 32 nodes.  
You need to recommend a solution to handle the unpredictable application load.  
Which scaling methods should you recommend? (Choose two.)


  1. horizontal pod autoscaler
  2. cluster autoscaler
  3. AKS cluster virtual 32 node autoscaling
  4. Azure Container Instances
Correct answer: AB
Explanation:
B: To keep up with application demands in Azure Kubernetes Service (AKS), you may need to adjust the number of nodes that run your workloads. The cluster autoscaler component can watch for pods in your cluster that can't be scheduled because of resource constraints. When issues are detected, the number of nodes is increased to meet the application demand. Nodes are also regularly checked for a lack of running pods, with the number of nodes then decreased as needed. This ability to automatically scale up or down the number of nodes in your AKS cluster lets you run an efficient, cost-effective cluster.  
A: You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application.  
Reference: 
https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler
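The horizontal pod autoscaler's core calculation is documented by Kubernetes and small enough to sketch. When the desired replica count exceeds what 32 nodes can schedule, the cluster autoscaler then adds nodes:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """Kubernetes HPA core formula:
    desired = ceil(current_replicas * current_metric / target_metric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 10 pods averaging 90% CPU against a 50% target -> scale out to 18 pods.
print(desired_replicas(10, 90, 50))  # 18

# Load drops: 4 pods at 30% CPU against a 60% target -> scale in to 2 pods.
print(desired_replicas(4, 30, 60))   # 2
```

The two mechanisms compose: the HPA adjusts pod counts from metrics, and any pods left unschedulable by that adjustment are the signal the cluster autoscaler reacts to.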



Question 6

You are designing an AI solution in Azure that will perform image classification.  
You need to identify which processing platform will provide you with the ability to update the logic over time.  
The solution must have the lowest latency for inferencing without having to batch.  
Which compute target should you identify?


  1. graphics processing units (GPUs)
  2. field-programmable gate arrays (FPGAs)
  3. central processing units (CPUs)
  4. application-specific integrated circuits (ASICs)
Correct answer: B
Explanation:
FPGAs, such as those available on Azure, provide performance close to ASICs. They are also flexible and reconfigurable over time, to implement new logic.  
Incorrect Answers: 
D: ASICs, such as Google's TensorFlow Processor Units (TPUs), are custom circuits that provide the highest efficiency, but they can't be reconfigured as your needs change.  
References: 
https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-accelerate-with-fpgas



Question 7

You have a solution that runs on a five-node Azure Kubernetes Service (AKS) cluster. The cluster uses an N-series virtual machine.  
An Azure Batch AI process runs once a day and rarely on demand.  
You need to recommend a solution to maintain the cluster configuration when the cluster is not in use. The solution must not incur any compute costs.  
What should you include in the recommendation?


  1. Downscale the cluster to one node
  2. Downscale the cluster to zero nodes
  3. Delete the cluster
Correct answer: A
Explanation:
An AKS cluster has one or more nodes.  
References: 
https://docs.microsoft.com/en-us/azure/aks/concepts-clusters-workloads



Question 8

Your company has recently deployed 5,000 Internet-connected sensors for a planned AI solution.  
You need to recommend a computing solution to perform a real-time analysis of the data generated by the sensors.  
Which computing solution should you recommend?


  1. an Azure HDInsight Storm cluster
  2. Azure Notification Hubs  
  3. an Azure HDInsight Hadoop cluster
  4. an Azure HDInsight R cluster
Correct answer: A
Explanation:
Azure HDInsight makes it easy, fast, and cost-effective to process massive amounts of data.  
You can use HDInsight to process streaming data that's received in real time from a variety of devices.  
References: 
https://docs.microsoft.com/en-us/azure/hdinsight/hadoop/apache-hadoop-introduction
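The distinction from batch processing can be sketched in a few lines. This is an illustrative one-event-at-a-time loop, not Storm's actual spout/bolt API:

```python
# Minimal sketch of the stream-processing pattern a Storm topology applies:
# each event updates running aggregates the moment it arrives, instead of
# being collected into files for a later batch (Hadoop-style) analysis.

from collections import defaultdict

readings_per_sensor = defaultdict(int)
running_max = {}

def on_event(sensor_id, value):
    """Process a single sensor reading as it arrives."""
    readings_per_sensor[sensor_id] += 1
    running_max[sensor_id] = max(value, running_max.get(sensor_id, value))

# Simulated real-time feed from a few of the 5,000 sensors.
for sensor, value in [("s1", 21.5), ("s2", 19.0), ("s1", 23.1)]:
    on_event(sensor, value)

print(readings_per_sensor["s1"], running_max["s1"])  # 2 23.1
```

Because the aggregates are always current, answers are available with low latency, which is the property that makes a streaming engine like Storm the fit for real-time analysis.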



Question 9

You deploy an application that performs sentiment analysis on the data stored in Azure Cosmos DB.  
Recently, you loaded a large amount of data to the database. The data was for a customer named Contoso, Ltd.  
You discover that queries for the Contoso data are slow to complete, and the queries slow the entire application.  
You need to reduce the amount of time it takes for the queries to complete. The solution must minimize costs.  
What is the best way to achieve the goal? More than one answer choice may achieve the goal. Select the BEST answer.


  1. Change the request units.
  2. Change the partitioning strategy.
  3. Change the transaction isolation level. 
  4. Migrate the data to the Cosmos DB database. 
Correct answer: B
Explanation:
Throughput provisioned for a container is divided evenly among physical partitions.  
Incorrect: 
Not A: Increasing request units would also improve throughput, but at a cost. 
Reference: 
https://docs.microsoft.com/en-us/azure/architecture/best-practices/data-partitioning
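A short sketch shows why the even split of throughput makes the partitioning strategy, not the total RU figure, the bottleneck here. All figures are invented for illustration:

```python
# Request units (RUs) provisioned on a Cosmos DB container are divided
# evenly among its physical partitions, so one hot partition key can be
# throttled even when total provisioned throughput looks sufficient.

def per_partition_rus(total_rus, partition_count):
    """Even split of container throughput across physical partitions."""
    return total_rus / partition_count

total_rus = 10_000                          # provisioned on the container
budget = per_partition_rus(total_rus, 10)   # 1,000 RU/s per partition

workload = 6_000   # total RU/s the application actually consumes
hot_share = 0.8    # 80% of requests target one key, e.g. "Contoso"
hot_demand = hot_share * workload           # 4,800 RU/s on one partition

throttled = hot_demand > budget
print(budget, hot_demand, throttled)  # 1000.0 4800.0 True
```

Raising request units (option A) would lift the per-partition budget but at a cost; spreading the Contoso data across more partition key values removes the hot spot without paying for extra throughput.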



Question 10

You plan to implement a new data warehouse for a planned AI solution.  
You have the following information regarding the data warehouse: 
  • The data files will be available in one week.  
  • Most queries that will be executed against the data warehouse will be ad-hoc queries.  
  • The schemas of data files that will be loaded to the data warehouse will change often.  
  • One month after the planned implementation, the data warehouse will contain 15 TB of data.  
You need to recommend a database solution to support the planned implementation.  
Which two solutions should you include in the recommendation? Each correct answer presents a complete solution.  
NOTE: Each correct selection is worth one point.


  1. Apache Hadoop
  2. Apache Spark
  3. A Microsoft Azure SQL database
  4. An Azure virtual machine that runs Microsoft SQL Server 
Correct answer: AB








