Download Microsoft.DP-100.CertKey.2019-03-23.24q.vcex


File Info

Exam: Designing and Implementing a Data Science Solution on Azure
Number: DP-100
File Name: Microsoft.DP-100.CertKey.2019-03-23.24q.vcex
Size: 774 KB
Posted: Mar 23, 2019

How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.

Purchase

Coupon: MASTEREXAM
With discount: 20%

Demo Questions

Question 1

You are developing a hands-on workshop to introduce Docker for Windows to attendees. 
You need to ensure that workshop attendees can install Docker on their devices. 
Which two prerequisite components should attendees install on the devices? Each correct answer presents part of the solution. 
NOTE: Each correct selection is worth one point.


  A. Microsoft Hardware-Assisted Virtualization Detection Tool
  B. Kitematic
  C. BIOS-enabled virtualization
  D. VirtualBox
  E. Windows 10 64-bit Professional
Correct answer: CE
Explanation:
C: Make sure your Windows system supports Hardware Virtualization Technology and that virtualization is enabled.
Ensure that hardware virtualization support is turned on in the BIOS settings.
E: To run Docker, your machine must have a 64-bit operating system running Windows 7 or higher. 
References:
https://docs.docker.com/toolbox/toolbox_install_windows/
https://blogs.technet.microsoft.com/canitpro/2015/09/08/step-by-step-enabling-hyper-v-for-use-on-windows-10/
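
For attendees who want to script the check, here is a minimal Python sketch (not an official Docker tool) that verifies the 64-bit Windows requirement from the standard library; BIOS-level virtualization still has to be confirmed in the firmware settings, which Python cannot inspect portably.

import platform

def check_docker_prereqs():
    # On 64-bit Windows, platform.machine() reports "AMD64"
    is_64bit = platform.machine().endswith("64")
    is_windows = platform.system() == "Windows"
    print("64-bit OS:", is_64bit)
    print("Windows:", is_windows)
    # Virtualization must be enabled in the BIOS/UEFI; verify it there
    # or via Task Manager > Performance > CPU > "Virtualization".

if __name__ == "__main__":
    check_docker_prereqs()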



Question 2

Your team is building a data engineering and data science development environment. 
The environment must support the following requirements:
  • support Python and Scala 
  • compose data storage, movement, and processing services into automated data pipelines 
  • the same tool should be used for the orchestration of both data engineering and data science 
  • support workload isolation and interactive workloads 
  • enable scaling across a cluster of machines 
You need to create the environment. 
What should you do?


  A. Build the environment in Apache Hive for HDInsight and use Azure Data Factory for orchestration.
  B. Build the environment in Azure Databricks and use Azure Data Factory for orchestration.
  C. Build the environment in Apache Spark for HDInsight and use Azure Container Instances for orchestration.
  D. Build the environment in Azure Databricks and use Azure Container Instances for orchestration.
Correct answer: B
Explanation:
In Azure Databricks, we can create two different types of clusters:
  • Standard: the default cluster type; it can be used with Python, R, Scala, and SQL 
  • High-concurrency 
Azure Databricks is fully integrated with Azure Data Factory. 
Incorrect Answers:
D: Azure Container Instances is good for development or testing. Not suitable for production workloads.
References:
https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/data-science-and-machine-learning
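
As a sketch of the kind of interactive Python workload a Databricks Standard cluster runs, here is a minimal PySpark job; in a Databricks notebook the spark session is provided automatically, and the file path and column names below are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql.functions import avg

# Build a session locally for illustration; Databricks supplies one.
spark = SparkSession.builder.appName("demo").getOrCreate()

df = spark.read.csv("/data/events.csv", header=True, inferSchema=True)
df.groupBy("event_type").agg(avg("duration").alias("avg_duration")).show()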



Question 3

You plan to build a team data science environment. Data for training models in machine learning pipelines will be over 20 GB in size. 
You have the following requirements:
  • Models must be built using Caffe2 or Chainer frameworks. 
  • Data scientists must be able to use a data science environment to build the machine learning pipelines and train models on their personal devices in both connected and disconnected network environments. 
Personal devices must support updating machine learning pipelines when connected to a network. 
You need to select a data science environment. 
Which environment should you use?


  A. Azure Machine Learning Service
  B. Azure Machine Learning Studio
  C. Azure Databricks
  D. Azure Kubernetes Service (AKS)
Correct answer: A
Explanation:
The Data Science Virtual Machine (DSVM) is a customized VM image on Microsoft’s Azure cloud built specifically for doing data science. Caffe2 and Chainer are supported by DSVM. 
DSVM integrates with Azure Machine Learning. 
Incorrect Answers:
B: Use Machine Learning Studio when you want to experiment with machine learning models quickly and easily, and the built-in machine learning algorithms are sufficient for your solutions.
References:
https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/overview
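
As an illustration of the Chainer support mentioned above, here is a minimal Chainer model definition; the layer sizes are arbitrary and this is a sketch, not code from the DSVM documentation.

import chainer
import chainer.functions as F
import chainer.links as L

class MLP(chainer.Chain):
    def __init__(self):
        super(MLP, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, 100)  # input size inferred on first call
            self.l2 = L.Linear(100, 10)

    def __call__(self, x):
        return self.l2(F.relu(self.l1(x)))

model = MLP()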



Question 4

You are implementing a machine learning model to predict stock prices. 
The model uses a PostgreSQL database and requires GPU processing. 
You need to create a virtual machine that is pre-configured with the required tools. 
What should you do?


  A. Create a Data Science Virtual Machine (DSVM) Windows edition.
  B. Create a Geo AI Data Science Virtual Machine (Geo-DSVM) Windows edition.
  C. Create a Deep Learning Virtual Machine (DLVM) Linux edition.
  D. Create a Deep Learning Virtual Machine (DLVM) Windows edition.
  E. Create a Data Science Virtual Machine (DSVM) Linux edition.
Correct answer: E
Explanation:
Incorrect Answers:
A, C: PostgreSQL (CentOS) is only available in the Linux Edition.
B: The Azure Geo AI Data Science VM (Geo-DSVM) delivers geospatial analytics capabilities from Microsoft's Data Science VM. Specifically, this VM extends the AI and data science toolkits in the Data Science VM by adding ESRI's market-leading ArcGIS Pro Geographic Information System.
D: The DLVM is a template on top of the DSVM image; the packages, GPU drivers, and so on are all already in the DSVM image. The DLVM exists mostly for convenience during creation, because it can only be created on GPU VM instances on Azure.
References:
https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/overview
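
To illustrate the PostgreSQL requirement, here is a minimal sketch using the psycopg2 driver against a local server such as the one the Linux DSVM provides; the database name, credentials, and table are hypothetical.

import psycopg2

conn = psycopg2.connect(host="localhost", dbname="stocks",
                        user="dsvm_user", password="<password>")
# The connection context manager wraps a transaction; the cursor
# context manager closes the cursor when the block ends.
with conn, conn.cursor() as cur:
    cur.execute("SELECT symbol, close FROM prices "
                "ORDER BY trade_date DESC LIMIT 5;")
    for symbol, close in cur.fetchall():
        print(symbol, close)
conn.close()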



Question 5

You are developing deep learning models to analyze semi-structured, unstructured, and structured data types. 
You have the following data available for model building:
  • Video recordings of sporting events 
  • Transcripts of radio commentary about events 
  • Logs from related social media feeds captured during sporting events 
You need to select an environment for creating the model. 
Which environment should you use?


  A. Azure Cognitive Services
  B. Azure Data Lake Analytics
  C. Azure HDInsight with Spark MLlib
  D. Azure Machine Learning Studio
Correct answer: A
Explanation:
Azure Cognitive Services expand on Microsoft’s evolving portfolio of machine learning APIs and enable developers to easily add cognitive features – such as emotion and video detection; facial, speech, and vision recognition; and speech and language understanding – into their applications. The goal of Azure Cognitive Services is to help developers create applications that can see, hear, speak, understand, and even begin to reason. The catalog of services within Azure Cognitive Services can be categorized into five main pillars - Vision, Speech, Language, Search, and Knowledge. 
References:
https://docs.microsoft.com/en-us/azure/cognitive-services/welcome
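
As a hedged sketch of how an application calls a Cognitive Services API over REST, here is a minimal example using the requests library; the endpoint region, subscription key, and image URL are placeholders, not values from the exam material.

import requests

# Placeholder endpoint in the style of the Computer Vision analyze API.
endpoint = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>",
           "Content-Type": "application/json"}
params = {"visualFeatures": "Description,Tags"}
body = {"url": "https://example.com/match-highlight.jpg"}

response = requests.post(endpoint, headers=headers, params=params, json=body)
print(response.json())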



Question 6

You must store data in Azure Blob Storage to support Azure Machine Learning. 
You need to transfer the data into Azure Blob Storage. 
What are three possible ways to achieve the goal? Each correct answer presents a complete solution. 
NOTE: Each correct selection is worth one point.


  A. Bulk Insert SQL Query
  B. AzCopy
  C. Python script
  D. Azure Storage Explorer
  E. Bulk Copy Program (BCP)
Correct answer: BCD
Explanation:
You can move data to and from Azure Blob storage using different technologies: 
  • Azure Storage Explorer 
  • AzCopy 
  • Python 
  • SSIS 
References:
https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/move-azure-blob
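
Here is a minimal sketch of the "Python script" option, using the pre-v12 BlockBlobService interface of the azure-storage-blob package (the style used in the referenced TDSP article); the account name, key, container, and file path are placeholders.

from azure.storage.blob import BlockBlobService

service = BlockBlobService(account_name="<account>", account_key="<key>")
service.create_container("training-data")     # no-op if it already exists
service.create_blob_from_path("training-data", "data.csv", "./data.csv")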



Question 7

You are moving a large dataset from Azure Machine Learning Studio to a Weka environment. 
You need to format the data for the Weka environment. 
Which module should you use?


  A. Convert to CSV
  B. Convert to Dataset
  C. Convert to ARFF
  D. Convert to SVMLight
Correct answer: C
Explanation:
Use the Convert to ARFF module in Azure Machine Learning Studio to convert datasets and results from Azure Machine Learning into the attribute-relation file format (ARFF) used by the Weka toolset. 
The ARFF data specification for Weka supports multiple machine learning tasks, including data preprocessing, classification, and feature selection. In this format, data is organized by entities and their attributes, and is contained in a single text file. 
References:
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/convert-to-arff
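
To make the format concrete, here is a minimal Python sketch that writes a file in the ARFF layout described above (relation declaration, attribute declarations, then comma-separated data rows); the relation and columns are invented for illustration.

rows = [("sunny", 85, "no"), ("rainy", 65, "yes")]

with open("weather.arff", "w") as f:
    f.write("@RELATION weather\n\n")
    f.write("@ATTRIBUTE outlook {sunny, rainy}\n")
    f.write("@ATTRIBUTE temperature NUMERIC\n")
    f.write("@ATTRIBUTE play {yes, no}\n\n")
    f.write("@DATA\n")
    for outlook, temp, play in rows:
        f.write("%s,%d,%s\n" % (outlook, temp, play))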



Question 8

You need to implement a scaling strategy for the local penalty detection data.
Which normalization type should you use?


  A. Streaming
  B. Weight
  C. Batch
  D. Cosine
Correct answer: C
Explanation:
Post batch normalization statistics (PBN) is the Microsoft Cognitive Toolkit (CNTK) technique for evaluating the population mean and variance of batch normalization for use at inference time (see the original batch normalization paper). 
In CNTK, custom networks are defined using the BrainScriptNetworkBuilder and described in the CNTK network description language "BrainScript." 
Scenario:
Local penalty detection models must be written by using BrainScript. 
References:
https://docs.microsoft.com/en-us/cognitive-toolkit/post-batch-normalization-statistics
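
As a sketch of the underlying math, here is batch normalization in NumPy: each feature is normalized by the batch mean and variance (at inference, PBN substitutes population statistics for these). The scale/shift values and data are illustrative.

import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    mean = x.mean(axis=0)              # per-feature batch mean
    var = x.var(axis=0)                # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta        # learned scale and shift

x = np.random.randn(32, 4)             # a batch of 32 samples, 4 features
print(batch_norm(x).mean(axis=0))      # ~0 per feature after normalization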



Question 9

You are creating a machine learning model. You have a dataset that contains null rows. 
You need to use the Clean Missing Data module in Azure Machine Learning Studio to identify and resolve the null and missing data in the dataset. 
Which parameter should you use?


  A. Replace with mean
  B. Remove entire column
  C. Remove entire row
  D. Hot Deck
Correct answer: C
Explanation:
Remove entire row: Completely removes any row in the dataset that has one or more missing values. This is useful if the missing value can be considered randomly missing.
References:
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/clean-missing-data
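
For comparison, here is what the same row-removal behavior looks like in pandas; pandas stands in for the Studio module here and is not the module itself.

import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [34, np.nan, 52], "income": [70, 80, np.nan]})
cleaned = df.dropna()   # drops any row containing one or more NaNs
print(cleaned)          # only the first, fully populated row survives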



Question 10

You plan to deliver a hands-on workshop to several students. The workshop will focus on creating data visualizations using Python. Each student will use a device that has internet access. 
Student devices are not configured for Python development. Students do not have administrator access to install software on their devices. Azure subscriptions are not available for students. 
You need to ensure that students can run Python-based data visualization code. 
Which Azure tool should you use?


  A. Anaconda Data Science Platform
  B. Azure Batch AI
  C. Azure Notebooks
  D. Azure Machine Learning Service
Correct answer: C
Explanation:
References:
https://notebooks.azure.com/
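
As an example of the kind of visualization code students could run in a free Azure Notebooks session, here is a minimal matplotlib sketch; the data is made up.

import matplotlib.pyplot as plt

hours = [1, 2, 3, 4, 5]
scores = [52, 61, 70, 78, 85]

plt.plot(hours, scores, marker="o")
plt.xlabel("Hours studied")
plt.ylabel("Score")
plt.title("Sample visualization")
plt.show()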


