Download Microsoft.70-767.PremDumps.2019-03-29.62q.vcex


File Info

Exam: Implementing a SQL Data Warehouse
Number: 70-767
File Name: Microsoft.70-767.PremDumps.2019-03-29.62q.vcex
Size: 2 MB
Posted: Mar 29, 2019

How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.

Purchase

Coupon: MASTEREXAM
Discount: 20%

Demo Questions

Question 1

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. 
You are developing a Microsoft SQL Server Integration Services (SSIS) project. The project consists of several packages that load data warehouse tables. 
You need to extend the control flow design for each package to use the following control flow while minimizing development effort and maintenance:
[Exhibit: control flow diagram]
Solution: You add the control flow to a control flow package part. You add an instance of the control flow package part to each data warehouse load package.
Does the solution meet the goal?


  1. Yes
  2. No
Correct answer: A
Explanation:
A control flow package part is a reusable piece of control flow: you design the flow once in the part, and every data warehouse load package that contains an instance of the part picks up any later change to that single definition, which minimizes development and maintenance effort. A package consists of a control flow and, optionally, one or more data flows; you create the control flow in a package by using the Control Flow tab in SSIS Designer. 
References: https://docs.microsoft.com/en-us/sql/integration-services/control-flow/control-flow



Question 2

Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
You have a Microsoft SQL Server data warehouse instance that supports several client applications.
The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer, Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company wants to change it to daily. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.
All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.
You have the following requirements:
  •  Implement table partitioning to improve the manageability of the data warehouse and to avoid the need to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.
  •  Partition the Fact.Order table and retain a total of seven years of data.
  •  Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.
  •  Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.
  •  Incrementally load all tables in the database and ensure that all incremental changes are processed.
  •  Maximize the performance during the data loading process for the Fact.Order partition.
  •  Ensure that historical data remains online and available for querying.
  •  Reduce ongoing storage costs while maintaining query performance for current data.
You are not permitted to make changes to the client applications.
You need to implement the data partitioning strategy.
How should you partition the Fact.Order table?


  1. Create 17,520 partitions.
  2. Use a granularity of two days.
  3. Create 2,557 partitions.
  4. Create 730 partitions.
Correct answer: C
Explanation:
We create one partition for each day, which means a granularity of one day is used. Seven years times 365 days is 2,555 days; adding two days for the leap years in the window gives 2,557. 
From scenario: Partition the Fact.Order table and retain a total of seven years of data.
The Fact.Order table is optimized for weekly reporting, but the company wants to change it to daily. 
Maximize the performance during the data loading process for the Fact.Order partition. 
Reference: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-partition
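
As a hedged illustration of the daily strategy (the function, scheme, date window, and filegroup mapping below are assumptions, not part of the exam scenario), the 2,557 boundary values can be generated dynamically rather than typed by hand:

  -- Build a comma-separated list of daily boundary values covering seven years.
  -- 2012-01-01 through 2018-12-31 spans two leap years: 7 * 365 + 2 = 2,557 days.
  DECLARE @boundaries nvarchar(max) = N'';
  DECLARE @day date = '20120101';
  WHILE @day < '20190101'
  BEGIN
      SET @boundaries += CASE WHEN @boundaries = N'' THEN N'' ELSE N', ' END
                       + N'''' + CONVERT(nvarchar(10), @day, 120) + N'''';
      SET @day = DATEADD(DAY, 1, @day);
  END;

  -- Create the function and scheme. Note that N boundary values with RANGE RIGHT
  -- produce N + 1 partitions; the extra edge partition simply stays empty.
  DECLARE @sql nvarchar(max) = N'
  CREATE PARTITION FUNCTION pf_OrderDaily (date)
  AS RANGE RIGHT FOR VALUES (' + @boundaries + N');
  CREATE PARTITION SCHEME ps_OrderDaily
  AS PARTITION pf_OrderDaily ALL TO ([PRIMARY]);';
  EXEC sys.sp_executesql @sql;

The Fact.Order clustered index would then typically be built on ps_OrderDaily over the order-date column so that each nightly load touches only a single partition.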



Question 3

Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
You have a Microsoft SQL Server data warehouse instance that supports several client applications.
The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer, Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company wants to change it to daily. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.
All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.
You have the following requirements:
  •  Implement table partitioning to improve the manageability of the data warehouse and to avoid the need to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.
  •  Partition the Fact.Order table and retain a total of seven years of data.
  •  Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.
  •  Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.
  •  Incrementally load all tables in the database and ensure that all incremental changes are processed.
  •  Maximize the performance during the data loading process for the Fact.Order partition.
  •  Ensure that historical data remains online and available for querying.
  •  Reduce ongoing storage costs while maintaining query performance for current data.
You are not permitted to make changes to the client applications.
You need to optimize the storage for the data warehouse.
What change should you make?


  1. Partition the Fact.Order table, and move historical data to new filegroups on lower-cost storage.
  2. Create new tables on lower-cost storage, move the historical data to the new tables, and then shrink the database.
  3. Remove the historical data from the database to leave available space for new data.
  4. Move historical data to new tables on lower-cost storage.
  5. Implement row compression for the Fact.Order table.
  6. Move the index for the Fact.Order table to lower-cost storage.
Correct answer: A
Explanation:
Partitioning Fact.Order and placing the historical partitions on filegroups backed by lower-cost storage reduces storage cost while the data stays online and queryable. Create the load staging table in the same filegroup as the partition you are loading, and the unload staging table in the same filegroup as the partition you are deleting. 
From scenario: The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.
References: https://blogs.msdn.microsoft.com/sqlcat/2013/09/16/top-10-best-practices-for-building-a-large-scale-relational-data-warehouse/
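
A minimal sketch of the storage change, assuming hypothetical filegroup, file, and path names (DB1 is the database from the scenario; everything else is illustrative):

  -- Add a filegroup backed by a file on lower-cost storage.
  ALTER DATABASE DB1 ADD FILEGROUP HistoryFG;
  ALTER DATABASE DB1 ADD FILE
  (
      NAME = N'DB1_History',
      FILENAME = N'E:\CheapSAN\DB1_History.ndf',  -- assumed path on cheaper disks
      SIZE = 10GB,
      FILEGROWTH = 1GB
  ) TO FILEGROUP HistoryFG;

Historical partitions land on HistoryFG once the table (or its clustered index) is rebuilt on a partition scheme that maps the historical boundary ranges to that filegroup; the data remains online and queryable, which satisfies the scenario's availability requirement.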



Question 4

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You are a database administrator for an e-commerce company that runs an online store. The company has three databases as described in the following table. 
[Exhibit: table describing the three databases]
You plan to load at least one million rows of data each night from DB1 into the OnlineOrder table. You must load data into the correct partitions using a parallel process. 
You create 24 Data Flow tasks. You must place the tasks into a component to allow parallel load. After all of the load processes complete, the process must proceed to the next task. 
You need to load the data for the OnlineOrder table. 
What should you use?


  1. Lookup transformation
  2. Merge transformation
  3. Merge Join transformation
  4. MERGE statement
  5. Union All transformation
  6. Balanced Data Distributor transformation
  7. Sequence container
  8. Foreach Loop container
Correct answer: G
Explanation:
Tasks placed in a Sequence container with no precedence constraints between them execute in parallel, and the container completes only when every contained task has finished, so the package proceeds to the next task only after all 24 loads complete. A Foreach Loop container, by contrast, runs its iterations sequentially; running iterations concurrently requires a third-party component such as the CozyRoc Parallel Loop Task. 
References: https://docs.microsoft.com/en-us/sql/integration-services/control-flow/sequence-container; http://www.cozyroc.com/ssis/parallel-loop-task



Question 5

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You are a database administrator for an e-commerce company that runs an online store. The company has the databases described in the following table. 
[Exhibit: table describing the databases]
Each day, you publish a Microsoft Excel workbook that contains a list of product names and current prices to an external website. Suppliers update pricing information in the workbook. Each supplier saves the workbook with a unique name. 
Each night, the Products table is deleted and refreshed from MDS by using a Microsoft SQL Server Integration Services (SSIS) package. All files must be loaded in sequence. 
You need to add a data flow in an SSIS package to perform the Excel files import in the data warehouse. 
What should you use?


  1. Lookup transformation
  2. Merge transformation
  3. Merge Join transformation
  4. MERGE statement
  5. Union All transformation
  6. Balanced Data Distributor transformation
  7. Sequence container
  8. Foreach Loop container
Correct answer: A
Explanation:
If you're familiar with SSIS and don't want to run the SQL Server Import and Export Wizard, create an SSIS package that uses the Excel Source and the SQL Server Destination in the data flow. 
References: https://docs.microsoft.com/en-us/sql/integration-services/import-export-data/import-data-from-excel-to-sql



Question 6

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You are a database administrator for an e-commerce company that runs an online store. The company has the databases described in the following table. 
[Exhibit: table describing the databases]
Each week, you import a product catalog from a partner company to a staging table in DB2. 
You need to create a stored procedure that will update the staging table by inserting new products and deleting discontinued products. 
What should you use?


  1. Lookup transformation
  2. Merge transformation
  3. Merge Join transformation
  4. MERGE statement
  5. Union All transformation
  6. Balanced Data Distributor transformation
  7. Sequence container
  8. Foreach Loop container
Correct answer: D
Explanation:
The MERGE statement performs insert, update, or delete operations on a target table based on the results of a join with a source table, so a single statement inside the stored procedure can both insert the new products and delete the discontinued ones. SSIS elements such as containers and transformations cannot be used inside a stored procedure. 
References: https://docs.microsoft.com/en-us/sql/t-sql/statements/merge-transact-sql
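
A minimal sketch of such a procedure, assuming hypothetical table and column names (dbo.ProductStaging as the staging target, dbo.ProductCatalogImport as the weekly import):

  CREATE PROCEDURE dbo.usp_RefreshProductStaging
  AS
  BEGIN
      SET NOCOUNT ON;

      MERGE dbo.ProductStaging AS tgt
      USING dbo.ProductCatalogImport AS src
          ON tgt.ProductID = src.ProductID
      -- New products: present in the imported catalog but not yet staged.
      WHEN NOT MATCHED BY TARGET THEN
          INSERT (ProductID, ProductName, ListPrice)
          VALUES (src.ProductID, src.ProductName, src.ListPrice)
      -- Discontinued products: staged but absent from the imported catalog.
      WHEN NOT MATCHED BY SOURCE THEN
          DELETE;
  END;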



Question 7

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You are a database administrator for an e-commerce company that runs an online store. The company has the databases described in the following table. 
[Exhibit: table describing the databases]
Each day, data from the table OnlineOrder in DB2 must be exported by partition. The tables must not be locked during the process. 
You need to write a Microsoft SQL Server Integration Services (SSIS) package that performs the data export. 
What should you use?


  1. Lookup transformation
  2. Merge transformation
  3. Merge Join transformation
  4. MERGE statement
  5. Union All transformation
  6. Balanced Data Distributor transformation
  7. Sequence container
  8. Foreach Loop container
Correct answer: E
Explanation:
The Union All transformation combines multiple inputs into one output. For example, the outputs from five different Flat File sources can be inputs to the Union All transformation and combined into one output. 
References: https://docs.microsoft.com/en-us/sql/integration-services/data-flow/transformations/union-all-transformation



Question 8

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You are a database administrator for an e-commerce company that runs an online store. The company has the databases described in the following table. 
[Exhibit: table describing the databases]
Product prices are updated and are stored in a table named Products on DB1. The Products table is deleted and refreshed each night from MDS by using a Microsoft SQL Server Integration Services (SSIS) package. None of the data sources are sorted. 
You need to update the SSIS package to add current prices to the Products table. 
What should you use?


  1. Lookup transformation
  2. Merge transformation
  3. Merge Join transformation
  4. MERGE statement
  5. Union All transformation
  6. Balanced Data Distributor transformation
  7. Sequence container
  8. Foreach Loop container
Correct answer: D
Explanation:
In the current release of SQL Server Integration Services, the SQL statement in an Execute SQL task can contain a MERGE statement. This MERGE statement enables you to accomplish multiple INSERT, UPDATE, and DELETE operations in a single statement. 
References: https://docs.microsoft.com/en-us/sql/integration-services/control-flow/merge-in-integration-services-packages
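
As an illustration, the statement inside the Execute SQL task could look like the following sketch (table and column names are assumptions, not from the question):

  -- Upsert current prices: update rows whose price changed, insert new products.
  MERGE dbo.Products AS tgt
  USING stg.ProductPrices AS src
      ON tgt.ProductID = src.ProductID
  WHEN MATCHED AND tgt.CurrentPrice <> src.CurrentPrice THEN
      UPDATE SET tgt.CurrentPrice = src.CurrentPrice
  WHEN NOT MATCHED BY TARGET THEN
      INSERT (ProductID, CurrentPrice)
      VALUES (src.ProductID, src.CurrentPrice);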



Question 9

You have a Microsoft SQL Server Integration Services (SSIS) package that includes the control flow shown in the following diagram. 
[Exhibit: control flow diagram]
You need to choose the enumerator for the Foreach Loop container. 
Which enumerator should you use?


  1. Foreach SMO Enumerator
  2. Foreach ADO.Net Schema Rowset Enumerator
  3. Foreach NodeList Enumerator
  4. Foreach ADO Enumerator
  5. Foreach HDS File Enumerator
  6. Foreach File Enumerator
Correct answer: D
Explanation:
Use the Foreach ADO enumerator to enumerate rows in tables. For example, you can get the rows in an ADO recordset. 
References: https://docs.microsoft.com/en-us/sql/integration-services/control-flow/foreach-loop-container?view=sql-server-2017



Question 10

You have a data quality project that focuses on the Products catalog for the company. The data includes a product reference number. 
The product reference should use the following format:
Two letters followed by an asterisk and then four or five numbers. An example of a valid number is XX*55522. 
Any reference number that does not conform to the format must be rejected during the data cleansing.
You need to add a Data Quality Services (DQS) domain rule in the Products domain. 
Which rule should you use?


  1. value matches pattern ZA*9876[5]
  2. value matches pattern AZ[*]1234[5]
  3. value matches regular expression AZ[*]1234[5]
  4. value matches pattern [a-zA-Z][a-zA-Z]*[0-9][0-9][0-9][0-9][0-9]?
Correct answer: A
Explanation:
For a pattern matching rule:
  •  Any letter (A…Z) can be used as a pattern for any letter; case insensitive.
  •  Any digit (0…9) can be used as a pattern for any digit.
  •  Any special character, except a letter or a digit, can be used as a pattern for itself.
  •  Brackets, [], define optional matching.
Example: ABC:0000
This example rule implies that the data will contain three parts: any three letters, followed by a colon (:), followed by any four digits.
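
For comparison, the same format can be checked in T-SQL; because LIKE has no optional quantifier, this sketch (the variable and test value are illustrative) needs one pattern per allowed length:

  -- Two letters, a literal '*' (LIKE treats * as an ordinary character),
  -- then exactly four or exactly five digits.
  DECLARE @ref varchar(10) = 'XX*55522';
  SELECT CASE
             WHEN @ref LIKE '[A-Za-z][A-Za-z]*[0-9][0-9][0-9][0-9]'
               OR @ref LIKE '[A-Za-z][A-Za-z]*[0-9][0-9][0-9][0-9][0-9]'
             THEN 'valid'
             ELSE 'rejected'
         END AS ReferenceCheck;  -- returns 'valid' for XX*55522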
