Download Oracle.1z0-449.CertDumps.2017-12-11.72q.tqb

Download Exam

File Info

Exam: Oracle Big Data 2016 Implementation Essentials
Number: 1z0-449
File Name: Oracle.1z0-449.CertDumps.2017-12-11.72q.tqb
Size: 1 MB
Posted: Dec 11, 2017
Download Oracle.1z0-449.CertDumps.2017-12-11.72q.tqb

How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.

Purchase

Coupon: MASTEREXAM
Discount: 20%






Demo Questions

Question 1

You need to place the results of a Pig Latin script into an HDFS output directory.
What is the correct syntax in Apache Pig?


  A. update hdfs set D as './output';
  B. store D into './output';
  C. place D into './output';
  D. write D as './output';
  E. hdfsstore D into './output';
Correct answer: B
Explanation:
Use the STORE operator to run (execute) Pig Latin statements and save (persist) results to the file system. Use STORE for production scripts and batch mode processing. 
Syntax: STORE alias INTO 'directory' [USING function];
Example: In this example data is stored using PigStorage and the asterisk character (*) as the field delimiter.
A = LOAD 'data' AS (a1:int,a2:int,a3:int);
DUMP A; 
(1,2,3) 
(4,2,1) 
(8,3,4) 
(4,3,3) 
(7,2,5) 
(8,4,3) 
STORE A INTO 'myoutput' USING PigStorage ('*'); 
CAT myoutput; 
1*2*3 
4*2*1 
8*3*4 
4*3*3 
7*2*5 
8*4*3 
References: https://pig.apache.org/docs/r0.13.0/basic.html#store
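
As a minimal sketch tying this back to the question: assuming a relation D built from a hypothetical HDFS input file named 'data' (both the alias and the path are placeholders, not part of the exam item), the correct statement simply persists D into the './output' directory on HDFS.
D = LOAD 'data' AS (f1:int, f2:int);   -- placeholder input; builds the relation to persist
STORE D INTO './output';               -- creates ./output on HDFS and writes part-* result files
The relative path is resolved against the user's HDFS home directory; a fully qualified HDFS URI also works. An optional USING clause, as in the PigStorage('*') example above, selects the storage function and field delimiter; without it, tab-delimited PigStorage is used by default.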



Question 2

How is Oracle Loader for Hadoop (OLH) better than Apache Sqoop?


  A. OLH performs a great deal of preprocessing of the data on Hadoop before loading it into the database.
  B. OLH performs a great deal of preprocessing of the data on the Oracle database before loading it into NoSQL.
  C. OLH does not use MapReduce to process any of the data, thereby increasing performance.
  D. OLH performs a great deal of preprocessing of the data on the Oracle database before loading it into Hadoop.
  E. OLH is fully supported on the Big Data Appliance. Apache Sqoop is not supported on the Big Data Appliance.
Correct answer: A
Explanation:
Oracle Loader for Hadoop provides an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database. Oracle Loader for Hadoop prepartitions the data if necessary and transforms it into a database-ready format. It optionally sorts records by primary key or user-defined columns before loading the data or creating output files. 
Note: Apache Sqoop(TM) is a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases.
Incorrect Answers:
B, D: Oracle Loader for Hadoop moves data from a Hadoop cluster into a table in an Oracle database, not into NoSQL and not into Hadoop, and its preprocessing runs on the Hadoop cluster rather than in the database.
C: Oracle Loader for Hadoop is a MapReduce application that is invoked as a command-line utility. It accepts the generic command-line options that are supported by the org.apache.hadoop.util.Tool interface.
E: The Oracle Linux operating system and Cloudera's Distribution including Apache Hadoop (CDH) underlie all other software components installed on Oracle Big Data Appliance. CDH includes Apache projects for MapReduce and HDFS, such as Hive, Pig, Oozie, ZooKeeper, HBase, Sqoop, and Spark.
References: 
https://docs.oracle.com/cd/E37231_01/doc.20/e36961/start.htm#BDCUG326
https://docs.oracle.com/cd/E55905_01/doc.40/e55814/concepts.htm#BIGUG117



Question 3

Which three pieces of hardware are present on each node of the Big Data Appliance? (Choose three.)


  A. high capacity SAS disks
  B. memory
  C. redundant Power Delivery Units
  D. InfiniBand ports
  E. InfiniBand leaf switches
Correct answer: ABD
Explanation:
Big Data Appliance Hardware Specification and Details, example:
Per Node:
  • 2 x Eight-Core Intel® Xeon® E5-2660 Processors (2.2 GHz)
  • 64 GB Memory (expandable to 256 GB)
  • Disk Controller HBA with 512 MB battery-backed write cache
  • 12 x 3TB 7,200 RPM High Capacity SAS Disks 
  • 2 x QDR (Quad Data Rate InfiniBand)(40Gb/s) Ports 
  • 4 x 10 Gb Ethernet Ports 
  • 1 x ILOM Ethernet Port 
References: http://www.oracle.com/technetwork/server-storage/engineered-systems/bigdata-appliance/overview/bigdataappliancev2-datasheet-1871638.pdf








