Download Cloudera.CCA-500.CertDumps.2019-08-01.25q.vcex


File Info

Exam: Cloudera Certified Administrator for Apache Hadoop (CCAH)
Number: CCA-500
File Name: Cloudera.CCA-500.CertDumps.2019-08-01.25q.vcex
Size: 21 KB
Posted: Aug 01, 2019


How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.

Purchase

Coupon: MASTEREXAM
With discount: 20%






Demo Questions

Question 1

You are migrating a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) on YARN. You want to maintain your MRv1 TaskTracker slot capacities when you migrate. What should you do?


  1. Configure yarn.applicationmaster.resource.memory-mb and yarn.applicationmaster.resource.cpu-vcores so that ApplicationMaster container allocations match the capacity you require.
  2. You don't need to configure or balance these properties in YARN as YARN dynamically balances resource management capabilities on your cluster
  3. Configure mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum in yarn-site.xml to match your cluster's capacity set by the yarn-scheduler.minimum-allocation
  4. Configure yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores to match the capacity you require under YARN for each NodeManager
Correct answer: D
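For reference, a minimal yarn-site.xml sketch of answer D; the 24GB/12-vcore figures below are assumptions, and you would substitute whatever per-node capacity your MRv1 TaskTracker slots represented:

<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>24576</value>
<!-- assumed example: 24 GB of the node's RAM offered to YARN containers -->
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>12</value>
<!-- assumed example: 12 cores offered to YARN containers -->
</property>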



Question 2

You have a Hadoop cluster running HDFS, and a gateway machine external to the cluster from which clients submit jobs. What do you need to do in order to run Impala on the cluster and submit jobs from the command line of the gateway machine?


  1. Install the impalad daemon, statestored daemon, and catalogd daemon on each machine in the cluster, and the impala shell on your gateway machine
  2. Install the impalad daemon, the statestored daemon, the catalogd daemon, and the impala shell on your gateway machine
  3. Install the impalad daemon and the impala shell on your gateway machine, and the statestored daemon and catalogd daemon on one of the nodes in the cluster
  4. Install the impalad daemon on each machine in the cluster, the statestored daemon and catalogd daemon on one machine in the cluster, and the impala shell on your gateway machine
  5. Install the impalad daemon, statestored daemon, and catalogd daemon on each machine in the cluster and on the gateway node
Correct answer: D



Question 3

You observed that the number of spilled records from Map tasks far exceeds the number of map output records. Your child heap size is 1GB and your io.sort.mb value is set to 1000MB. How would you tune your io.sort.mb value to achieve the maximum memory-to-disk I/O ratio?


  1. For a 1GB child heap size, an io.sort.mb of 128 MB will always maximize memory-to-disk I/O
  2. Increase the io.sort.mb to 1GB
  3. Decrease the io.sort.mb value to 0
  4. Tune the io.sort.mb value until you observe that the number of spilled records equals (or is as close as possible to) the number of map output records.
Correct answer: D
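As a sketch of the tuning loop in answer D: io.sort.mb (named mapreduce.task.io.sort.mb under MRv2) is set in mapred-site.xml. The 800 below is an assumed starting value, not a recommendation; rerun the job and adjust until the spilled-records counter approaches the map-output-records counter, keeping the buffer comfortably inside the child heap:

<property>
<name>mapreduce.task.io.sort.mb</name>
<value>800</value>
<!-- assumed starting point; tune until spilled records ~= map output records -->
</property>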



Question 4

Which YARN daemon or service monitors a container's per-application resource usage (e.g., memory, CPU)?


  1. ApplicationMaster
  2. NodeManager
  3. ApplicationManagerService
  4. ResourceManager
Correct answer: A



Question 5

You want to understand more about how users browse your public website. For example, you want to know which pages they visit prior to placing an order. You have a server farm of 200 web servers hosting your website. Which is the most efficient process to gather these web server logs into your Hadoop cluster for analysis?


  1. Sample the web server logs from the web servers and copy them into HDFS using curl
  2. Ingest the server web logs into HDFS using Flume
  3. Channel these clickstreams into Hadoop using Hadoop Streaming
  4. Import all user clicks from your OLTP databases into Hadoop using Sqoop
  5. Write a MapReduce job with the web servers for mappers and the Hadoop cluster nodes for reducers
Correct answer: B
Explanation:
Apache Flume is a service for streaming logs into Hadoop. It is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of streaming data into the Hadoop Distributed File System (HDFS). It has a simple and flexible architecture based on streaming data flows, and is robust and fault tolerant with tunable reliability mechanisms for failover and recovery.



Question 6

Identify two features/issues that YARN is designed to address: (Choose two)


  1. Standardize on a single MapReduce API
  2. Single point of failure in the NameNode
  3. Reduce complexity of the MapReduce APIs
  4. Resource pressure on the JobTracker
  5. Ability to run frameworks other than MapReduce, such as MPI
  6. HDFS latency
Correct answer: DE
Explanation:
Reference: http://www.revelytix.com/?q=content/hadoop-ecosystem (YARN, first para)



Question 7

Assuming a cluster running HDFS and MapReduce version 2 (MRv2) on YARN with all settings at their default, what do you need to do when adding a new slave node to the cluster?


  1. Nothing, other than ensuring that the DNS (or /etc/hosts files on all machines) contains an entry for the new node.
  2. Restart the NameNode and ResourceManager daemons and resubmit any running jobs.
  3. Add a new entry to /etc/nodes on the NameNode host.
  4. Restart the NameNode after incrementing dfs.number.of.nodes in hdfs-site.xml
Correct answer: A
Explanation:
http://wiki.apache.org/hadoop/FAQ#I_have_a_new_node_I_want_to_add_to_a_running_Hadoop_cluster.3B_how_do_I_start_services_on_just_one_node.3F



Question 8

Each node in your Hadoop cluster, running YARN, has 64GB memory and 24 cores. Your yarn-site.xml has the following configuration:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>32768</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>12</value>
</property>
You want YARN to launch no more than 16 containers per node. What should you do?


  1. Modify yarn-site.xml with the following property:<name>yarn.scheduler.minimum-allocation-mb</name><value>2048</value>
  2. Modify yarn-site.xml with the following property:<name>yarn.scheduler.minimum-allocation-mb</name><value>4096</value>
  3. Modify yarn-site.xml with the following property:<name>yarn.nodemanager.resource.cpu-vcores</name>
  4. No action is needed: YARN's dynamic resource allocation automatically optimizes the node memory and cores
Correct answer: A
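The arithmetic behind answer A: 32768 MB of NodeManager memory divided by a 2048 MB minimum allocation caps the node at 16 containers. A minimal yarn-site.xml sketch:

<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>2048</value>
<!-- 32768 MB per node / 2048 MB minimum per container = at most 16 containers -->
</property>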



Question 9

Which scheduler would you deploy to ensure that your cluster allows short jobs to finish within a reasonable time without starting long-running jobs?


  1. Completely Fair Scheduler (CFS)
  2. Capacity Scheduler
  3. Fair Scheduler
  4. FIFO Scheduler
Correct answer: C
Explanation:
Reference: http://hadoop.apache.org/docs/r1.2.1/fair_scheduler.html
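On YARN, the Fair Scheduler is enabled in yarn-site.xml through yarn.resourcemanager.scheduler.class; a minimal sketch using the stock Hadoop class name:

<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>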



Question 10

For each YARN job, the Hadoop framework generates task log files. Where are Hadoop task log files stored?


  1. Cached by the NodeManager managing the job containers, then written to a log directory on the NameNode
  2. Cached in the YARN container running the task, then copied into HDFS on job completion
  3. In HDFS, in the directory of the user who generates the job
  4. On the local disk of the slave node running the task
Correct answer: D
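Answer D matches YARN's default behavior: each NodeManager writes container logs to local disk under yarn.nodemanager.log-dirs. A sketch; the path below is an assumed example, and the second property is an optional extra that copies finished logs into HDFS:

<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/var/log/hadoop-yarn/containers</value>
<!-- assumed example path on each slave node's local disk -->
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
<!-- optional: aggregate completed container logs into HDFS -->
</property>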








