Download Google.Professional-Data-Engineer.VCEplus.2024-08-23.155q.tqb

Download Exam

File Info

Exam Professional Data Engineer on Google Cloud Platform
Number Professional-Data-Engineer
File Name Google.Professional-Data-Engineer.VCEplus.2024-08-23.155q.tqb
Size 1 MB
Posted Aug 23, 2024

How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.

Purchase

Coupon: MASTEREXAM
With discount: 20%

Demo Questions

Question 1

You launched a new gaming app almost three years ago. You have been uploading each previous day's log files to a separate Google BigQuery table using the table name format LOGS_yyyymmdd, and you have been using table wildcard functions to generate daily and monthly reports across all time ranges.
Recently, you discovered that queries covering long date ranges are exceeding the 1,000-table limit and failing. How can you resolve this issue?


  1. Convert all daily log tables into date-partitioned tables
  2. Convert the sharded tables into a single partitioned table
  3. Enable query caching so you can cache data from previous months
  4. Create separate views to cover each month, and query from these views
Correct answer: A
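A quick back-of-the-envelope check shows why these wildcard queries fail: sharding creates one table per day, and "almost three years" of daily tables crosses the 1,000-table limit, while a single date-partitioned table avoids the per-table fan-out entirely. The launch date below is assumed for illustration (it is not stated in the question):

```python
from datetime import date

# One sharded LOGS_yyyymmdd table per day since launch.
# The launch date is an assumption chosen to match "almost three years"
# before this dump's 2024-08-23 date.
launch = date(2021, 9, 1)
today = date(2024, 8, 23)

daily_tables = (today - launch).days + 1   # inclusive of both endpoints
print(daily_tables)                        # 1088

WILDCARD_TABLE_LIMIT = 1_000               # limit cited in the question
print(daily_tables > WILDCARD_TABLE_LIMIT) # True: all-time queries fail
```

Converting the shards into one partitioned table means a query touches one table regardless of the date range, so the table-count limit no longer applies.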



Question 2

Your analytics team wants to build a simple statistical model to determine which customers are most likely to work with your company again, based on a few different metrics. They want to run the model on Apache Spark, using data housed in Google Cloud Storage, and you have recommended using Google Cloud Dataproc to execute this job. Testing has shown that this workload can run in approximately 30 minutes on a 15-node cluster, outputting the results into Google BigQuery. The plan is to run this workload weekly. How should you optimize the cluster for cost?


  1. Migrate the workload to Google Cloud Dataflow
  2. Use pre-emptible virtual machines (VMs) for the cluster
  3. Use a higher-memory node so that the job runs faster
  4. Use SSDs on the worker nodes so that the job can run faster
Correct answer: B
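The cost reasoning can be sketched numerically. A short, weekly, fault-tolerant batch job is a good fit for preemptible workers because the cluster-hours are fixed and only the per-node rate changes. The hourly rates below are hypothetical placeholders, not real GCP pricing:

```python
# Hypothetical per-node hourly rates -- illustrative only, NOT real GCP pricing.
STANDARD_RATE = 0.20      # $/node-hour for regular worker VMs (assumed)
PREEMPTIBLE_RATE = 0.06   # $/node-hour for preemptible VMs (assumed discount)

NODES = 15
HOURS_PER_RUN = 0.5       # the 30-minute weekly Spark job
RUNS_PER_YEAR = 52

def yearly_cost(rate: float) -> float:
    """Cluster-hours are fixed by the workload; cost scales with the rate."""
    return NODES * HOURS_PER_RUN * RUNS_PER_YEAR * rate

print(round(yearly_cost(STANDARD_RATE), 2))
print(round(yearly_cost(PREEMPTIBLE_RATE), 2))
```

Because the node count and runtime are already known to be sufficient, swapping worker machine economics (not the engine or hardware class) is the lever that reduces cost without changing the job.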



Question 3

You are testing a Dataflow pipeline that ingests and transforms text files. The files are gzip-compressed, errors are written to a dead-letter queue, and you are using SideInputs to join data. You noticed that the pipeline is taking longer to complete than expected. What should you do to expedite the Dataflow job?


  1. Switch to compressed Avro files
  2. Reduce the batch size
  3. Retry records that throw an error
  4. Use CoGroupByKey instead of the SideInput
Correct answer: A
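Background on why the input format matters here: gzip is a sequential stream format, so Dataflow cannot split a single .gz text file across workers and must read it on one worker, while block-compressed container formats such as Avro can be split and read in parallel. A minimal Python sketch of the underlying constraint:

```python
import gzip

# gzip is a stream format: a reader must start at the header. A runner like
# Dataflow therefore cannot hand different byte ranges of one .gz file to
# different workers -- a single worker decompresses the whole file.
payload = b"line\n" * 10_000
blob = gzip.compress(payload)

# Reading from the start of the stream works...
assert gzip.decompress(blob) == payload

# ...but decoding cannot begin mid-stream: drop even the first byte and the
# remaining bytes are unreadable (the gzip magic header is gone).
try:
    gzip.decompress(blob[1:])
except gzip.BadGzipFile:
    print("cannot decode a gzip stream without its header")
```

Splittable formats avoid this bottleneck by compressing independent blocks, each of which a worker can decode on its own.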
