Download Amazon.DOP-C01.Dump4Sure.2024-11-07.113q.tqb

Download Exam

File Info

Exam AWS Certified DevOps Engineer - Professional
Number DOP-C01
File Name Amazon.DOP-C01.Dump4Sure.2024-11-07.113q.tqb
Size 1 MB
Posted Nov 07, 2024


How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.

Purchase

Coupon: MASTEREXAM
With discount: 20%






Demo Questions

Question 1

An IT department manages a portfolio of Windows and Linux (Amazon Linux and Red Hat Enterprise Linux) servers, both on-premises and on AWS. An audit reveals that there is no process for updating OS and core application patches, and that the servers have inconsistent patch levels. 
Which of the following provides the MOST reliable and consistent mechanism for keeping all servers at the most recent OS and core application patch levels? 
 


  1. Install AWS Systems Manager agent on all on-premises and AWS servers. Create Systems Manager Resource Groups. Use Systems Manager Patch Manager with a preconfigured patch baseline to run scheduled patch updates during maintenance windows. 
  2. Install the AWS OpsWorks agent on all on-premises and AWS servers. Create an OpsWorks stack with separate layers for each operating system, and get a recipe from the Chef supermarket to run the patch commands for each layer during maintenance windows. 
  3. Use a shell script to install the latest OS patches on the Linux servers using yum and schedule it to run automatically using cron. Use Windows Update to automatically patch Windows servers. 
  4. Use AWS Systems Manager Parameter Store to securely store credentials for each Linux and Windows server. Create Systems Manager Resource Groups. Use the Systems Manager Run Command to remotely deploy patch updates using the credentials in Systems Manager Parameter Store.  
Correct answer: A
Explanation:
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-patch-patchgroups.html 
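As a minimal sketch of option A, the request below shows the shape of an SSM CreatePatchBaseline call with a preconfigured approval rule. The baseline name, operating system, and 7-day approval delay are illustrative assumptions, not values from the question.

```python
# Sketch of option A: a Patch Manager baseline with an approval rule.
# The baseline name and the 7-day approval delay are illustrative.

def build_patch_baseline_params(name, operating_system, approve_after_days=7):
    """Build the request parameters for SSM CreatePatchBaseline."""
    return {
        "Name": name,
        "OperatingSystem": operating_system,
        "ApprovalRules": {
            "PatchRules": [
                {
                    "PatchFilterGroup": {
                        "PatchFilters": [
                            {"Key": "CLASSIFICATION",
                             "Values": ["Security", "Bugfix"]}
                        ]
                    },
                    "ApproveAfterDays": approve_after_days,
                }
            ]
        },
    }

params = build_patch_baseline_params("linux-baseline", "AMAZON_LINUX_2")
# With AWS credentials configured, the baseline could then be created with:
#   import boto3
#   boto3.client("ssm").create_patch_baseline(**params)
```

Instances are grouped with a `Patch Group` tag and patched on a schedule via a maintenance window, per the linked documentation.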
 



Question 2

A Developer is designing a continuous deployment workflow for a new Development team to facilitate the process for source code promotion in AWS. Developers would like to store and promote code for deployment from development to production while maintaining the ability to roll back that deployment if it fails.  
Which design will incur the LEAST amount of downtime? 
 


  1. Create one repository in AWS CodeCommit. Create a development branch to hold merged changes. Use AWS CodeBuild to build and test the code stored in the development branch, triggered on a new commit. Merge to the master branch and deploy to production by using AWS CodeDeploy for a blue/green deployment. 
  2. Create one repository for each Developer in AWS CodeCommit and another repository to hold the production code. Use AWS CodeBuild to merge development and production repositories, and deploy to production by using AWS CodeDeploy for a blue/green deployment. 
  3. Create one repository for development code in AWS CodeCommit and another repository to hold the production code. Use AWS CodeBuild to merge development and production repositories, and deploy to production by using AWS CodeDeploy for a blue/green deployment. 
  4. Create a shared Amazon S3 bucket for the Development team to store their code. Set up an Amazon CloudWatch Events rule to trigger an AWS Lambda function that deploys the code to production by using AWS CodeDeploy for a blue/green deployment.  
Correct answer: A



Question 3

A DevOps Engineer discovered a sudden spike in a website's page load times and found that a recent deployment occurred. A brief diff of the related commit shows that the URL for an external API call was altered and the connecting port changed from 80 to 443. The external API has been verified and works outside the application. The application logs show that the connection is now timing out, resulting in multiple retries and eventual failure of the call. 
Which debug steps should the Engineer take to determine the root cause of the issue? 
 


  1. Check the VPC Flow Logs looking for denies originating from Amazon EC2 instances that are part of the web Auto Scaling group. Check the ingress security group rules and routing rules for the VPC. 
  2. Check the existing egress security group rules and network ACLs for the VPC. Also check the application logs being written to Amazon CloudWatch Logs for debug information. 
  3. Check the egress security group rules and network ACLs for the VPC. Also check the VPC flow logs looking for accepts originating from the web Auto Scaling group. 
  4. Check the application logs being written to Amazon CloudWatch Logs for debug information. Check the ingress security group rules and routing rules for the VPC.  
Correct answer: C
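Answer C hinges on checking whether egress rules still allow outbound traffic on the new port 443. The hypothetical helper below (not an AWS API) evaluates the `IpPermissionsEgress` list in the shape returned by EC2 DescribeSecurityGroups.

```python
# Sketch of the egress check from answer C: given a security group's
# IpPermissionsEgress list (the shape returned by DescribeSecurityGroups),
# decide whether outbound TCP traffic on a port is allowed.
# Hypothetical helper for illustration, not an AWS API.

def egress_allows_port(egress_rules, port, protocol="tcp"):
    for rule in egress_rules:
        proto = rule.get("IpProtocol")
        if proto == "-1":  # "-1" means all traffic is allowed
            return True
        if proto != protocol:
            continue
        if rule.get("FromPort", 0) <= port <= rule.get("ToPort", 65535):
            return True
    return False

# A group that only allows outbound HTTP would explain the timeouts
# after the API call moved from port 80 to 443.
http_only = [{"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
              "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]
```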



Question 4

A DevOps Engineer is working on a project that is hosted on Amazon Linux and has failed a security review. 
The DevOps Manager has been asked to review the company buildspec.yaml file for an AWS CodeBuild project and provide recommendations. The buildspec.yaml file is configured as follows: 

[buildspec.yaml contents shown as an image; not reproduced here]

What changes should be recommended to comply with AWS security best practices? (Choose three.) 


  1. Add a post-build command to remove the temporary files from the container before termination to ensure they cannot be seen by other CodeBuild users. 
  2. Update the CodeBuild project role with the necessary permissions and then remove the AWS credentials from the environment variable. 
  3. Store the DB_PASSWORD as a SecureString value in AWS Systems Manager Parameter Store and then remove the DB_PASSWORD from the environment variables. 
  4. Move the environment variables to the ‘db-deploy-bucket’ Amazon S3 bucket, add a prebuild stage to download, then export the variables. 
  5. Use the AWS Systems Manager Run Command instead of running scp and ssh commands directly against the instance. 
  6. Scramble the environment variables using XOR followed by Base64, add a section to install, and then run XOR and Base64 to the build phase.  
Correct answer: BCE
Explanation:
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-console.html 
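A minimal sketch of recommendation C: store the database password as a SecureString in Parameter Store rather than a plaintext environment variable. The parameter name and value below are illustrative assumptions.

```python
# Sketch of recommendation C: DB_PASSWORD kept as a SecureString in
# AWS Systems Manager Parameter Store. Name and value are illustrative.

def build_secure_parameter(name, value):
    """Build the request parameters for SSM PutParameter (SecureString)."""
    return {
        "Name": name,
        "Value": value,
        "Type": "SecureString",
        "Overwrite": True,
    }

params = build_secure_parameter("/myapp/prod/DB_PASSWORD", "example-secret")
# With AWS credentials configured:
#   import boto3
#   boto3.client("ssm").put_parameter(**params)
# The buildspec then references the value through its env/parameter-store
# section instead of declaring DB_PASSWORD as a plaintext variable.
```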



Question 5

A DevOps Engineer administers an application that manages video files for a video production company. The application runs on Amazon EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. Data is stored in an Amazon RDS PostgreSQL Multi-AZ DB instance, and the video files are stored in an Amazon S3 bucket. On a typical day, 50 GB of new video are added to the S3 bucket. The Engineer must implement a multi-region disaster recovery plan with the least data loss and the lowest recovery times. The current application infrastructure is already described using AWS CloudFormation. 
Which deployment option should the Engineer choose to meet the uptime and recovery objectives for the system? 
 


  1. Launch the application from the CloudFormation template in the second region, which sets the capacity of the Auto Scaling group to 1. Create an Amazon RDS read replica in the second region. In the second region, enable cross-region replication between the original S3 bucket and a new S3 bucket. To fail over, promote the read replica as master. Update the CloudFormation stack and increase the capacity of the Auto Scaling group. 
  2. Launch the application from the CloudFormation template in the second region, which sets the capacity of the Auto Scaling group to 1. Create a scheduled task to take daily Amazon RDS cross-region snapshots to the second region. In the second region, enable cross-region replication between the original S3 bucket and Amazon Glacier. In a disaster, launch a new application stack in the second region and restore the database from the most recent snapshot. 
  3. Launch the application from the CloudFormation template in the second region, which sets the capacity of the Auto Scaling group to 1. Use Amazon CloudWatch Events to schedule a nightly task to take a snapshot of the database, copy the snapshot to the second region, and replace the DB instance in the second region from the snapshot. In the second region, enable cross-region replication between the original S3 bucket and a new S3 bucket. To fail over, increase the capacity of the Auto Scaling group. 
  4. Use Amazon CloudWatch Events to schedule a nightly task to take a snapshot of the database and copy the snapshot to the second region. Create an AWS Lambda function that copies each object to a new S3 bucket in the second region in response to S3 event notifications. In the second region, launch the application from the CloudFormation template and restore the database from the most recent snapshot.  
Correct answer: A
Explanation:
https://aws.amazon.com/about-aws/whats-new/2016/06/amazon-rds-for-postgresql-now-supports-cross-region-read-replicas/ 



Question 6

A DevOps Engineer has a single Amazon DynamoDB table that receives shipping orders and tracks inventory. The Engineer has three AWS Lambda functions reading from a DynamoDB stream on that table. The Lambda functions perform various tasks, such as doing an item count, moving items to Amazon Kinesis Data Firehose, monitoring inventory levels, and creating vendor orders when parts are low. 
While reviewing logs, the Engineer notices the Lambda functions occasionally fail under increased load, receiving a stream throttling error. 
 
Which is the MOST cost-effective solution that requires the LEAST amount of operational management? 
 


  1. Use AWS Glue integration to ingest the DynamoDB stream, then migrate the Lambda code to an AWS Fargate task. 
  2. Use Amazon Kinesis streams instead of DynamoDB streams, then use Kinesis analytics to trigger the Lambda functions. 
  3. Create a fourth Lambda function and configure it to be the only Lambda reading from the stream. Then use this Lambda function to pass the payload to the other three Lambda functions. 
  4. Have the Lambda functions query the table directly and disable DynamoDB streams. Then have the Lambda functions query from a global secondary index.  
Correct answer: C
Explanation:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html 
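Answer C works because only one function then polls the stream; it fans each batch out to the other consumers. A minimal sketch of that dispatcher, where the downstream function names are illustrative assumptions:

```python
# Sketch of answer C: a single dispatcher Lambda reads the DynamoDB stream
# and fans records out to the other consumers with asynchronous invokes.
# The downstream function names are illustrative.
import json

DOWNSTREAM_FUNCTIONS = ["item-counter", "firehose-forwarder", "inventory-monitor"]

def build_fanout_invocations(event):
    """Build one async Invoke request per downstream function."""
    payload = json.dumps({"Records": event.get("Records", [])})
    return [
        {"FunctionName": fn, "InvocationType": "Event", "Payload": payload}
        for fn in DOWNSTREAM_FUNCTIONS
    ]

# Inside the dispatcher's handler, each request would be sent with:
#   import boto3
#   lam = boto3.client("lambda")
#   for req in build_fanout_invocations(event):
#       lam.invoke(**req)
```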
 



Question 7

A government agency is storing highly confidential files in an encrypted Amazon S3 bucket. The agency has configured federated access and has allowed only a particular on-premises Active Directory user group to access this bucket. 
The agency wants to maintain audit records and automatically detect and revert any accidental changes administrators make to the IAM policies used for providing this restricted federated access. 
Which of the following options provide the FASTEST way to meet these requirements? 
 


  1. Configure an Amazon CloudWatch Events Event Bus on an AWS CloudTrail API for triggering the AWS Lambda function that detects and reverts the change. 
  2. Configure an AWS Config rule to detect the configuration change and execute an AWS Lambda function to revert the change. 
  3. Schedule an AWS Lambda function that will scan the IAM policy attached to the federated access role for detecting and reverting any changes. 
  4. Restrict administrators in the on-premises Active Directory from changing the IAM policies.  
Correct answer: B
Explanation:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-CloudTrail-Rule.html 
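The core of answer B is evaluation logic that a custom AWS Config rule's Lambda could run: compare the current IAM policy document against the approved baseline and report compliance, with a remediation step reverting any drift. A minimal sketch, where the baseline policy is an illustrative assumption:

```python
# Sketch of answer B: compliance logic for a custom AWS Config rule.
# The approved baseline policy below is an illustrative assumption.
import json

def evaluate_policy(current_policy, approved_policy):
    """Return the Config compliance type for the federated-access policy."""
    same = (json.dumps(current_policy, sort_keys=True)
            == json.dumps(approved_policy, sort_keys=True))
    # On NON_COMPLIANT, a remediation Lambda would revert the policy.
    return "COMPLIANT" if same else "NON_COMPLIANT"

approved = {"Version": "2012-10-17",
            "Statement": [{"Effect": "Allow", "Action": "s3:GetObject",
                           "Resource": "arn:aws:s3:::confidential-bucket/*"}]}
```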
 



Question 8

A company is using an AWS CloudFormation template to deploy web applications. The template requires that manual changes be made for each of the three major environments: production, staging, and development. 
The current sprint includes the new implementation and configuration of AWS CodePipeline for automated deployments. 
What changes should the DevOps Engineer make to ensure that the CloudFormation template is reusable across multiple pipelines? 
 


  1. Use a CloudFormation custom resource to query the status of the CodePipeline to determine which environment is launched. Dynamically alter the launch configuration of the Amazon EC2 instances. 
  2. Set up a CodePipeline pipeline for each environment to use input parameters. Use CloudFormation mappings to switch associated UserData for the Amazon EC2 instances to match the environment being launched. 
  3. Set up a CodePipeline pipeline that has multiple stages, one for each development environment. Use AWS Lambda functions to trigger CloudFormation deployments to dynamically alter the UserData of the Amazon EC2 instances launched in each environment. 
  4. Use CloudFormation input parameters to dynamically alter the LaunchConfiguration and UserData sections of each Amazon EC2 instance every time the CloudFormation stack is updated.  
Correct answer: B
Explanation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#reuse 
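Answer B keeps per-environment values in a CloudFormation Mappings section and selects them with Fn::FindInMap against an input parameter. The sketch below emulates that lookup in plain Python; the map name and instance types are illustrative assumptions.

```python
# Sketch of answer B: per-environment values in a CloudFormation-style
# Mappings section, selected by environment name. Values are illustrative.

MAPPINGS = {
    "EnvironmentMap": {
        "development": {"InstanceType": "t3.micro"},
        "staging":     {"InstanceType": "t3.small"},
        "production":  {"InstanceType": "m5.large"},
    }
}

def find_in_map(mappings, map_name, top_key, second_key):
    """Mimic CloudFormation's Fn::FindInMap two-level lookup."""
    return mappings[map_name][top_key][second_key]
```

In the template itself this corresponds to `!FindInMap [EnvironmentMap, !Ref Environment, InstanceType]`, so one template serves all three pipelines.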
 



Question 9

A company wants to use Amazon DynamoDB for maintaining metadata on its forums. See the sample data set in the image below.  

[sample data set shown as an image; not reproduced here]

A DevOps Engineer is required to define the table schema with the partition key, the sort key, the local secondary index, projected attributes, and fetch operations. The schema should support the following example searches using the least provisioned read capacity units to minimize cost.  
  • Search within ForumName for items where the subject starts with ‘a’. 
  • Search forums within the given LastPostDateTime time frame. 
  • Return the thread value where LastPostDateTime is within the last three months. 
Which schema meets the requirements? 


  1. Use Subject as the primary key and ForumName as the sort key. Have LSI with LastPostDateTime as the sort key and fetch operations for thread. 
  2. Use ForumName as the primary key and Subject as the sort key. Have LSI with LastPostDateTime as the sort key and the projected attribute thread. 
  3. Use ForumName as the primary key and Subject as the sort key. Have LSI with Thread as the sort key and the projected attribute LastPostDateTime. 
  4. Use Subject as the primary key and ForumName as the sort key. Have LSI with Thread as the sort key and fetch operations for LastPostDateTime.  
Correct answer: B
Explanation:
https://aws.amazon.com/blogs/database/using-sort-keys-to-organize-data-in-amazon-dynamodb/ 
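Answer B translates into a table with ForumName as the partition key, Subject as the sort key, and an LSI keyed on LastPostDateTime that projects only the thread attribute. A sketch of the CreateTable request shape, with table and index names as illustrative assumptions:

```python
# Sketch of answer B: ForumName (HASH) + Subject (RANGE), plus an LSI on
# LastPostDateTime projecting the thread attribute. Names are illustrative.

def build_forum_table_params():
    """Build the request parameters for DynamoDB CreateTable."""
    return {
        "TableName": "Forums",
        "KeySchema": [
            {"AttributeName": "ForumName", "KeyType": "HASH"},
            {"AttributeName": "Subject", "KeyType": "RANGE"},
        ],
        "AttributeDefinitions": [
            {"AttributeName": "ForumName", "AttributeType": "S"},
            {"AttributeName": "Subject", "AttributeType": "S"},
            {"AttributeName": "LastPostDateTime", "AttributeType": "S"},
        ],
        "LocalSecondaryIndexes": [
            {
                "IndexName": "LastPostIndex",
                "KeySchema": [
                    {"AttributeName": "ForumName", "KeyType": "HASH"},
                    {"AttributeName": "LastPostDateTime", "KeyType": "RANGE"},
                ],
                # Projecting only thread keeps reads cheap for the
                # "thread within the last three months" query.
                "Projection": {"ProjectionType": "INCLUDE",
                               "NonKeyAttributes": ["thread"]},
            }
        ],
        "BillingMode": "PAY_PER_REQUEST",
    }

table_params = build_forum_table_params()
# With AWS credentials: boto3.client("dynamodb").create_table(**table_params)
```

The `begins_with` condition on the Subject sort key covers the "starts with 'a'" search, and the LSI covers the LastPostDateTime range queries.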
 



Question 10

A business has an application that consists of five independent AWS Lambda functions. 
The DevOps Engineer has built a CI/CD pipeline using AWS CodePipeline and AWS CodeBuild that builds, tests, packages, and deploys each Lambda function in sequence. The pipeline uses an Amazon CloudWatch Events rule to ensure the pipeline execution starts as quickly as possible after a change is made to the application source code. 
After working with the pipeline for a few months, the DevOps Engineer has noticed the pipeline takes too long to complete. 
What should the DevOps Engineer implement to BEST improve the speed of the pipeline? 
 


  1. Modify the CodeBuild projects within the pipeline to use a compute type with more available network throughput. 
  2. Create a custom CodeBuild execution environment that includes a symmetric multiprocessing configuration to run the builds in parallel. 
  3. Modify the CodePipeline configuration to execute actions for each Lambda function in parallel by specifying the same runOrder. 
  4. Modify each CodeBuild project to run within a VPC and use dedicated instances to increase throughput.  
Correct answer: C
Explanation:
https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html 
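Answer C relies on CodePipeline running actions that share a runOrder in parallel within a stage. A sketch of the action declarations for the five functions, with function and project names as illustrative assumptions:

```python
# Sketch of answer C: five deploy actions sharing runOrder 1, so
# CodePipeline runs them in parallel in one stage. Names are illustrative.

def build_parallel_deploy_actions(function_names):
    """Build CodePipeline action declarations that share runOrder 1."""
    return [
        {
            "name": f"Deploy-{fn}",
            "actionTypeId": {"category": "Build", "owner": "AWS",
                             "provider": "CodeBuild", "version": "1"},
            "runOrder": 1,  # identical runOrder => parallel execution
            "configuration": {"ProjectName": f"build-{fn}"},
        }
        for fn in function_names
    ]

actions = build_parallel_deploy_actions(
    ["fn-a", "fn-b", "fn-c", "fn-d", "fn-e"])
```

With distinct runOrder values the actions would run sequentially, which is the behavior slowing the pipeline down.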
 








