Download Cisco.Pass4sure.200-310.2017-09-25.1e.75q.vcex


File Info

Exam: CCDA - Designing for Cisco Internetwork Solutions
Number: 200-310
File Name: Cisco.Pass4sure.200-310.2017-09-25.1e.75q.vcex
Size: 3.28 MB
Posted: September 25, 2017
Downloaded: 10



How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.

Purchase

Coupon: MASTEREXAM
With discount: 20%

 
 



Demo Questions

Question 1

In a campus network hierarchy, which of the following security functions does not typically occur at the campus access layer?

  • A: NAC
  • B: packet filtering
  • C: DHCP snooping
  • D: DAI

Correct Answer: B

Packet filtering is typically implemented in the campus distribution layer, not the campus access layer. The distribution layer of the campus network hierarchy is where access control lists (ACLs) and inter-VLAN routing are typically implemented. The distribution layer serves as an aggregation point for access layer network links. Because the distribution layer is the intermediary between the access layer and the core layer, the distribution layer is the ideal place to enforce security policies, provide load balancing, provide Quality of Service (QoS), and perform tasks that involve packet manipulation, such as routing and packet filtering. Because the distribution layer connects to both the access and core layers, it is often comprised of multilayer switches that can perform both Layer 3 routing functions and Layer 2 switching. 
Network Admission Control (NAC), Dynamic ARP Inspection (DAI), and Dynamic Host Configuration Protocol (DHCP) snooping are performed at the campus access layer. The access layer serves as a media termination point for devices, such as servers and hosts. Because access layer devices provide access to the network, the access layer is the ideal place to classify traffic and perform network admission control. NAC is a Cisco feature that prevents hosts from accessing the network if they do not comply with organizational requirements, such as having an updated antivirus definition file. DHCP snooping is a feature used to mitigate DHCP spoofing attacks. In a DHCP spoofing attack, an attacker installs a rogue DHCP server on the network in an attempt to intercept DHCP requests. The rogue DHCP server can then respond to the DHCP requests with its own IP address as the default gateway address; hence, all traffic is routed through the rogue DHCP server. DAI is a feature that can help mitigate Address Resolution Protocol (ARP) poisoning attacks. In an ARP poisoning attack, which is also known as an ARP spoofing attack, the attacker sends a gratuitous ARP (GARP) message to a host. The message associates the attacker's MAC address with the IP address of a valid host on the network. Subsequently, traffic sent to the valid host address will go through the attacker's computer rather than directly to the intended recipient.
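To make the DHCP snooping and DAI behavior described above concrete, the following is a minimal, hypothetical Python sketch of the check that DAI performs: ARP messages arriving on untrusted ports are compared against the DHCP snooping binding table, and any mismatch is dropped. The binding entries, addresses, and function name are illustrative assumptions, not part of any Cisco software.

  # Hypothetical binding table populated by DHCP snooping (IP address -> MAC address).
  dhcp_snooping_bindings = {
      "10.1.1.10": "aa:bb:cc:dd:ee:01",
      "10.1.1.11": "aa:bb:cc:dd:ee:02",
  }

  def inspect_arp(sender_ip: str, sender_mac: str) -> bool:
      """Return True if the ARP message matches a DHCP snooping binding."""
      return dhcp_snooping_bindings.get(sender_ip) == sender_mac

  # A legitimate ARP reply is forwarded; a gratuitous ARP that maps a valid host
  # IP address to an attacker's MAC address is dropped.
  print(inspect_arp("10.1.1.10", "aa:bb:cc:dd:ee:01"))  # True  -> forwarded
  print(inspect_arp("10.1.1.10", "de:ad:be:ef:00:99"))  # False -> dropped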
Reference:
Cisco: Campus Network for High Availability Design Guide: Access Layer




Question 2

Which of the following is a network architecture principle that represents the structured manner in which the logical and physical functions of the network are arranged? 

  • A: modularity
  • B: hierarchy
  • C: top-down
  • D: bottom-up

Correct Answer: B

The hierarchy principle is the structured manner in which both the physical and logical functions of the network are arranged. A typical hierarchical network consists of three layers: the core layer, the distribution layer, and the access layer. The modules between these layers are connected to each other in a fashion that facilitates high availability. However, each layer is responsible for specific network functions that are independent from the other layers. 
The core layer provides fast transport services between buildings and the data center. The distribution layer provides link aggregation between layers. Because the distribution layer is the intermediary between the access layer and the campus core layer, the distribution layer is the ideal place to enforce security policies, provide load balancing, provide Quality of Service (QoS), and perform tasks that involve packet manipulation, such as routing. The access layer, which typically comprises Open Systems Interconnection (OSI) Layer 2 switches, serves as a media termination point for devices, such as servers and workstations. Because access layer devices provide access to the network, the access layer is the ideal place to perform user authentication and to institute port security. High availability, broadcast suppression, and rate limiting are also characteristics of access layer devices. 
The modularity principle, by contrast, is the network architecture principle most likely to facilitate troubleshooting. The modularity and hierarchy principles are complementary components of network architecture. The modularity principle is used to implement a degree of isolation among network components. This ensures that changes to any given component have little to no effect on the rest of the network. Modularity also simplifies the troubleshooting process by limiting the task of isolating the problem to the affected module.
The modularity principle typically consists of two building blocks: the access distribution block and the services block. The access distribution block contains the bottom two layers of a three-tier hierarchical network design. The services block, which is a newer building block, typically contains services such as routing policies, wireless access, tunnel termination, and Cisco Unified Communications services.
Top-down and bottom-up are both network design models, not network architecture principles. The top-down network design approach is typically used to ensure that the eventual network build will properly support the needs of the network's use cases. For example, a dedicated customer service call center might first evaluate communications and knowledgebase requirements prior to designing and building out the call center's network infrastructure. In other words, a top-down design approach typically begins at the Application layer, or Layer 7, of the OSI reference model and works down the model to the Physical layer, or Layer 1. 
In contrast to the top-down approach, the bottom-up approach begins at the bottom of the OSI reference model. Decisions about network infrastructure are made first, and application requirements are considered last. This approach to network design can often lead to frequent network redesigns to account for requirements that have not been met by the initial infrastructure. 
Reference:
CCDA 200-310 Official Cert Guide, Chapter 2, Cisco Enterprise Architecture Model, pp. 49-50 
Cisco: Enterprise Campus 3.0 Architecture: Overview and Framework: Hierarchy




Question 3

From the left, select the characteristics that apply to a small branch office, and drag them to the right. 

Correct Answer: Exam simulator is required

A small branch office typically uses a single Integrated Services Router (ISR), combines LAN and WAN termination, and does not include a distribution layer. Cisco defines a small branch office as an office that contains up to 50 users and that implements a single-tier design. A single-tier design combines LAN and WAN termination into a single ISR, where a redundant link to the access layer can be created if the ISR uses an EtherChannel topology rather than a trunked topology, which offers no link redundancy. Because a small branch office uses a single ISR, such as the ISR G2, to provide LAN and WAN services, an external access switch, such as the Cisco 2960, is not necessary. In addition, Rapid Per-VLAN Spanning Tree Plus (RPVST+) is not supported on most ISR platforms.
Medium and large branch offices typically use RPVST+ and external access switches. RPVST+ is an advanced spanning tree algorithm that can prevent loops on a switch that handles multiple virtual LANs (VLANs). RPVST+ is typically supported only on external switches and advanced routing platforms. External access switches provide high-density LAN connectivity to individual hosts and typically aggregate links on distribution layer switches. 
Cisco defines a medium branch office as an office that contains between 50 and 100 users and that implements a two-tier design. A dual-tier design separates LAN and WAN termination into multiple devices. A medium branch office typically uses two ISRs, with one ISR serving as a connection to the headquarters location and the second serving as a connection to the Internet. In addition, the two ISRs are typically connected by at least one external switch that also serves as an access layer switch for the branch users. 
Cisco defines a large branch office as an office that contains between 100 and 200 users and that implements a three-tier design. Similar to a dual-tier design, a triple-tier design separates LAN and WAN termination into multiple devices. However, a triple-tier design separates additional services, such as firewall functionality and intrusion detection. A large branch office typically uses at least one dedicated device for each network service. Whereas small and medium branch offices consist of only an edge layer and an access layer, the large branch office also includes a distribution layer. 
Reference:
CCDA 200-310 Official Cert Guide, Chapter 7, Enterprise Branch Profiles, pp. 275-279 
Cisco: LAN Baseline Architecture Branch Office Network Reference Design Guide: Small Office Design (PDF)
Cisco: LAN Baseline Architecture Branch Office Network Reference Design Guide: Branch LAN Design Options (PDF)




Question 4

Which of the following statements is true regarding route summarization?

  • A: Summarization increases routing protocol convergence times.
  • B: Summarization must be performed on classless network boundaries.
  • C: Summarization causes a router to advertise more routes to its peers.
  • D: Summarization can reduce the amount of bandwidth used by a routing protocol.
  • E: Summarization cannot be performed on a group of contiguous networks.

Correct Answer: D

Route summarization can reduce the amount of bandwidth used by a routing protocol. Summarization is the process of advertising a group of contiguous networks as a single route. When a router performs summarization, the router advertises a summary route rather than routes to each individual subnetwork. Summarization can cause a routing protocol to converge faster and can reduce the consumption of network bandwidth, because only a single summary route will be advertised by the routing protocol. For example, summarizing routes from the distribution layer to the core layer of a hierarchical network enables the distribution layer devices to limit the number of routing advertisements that are sent to the core layer devices. Because fewer advertisements are sent, the routing tables of core layer devices are kept small and access layer topology changes are not advertised into the core layer. 
You can configure a router to summarize its networks on either classful or classless network boundaries. When combining routes to multiple subnetworks into a single summarized route, you must take bits away from the subnet mask. For example, consider a router that has interfaces connected to the 16 contiguous networks from 10.10.0.0/24 through 10.10.15.0/24. The routing table would contain a route to each of the 16 networks. The 16 contiguous networks can be summarized in 4 bits (2^4 = 16). Taking 4 bits away from the 24-bit subnet mask yields a 20-bit mask, which is 255.255.240.0. Thus the network and subnet mask combination of 10.10.0.0 255.255.240.0 encompasses all 16 networks. The process of taking bits away from the subnet mask to more broadly encompass multiple subnetworks is called supernetting. This is the opposite of subnetting, which divides a network into smaller subnetworks.
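The summarization arithmetic above can be checked with a short Python sketch that uses the standard ipaddress module to collapse the 16 example networks into a single summary route.

  import ipaddress

  # The 16 contiguous networks 10.10.0.0/24 through 10.10.15.0/24 from the example.
  subnets = [ipaddress.ip_network(f"10.10.{i}.0/24") for i in range(16)]

  # collapse_addresses merges contiguous networks into the smallest covering set.
  summary = list(ipaddress.collapse_addresses(subnets))
  print(summary)             # [IPv4Network('10.10.0.0/20')]
  print(summary[0].netmask)  # 255.255.240.0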
Reference:
CCDA 200-310 Official Cert Guide, Chapter 11, Route Summarization, pp. 455-458 
Cisco: IP Routing Frequently Asked Questions: What does route summarization mean? 
Cisco: IP Addressing and Subnetting for New Users




Question 5

Which of the following queuing methods is the most appropriate for handling voice, video, mission-critical, and lower-priority traffic?

  • A: FIFO
  • B: WFQ
  • C: LLQ
  • D: CBWFQ

Correct Answer: C

Of the choices provided, low-latency queuing (LLQ) is the most appropriate queuing method for handling voice, video, mission-critical, and lower-priority traffic. LLQ supports the creation of up to 64 user-defined traffic classes as well as one or more strict-priority queues that can be used to guarantee bandwidth for delay-sensitive traffic, such as voice and video traffic. Each strict-priority queue can use as much bandwidth as possible but can use only the guaranteed bandwidth when other queues have traffic to send, thereby avoiding bandwidth starvation. Cisco recommends limiting the strict-priority queues to a total of 33 percent of the link capacity. 
Class-based weighted fair queuing (CBWFQ) provides bandwidth guarantees, so it can be used for voice, video, mission-critical, and lower-priority traffic. However, CBWFQ does not provide the delay guarantees provided by LLQ, because CBWFQ does not provide support for strict-priority queues. CBWFQ improves upon weighted fair queuing (WFQ) by enabling the creation of up to 64 custom traffic classes, each with a guaranteed minimum bandwidth. 
Although WFQ can be used for voice, video, mission-critical, and lower-priority traffic, it does not provide the bandwidth guarantees or the strict-priority queues that are provided by LLQ. WFQ is used by default on Cisco routers for serial interfaces at 2.048 Mbps or lower. Traffic flows are identified by WFQ based on source and destination IP address, port number, protocol number, and Type of Service (ToS). Although WFQ is easy to configure, it is not supported on high-speed links. 
First-in-first-out (FIFO) queuing is the least appropriate for voice, video, mission-critical, and lower-priority traffic. By default, Cisco uses FIFO queuing for interfaces faster than 2.048 Mbps. FIFO queuing requires no configuration because all packets are arranged into a single queue. As the name implies, the first packet received is the first packet transmitted, without regard for packet type, protocol, or priority. 
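The queuing behavior described above can be illustrated with a minimal, hypothetical Python sketch. It does not reflect how Cisco IOS implements LLQ; it only models the idea that the strict-priority queue is drained first but is capped at its guaranteed share whenever other classes have traffic waiting, so lower-priority classes are not starved.

  from collections import deque

  def llq_round(priority_q, class_qs, priority_quota, class_quota):
      """Serve one scheduling round; returns packets in transmit order."""
      sent = []
      # The strict-priority queue is policed to its quota only when other
      # classes have traffic; otherwise it may use all available bandwidth.
      budget = priority_quota if any(class_qs.values()) else float("inf")
      while priority_q and budget > 0:
          sent.append(priority_q.popleft())
          budget -= 1
      # The remaining classes are served CBWFQ-style according to their guarantees.
      for q in class_qs.values():
          for _ in range(class_quota):
              if q:
                  sent.append(q.popleft())
      return sent

  voice = deque(["v1", "v2", "v3", "v4"])
  classes = {"critical": deque(["c1", "c2"]), "best-effort": deque(["b1"])}
  print(llq_round(voice, classes, priority_quota=2, class_quota=1))
  # ['v1', 'v2', 'c1', 'b1'] -> voice is served first but capped, so other classes still transmit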
Reference:
CCDA 200-310 Official Cert Guide, Chapter 6, Low-Latency Queuing, p. 235 
Cisco: Enterprise QoS Solution Reference Network Design Guide: Queuing and Dropping Principles 
Cisco: Signalling Overview: RSVP Support for Low Latency Queueing




Question 6

To which of the following high-availability resiliency levels do duplicate power supplies belong?

  • A: management
  • B: monitoring
  • C: network
  • D: system

Correct Answer: D

Duplicate power supplies are a system-level resiliency component of a high-availability solution. High-availability solutions feature redundant components that provide protection in the event that a primary component fails. Cisco defines three components of a high-availability solution: network-level resiliency, system-level resiliency, and management and monitoring. System-level resiliency components provide failover protection for system hardware components. Duplicate power supplies ensure that critical system components can maintain power in the event of a failure of the primary power supply. 
Duplicate power supplies are not an example of management and monitoring resiliency components. Management and monitoring is a resiliency component used to quickly detect changes to various components of a high-availability solution. One example of a monitoring component is Syslog, which is used to gather information about the state of network components and to compile it in a centralized location. This allows administrators to gain information regarding the state of network or system components without having to log on to each device on the network.
Duplicate power supplies are not an example of network-level resiliency components. Network-level resiliency features redundant network devices, such as backup switches. In addition, network resiliency features duplicate links that can be used to maintain communication between network devices if the primary link fails. When you increase network resiliency by adding redundant links to a network design, you should also configure link management protocols, such as Spanning Tree Protocol (STP), to ensure that the redundant links do not generate loops within the network. 
Reference:
Cisco: Deploying High Availability in the Wiring Closet Q&A




Question 7

Which of the following statements are true about OSPF and EIGRP? (Choose two.) 

  • A: Both use a DR and a BDR.
  • B: Both use a DIS.
  • C: Both can operate on an NBMA point-to-multipoint network.
  • D: Both can operate on an NBMA point-to-point network.
  • E: Both perform automatic route summarization.
  • F: Both use areas to limit the flooding of database updates.

Correct Answer: CD

Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF) can operate on non-broadcast multi-access (NBMA) point-to-point networks and NBMA point-to-multipoint networks. Because NBMA networks, such as Frame Relay and Asynchronous Transfer Mode (ATM), do not support Data Link layer broadcasts, routing protocols that operate on NBMA networks must support methods of neighbor discovery and route advertisement that do not rely on multicast or broadcast transmission methods. Although subinterfaces can be used to treat an NBMA point-to-multipoint network as a series of point-to-point connections, you are not required to configure subinterfaces for NBMA point-to-multipoint networks with EIGRP and OSPF. 
EIGRP, not OSPF, performs automatic route summarization. Summarization is a method that can be used to advertise a group of contiguous networks as a single route. You can configure a router to summarize its networks on either classful or classless network boundaries. When a router performs summarization, the router advertises a summary route rather than routes to each individual subnetwork, which can cause a routing protocol to converge faster. This can also reduce unnecessary consumption of network bandwidth, because only a single summary route will be advertised by the routing protocol. EIGRP is capable of performing summarization on any EIGRP interface. By contrast, OSPF supports manual summarization only at area border routers (ABRs) and at routers that redistribute routes into OSPF.
OSPF, not EIGRP, uses a designated router (DR) and a backup designated router (BDR) as focal points for routing information. Only the DR distributes link-state advertisements (LSAs) that contain OSPF routing information to all the OSPF routers in the area. A DR and a BDR are elected only on multiaccess networks; they are not elected on point-to-point networks. If the DR fails or is powered off, the BDR takes over for the DR and a new BDR is elected. 
Intermediate System-to-Intermediate System (IS-IS), not EIGRP or OSPF, uses a designated intermediate system (DIS). A DIS is functionally equivalent to an OSPF DR. The DIS serves as a focal point for the distribution of routing information. Once elected, the DIS must relinquish its duties if another router with a higher priority joins the network. If the DIS is no longer detected on the network, a new DIS is elected based on the priority of the remaining routers on the network segment.
OSPF, not EIGRP, uses areas to limit the flooding of database updates, thereby keeping routing tables small and update traffic low within each area. By contrast, EIGRP uses stub routers to limit EIGRP queries; an EIGRP stub router advertises only a specified set of routes.
Reference:
CCDA 200-310 Official Cert Guide, Chapter 11, OSPFv2 Summary, p. 439 
CCDA 200-310 Official Cert Guide, Chapter 10, EIGRP for IPv4 Summary, p. 406 
Cisco: Configuration Notes for the Implementation of EIGRP over Frame Relay and Low Speed Links: NBMA Interfaces (Frame Relay, X.25, ATM)
Cisco: OSPF Design Guide: Adjacencies on Non-Broadcast Multi-Access (NBMA) Networks




Question 8

STP is disabled by default in which of the following Layer 2 access designs?

  • A: Flex Link
  • B: loop-free U
  • C: looped triangle
  • D: loop-free inverted U
  • E: looped square

Correct Answer: A

Spanning Tree Protocol (STP) is disabled by default in Flex Link designs. STP prevents switching loops on a network. Switching loops can occur when there is more than one switched path to a destination. The spanning tree algorithm determines the best path through a switched network, and any ports that create redundant paths are blocked. If the best path becomes unavailable, the network topology is recalculated and the port connected to the next best path is unblocked. There are no loops in a Flex Link design, and STP is disabled when a device is configured to participate in a Flex Link. Interface uplinks in this topology are configured in active/standby pairs, and each device can only belong to a single Flex Link pair. In the event of an uplink failure, the standby link becomes active and takes over, thereby offering redundancy when an access layer uplink fails. Possible disadvantages of the Flex Link design include its inability to return to the original state after a failed link is recovered, its increased convergence time over other designs, and its inability to run STP in order to block redundant paths that might be created by inadvertent errors in cabling or configuration. 
STP is not disabled by default in loop-free inverted U designs. Loop-free inverted U designs offer redundancy at the aggregation layer, not the access layer; therefore, traffic will black-hole upon failure of an access switch uplink. All uplinks are active with no looping, thus there is no STP blocking by default. However, STP is still essential so that redundant paths that might be created by any inadvertent errors in cabling or configuration are blocked.
STP is not disabled by default in loop-free U designs. This topology offers a redundant link between access layer switches as well as a redundant link at the aggregation layer. Because of the redundant path in both layers, extending a virtual LAN (VLAN) beyond an individual access layer pair would create a loop; therefore, loop-free U designs cannot support VLAN extensions. Like loop-free inverted U designs, loop-free U designs also run STP and have issues with traffic being black-holed upon failure of an access switch uplink.
STP is not disabled by default in looped triangle designs. A looped triangle design can provide deterministic convergence in the event of a link failure. In a triangle design, each access layer device has direct paths to redundant aggregation layer devices. The ability to recover from a failed link in this design is granted by redundant physical connections that are blocked by Rapid STP (RSTP) until the primary connection fails. RSTP is an evolution of STP that provides faster convergence. RSTP achieves this by merging the disabled, blocking, and listening states into a single state, called the discarding port state. With fewer port states to transition through, convergence is faster. A looped triangle topology is currently the most common design in enterprise data centers. 
STP is not disabled by default in looped square designs. Like a looped triangle, a looped square design can provide deterministic convergence through redundant connections. However, the difference between the two is that in a looped square the redundant link exists between the access layer devices themselves, whereas in a looped triangle the redundant link exists between the access layer devices and the aggregation layer devices. In a looped square, the connection between the access layer devices is blocked by STP until a primary link failure occurs. 
Reference:
Cisco: Data Center Access Layer Design: FlexLinks Access Model




Question 9

Your company is opening a branch office that will contain 29 host computers. Your company has been allocated the 192.168.10.0/24 address range, and you have been asked to conserve IP address space when creating a subnet for the new branch office. 
Which of the following network addresses should you use for the new branch office?

  • A: 192.168.10.0/25
  • B: 192.168.10.32/26
  • C: 192.168.10.64/26
  • D: 192.168.10.64/27

Correct Answer: D

You should use the 192.168.10.64/27 network address for the new branch office. The /27 notation indicates that 27 bits are used for the network portion of the address and that five bits remain for the host portion of the address, which allows for 32 (2^5) addresses. The first address is the network address, the last address is the broadcast address, and the remaining 30 (2^5 - 2) addresses are usable host addresses. Therefore, this address range is large enough to handle a subnet containing 29 host computers.
You should always begin allocating address ranges starting with the largest group of hosts to ensure that the entire group has a large, contiguous address range available. Subnetting a contiguous address range in structured, hierarchical fashion enables routers to maintain smaller routing tables and eases administrative burden when troubleshooting. 
You should not use the 192.168.10.0/25 network address for the new branch office. The /25 notation indicates that 25 bits are used for the network portion of the address and that 7 bits remain for the host portion of the address, which allows for 126 (2^7 - 2) usable host addresses. Although this address range is large enough to handle the new branch office subnet, it does not conserve IP address space, because a smaller range can successfully be used.
You should not use the 192.168.10.32/26 network address for the new branch office. Although a 26-bit mask is large enough for 62 usable host addresses, the 192.168.10.32 address is not a valid network address for a 26-bit mask. The 192.168.10.0/24 address range can be divided into four ranges, each with 64 addresses, by using a 26-bit mask:
192.168.10.0/26 
192.168.10.64/26 
192.168.10.128/26 
192.168.10.192/26 
You should not use the 192.168.10.64/26 network address for the new branch office. The /26 notation indicates that 26 bits are used for the network portion of the address and that six bits remain for the host portion of the address, which allows for 62 (2^6 - 2) usable host addresses. Although this address range is large enough to handle the new branch office subnet, it does not conserve IP address space, because a smaller range can successfully be used.
Although it is important to learn the formula for calculating valid host addresses, the following list demonstrates the relationship between common subnet masks and valid host addresses:
/25 (255.255.255.128) - 126 usable host addresses
/26 (255.255.255.192) - 62 usable host addresses
/27 (255.255.255.224) - 30 usable host addresses
/28 (255.255.255.240) - 14 usable host addresses
/29 (255.255.255.248) - 6 usable host addresses
/30 (255.255.255.252) - 2 usable host addresses
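The figures above can be verified with a short Python sketch that uses the standard ipaddress module; the module also rejects 192.168.10.32/26 because that address does not fall on a /26 network boundary.

  import ipaddress

  # 192.168.10.64/27 leaves 5 host bits: 32 addresses, 30 of them usable.
  branch = ipaddress.ip_network("192.168.10.64/27")
  print(branch.num_addresses - 2)   # 30

  # 192.168.10.32 is not a valid network address for a 26-bit mask.
  try:
      ipaddress.ip_network("192.168.10.32/26")
  except ValueError as err:
      print(err)                    # 192.168.10.32/26 has host bits set

  # The four valid /26 subnets of the allocated 192.168.10.0/24 range.
  for net in ipaddress.ip_network("192.168.10.0/24").subnets(new_prefix=26):
      print(net)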

Reference:
CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Address Subnets, pp. 302-310 
CCDA 200-310 Official Cert Guide, Chapter 8, Plan for a Hierarchical IP Address Network, pp. 311-312  
Cisco: IP Addressing and Subnetting for New Users




Question 10

You are planning a network by using the top-down design method. You are using structured design principles to generate a model of the completed system. 
Which of the following are you most likely to consider when creating the model? (Choose four.)

  • A: business goals
  • B: future network services
  • C: network protocols
  • D: technical objectives
  • E: applications
  • F: network topologies
  • G: network components

Correct Answer: ABDE

If you are using structured design principles to generate a model of a completed system that is being planned with the top-down design method, you are most likely to consider business goals, existing and future network services, technical objectives, and applications. The top-down network design approach is typically used to ensure that the eventual network build will properly support the needs of the network's use cases. In other words, a top-down design approach typically begins at the Application layer, or Layer 7, of the Open Systems Interconnection (OSI) reference model and works down the model to the Physical layer, or Layer 1. In order for the designer and the organization to obtain a complete picture of the design, the designer should create models that represent the logical functionality of the system, the physical functionality of the system, and the hierarchical layered functionality of the system.
Because a top-down design model of the completed system is intended to provide an overview of how the system functions, lower OSI-layer specifics such as network protocols should not be included in the model. Therefore, you should not consider the network protocols that will be implemented. Nor should you consider the network topologies or network hardware components. Those components of the design should be assessed in more specific detail in the lower layers of the OSI reference model. 
Reference:
Cisco: Using the Top-Down Approach to Network Design: Structured Design Principles (Flash)









