
File Info

Exam CCDA - Designing for Cisco Internetwork Solutions
Number 200-310
File Name Designing for Cisco Internetwork Solutions.CertDumps.200-310.2017-12-27.1e.174q.vcex
Size 8.42 Mb
Posted December 27, 2017
Downloaded 18



How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.


Demo Questions

Question 1

Which of the following is a leased-line WAN technology that divides a link's bandwidth into equal-sized segments based on clock rate?

  • A: TDM
  • B: ATM
  • C: WDM
  • D: DWDM
  • E: MPLS
  • F: Metro Ethernet

Correct Answer: A

Time division multiplexing (TDM) is a leased-line WAN technology that divides a link's bandwidth into equal-sized segments based on clock rate. TDM enables several data streams to share a single physical connection. Each data stream is then allotted a fixed number of segments that can be used to transmit data. Because the number of segments dedicated to each data stream is static, unused bandwidth from one data stream cannot be dynamically reallocated to another data stream that has exceeded its available bandwidth. By contrast, statistical multiplexing dynamically allocates bandwidth to data streams based on their traffic flow. For example, if a particular data stream does not have any traffic to send, its bandwidth is reallocated to other data streams that need it. 
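The contrast between TDM's fixed slot shares and statistical multiplexing's demand-based allocation can be sketched as follows. This is an illustrative model only, with invented stream names and per-interval slot demands, not a protocol implementation:

```python
# Illustrative sketch: fixed TDM slot shares vs. statistical multiplexing.
# Stream names and slot counts are invented for the example.

def tdm_allocate(streams, total_slots):
    """Give every stream an equal, fixed share of slots; slots unused by
    one stream are NOT reassigned to busier streams."""
    share = total_slots // len(streams)
    return {name: min(demand, share) for name, demand in streams.items()}

def statistical_allocate(streams, total_slots):
    """Serve demand greedily; idle streams free their bandwidth for others."""
    allocated, remaining = {}, total_slots
    for name, demand in streams.items():
        grant = min(demand, remaining)
        allocated[name] = grant
        remaining -= grant
    return allocated

demand = {"voice": 2, "bulk": 10, "idle": 0}   # slots wanted this interval
print(tdm_allocate(demand, 12))          # bulk capped at its fixed 4-slot share
print(statistical_allocate(demand, 12))  # bulk absorbs the slots idle leaves unused
```

In the TDM case the bulk stream is capped at its fixed share even while the idle stream's slots go to waste; the statistical multiplexer reassigns those slots on demand.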
Metro Ethernet does not divide a link's bandwidth into equal-sized segments based on clock rate. Metro Ethernet is a WAN technology that is commonly used to connect networks in the same metropolitan area. 
For example, if a company has multiple branch offices within the same city, the company can use Metro Ethernet to connect the branch offices to the corporate headquarters. Metro Ethernet providers typically provide up to 1,000 Mbps of bandwidth. 
Wavelength division multiplexing (WDM) does not divide a link's bandwidth into equal-sized segments based on clock rate. WDM is a leased-line WAN technology used to increase the amount of data signals that a single fiber strand can carry. To accomplish this, WDM can transfer data of varying light wavelengths on up to 16 channels per single fiber strand. Whereas TDM divides the bandwidth in order to carry multiple data streams simultaneously, WDM aggregates the data signals being carried within the fiber strand. 
Dense WDM (DWDM) does not divide a link's bandwidth into equal-sized segments based on clock rate. DWDM is a leased-line WAN technology that improves on WDM by carrying up to 160 channels on a single fiber strand. The spacing of DWDM channels is highly compressed, requiring a more complex transceiver design and therefore making the technology very expensive to implement. 
Asynchronous Transfer Mode (ATM) uses statistical multiplexing and does not divide a link's bandwidth into equal-sized segments based on clock rate. ATM is a shared WAN technology that transports its payload in a series of 53-byte cells. ATM has the unique ability to transport different types of traffic, including IP packets, traditional circuit-switched voice, and video, while still maintaining a high quality of service for delay-sensitive traffic, such as voice and video services. Although ATM could be categorized as a packet-switched WAN technology, it is often listed in its own category as a cell-switched WAN technology. 
Multiprotocol Label Switching (MPLS) does not divide a link's bandwidth into equal-sized segments based on clock rate. MPLS is a shared WAN technology that makes routing decisions based on information contained in a fixed-length label. In an MPLS virtual private network (VPN), each customer site is provided with its own label by the service provider. This enables the customer site to use its existing IP addressing scheme internally while allowing the service provider to manage multiple sites that might have conflicting IP address ranges. The service provider then forwards traffic over shared lines between the sites in the VPN according to the routing information that is passed to each provider edge router. 
Reference:
CCDA 200-310 Official Cert Guide, Chapter 6, Time-Division Multiplexing, p. 225 
Cisco: ISDN Voice, Video and Data Call Switching with Router TDM Switching Features




Question 2

You are adding an additional LAP to your current wireless network, which uses LWAPP. The LAP is configured with a static IP address. You want to identify the sequence in which the LAP will connect to and register with a WLC on the network. 
Select the LAP connection steps on the left, and drag them to the appropriate location on the right. Not all steps will be used. 

Correct Answer: Exam simulator is required

When you add a lightweight access point (LAP) to a wireless network that uses Lightweight Access Point Protocol (LWAPP), the LAP goes through a sequence of steps to register with a wireless LAN controller (WLC) on the network. First, if Open Systems Interconnection (OSI) Layer 2 LWAPP mode is supported, the LAP attempts to locate a WLC by broadcasting a Layer 2 LWAPP discovery request message. If a WLC does not respond to the Layer 2 broadcast, the LAP will broadcast a Layer 3 LWAPP discovery request message. 
Once a WLC receives the LWAPP discovery message, the WLC will send an LWAPP discovery response message to the LAP; the discovery response will contain the IP address of the WLC. The LAP compiles a list of all discovery responses it receives. The list is cross-referenced against the LAP's internal configuration. The LAP will then send an LWAPP join request message to one of the WLCs on its list of responses. 
If the LAP has been configured with a primary, secondary, and tertiary WLC, the LAP will first send an LWAPP join request message to the primary WLC. If no response is received from the primary WLC, the LAP will try the secondary and tertiary WLCs in sequence. If no response is received from either the secondary or tertiary WLCs, the LAP will examine the responses on its list for a master controller flag. If one of the WLCs is configured as a master, the LAP will send an LWAPP join request message to the master WLC. If there is no master configured, or if the master does not respond, the LAP will examine its list of responses and send an LWAPP join request message to the WLC with the greatest capacity. 
When a WLC responds with an LWAPP join response message, the authentication process begins. After the LAP and the WLC authenticate with each other, the LAP will register with the WLC. 
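The selection order described above (configured primary, secondary, and tertiary controllers, then a master-flagged controller, then the controller with the greatest capacity) can be sketched in Python. The discovery-response fields (`name`, `master`, `capacity`, `reachable`) are invented for illustration; a real LAP works from its compiled list of LWAPP discovery responses:

```python
# Hedged sketch of the LAP's WLC selection order; field names are invented.

def select_wlc(responses, primary=None, secondary=None, tertiary=None):
    """responses: list of dicts like
    {"name": str, "master": bool, "capacity": int, "reachable": bool}."""
    by_name = {r["name"]: r for r in responses}
    # 1. Try the configured controllers, in order.
    for preferred in (primary, secondary, tertiary):
        wlc = by_name.get(preferred)
        if wlc and wlc["reachable"]:
            return wlc["name"]
    # 2. Fall back to a controller flagged as master.
    for wlc in responses:
        if wlc["master"] and wlc["reachable"]:
            return wlc["name"]
    # 3. Finally, pick the responding controller with the greatest capacity.
    candidates = [w for w in responses if w["reachable"]]
    return max(candidates, key=lambda w: w["capacity"])["name"] if candidates else None

wlcs = [
    {"name": "WLC-A", "master": False, "capacity": 25, "reachable": True},
    {"name": "WLC-B", "master": True, "capacity": 12, "reachable": True},
]
print(select_wlc(wlcs, primary="WLC-A"))  # WLC-A: the configured primary answered
print(select_wlc(wlcs, primary="WLC-C"))  # WLC-B: primary silent, master flag wins
```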
Reference:
Cisco: Lightweight AP (LAP) Registration to a Wireless LAN Controller (WLC): Register the LAP with the WLC




Question 3

Which of the following statements best describes the purpose of CDP?

  • A: CDP is a proprietary protocol used by Cisco devices to detect neighboring Cisco devices.
  • B: CDP is a standard protocol used to power IP devices over Ethernet.
  • C: CDP is a proprietary protocol used to power IP devices over Ethernet.
  • D: CDP is a standard protocol used by Cisco devices to detect neighboring devices of any type.

Correct Answer: A

Cisco Discovery Protocol (CDP) is a Cisco-proprietary protocol used by Cisco devices to detect neighboring Cisco devices. For example, Cisco switches use CDP to determine whether an attached Voice over IP (VoIP) phone is manufactured by Cisco or by a third party. CDP is enabled by default on Cisco devices. You can globally disable CDP by issuing the no cdp run command in global configuration mode. You can disable CDP on a per-interface basis by issuing the no cdp enable command in interface configuration mode. 
CDP packets are sent from a CDP-enabled device to a well-known multicast address. Each directly connected CDP-enabled device receives these packets and uses the information to build a CDP table. Detailed information about neighboring CDP devices can be viewed in IOS by issuing the show cdp neighbors detail command in privileged EXEC mode. The original dump included abbreviated sample output showing information obtained from CDP about an IP phone named SEP00123456789A:

[Sample command output not included in this dump.]
Link Layer Discovery Protocol (LLDP), not CDP, is a standard protocol that detects neighboring devices of any type. Cisco devices also support LLDP. LLDP can be used in a heterogeneous network to enable Cisco devices to detect non-Cisco devices and vice versa. LLDP, which is disabled by default on Cisco devices, can be enabled globally by issuing the lldp run command. You can disable LLDP again by issuing the no lldp run command. 
CDP is not a protocol used to power IP devices over Ethernet, although an IP phone can communicate its Power over Ethernet (PoE) power requirements to a switch by using CDP. A Catalyst switch can provide power to both Cisco and non-Cisco IP phones that support either the 802.3af standard method or the Cisco prestandard method of PoE. For a Catalyst switch to successfully power an IP phone, both the switch and the IP phone must support the same PoE method. After a common PoE method is determined, CDP messages sent between Catalyst switches and Cisco IP phones can further refine the amount of power allocated to each device. 
Reference:
CCDA 200-310 Official Cert Guide, Chapter 15, CDP, p. 629 
Cisco: Catalyst 3750 Switch Software Configuration Guide, 12.2(40)SE: Configuring CDP




Question 4

Which of the following statements are true regarding standard IP ACLs? (Choose two.)

  • A: Standard ACLs should be placed as close to the source as possible.
  • B: Standard ACLs can filter traffic based on source and destination address.
  • C: Standard ACLs can be numbered in the range from 1 through 99 or from 1300 through 1999.
  • D: Standard ACLs can filter traffic based on port number.
  • E: Standard ACLs can filter traffic from a specific host or a specific network.

Correct Answer: CE

Standard IP access control lists (ACLs) can be numbered in the range from 1 through 99 or from 1300 through 1999 and can filter traffic from a specific host or a specific network. ACLs are used to control packet flow across a network. For example, you could use an ACL on a router to restrict a specific type of traffic, such as Telnet sessions, from passing through a corporate network. There are two types of IP ACLs: standard and extended. Standard IP ACLs can be used to filter based only on source IP addresses; standard IP ACLs cannot be used to filter based on source and destination address. Standard ACLs should be placed as close to the destination as possible so that other traffic originating from the source address is not affected by the ACL.
Extended IP ACLs enable you to permit or deny packets based not only on source IP address but also on destination address, protocol, or port number. In contrast to standard IP ACLs, extended IP ACLs should be placed as close to the source as possible. This ensures that traffic being denied by the ACL does not unnecessarily traverse the network. Extended ACLs use access list numbers from 100 through 199 and from 2000 through 2699. 
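The number ranges can be summarized in a small helper (illustrative only; this is not IOS code):

```python
# Classify an IP ACL as standard or extended by its number, using the
# ranges given above: standard 1-99 and 1300-1999; extended 100-199 and
# 2000-2699.

def acl_type(number):
    if 1 <= number <= 99 or 1300 <= number <= 1999:
        return "standard"
    if 100 <= number <= 199 or 2000 <= number <= 2699:
        return "extended"
    return "unknown"

print(acl_type(10))    # standard
print(acl_type(101))   # extended
print(acl_type(1500))  # standard
print(acl_type(2500))  # extended
```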
Reference:
CCDA 200-310 Official Cert Guide, Chapter 13, Identity and Access Control Deployments, pp. 532-533 
Cisco: Configuring IP Access Lists




Question 5

In which of the following situations would static routing be the most appropriate routing mechanism?

  • A: when the router has a single link to a router within the same AS
  • B: when the router has redundant links to a router within the same AS
  • C: when the router has a single link to a router within a different AS
  • D: when the router has redundant links to a router within a different AS

Correct Answer: C

Static routing would be the most appropriate routing mechanism for a router that has a single link to a router within a different autonomous system (AS). An AS is defined as the collection of all areas that are managed by a single organization. Because an interdomain routing protocol, such as Border Gateway Protocol (BGP), can be complicated to configure and uses a large portion of a router's resources, static routing is recommended if dynamic routing information is not exchanged between routers that reside in different ASes. For example, if you connect a router to the Internet through a single Internet service provider (ISP), it is not necessary for the router to run BGP, because the router will use this single connection to the Internet for all traffic that is not destined to the internal network. 
External BGP (eBGP), not static routing, would be the most appropriate routing protocol for a router that has redundant links to a router within a different AS. BGP is typically used to exchange routing information between ASes, between a company and an ISP, or between ISPs. BGP routers within the same AS communicate by using internal BGP (iBGP), and BGP routers in different ASes communicate by using eBGP. 
An intradomain routing protocol, such as Enhanced Interior Gateway Routing Protocol (EIGRP) or Open Shortest Path First (OSPF), would be the most appropriate routing protocol for a router that has a single link or redundant links to a router within the same AS. 
Reference:
CCDA 200-310 Official Cert Guide, Chapter 10, Static Versus Dynamic Route Assignment, pp. 380-381




Question 6

You are installing a 4U device in a data center. 
Which of the following are you installing?

  • A: cabling at the demarc
  • B: an environmental control
  • C: a network device in a 7-inch space
  • D: a lock for rack security

Correct Answer: C

You are installing a network device in a 7-inch (18-centimeter) space if you are installing a 4-unit (4U) device in a data center. Although most racks adhere to a standard width of 19 inches (about 48 centimeters), a certain number of U, or height, of space must be available within a rack to allow the installation of your equipment and to allow space between your equipment and other equipment that is contained within the rack. A U is equivalent to 1.75 inches (about 4.5 centimeters) of height. Therefore, if the device you want to install is a 2U device, the rack should have at least 3.5 inches (about 9 centimeters) of available space to accommodate the device and more to allow for space above and below the device. A 4U device will fit into a 7-inch (18-centimeter) rack space. 
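The arithmetic behind the answer can be checked with a short sketch (the helper name is ours):

```python
# One rack unit (U) is 1.75 inches; convert a device's height in U to
# inches and centimeters. 4U works out to exactly 7 inches.

RACK_UNIT_INCHES = 1.75
INCHES_TO_CM = 2.54

def rack_space(units):
    inches = units * RACK_UNIT_INCHES
    return inches, round(inches * INCHES_TO_CM, 2)

print(rack_space(2))  # a 2U device needs 3.5 inches
print(rack_space(4))  # a 4U device needs 7.0 inches
```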
You are not installing a lock for rack security. However, rack security is likely to be a concern when installing a server in a third-party data center. Commercial data centers house devices for multiple customers within the same physical area. Although many data centers are physically secured against intruders who might steal or modify equipment, the data center's other customers have the same access to the physical area that you do. Therefore, you should install physical security mechanisms, such as a lock, at the rack level to ensure that your company’s devices cannot be accessed by others. 
You are not installing an environmental control. However, an environmental control such as airflow, which helps prevent devices from overheating, is likely to be a concern when installing a server in a third-party data center. You should choose a data center that provides environmental controls. For example, a hot and cold aisle layout is a data center design that attempts to control the airflow within the room in order to mitigate problems that can result from overheated servers; it essentially prevents hot air from mixing with cold air. A raised floor layout is a data center design that puts the heating, ventilation, and air conditioning (HVAC) ductwork below the floor tiles. The tiles, which are typically located in the aisles between the server racks in this type of environment, are perforated so that airflow can be directed and concentrated in the exact locations desired. 
You are not installing cabling at the demarc, or demarcation point. The demarc is the termination point between a physical location and its service provider. In other words, it is the point where the responsibility of the physical location ends and the responsibility of the service provider begins. At a third-party data center, the demarc is the responsibility of the data center provider and its service provider, not the data center's customers. 
Reference:
CCDA 200-310 Official Cert Guide, Chapter 4, Data Center Facility Aspects, pp. 136-138  
Cisco: Cabinet and Rack Installation




Question 7

Which of the following is true of the core layer of a hierarchical design?

  • A: It provides address summarization.
  • B: It aggregates LAN wiring closets.
  • C: It aggregates WAN connections.
  • D: It isolates the access and distribution layers.
  • E: It is also known as the backbone layer.
  • F: It performs Layer 2 switching.
  • G: It performs NAC for end users.

Correct Answer: E

The core layer of a hierarchical design is also known as the backbone layer. The core layer is used to provide connectivity to devices connected through the distribution layer. In addition, it is the layer that is typically connected to enterprise edge modules. Cisco recommends that the core layer provide fast transport, high reliability, redundancy, fault tolerance, low latency, limited diameter, and Quality of Service (QoS). However, the core layer should not include features that could inhibit CPU performance. For example, packet manipulation that results from some security, QoS, classification, or inspection features can be a drain on resources. 
The distribution layer of a hierarchical design, not the core layer, provides address summarization, aggregates LAN wiring closets, and aggregates WAN connections. The distribution layer is used to connect the devices at the access layer to those in the core layer. Therefore, the distribution layer isolates the access layer from the core layer. In addition to these features, the distribution layer can also be used to provide policy-based routing, security filtering, redundancy, load balancing, QoS, virtual LAN (VLAN) segregation of departments, inter-VLAN routing, translation between types of network media, routing protocol redistribution, and more. 
The access layer, not the core layer, typically performs Layer 2 switching and Network Admission Control (NAC) for end users. The access layer is the network hierarchical layer where end-user devices connect to the network. For example, port security and Spanning Tree Protocol (STP) toolkit features like PortFast are typically implemented in the access layer. 
Reference:
CCDA 200-310 Official Cert Guide, Chapter 2, Core Layer, pp. 42-43 
Cisco: High Availability Campus Network Design - Routed Access Layer using EIGRP or OSPF: Hierarchical Design




Question 8

In which of the following locations can you not deploy an IPS appliance?

  • A: between two Layer 2 devices on the same VLAN
  • B: between two Layer 2 devices on different VLANs
  • C: between two Layer 3 devices on the same IP subnet
  • D: between two Layer 3 devices on different IP subnets

Correct Answer: D

You cannot deploy an Intrusion Prevention System (IPS) appliance between two Layer 3 devices on different IP subnets. An IPS appliance is a standalone, dedicated device that actively monitors network traffic. An IPS appliance functions similarly to a Layer 2 bridge; a packet entering an interface on the IPS is directed to the appropriate outbound interface without regard to the packet's Layer 3 information. Instead, the IPS uses interface or virtual LAN (VLAN) pairs to determine where to send the packet. This enables an IPS to be inserted into an existing network topology without requiring any disruptive addressing changes. 
For example, an IPS could be inserted on the outside of a firewall to examine all traffic that enters or exits an organization, as shown in the original dump's diagram:

[Diagram not included in this dump.]
Because the IPS in this example is configured to operate in inline mode, it functions similarly to a Layer 2 bridge in that it passes traffic through to destinations on the same subnet. Because all monitored traffic passes through the IPS, it can block malicious traffic, such as an atomic or single-packet attack, before it passes onto the network. However, an inline IPS also adds latency to traffic flows on the network because it must analyze each packet before passing it to its destination. 
An IPS can be deployed between two Layer 2 devices on the same VLAN or between two Layer 2 devices on different VLANs if the VLANs are on the same IP subnet. In addition, the interface on each Layer 2 device can be configured as an access port or as a trunk port. A trunk port tags each frame with VLAN information before it transmits the frame; tagging a frame preserves its VLAN membership as the frame passes across the trunk link. 
Reference:
CCDA 200-310 Official Cert Guide, Chapter 13, IPS/IDS Fundamentals, pp. 534-535




Question 9

Which of the following is used by both NetFlow and NBAR to identify a traffic flow?

  • A: Network layer information
  • B: Transport layer information
  • C: Session layer information
  • D: Application layer information

Correct Answer: B

NetFlow and Network-Based Application Recognition (NBAR) both use Transport layer information to identify a traffic flow. NetFlow is a Cisco IOS feature that can be used to gather flow-based statistics such as packet counts, byte counts, and protocol distribution. A device configured with NetFlow examines packets for select Open Systems Interconnection (OSI) Network layer and Transport layer attributes that uniquely identify each traffic flow. The data gathered by NetFlow is typically exported to management software. You can then analyze the data to facilitate network planning, customer billing, and traffic engineering. For example, NetFlow can be used to obtain information about the types of applications generating traffic flows through a router. 
A traffic flow can be identified based on the unique combination of the following seven attributes:
Source IP address  
Destination IP address  
Source port number  
Destination port number  
Protocol value  
Type of Service (ToS) value  
Input interface 
Although NetFlow does not use Data Link layer information, such as a source Media Access Control (MAC) address, to identify a traffic flow, the input interface on a switch will be considered when identifying a traffic flow. 
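The seven key fields can be modeled as a tuple: packets that match on all seven values are counted as the same flow, and any difference starts a new flow. The addresses and ports below are invented for illustration:

```python
# Sketch of NetFlow's seven key fields as a flow key; packets sharing
# all seven values belong to the same flow. Sample values are made up.

from collections import namedtuple, Counter

FlowKey = namedtuple("FlowKey", [
    "src_ip", "dst_ip", "src_port", "dst_port",
    "protocol", "tos", "input_interface",
])

packets = [
    FlowKey("10.0.0.1", "192.0.2.5", 51000, 80, 6, 0, "Gi0/1"),
    FlowKey("10.0.0.1", "192.0.2.5", 51000, 80, 6, 0, "Gi0/1"),  # same flow
    FlowKey("10.0.0.2", "192.0.2.5", 51001, 80, 6, 0, "Gi0/1"),  # new flow
]
flow_counts = Counter(packets)
print(len(flow_counts))  # 2 distinct flows
```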
NBAR is a Quality of Service (QoS) feature that classifies application traffic that flows through a router interface. NBAR enables a router to perform deep packet inspection for all packets that pass through an NBAR-enabled interface. With deep packet inspection, an NBAR-enabled router can classify traffic based on the content of a Transmission Control Protocol (TCP) or a User Datagram Protocol (UDP) packet, instead of just the network header information. In addition, NBAR provides statistical reporting relative to each recognized application. 
Reference:
Cisco: Cisco IOS Switching Services Configuration Guide, Release 12.2: Capturing Traffic Data




Question 10

Which of the following are required when configuring a VSS? (Choose two.)

  • A: HSRP
  • B: GLBP
  • C: VRRP
  • D: identical supervisor types
  • E: identical IOS versions

Correct Answer: DE

Identical supervisor types and identical IOS versions are required when configuring a Virtual Switching System (VSS). VSS is a Cisco physical device virtualization feature that can enable a pair of chassis-based switches, such as the Cisco Catalyst 6500, to function as a single logical device. There are two identical supervisors in a VSS, one on each physical device, and one control plane. One of the supervisors is active, and the other is designated as hot-standby; the active supervisor manages the control plane. If the active supervisor in a VSS goes down, the hot-standby will automatically take over as the new active supervisor. The supervisors in a VSS are connected through the Virtual Switch Link (VSL). 
Hot Standby Router Protocol (HSRP), Gateway Load Balancing Protocol (GLBP), and Virtual Router Redundancy Protocol (VRRP) are First Hop Redundancy Protocols (FHRPs) and are not required when configuring a VSS. Conversely, one of the benefits of using VSS is that the need for HSRP, GLBP, and VRRP is removed. 
HSRP is a Cisco-proprietary protocol that enables multiple routers to act as a single gateway for the network. Each router is configured with a priority value that ranges from 0 through 255, with 100 being the default priority value and 255 being the highest priority value. 
GLBP is a Cisco-proprietary protocol used to provide router redundancy and load balancing. GLBP enables you to configure multiple routers into a GLBP group; the routers in the group receive traffic sent to a virtual IP address that is configured for the group. 
Like GLBP and HSRP, VRRP provides router redundancy. However, similar to HSRP, only one router is active at any time. If the master router becomes unavailable, one of the backup routers becomes the master router. 
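The priority-based election that HSRP uses can be sketched as follows. Router names are invented, and this simplifies the real protocol, which breaks priority ties by preferring the highest IP address:

```python
# Hedged sketch of HSRP-style active-router election: highest priority
# (0-255, default 100) wins. Real HSRP breaks ties by highest IP address.

DEFAULT_PRIORITY = 100

def elect_active(routers):
    """routers: dict of router name -> configured priority (None = default)."""
    effective = {name: (prio if prio is not None else DEFAULT_PRIORITY)
                 for name, prio in routers.items()}
    return max(effective, key=effective.get)

print(elect_active({"R1": None, "R2": 150, "R3": 90}))  # R2
```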
Reference:
Cisco: Campus 3.0 Virtual Switching System Design Guide: VSL link Initialization and Operational Characteristics









