Try 90 free CompTIA Network+ N10-009 questions across the exam domains, with explanations, then continue with full IT Mastery practice.
This free full-length CompTIA Network+ N10-009 practice exam includes 90 original IT Mastery questions across the exam domains.
These questions are for self-assessment. They are not official exam questions and do not imply affiliation with the exam sponsor.
Count note: this page uses the full-length practice count maintained in the Mastery exam catalog. Some certification vendors publish total questions, scored questions, duration, or unscored/pretest-item rules differently; always confirm exam-day rules with the sponsor.
Need concept review first? Read the CompTIA Network+ N10-009 Cheat Sheet on Tech Exam Lexicon, then return here for timed mocks and full IT Mastery practice.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
Try CompTIA Network+ N10-009 on Web · View the full CompTIA Network+ N10-009 practice page
| Domain | Weight |
|---|---|
| Networking Fundamentals | 24% |
| Network Implementations | 18% |
| Network Operations | 19% |
| Network Security | 19% |
| Network Troubleshooting | 20% |
Use this as one diagnostic run. IT Mastery gives you timed mocks, topic drills, analytics, code-reading practice where relevant, and full practice.
Topic: Network Security
A small company uses an internal HTTPS management portal that currently has a self-signed certificate. Users see browser warnings every time they connect and have become used to clicking through them, which worries the security team. All employees use managed, domain-joined Windows laptops. The security team wants to: (1) stop certificate warnings for this internal portal, (2) reinforce proper certificate trust behavior in browsers, and (3) avoid paying for public certificates for internal-only sites. Which change would BEST meet these goals?
Options:
A. Use Group Policy to configure all browsers to automatically ignore certificate warnings for HTTPS sites on the internal network.
B. Deploy an internal enterprise CA, issue a certificate for the portal, and distribute the CA’s root certificate to all domain computers via Group Policy.
C. Keep using the self-signed certificate but send a security awareness email telling users not to ignore certificate warnings on external websites.
D. Purchase a public wildcard certificate for the company’s domain from a commercial CA and install it on the internal portal.
Best answer: B
Explanation: Public Key Infrastructure (PKI) uses certificate authorities (CAs) to issue digital certificates that prove the identity of servers and clients. Browsers trust a server’s certificate when it forms a valid chain up to a CA in the client’s trusted root store, the name matches, and the certificate is not expired or revoked.
Self-signed certificates are signed by the same key they present, not by a trusted CA. Unless the self-signed certificate (or its issuing root) is explicitly trusted on client devices, browsers will show security warnings. Training users to click through these warnings conditions them to ignore important trust indicators, increasing the risk of falling for real attacks.
In an Active Directory environment, deploying an internal enterprise CA allows the organization to issue certificates for internal services while keeping trust centralized and controlled. The CA’s root certificate can be automatically distributed to domain-joined machines using Group Policy, so internal certificates are trusted without user warnings. This aligns with the goals of eliminating warning fatigue, preserving certificate validation behavior, and avoiding ongoing public CA fees for internal-only sites.
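As a rough sketch (illustrative only, not real certificate validation), the trust decision a browser makes can be modeled like this: a certificate is accepted only when it chains to a CA in the client's root store, the name matches, and it is within its validity period. The certificate fields, CA name, and hostnames below are assumptions for illustration; real validation (for example via Python's `ssl` module) also checks signatures, full chains, and revocation.

```python
from datetime import date

def is_trusted(cert: dict, root_store: set, hostname: str, today: date) -> bool:
    """Simplified trust decision: chains to a trusted root, name matches,
    and the certificate is within its validity period."""
    return (
        cert["issuer"] in root_store                           # trusted CA
        and cert["subject"] == hostname                        # name matches
        and cert["not_before"] <= today <= cert["not_after"]   # not expired
    )

# Self-signed cert: issuer == subject, and that issuer is NOT in the
# default root store, so the browser warns.
self_signed = {"issuer": "portal.corp.local", "subject": "portal.corp.local",
               "not_before": date(2024, 1, 1), "not_after": date(2026, 1, 1)}

# After deploying an enterprise CA and pushing its root via Group Policy,
# the issuing CA is in every domain-joined machine's root store.
enterprise = {"issuer": "Corp-Enterprise-Root-CA", "subject": "portal.corp.local",
              "not_before": date(2024, 1, 1), "not_after": date(2026, 1, 1)}

roots = {"Corp-Enterprise-Root-CA"}  # distributed by Group Policy
today = date(2025, 6, 1)
print(is_trusted(self_signed, roots, "portal.corp.local", today))  # False
print(is_trusted(enterprise, roots, "portal.corp.local", today))   # True
```

This is why option B works: it changes what is in the clients' root stores rather than teaching users to ignore the warning.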
Topic: Network Operations
Which of the following statements about regulatory and industry compliance for networks is NOT correct?
Options:
A. Network teams may be required to retain security and activity logs for specific periods to meet regulatory or contractual requirements.
B. Accurate network diagrams and asset inventories can help demonstrate to auditors that security controls are properly implemented and scoped.
C. Compliance frameworks such as GDPR, HIPAA, or PCI DSS can influence how sensitive data must be encrypted in transit and at rest on the network.
D. Once an organization complies with one major data-protection regulation, it is automatically considered compliant with all other regulations and no further assessments are needed.
Best answer: D
Explanation: Compliance requirements such as data-protection regulations and industry standards affect how networks are designed, documented, and operated. They often require controls like encryption, logging, access restrictions, and clear documentation. However, each framework has its own scope and rules, so organizations must evaluate and meet them individually.
The incorrect statement claims that compliance with one major regulation automatically means compliance with all others. In reality, GDPR, HIPAA, PCI DSS, and similar frameworks each have different purposes (for example, privacy, healthcare, payment cards), cover different types of data, and define different technical and procedural controls. Network and security teams must understand which frameworks apply to their organization and ensure that network controls meet each applicable requirement.
The other statements reflect common compliance-related expectations: retaining logs for investigations and audits, using encryption for sensitive data, and maintaining clear network documentation to show where controls are applied.
Topic: Network Implementations
A network technician is configuring three 2.4GHz access points placed along a straight hallway so their coverage areas slightly overlap. To minimize adjacent-channel interference while still using only the 2.4GHz band, which channel assignment is MOST appropriate?
Options:
A. AP1: channel 1, AP2: channel 6, AP3: channel 11
B. AP1: channel 3, AP2: channel 7, AP3: channel 11
C. AP1: channel 2, AP2: channel 6, AP3: channel 10
D. AP1: channel 1, AP2: channel 1, AP3: channel 6
Best answer: A
Explanation: In the 2.4GHz band, most enterprise deployments use 20MHz channels and rely on the three standard non-overlapping channels: 1, 6, and 11. Their center frequencies (2412, 2437, and 2462MHz) are spaced 25MHz apart, so their frequency ranges do not overlap; assigning each AP a different channel from this set avoids co-channel interference, and the spacing avoids adjacent-channel interference where cells overlap.
For a simple hallway with three access points whose coverage areas overlap, the goal is to assign each AP one of the three non-overlapping channels so that no two neighboring cells share the same channel and none of the APs use partially overlapping channels like 2, 3, 4, 5, 7, 8, 9, or 10. This pattern allows channel reuse while keeping interference low.
The deciding attribute in this question: only choice A assigns channels 1, 6, and 11 (each used once), which is the only fully non-overlapping 2.4GHz channel plan for three adjacent APs.
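The channel math above can be sketched in a few lines of Python. This is a simplified model: channel centers in the 2.4GHz band are 2407 + 5 × n MHz, and legacy 2.4GHz channels are about 22MHz wide, so two channels interfere when their centers are closer than 22MHz (fewer than five channel numbers apart).

```python
def center_mhz(channel: int) -> int:
    """Center frequency of a 2.4GHz channel in MHz."""
    return 2407 + 5 * channel

def plan_is_clean(channels: list, width_mhz: int = 22) -> bool:
    """True if no two APs share a channel or use overlapping channels."""
    for i, a in enumerate(channels):
        for b in channels[i + 1:]:
            if abs(center_mhz(a) - center_mhz(b)) < width_mhz:
                return False
    return True

print(plan_is_clean([1, 6, 11]))   # True  -> centers 2412/2437/2462, 25MHz apart
print(plan_is_clean([3, 7, 11]))   # False -> 20MHz spacing still overlaps
print(plan_is_clean([2, 6, 10]))   # False -> same problem
print(plan_is_clean([1, 1, 6]))    # False -> co-channel on neighboring APs
```

Running the distractor channel plans through this check shows why only 1/6/11 passes.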
Topic: Network Security
A malicious actor calls an organization’s help desk claiming to be a senior network engineer working remotely. They reference a fabricated change ticket number and insist they are locked out of the corporate VPN, requesting that their VPN password be reset and read to them over the phone. Which social engineering technique does this attack BEST represent?
Options:
A. Pretexting
B. Tailgating
C. Phishing
D. Spear phishing
Best answer: A
Explanation: This scenario centers on a social engineering attack aimed at obtaining network access credentials. The attacker calls the help desk, impersonates a senior network engineer, cites a fake change ticket, and pressures the staff member to reset and disclose a VPN password.
This behavior is characteristic of pretexting, where the attacker constructs a believable cover story (pretext) and assumed identity to gain the target’s trust. From a network perspective, if the help desk complies, the attacker gains valid VPN credentials that can be used to access internal network resources as if they were a legitimate engineer.
Understanding the differences between phishing, spear phishing, and pretexting helps technicians recognize how attackers attempt to steal credentials used for VPNs, Wi‑Fi, privileged accounts, and other network services. While all are social engineering techniques, they use different channels and methods (mass email vs targeted email vs live impersonation with a detailed story).
Topic: Network Security
Which of the following statements about log management for change tracking and security investigations is NOT correct?
Options:
A. Retaining logs for a reasonable period helps support security investigations and compliance requirements.
B. It is usually sufficient to store logs only on each individual device; forwarding them to a centralized log system is unnecessary overhead.
C. Audit logs should record who made configuration changes and when they occurred to support accountability.
D. Synchronizing device clocks with NTP improves the usefulness of logs during investigations by making timestamps consistent across systems.
Best answer: B
Explanation: Effective log management is a core part of securing network management and remote access. Logs and audit trails provide a record of what changed, who changed it, and when, which is critical for both routine troubleshooting and post-incident investigations.
Accurate audit logs should capture key details such as the user or account that performed an action, the time it occurred, and the type of change. To make these records reliable across many devices, organizations typically synchronize clocks using NTP so that timestamps can be compared directly.
Logs also need to be retained for a sufficient period to support typical investigation and compliance needs. While the exact duration varies, the principle is that short retention windows can cause important evidence to be lost.
Storing logs only on individual devices is not considered sufficient. Devices can fail, be wiped, or be tampered with during an attack. Centralizing logs in a dedicated logging server or SIEM improves integrity, availability, correlation, and searchability of log data, making investigations and change tracking much more effective.
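The correlation benefit of centralized logs can be illustrated with a small sketch: once every device forwards to one place and clocks are NTP-synchronized, per-device logs can be merged into a single chronological timeline. Device names, messages, and timestamps below are invented for illustration.

```python
from datetime import datetime

# Per-device logs as (ISO timestamp, device, message); NTP-synchronized
# clocks make these timestamps directly comparable.
fw_logs = [("2025-06-01T09:02:11", "fw1", "rule change by admin2")]
sw_logs = [("2025-06-01T09:01:58", "sw3", "login from 10.0.0.15"),
           ("2025-06-01T09:03:40", "sw3", "config saved")]

def merged_timeline(*sources):
    """Merge per-device logs into one chronologically ordered view,
    as a central syslog/SIEM server would."""
    events = [e for src in sources for e in src]
    return sorted(events, key=lambda e: datetime.fromisoformat(e[0]))

for ts, device, msg in merged_timeline(fw_logs, sw_logs):
    print(ts, device, msg)
```

With only device-local logs, reconstructing this sequence (login, then firewall rule change, then config save) would require manually correlating files that may already have rotated away.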
Topic: Networking Fundamentals
Which TWO statements about remote access technologies are correct? (Select TWO.)
Options:
A. Telnet is recommended instead of SSH for managing devices over the public internet because it avoids the performance cost of encryption.
B. Clientless SSL VPNs usually allow users to connect through a standard web browser without installing a separate VPN client to reach specific internal web applications.
C. SSH provides encrypted command-line access to remote devices and is commonly used to manage network equipment over TCP port 22.
D. RDP is limited to transferring files only and cannot display a remote desktop session to the user.
E. Modern remote-access VPNs for employees typically rely on PPTP because it is considered the most secure tunneling protocol available.
Correct answers: B and C
Explanation: Remote access technologies let users manage devices and reach internal resources from other networks, such as over the internet. SSH is a secure, encrypted protocol commonly used for command-line management of network devices and servers. Clientless SSL VPNs typically provide remote access through a web browser, using SSL/TLS to secure the connection and exposing selected internal web-based resources instead of the whole network.
Other technologies, such as RDP, provide full remote desktop sessions, while VPN tunneling protocols like IPsec or SSL/TLS are preferred over legacy and insecure options such as PPTP or Telnet. Modern best practice emphasizes encryption and least-privilege access for remote connections.
Topic: Networking Fundamentals
A small company has been assigned a single public IPv4 address from its ISP for Internet access. Currently, several PCs on the office LAN are configured with public IP addresses from an older block that is being reclaimed by the ISP, and the owner is worried about both address exhaustion and exposing internal hosts directly to the Internet.
The company wants to conserve its limited public IPv4 address space and stop exposing internal hosts directly to the Internet.
Which of the following actions/solutions will best address this issue or requirement? (Select TWO.)
Options:
A. Configure IPv6 link-local addresses on all internal hosts so they can reach the Internet without NAT
B. Assign each internal device a unique public IP address from the ISP and enable DHCP to manage those assignments
C. Readdress the office LAN to use a private RFC1918 range such as 192.168.10.0/24 on all internal hosts
D. Configure port address translation (PAT) on the edge router/firewall so multiple internal hosts share the single public IP
E. Create a static one-to-one NAT mapping for each internal host to a different public IP address
Correct answers: C and D
Explanation: This scenario is about using private versus public IP addresses and applying NAT/PAT at the network edge.
In a typical small or medium enterprise, internal hosts use private RFC1918 IP addresses (for example, 192.168.0.0/16, 10.0.0.0/8) and do not appear directly on the public Internet. An edge router or firewall holds one or more public IP addresses provided by the ISP and performs network address translation (NAT) for traffic going in and out.
Port address translation (PAT), also called many-to-one or NAT overload, is the most common outbound model for IPv4: many internal private addresses share a single public address, with the device keeping track of sessions using TCP/UDP port numbers. This preserves scarce public IPv4 space and adds a basic layer of address-hiding for internal hosts.
In this question, the company only has one public IPv4 address. The best solution is to move internal hosts to a private range and use PAT on the edge device so everyone can still access the Internet through that single public IP while no longer exposing internal hosts directly with public addresses.
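The translation table a PAT device maintains can be sketched as follows. This is a simplified model: many private (IP, port) pairs share one public IP, and the device distinguishes return traffic by the public-side source port. The public IP and port values here are illustrative, not from the scenario.

```python
class PatTable:
    """Minimal sketch of PAT (NAT overload) session tracking."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 40000
        self.outbound = {}   # (private_ip, private_port) -> public_port
        self.inbound = {}    # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in self.outbound:              # new session: allocate a port
            self.outbound[key] = self.next_port
            self.inbound[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.outbound[key])

    def translate_in(self, public_port: int):
        # None models an unsolicited inbound packet with no session: dropped.
        return self.inbound.get(public_port)

pat = PatTable("203.0.113.10")  # the company's single public IPv4 address
print(pat.translate_out("192.168.10.21", 51515))  # ('203.0.113.10', 40000)
print(pat.translate_out("192.168.10.22", 51515))  # ('203.0.113.10', 40001)
print(pat.translate_in(40001))                    # ('192.168.10.22', 51515)
print(pat.translate_in(49999))                    # None
```

Note how two internal hosts using the same source port are still kept separate by the public-side port, which is what lets an entire RFC1918 LAN share one public address.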
Topic: Network Troubleshooting
A branch office reports that users can browse public websites by name but cannot access the internal file server using its FQDN filesrv1.corp.local. Pinging the file server’s IP address works from the same PCs. A review of a sample client shows:
```
Windows IP Configuration

   IPv4 Address. . . . . . . . . . : 10.20.5.23
   Subnet Mask . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . : 10.20.5.1
   DNS Servers . . . . . . . . . . : 8.8.8.8
```
The corporate DNS server for internal records is 10.10.1.10. Which action should the technician take to restore internal name resolution for all branch users?
Options:
A. Change the default gateway on branch clients to 10.10.1.10 so internal traffic routes correctly
B. Manually add a hosts file entry for filesrv1.corp.local on each affected PC
C. Update the branch DHCP scope to hand out 10.10.1.10 as the DNS server, then renew client leases
D. Reduce the subnet mask on branch clients to 255.255.0.0 to allow access to the internal server
Best answer: C
Explanation: The clients in the branch office have working IP connectivity to the internal file server, as shown by successful pings to its IP address. However, they cannot resolve the server’s FQDN, which points to a name resolution problem rather than a routing or addressing issue.
The ipconfig output shows the clients are using a public DNS server (8.8.8.8). Public resolvers do not hold private internal records like filesrv1.corp.local, so lookups fail. The corporate internal DNS server at 10.10.1.10 is the authoritative source for that zone.
To fix this for all users, the technician should correct the DHCP scope so it distributes the internal DNS server address. Once clients renew their DHCP leases, they will use the internal DNS server and internal name resolution will work again, while internet name resolution can still function via the internal server’s forwarders.
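The failure mode can be sketched with two toy resolvers: a public resolver that has no records for the private corp.local zone, and the internal server that holds the zone and forwards public names. The file server's IP (10.10.1.20) and the public record are assumptions for illustration; the scenario only states that the server's IP is pingable.

```python
# Toy record sets: a dict stands in for each server's view of the namespace.
PUBLIC_DNS = {"www.example.com": "93.184.216.34"}          # e.g. 8.8.8.8
INTERNAL_DNS = {"filesrv1.corp.local": "10.10.1.20",       # internal zone
                "www.example.com": "93.184.216.34"}        # forwarded names

def resolve(server: dict, fqdn: str):
    """Return the record if the server can answer; None models NXDOMAIN."""
    return server.get(fqdn)

print(resolve(PUBLIC_DNS, "filesrv1.corp.local"))    # None -> lookup fails
print(resolve(INTERNAL_DNS, "filesrv1.corp.local"))  # '10.10.1.20'
print(resolve(INTERNAL_DNS, "www.example.com"))      # public names still work
```

This is why pointing the DHCP scope at 10.10.1.10 fixes both cases: the internal server answers for corp.local and forwards everything else.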
Topic: Network Security
A company’s security policy requires the perimeter firewall to allow access to specific SaaS applications (such as Office 365 and Salesforce) while blocking other web applications, even though they all use HTTPS over port 443. Which type of firewall capability BEST meets this requirement?
Options:
A. A traditional stateful firewall that filters based only on IP addresses, ports, and connection state
B. A next-generation firewall that performs deep packet inspection with application awareness
C. A router using stateless ACLs to permit or deny traffic on TCP port 443
D. A circuit-level gateway that tracks TCP handshakes but does not inspect application payloads
Best answer: B
Explanation: Firewalls conceptually act as filters between networks, enforcing security policies by inspecting traffic and deciding whether to allow or block it. Early firewalls primarily filtered on source/destination IP addresses and ports, with stateful inspection adding awareness of connection state (such as established TCP sessions).
However, when many different applications all use the same ports (for example, HTTPS on TCP 443), simple IP/port filtering is no longer enough. To distinguish specific SaaS applications like Office 365 or Salesforce from other HTTPS traffic, the firewall must be able to look deeper into the packet payload and understand the application protocol. This is where next-generation firewalls come in.
Next-generation firewalls (NGFWs) extend traditional stateful inspection by adding features such as deep packet inspection, application awareness and control, and sometimes integrated IDS/IPS. Application-aware firewalls can identify traffic by application signatures or behavioral patterns, allowing administrators to write rules like “allow Office 365” or “block unknown web apps” even though they all use the same port.
In this scenario, the discriminating factor is application-layer awareness (Layer 7): only a firewall with deep packet inspection and application awareness can selectively allow some HTTPS-based SaaS apps while blocking others that use the same port.
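The difference between port-based and application-aware filtering can be sketched as two decision functions. The app identifiers and allowlist below are illustrative; a real NGFW derives the application from signatures, TLS SNI, or similar inspection rather than a label handed to it.

```python
ALLOWED_APPS = {"office365", "salesforce"}   # illustrative policy

def stateful_decision(dst_port: int) -> str:
    """Port-based filtering: every HTTPS application looks identical."""
    return "allow" if dst_port == 443 else "deny"

def ngfw_decision(dst_port: int, app_id: str) -> str:
    """Application-aware filtering: same port, different verdicts."""
    if dst_port == 443 and app_id in ALLOWED_APPS:
        return "allow"
    return "deny"

print(stateful_decision(443))            # 'allow' -- for ANY web app on 443
print(ngfw_decision(443, "salesforce"))  # 'allow'
print(ngfw_decision(443, "random-app"))  # 'deny'
```

The port-only function cannot express the policy at all, which is the point of the question: the discriminator has to live above Layer 4.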
Topic: Networking Fundamentals
An online backup application must ensure that every byte of a large file arrives at the destination host exactly once and in the correct order, even if the network drops or reorders some packets. Which transport-layer communication principle best meets this requirement?
Options:
A. Separating devices into VLANs to reduce broadcast traffic
B. Distributing client requests across multiple servers using a virtual IP
C. Minimizing latency by avoiding connection setup and acknowledgments
D. Using a connection-oriented, reliable stream with acknowledgments and retransmissions
Best answer: D
Explanation: At the transport layer, TCP and UDP offer different communication principles. TCP is connection-oriented: it establishes a session (3-way handshake), numbers segments, uses acknowledgments (ACKs), and retransmits lost data. This provides a reliable byte stream, ensuring that all data arrives, in order, with no duplication.
UDP is connectionless: it sends independent datagrams without establishing a session or tracking delivery. This reduces overhead and latency but provides no built-in guarantee that data arrives, arrives in order, or arrives only once.
For an online backup application transferring large files, the business requirement is reliability: every byte must reach the destination exactly once and in sequence, even if the network drops or reorders packets. That directly maps to the principle of connection-oriented, reliable transport using acknowledgments and retransmissions, which is what TCP provides.
By contrast, principles like minimizing latency with connectionless delivery (UDP), load balancing, or VLAN-based segmentation address other goals (speed, scalability, network organization) but do not themselves guarantee ordered, complete delivery of file data.
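The reliability principle itself can be sketched in miniature: sequence numbers let a receiver reorder segments, ignore duplicates, and deliver only contiguous data, which is the TCP behavior this question describes (greatly simplified, with no ACKs, windows, or retransmission timers).

```python
def reassemble(segments):
    """segments: (seq_number, data) pairs, possibly out of order or
    duplicated. Returns the in-order byte stream, each byte exactly once."""
    buffer = {}
    for seq, data in segments:
        buffer.setdefault(seq, data)      # duplicates are ignored
    stream = b""
    expected = 0
    while expected in buffer:             # deliver contiguous data only
        data = buffer[expected]
        stream += data
        expected += len(data)
    return stream

# Segments arrive reordered and with a duplicate, as on a lossy network.
arrived = [(5, b"world"), (0, b"hello"), (5, b"world")]
print(reassemble(arrived))  # b'helloworld'
```

A UDP-style receiver would simply hand the application `world`, `hello`, `world` in arrival order, which is why the backup application's requirement maps to TCP.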
Topic: Network Implementations
A network administrator manages a remote branch router using SSH to its LAN interface. After applying a new ACL to the WAN interface, all Internet access from the branch stops and the SSH session is dropped. Monitoring shows the router is still powered, and the site’s LTE-based console server is reachable from the NOC. Which action is the MOST appropriate for the administrator to take next?
Options:
A. Have on-site staff perform a factory reset of the router to restore default settings
B. Connect to the LTE-based console server and access the router’s console port to review and fix the ACL
C. Use the configuration management tool to push a corrected ACL over the primary WAN connection
D. Wait to see if the WAN link recovers on its own before taking any further action
Best answer: B
Explanation: In this scenario, the administrator applied a new ACL and immediately lost both user Internet access and the SSH management session. This strongly suggests the ACL is blocking necessary traffic, including in-band management. Because the router is still powered but unreachable over its normal interfaces, the in-band management path is unusable.
The branch has an LTE-based console server that remains reachable, which is an out-of-band (OOB) management path. By connecting to the console server, the administrator can access the router’s console port, bypassing the failed production network. From the console, the administrator can review or roll back the ACL without depending on the blocked WAN link.
This illustrates the reliability benefit of OOB management: when in-band management fails due to configuration errors or outages, an independent console/OOB path still allows safe recovery. It must be secured carefully, but it greatly reduces the risk of being locked out of critical devices.
Topic: Network Operations
An organization updates its change-management policy so that no single administrator can both approve and implement a production network change. All change tickets must be approved by a separate reviewer, and only users in a specific role group can apply approved changes on devices. Which principle is this policy primarily enforcing to reduce the risk of unauthorized or unreviewed changes?
Options:
A. Least privilege
B. Availability
C. Separation of duties
D. Defense in depth
Best answer: C
Explanation: The scenario describes a change-management policy where one person must approve a change and a different person actually applies it. This is designed to prevent any single administrator from pushing an unauthorized or poorly reviewed change into production.
This is the essence of separation of duties: splitting critical tasks among multiple individuals so that no one person can complete the full sequence alone. In network operations, this reduces the risk of unreviewed or malicious configuration changes and helps ensure that changes follow formal processes, including documentation and peer review.
While access control and role groups are mentioned, the central risk being addressed is the possibility of a single individual making unapproved changes, which is why separation of duties is the best-matching principle here.
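The policy in this scenario reduces to two checks, which can be sketched directly: the approver and implementer must be different people, and only members of the implementer role group may apply changes. Names and the ticket structure are invented for illustration.

```python
IMPLEMENTER_ROLE = {"netops-admin1", "netops-admin2"}   # illustrative role group

def can_apply(change: dict, user: str) -> bool:
    """Separation of duties: approved by someone else, applied by a
    member of the implementer role."""
    return (
        change["approved_by"] is not None
        and change["approved_by"] != user        # no self-approval
        and user in IMPLEMENTER_ROLE             # role-restricted apply
    )

ticket = {"id": "CHG-1042", "approved_by": "netops-admin1"}
print(can_apply(ticket, "netops-admin2"))  # True  -- approved by someone else
print(can_apply(ticket, "netops-admin1"))  # False -- approver cannot implement
print(can_apply(ticket, "contractor7"))    # False -- not in implementer role
```

Least privilege alone would be only the role check; it is the "different person approves" condition that makes this separation of duties.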
Topic: Network Implementations
A company has extended its on‑premises network (10.0.0.0/16) to a public cloud VPC (10.20.0.0/16) using a site‑to‑site IPsec VPN. Users report they cannot reach the new internal web app using its FQDN app.corpcloud.com, but the network engineer can ping the app’s private IP over the VPN.
Review the exhibit:
| Check | Result |
|---|---|
| VPN status | Up – routes 10.0.0.0/16 ↔ 10.20.0.0/16 |
| Ping from on‑prem to 10.20.10.5 | Success (low latency) |
| DNS lookup from on‑prem | app.corpcloud.com → 203.0.113.45 (public) |
| Cloud firewall rule | Allow src 10.0.0.0/16 → dst 10.20.10.5 |
Based on the exhibit, which action is the most appropriate NEXT step to allow on‑premises users to reach the app securely over the VPN using its FQDN?
Options:
A. Change the VPN configuration to use IKEv2 instead of IKEv1 for the site‑to‑site tunnel
B. Add a static route on the on‑premises router sending traffic for 203.0.113.45 through the VPN tunnel
C. Configure the on‑premises DNS server with a conditional forwarder or private zone so app.corpcloud.com resolves to 10.20.10.5
D. Modify the cloud firewall rule to allow any source to reach 203.0.113.45 on HTTPS
Best answer: C
Explanation: In a hybrid cloud deployment, three key pieces must work together for internal users to reach a cloud application securely: network connectivity (the VPN tunnel and routes between the on-premises and VPC ranges), security policy (firewall rules that permit the traffic), and name resolution (DNS answers that point internal clients at the reachable private address).
In the exhibit, connectivity is working: the VPN is up and on‑prem can ping the app’s private IP 10.20.10.5. The cloud firewall rule allows traffic from 10.0.0.0/16 to 10.20.10.5. The problem is DNS: on‑premises DNS resolves app.corpcloud.com to the public IP 203.0.113.45, which does not match the private route and firewall rule.
The best fix in a hybrid design is to use split‑horizon DNS or a conditional forwarder/private DNS zone so that on‑premises clients resolve app.corpcloud.com to the private 10.20.10.5 when they are inside the corporate network. That way, users can reach the app via the VPN using the same FQDN, and the existing security rule still enforces private‑only access.
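Split-horizon behavior can be sketched with the standard library's `ipaddress` module: the answer returned depends on which network the querying client sits in. The record values match the exhibit; the client IPs are illustrative.

```python
import ipaddress

# Networks whose clients should receive the "internal" view.
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/16"),
                 ipaddress.ip_network("10.20.0.0/16")]

RECORDS = {"app.corpcloud.com": {"internal": "10.20.10.5",     # over the VPN
                                 "external": "203.0.113.45"}}  # public path

def resolve(fqdn: str, client_ip: str) -> str:
    """Return the view-appropriate record for this client."""
    client = ipaddress.ip_address(client_ip)
    view = "internal" if any(client in n for n in INTERNAL_NETS) else "external"
    return RECORDS[fqdn][view]

print(resolve("app.corpcloud.com", "10.0.3.7"))      # '10.20.10.5'
print(resolve("app.corpcloud.com", "198.51.100.9"))  # '203.0.113.45'
```

Same FQDN, two answers: internal clients are steered through the VPN to the address the firewall rule already permits, while outside clients keep the public record.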
Topic: Networking Fundamentals
A regional office is deploying Wi‑Fi across three floors. The design calls for about 30 access points connected to existing Ethernet switches, with identical SSIDs on all floors, fast roaming for mobile users, and the ability for a small IT staff to push security and configuration changes from a single interface. Which wireless architecture would be MOST appropriate for this deployment?
Options:
A. Ad hoc (peer-to-peer) wireless networks created by user devices on each floor
B. Independent access points operating in standalone infrastructure mode
C. A wireless mesh network where APs rely on wireless backhaul instead of Ethernet
D. A controller-based wireless LAN with lightweight access points managed centrally
Best answer: D
Explanation: The scenario describes an office with about 30 access points, wired Ethernet already in place, and clear requirements for uniform SSIDs, fast roaming, and centralized management by a small IT team. This aligns strongly with a controller-based wireless LAN design.
In a controller-based architecture, multiple lightweight APs connect to a central wireless LAN controller over the wired network. The controller handles key functions such as authentication, roaming decisions, and configuration distribution. Administrators can define SSIDs, security policies, and RF settings once on the controller and have them automatically applied to all managed APs. This greatly simplifies managing dozens of APs and improves client roaming because decisions can be coordinated centrally.
By contrast, standalone infrastructure APs require per-device configuration, ad hoc networks lack enterprise features entirely, and mesh is focused on solving backhaul constraints rather than simplifying management when Ethernet is already available. Therefore, a controller-based wireless LAN with lightweight APs best satisfies all of the stated business requirements.
Topic: Network Operations
A small company has three access switches and one edge router. Currently, each device is configured manually via the CLI, and there are no configuration backups. A recent misconfiguration on a switch caused a long outage because the technician had nothing to roll back to. Management now wants to maintain regular configuration backups, be able to roll back quickly after a bad change, and keep configurations consistent across devices.
Which of the following is the BEST approach to meet these goals without adding unnecessary complexity?
Options:
A. Implement an automated system that backs up all device configurations to a central repository with version control and uses standardized templates for each device role.
B. Enable automatic saving of the running configuration to startup on each device after every change, but do not export configurations off the devices.
C. Periodically copy the running configuration from a known “good” access switch and paste it to the other switches whenever they need to be updated.
D. After major changes, have technicians export configuration files manually to a USB drive plugged into each device and store the files on a shelf in the network closet.
Best answer: A
Explanation: Configuration management and change control processes help networks stay stable, recover quickly from problems, and remain consistent as they grow. For network devices, this typically means maintaining regular configuration backups, using templates for common device roles, and tracking changes with version control.
An automated backup system that pulls configurations from switches and routers to a central repository ensures there is always a recent copy available if a device fails or a configuration change goes badly. When these backups are stored with version control, you can see what changed over time, compare versions, and quickly roll back to a known-good configuration.
Templates for common roles (for example, access switches vs. the edge router) provide a standard baseline. This makes it easier to deploy new devices, avoid configuration drift, and ensure consistent security and network behaviors. Combined, automated backups, templates, and versioning directly support disaster recovery, easy rollback, and configuration consistency without requiring deep enterprise tooling that would be overly complex for a small environment.
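The backup-and-rollback workflow can be sketched with a minimal versioned repository using the standard library's `difflib`. Real deployments would use purpose-built tools (or Git) rather than this toy class; device names and config lines are illustrative.

```python
import difflib

class ConfigRepo:
    """Toy central repository: versions each device's configuration,
    diffs versions, and hands back a known-good copy for rollback."""
    def __init__(self):
        self.history = {}   # device -> list of config snapshots

    def backup(self, device: str, config: str):
        self.history.setdefault(device, []).append(config)

    def diff(self, device: str, old: int, new: int):
        a = self.history[device][old].splitlines()
        b = self.history[device][new].splitlines()
        return list(difflib.unified_diff(a, b, lineterm=""))

    def rollback(self, device: str, version: int) -> str:
        return self.history[device][version]   # config to push back

repo = ConfigRepo()
repo.backup("sw1", "hostname sw1\nvlan 10\n")
repo.backup("sw1", "hostname sw1\nvlan 10\nvlan 666\n")   # the bad change
print(repo.diff("sw1", 0, 1))    # shows the added 'vlan 666' line
print(repo.rollback("sw1", 0))   # known-good config for recovery
```

This is exactly what was missing in the outage scenario: a version to compare against and a known-good copy to restore.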
Topic: Network Troubleshooting
A user on a wired connection reports that web pages are very slow to load, but pings to the default gateway show low latency and no loss. On the access switch, the technician checks the user’s port and sees:
| Port | Link LED | Speed LED | Errors |
|---|---|---|---|
| Gi0/12 | Solid green | Amber (100 Mb/s) | Many late collisions |
Other nearby users on the same switch do not have issues and show 1 Gb/s links with no errors. Which action would BEST resolve this user’s problem?
Options:
A. Upgrade the access switch uplink to 10 Gb/s so the user’s traffic is not bottlenecked
B. Configure both the switch port and the user’s NIC to auto-negotiate speed and duplex, then reconnect the cable
C. Replace the patch cable with a known-good Cat6 cable to eliminate possible physical layer faults
D. Move the user’s connection to a different VLAN with fewer users to reduce congestion
Best answer: B
Explanation: The key clues are the late collisions and the fact that this port is operating at 100 Mb/s while other similar ports are running at 1 Gb/s without issues. Late collisions typically indicate a duplex mismatch, where one side of the link is operating in half‑duplex and the other in full‑duplex. This causes frames to collide after the normal collision window, leading to retransmissions and poor throughput even if basic connectivity (like ping) still works.
On modern Ethernet networks, best practice for typical access links is to allow auto‑negotiation on both ends (switch and NIC). When both sides are set to auto, they can agree on the highest common speed and on full‑duplex, eliminating collisions entirely. Once the link comes up at 1 Gb/s full‑duplex with no collisions, the user’s slow web performance should normalize.
This scenario maps to Network+ Domain 5 (Network Troubleshooting), specifically interpreting interface indicators and statistics (link lights, speed LEDs, and error counters) to identify and correct speed/duplex mismatches on wired links.
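The troubleshooting heuristic from this question can be stated as a tiny rule: late collisions on a modern switched network, combined with a link that negotiated down to 100 Mb/s while its peers run at 1 Gb/s, point to a speed/duplex problem. The counter and field names below are invented for illustration, not a real switch API.

```python
def suspect_duplex_mismatch(port: dict) -> bool:
    """Heuristic only: late collisions should not occur at all on a
    healthy full-duplex link, so any nonzero count on a slow port
    warrants checking speed/duplex settings on both ends."""
    return port["late_collisions"] > 0 and port["speed_mbps"] < 1000

user_port = {"name": "Gi0/12", "speed_mbps": 100, "late_collisions": 742}
peer_port = {"name": "Gi0/13", "speed_mbps": 1000, "late_collisions": 0}

print(suspect_duplex_mismatch(user_port))  # True  -> set both ends to auto
print(suspect_duplex_mismatch(peer_port))  # False -> healthy gigabit link
```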
Topic: Network Security
A company recently discovered that a suspected insider changed firewall rules two weeks ago, but the security team cannot confirm details because most device logs were stored only locally and rotated after 48 hours. Management now requires better support for change tracking and future security investigations without specifying exact retention times.
Which of the following actions/solutions will best address this issue or requirement? (Select TWO.)
Options:
A. Allow each device to store its own logs locally but increase local storage capacity so logs rotate less often.
B. Keep only nightly configuration backups of routers and switches, using them as the primary audit trail for any changes.
C. Increase the log retention period on the central log system and archive older logs to low-cost storage instead of deleting them.
D. Rely on SNMP polling from existing monitoring tools instead of collecting logs, because SNMP data already shows device status.
E. Deploy a centralized syslog/SIEM server and configure all network devices and servers to forward security and configuration-change logs to it.
F. Enable full debug logging on all routers and switches permanently to capture as much detail as possible.
Correct answers: C and E
Explanation: For effective change tracking and security investigations, organizations should centralize logs from critical systems and retain them long enough to reconstruct past events. When logs live only on individual devices and are quickly rotated, analysts cannot review the history of configuration changes, logins, or suspicious activity.
Using a centralized syslog or SIEM platform allows network and security devices, servers, and applications to send their logs to an aggregated, controlled environment. This makes it easier to correlate events across systems, protect log integrity, and search quickly during incidents. Extending log retention and archiving to lower-cost storage ensures that enough historical data is available without requiring precise retention numbers in policy.
By contrast, relying on device-local logs, debug-level logging everywhere, or only configuration backups does not provide a complete, scalable, or reliable audit trail for investigations.
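The retention-and-archive idea can be expressed as a simple tiering rule. The day thresholds below are assumptions for illustration only, since the scenario deliberately avoids specifying exact retention times.

```python
from datetime import date

# Illustrative retention tiers (the numbers are assumptions, not
# from any standard): keep recent logs hot and searchable, move
# older ones to low-cost archive storage instead of deleting them.
HOT_DAYS = 90        # searchable on the central syslog/SIEM
ARCHIVE_DAYS = 365   # moved to cheap object storage

def retention_action(log_date, today):
    age = (today - log_date).days
    if age <= HOT_DAYS:
        return "keep-hot"
    if age <= ARCHIVE_DAYS:
        return "archive"
    return "review-before-delete"

today = date(2025, 6, 1)
print(retention_action(date(2025, 5, 20), today))   # recent log stays hot
print(retention_action(date(2024, 12, 1), today))   # older log gets archived
```

The key property is that nothing rotates away after 48 hours: every log lands centrally first, and aging only moves it to cheaper storage.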
Topic: Networking Fundamentals
Which statement BEST describes the role of a wireless LAN controller in a controller-based enterprise Wi‑Fi deployment?
Options:
A. It extends the signal of an existing wireless network by repeating frames, improving coverage without changing how APs are managed.
B. It centralizes configuration, security policies, and RF management for multiple lightweight access points, handling control‑plane functions while the APs forward client data traffic.
C. It provides wireless coverage and directly bridges client traffic between the wireless and wired LAN at a single location.
D. It is a cloud-hosted dashboard that manages many geographically distributed network devices over the internet without requiring on‑premises control hardware.
Best answer: B
Explanation: In a controller-based wireless architecture, the wireless LAN controller is responsible for the control plane of the Wi‑Fi network. It centralizes configuration of SSIDs, VLAN mappings, security settings, firmware updates, and radio parameters (channels, power levels) for many lightweight access points.
The access points themselves focus mainly on data-plane tasks: transmitting and receiving client frames over the air and forwarding those frames to the wired LAN. Separating control and data planes in this way simplifies management, enforces consistent policies, and allows features like coordinated RF optimization and fast roaming.
Cloud-managed networking appliances provide similar centralized management benefits, but the control logic is hosted in the vendor’s cloud instead of on a local wireless LAN controller appliance or VM.
Topic: Network Security
A company recently experienced a malware outbreak traced to unmanaged employee laptops plugged into open Ethernet jacks and connecting to the corporate Wi‑Fi. Management wants to ensure that only devices that meet security requirements (such as current patches and active antivirus) receive full access to internal resources. Non-compliant or unknown devices should be allowed only limited connectivity until they are remediated.
Which of the following actions/solutions will best address this issue or requirement? (Select TWO.)
Options:
A. Implement a stateful firewall rule set that blocks all unsolicited inbound traffic from the internet to internal clients.
B. Deploy a NAC solution that performs posture assessment (checking antivirus status, OS patch level, and host firewall) before granting devices access to internal VLANs.
C. Enable switch port security on all access ports to limit each port to a single learned MAC address.
D. Deploy a standalone IDS sensor on the core switch to alert on suspicious or anomalous internal traffic.
E. Configure NAC policies so that non-compliant or unknown devices are automatically placed into a restricted remediation or guest VLAN with limited access.
Correct answers: B and E
Explanation: Network access control (NAC) is designed to control which endpoints are allowed onto the network and what level of access they receive, based on identity and device health. Posture assessment is the process of evaluating an endpoint’s security state—such as antivirus status, OS patch level, and host firewall configuration—before granting full access.
In this scenario, unmanaged and potentially insecure laptops are plugging into the network and joining Wi‑Fi, leading to a malware outbreak. The organization needs a control that can evaluate each device’s compliance with security policies at connection time and then decide whether to give it normal access or restrict it. That is exactly what NAC with posture assessment and quarantine/remediation VLANs is designed to do.
Perimeter firewalls, IDS, and basic port security are helpful but do not evaluate endpoint posture or dynamically adjust a device's access based on its compliance, so they cannot directly solve the requirement described.
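The posture-then-placement decision can be sketched as follows. The check names and VLAN labels are illustrative assumptions, not any vendor's NAC schema.

```python
# Minimal sketch of NAC posture logic. The required checks and
# VLAN names are assumptions for illustration only.
REQUIRED_CHECKS = ("antivirus_current", "os_patched", "firewall_on")

def assign_vlan(posture):
    """Compliant devices get the production VLAN; non-compliant or
    unknown devices land in a restricted remediation VLAN."""
    if all(posture.get(check) for check in REQUIRED_CHECKS):
        return "VLAN-PROD"
    return "VLAN-REMEDIATION"

managed = {"antivirus_current": True, "os_patched": True, "firewall_on": True}
unmanaged = {"antivirus_current": False}  # unknown device fails by default
print(assign_vlan(managed))      # VLAN-PROD
print(assign_vlan(unmanaged))    # VLAN-REMEDIATION
```

Note that an empty posture (a completely unknown device) also fails the `all()` test, which matches the requirement that unknown devices get only limited connectivity.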
Topic: Network Troubleshooting
A network technician is troubleshooting why users in the branch LAN 192.168.10.0/24 cannot reach an application server at 10.20.0.10 located at HQ over a site-to-site VPN. The VPN peer at HQ is reachable via 10.0.0.2 over Tunnel0. The branch router uses static routes for remote internal networks and a default route for Internet traffic.
The technician captures the following output from the branch router:
Branch-R1# show ip route
Gateway of last resort is 198.51.100.1 to network 0.0.0.0
C 10.0.0.0/30 is directly connected, Tunnel0
C 192.168.10.0/24 is directly connected, GigabitEthernet0/0
S 10.200.0.0/24 [1/0] via 10.0.0.2
S* 0.0.0.0/0 [1/0] via 198.51.100.1
Which of the following conclusions about this routing table is INCORRECT?
Options:
A. Traffic destined for 192.168.10.0/24 will be sent directly out GigabitEthernet0/0 because that subnet is directly connected.
B. Because a default route exists, the router does not need a specific route to 10.20.0.0/24; the current routing table is correctly configured to reach 10.20.0.10.
C. The static route to 10.200.0.0/24 appears to contain a typo and does not match the actual HQ subnet 10.20.0.0/24.
D. The router has no route to 10.20.0.0/24, so traffic to 10.20.0.10 will not be sent over the VPN and is likely misrouted or dropped.
Best answer: B
Explanation: The routing table shows two directly connected networks (10.0.0.0/30 on Tunnel0 and 192.168.10.0/24 on GigabitEthernet0/0), one incorrect static route to 10.200.0.0/24 via 10.0.0.2, and a default route to the Internet via 198.51.100.1. The application server is in 10.20.0.0/24, which is supposed to be reachable over the VPN.
Because there is no specific route for 10.20.0.0/24, the router will follow the longest-prefix match rule. With no matching subnet, it will fall back to the default route 0.0.0.0/0 and send traffic toward the Internet, not the VPN peer. This is a classic example of a missing or misconfigured static route causing a routing problem.
The statement claiming that the default route is sufficient for reaching 10.20.0.10 and that the routing table is correctly configured is therefore incorrect and reflects a misunderstanding of how traffic should be directed to internal remote networks over a VPN.
Correct troubleshooting here would recognize that a specific static route such as 10.20.0.0/24 via 10.0.0.2 is required instead of relying on the Internet default route.
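The longest-prefix-match behavior described above can be demonstrated with Python's standard `ipaddress` module, using the routes from the exhibit (including the mistyped 10.200.0.0/24 entry):

```python
import ipaddress

# Routes from the exhibit: (prefix, next hop / interface). The
# 10.200.0.0/24 entry reproduces the typo discussed above.
routes = [
    ("10.0.0.0/30", "Tunnel0"),
    ("192.168.10.0/24", "GigabitEthernet0/0"),
    ("10.200.0.0/24", "10.0.0.2"),
    ("0.0.0.0/0", "198.51.100.1"),
]

def lookup(dest, table):
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(p), nh) for p, nh in table
               if addr in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# 10.20.0.10 matches only the default route, so it leaks to the ISP.
print(lookup("10.20.0.10", routes))          # 198.51.100.1
# Adding the corrected static route sends it over the VPN instead.
fixed = routes + [("10.20.0.0/24", "10.0.0.2")]
print(lookup("10.20.0.10", fixed))           # 10.0.0.2
```

This makes the failure mode concrete: with no /24 match, the /0 default wins by elimination, and internal traffic heads toward the internet next hop.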
Topic: Network Operations
A network administrator is planning to introduce scripting to reduce routine workload. They want to start with tasks that are repetitive and low risk, while still following change-management best practices. Which of the following is a task they should AVOID AUTOMATING in a fully unattended way?
Options:
A. Running a nightly script that backs up router and switch configurations to a secure server
B. Automatically pushing all firewall rule changes from a shared spreadsheet directly into production with no manual review or approval
C. Using a script to bulk-update interface descriptions on access switches based on an approved inventory
D. Scheduling a script to periodically collect SNMP statistics from switches and upload them to a monitoring system
Best answer: B
Explanation: When deciding what to automate, network teams should prioritize tasks that are repetitive, time-consuming, and low risk if something goes wrong. Good examples are configuration backups, inventory updates, and metrics collection, because they are frequent and can be tested or rolled back easily.
High-risk changes that directly impact security and availability—such as firewall rules—should not be applied to production in a fully unattended manner. These tasks typically require human review, approvals, testing in a lab or staging environment, and a clear rollback plan. Automation can assist (for example, generating candidate configs), but final application should remain under strict change control.
In this scenario, the only choice that bypasses human review for critical firewall changes is the one that should be avoided as a fully automated, unattended task. The other tasks are repetitive and low impact, making them good candidates for safe automation.
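One common pattern for keeping automation under change control is to have the script *stage* candidate changes for review rather than apply them. The sketch below assumes an invented JSON queue format and file name purely for illustration:

```python
import json
import os
import tempfile

def stage_for_review(rules, queue_path):
    """Write candidate firewall rules to a review queue; a human
    approves and applies them later through the normal change
    process. Nothing is pushed to production here."""
    with open(queue_path, "w") as fh:
        json.dump({"status": "pending-approval", "rules": rules}, fh, indent=2)
    return f"{len(rules)} rule(s) staged for review, none applied"

# Hypothetical candidate rule generated from an approved request.
candidate = [{"action": "deny", "src": "any", "dst": "10.9.9.9", "port": 445}]
queue = os.path.join(tempfile.gettempdir(), "pending_changes.json")
print(stage_for_review(candidate, queue))
```

The design choice is the important part: automation does the repetitive generation work, while the apply step stays behind human approval and a rollback plan.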
Topic: Network Troubleshooting
A user on VLAN10 reports they cannot reach the internal CRM web application at crm.internal.local from their workstation. Another administrator on the CRM server’s local subnet confirms the server at 10.20.30.50 responds to pings from that subnet. You verify that other users on VLAN10 have the same issue.
You collect the following output from the affected user’s PC.
C:\>ipconfig
IPv4 Address. . . . . . . . . . : 10.20.10.25
Subnet Mask . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . : 10.20.10.1
C:\>ping crm.internal.local
Pinging crm.internal.local [10.20.30.50] with 32 bytes of data:
Reply from 10.20.10.1: Destination host unreachable.
Reply from 10.20.10.1: Destination host unreachable.
Reply from 10.20.10.1: Destination host unreachable.
Reply from 10.20.10.1: Destination host unreachable.
Ping statistics for 10.20.30.50:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
C:\>tracert 10.20.30.50
1 <1 ms <1 ms <1 ms 10.20.10.1
2 * * * Request timed out.
3 * * * Request timed out.
Based on the exhibit, which of the following is the BEST next troubleshooting step?
Options:
A. Clear the DNS cache on the user’s workstation using ipconfig /flushdns.
B. Verify the routing configuration for network 10.20.30.0/24 on the default gateway 10.20.10.1.
C. Replace the user’s Ethernet patch cable and move the connection to a different switch port.
D. Restart the CRM application service on the server at 10.20.30.50.
Best answer: B
Explanation: The exhibit shows that the user’s workstation has a valid IP configuration on 10.20.10.0/24 with a default gateway of 10.20.10.1. When pinging crm.internal.local, the name resolves to 10.20.30.50, confirming that DNS is functioning.
However, the ping output shows Reply from 10.20.10.1: Destination host unreachable. This means the default gateway (10.20.10.1) is reachable and is the device generating the error, indicating it does not have a valid path to the destination network 10.20.30.0/24 or is configured to block that traffic.
The traceroute reinforces this: the first hop to 10.20.10.1 succeeds with very low latency, but all subsequent hops time out. Combined with the information that the CRM server responds to pings from its local subnet, the problem is clearly between the user’s VLAN and the server network.
Following the troubleshooting methodology (identify, establish a theory, test, and narrow down the cause), the best next step is to investigate the default gateway/router for missing or incorrect routes or ACLs for 10.20.30.0/24. Other actions like flushing DNS, changing cables, or restarting the server do not address the routing evidence shown in the exhibit.
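The decisive clue is *who* answered the ping. A quick parse of the reply line, as a sketch, makes the distinction between the gateway responding and the target responding explicit:

```python
import re

# Line format matches the Windows ping output in the exhibit.
ping_line = "Reply from 10.20.10.1: Destination host unreachable."

def responder(line):
    """Extract which device actually generated the reply."""
    m = re.match(r"Reply from (\S+): (.+)", line)
    return (m.group(1), m.group(2)) if m else None

src, msg = responder(ping_line)
# The reply comes from the gateway (10.20.10.1), not the target
# 10.20.30.50, which points the investigation at the gateway's
# routing table or ACLs rather than at DNS, cabling, or the server.
print(src)   # 10.20.10.1
```

Had the replies come from 10.20.30.50 itself, the routing path would already be proven good and the focus would shift to the server or application.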
Topic: Networking Fundamentals
Which of the following statements about connecting an on-premises network to a public cloud provider are INCORRECT? (Select TWO.)
Options:
A. For small organizations or temporary test workloads, an Internet-based VPN is often more cost-effective than ordering a dedicated private circuit to the cloud.
B. Dedicated private links usually offer more consistent latency and bandwidth, but they cost more and take longer to provision than an Internet-based VPN tunnel.
C. Some designs use both a dedicated private link and a VPN connection to the same cloud provider, giving a backup path if one connection fails.
D. A site-to-site VPN typically sends encrypted traffic over the public Internet, so latency and path changes are less predictable than with a dedicated private link.
E. Dedicated private links are typically the lowest-cost connectivity option for short-term cloud lab environments because they can be deployed in minutes without any coordination with service providers.
F. Site-to-site VPNs normally provide the same deterministic latency and carrier-grade SLAs as dedicated private links, because traffic does not traverse the public Internet.
Correct answers: E and F
Explanation: There are two common options for connecting an on-premises network to a cloud provider: site-to-site VPNs over the Internet and dedicated private links. A site-to-site VPN creates an encrypted tunnel across the public Internet. It is relatively quick and inexpensive to deploy but inherits the variable latency, jitter, and path changes of the Internet.
A dedicated private link is a reserved circuit (or similar private connectivity) between the organization and the cloud provider. It typically offers more predictable latency, higher reliability, and formal SLAs, but it is more expensive and takes longer to order and provision.
Therefore, any statement claiming that a VPN has the same deterministic latency and SLAs as a private link, or that private links are the lowest-cost, fastest-to-deploy option for short-term labs, is inaccurate.
Topic: Networking Fundamentals
A small office currently uses a physical star topology with a single central Ethernet switch. All 40 desktop PCs connect directly to this switch, and when the switch fails, the entire office loses network access. Management wants to reduce the impact of a single switch failure while keeping cabling and equipment costs moderate. Which physical topology change would BEST meet these goals?
Options:
A. Replace the star with a bus topology, chaining all PCs along a single shared cable segment.
B. Convert to a ring topology, connecting each PC to two neighbors to form a single closed loop.
C. Upgrade to a full mesh topology where each PC has a direct physical link to every other PC.
D. Adopt a hybrid design by adding a second central switch and interconnecting the two switches (a partial mesh), then distributing PCs between them.
Best answer: D
Explanation: The office currently has a pure star topology with a single central switch. In a star, all devices connect to a central point, which simplifies cabling but creates a single point of failure: if the central switch goes down, everyone loses connectivity.
To improve availability, management wants to reduce the impact of a single switch failure but still avoid a large increase in cost and complexity. The most practical solution is to introduce limited redundancy at the center while keeping the basic star-like structure for end-user devices. Adding a second central switch and interconnecting the two creates a hybrid/partial mesh among the central devices, which is a common modern approach.
This hybrid design improves resilience because a single switch failure only affects the devices directly connected to that switch, while the network as a whole can still function through the other switch. It avoids the huge cable and port count explosion of a full mesh and does not revert to legacy, non-redundant designs such as bus or simple ring topologies.
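The cost argument against full mesh is easy to quantify. A full mesh of n nodes needs n(n-1)/2 point-to-point links, which for this office's 40 PCs is far beyond moderate:

```python
# Link counts for the 40-PC office in the scenario.
n = 40

# Full mesh: every PC directly linked to every other PC.
full_mesh_links = n * (n - 1) // 2
print(full_mesh_links)   # 780 cables (and 39 NICs per PC)

# Hybrid design: each PC homes to one of two interconnected
# switches, so roughly n access links plus one inter-switch link.
hybrid_links = n + 1
print(hybrid_links)      # 41
```

780 links versus about 41 shows why partial redundancy at the center, not end-to-end meshing, is the practical way to remove the single point of failure.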
Topic: Network Troubleshooting
Which TWO of the following are appropriate containment actions during an active malware incident on a corporate network? (Select TWO.)
Options:
A. Move infected hosts to an isolated or quarantine VLAN, or physically disconnect them from the network to prevent further spread.
B. Apply temporary firewall or ACL rules to block outbound connections to known malicious IP addresses, domains, or ports associated with the attack.
C. Wipe and rebuild every system in the same subnet immediately, without first confirming which hosts are compromised.
D. Restore affected servers from backup images before stopping the active infection on the network.
E. Disable all organization internet access for days, even if only a single low-risk workstation is believed to be infected.
Correct answers: A and B
Explanation: In a security incident, containment, eradication, and recovery are distinct phases. Containment aims to limit the attacker’s ability to spread or cause further damage while preserving systems for analysis. Typical containment actions include isolating affected hosts and blocking malicious network traffic. Eradication happens after scoping and analysis; it focuses on removing malware or attacker access (for example, cleaning or reimaging systems). Recovery follows eradication and involves restoring normal operations, such as bringing systems back online, restoring from backups, and monitoring for recurrence.
Because the question specifically asks about containment actions, the correct answers are the ones that limit spread or communication of the malware without immediately jumping into full rebuilds or broad, long-term outages. Actions that belong to eradication or recovery phases, or that are overly disruptive to the business, are not considered good containment steps for a typical incident.
Topic: Network Troubleshooting
A network technician manages three APs covering an open-plan office. Users report low throughput and frequent disconnections, especially near the overlap between APs. A quick survey shows all APs are using the same 2.4 GHz channel at high transmit power, and adding new hardware is not an option. Which change would BEST improve wireless performance with minimal disruption?
Options:
A. Move all APs closer together in the center of the office while keeping current channels and power levels
B. Add a second SSID on each AP to spread user sessions across two networks without changing channels or power
C. Increase transmit power on all APs so clients maintain a stronger signal throughout the floor
D. Reconfigure the APs to use non-overlapping 2.4 GHz channels and slightly reduce transmit power on each AP
Best answer: D
Explanation: In this scenario, the main issue is that all three APs are configured on the same 2.4 GHz channel at high transmit power. This creates a large, overlapping cell where many clients and multiple APs are contending for the same RF medium. Even if the signal strength is high, excessive overlap and co-channel interference lead to retransmissions, higher latency, and reduced throughput.
A practical, low-impact fix is to adjust the RF plan rather than adding hardware. In the 2.4 GHz band, the standard best practice is to use non-overlapping channels (typically 1, 6, and 11) so that adjacent APs are not transmitting on the same frequency. In addition, slightly reducing transmit power helps shrink the size of each cell and reduce the area where coverage overlaps heavily, which further limits contention.
These changes directly target the root cause—co-channel interference and oversize cells—without requiring new equipment or a complex redesign, matching the requirement for minimal disruption and no new hardware.
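A simple channel plan over the non-overlapping 2.4 GHz channels can be sketched as below. The AP names are illustrative, and a real survey would also account for physical adjacency so neighboring cells never share a channel; round-robin is only a starting point:

```python
# The standard non-overlapping 2.4 GHz channels in most regions.
NON_OVERLAPPING = [1, 6, 11]

def plan_channels(ap_names):
    """Assign channels round-robin so adjacent assignments differ.
    A production plan would use survey data, not list order."""
    return {ap: NON_OVERLAPPING[i % len(NON_OVERLAPPING)]
            for i, ap in enumerate(ap_names)}

plan = plan_channels(["AP-West", "AP-Center", "AP-East"])
print(plan)   # {'AP-West': 1, 'AP-Center': 6, 'AP-East': 11}
```

With three APs and three non-overlapping channels, every AP lands on its own frequency, eliminating the co-channel contention that caused the slowdowns.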
Topic: Network Operations
A junior network technician at a mid-sized company notices a coworker send the shared “netadmin” password to an external contractor over instant messaging so the contractor can log in to a core switch after hours. Company policy requires unique accounts, prohibits credential sharing, and states that security incidents must be handled through the documented incident response process. Which action should the technician take to BEST follow network policies and procedures?
Options:
A. Immediately disable the contractor’s remote access account without notifying a supervisor to stop any potential misconfiguration.
B. Record the details and immediately report the suspected policy violation through the documented incident response or escalation process.
C. Log in with the shared “netadmin” credentials to confirm that the contractor is only making approved configuration changes.
D. Privately warn the coworker not to share passwords again and avoid reporting the incident to prevent getting anyone in trouble.
Best answer: B
Explanation: Network operations policies typically include rules about account usage (such as unique accounts and password handling) and define how to handle suspected violations or security incidents. In this scenario, sharing a privileged “netadmin” password with an external contractor clearly violates the stated policies.
The junior technician’s role is not to improvise a response or ignore the issue, but to follow the organization’s documented incident response and escalation procedures. That usually means documenting what was observed (time, users involved, systems affected, how the sharing occurred) and reporting it to the appropriate authority, such as a supervisor, security officer, or incident response team.
This approach ensures the incident is handled consistently, legally, and with the right level of authority, while avoiding further policy violations or unnecessary disruption.
Topic: Network Troubleshooting
A user reports they can access shared folders on nearby PCs but cannot browse any websites or reach servers in other subnets. Other users on the same switch are not affected. You collect the following output from the user’s Windows workstation:
Ethernet adapter Ethernet:
IPv4 Address . . . . . . . : 192.168.10.57
Subnet Mask . . . . . . . : 255.255.255.0
Default Gateway . . . . . : 192.168.1.1
DNS Servers . . . . . . . : 192.168.10.10
On this network, all hosts in this VLAN should use 192.168.10.1 as their default gateway. Based on this information, what is the MOST likely cause of the user’s issue?
Options:
A. The subnet mask is incorrect, placing the host in a different subnet from other devices.
B. The workstation is missing a DNS server configuration.
C. The workstation has an incorrect default gateway configured.
D. The network adapter is experiencing an IP address conflict with another host.
Best answer: C
Explanation: The user can communicate with nearby PCs, which indicates that the local IP address and subnet mask are functional for same-subnet traffic. However, the user cannot reach websites or servers in other subnets, which requires sending traffic to the default gateway.
The ipconfig output shows the default gateway configured as 192.168.1.1, but the network standard for this VLAN is 192.168.10.1. Because 192.168.1.1 lies outside the host's 192.168.10.0/24 subnet, the workstation cannot even ARP for that address, so anything destined outside the local network fails. Correcting the default gateway to 192.168.10.1 would restore access to remote networks and the internet.
Topic: Network Implementations
A user reports they can access the internal file server at 192.168.10.50 but cannot browse any internet websites. You collect the following information.
Exhibit:
PC1 ipconfig:
IPv4 Address . . . . . . . . : 192.168.10.25
Subnet Mask . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . : 192.168.20.1
Office router LAN interface:
GigabitEthernet0/1 IP : 192.168.10.1/24
Based on the exhibit, which change would BEST restore PC1’s internet connectivity?
Options:
A. Change PC1’s subnet mask to 255.255.0.0
B. Configure the router’s LAN interface to 192.168.20.1/24
C. Change PC1’s IP address to 192.168.20.25
D. Change the default gateway on PC1 to 192.168.10.1
Best answer: D
Explanation: The exhibit shows that PC1 has an IP address of 192.168.10.25 with a subnet mask of 255.255.255.0. This means the host’s local subnet is 192.168.10.0/24, and any destination outside that network must be sent to a default gateway. The office router’s LAN interface is 192.168.10.1/24, which is the proper default gateway for this subnet.
However, PC1’s default gateway is configured as 192.168.20.1, which is in a completely different subnet (192.168.20.0/24). Because the gateway address is not in the same subnet as the host, PC1 cannot ARP for that address and therefore cannot send any traffic to it. As a result, the host can reach local devices on 192.168.10.0/24 (like the file server at 192.168.10.50) but cannot send traffic off the local network to the internet.
To restore internet connectivity, the host’s default gateway must be corrected to an IP address that is reachable and that can route off the local network. According to the exhibit, that address is 192.168.10.1, the router’s LAN interface on the same 192.168.10.0/24 subnet.
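The on-link test described here, that a host can only ARP for a gateway inside its own subnet, can be checked directly with Python's `ipaddress` module using the values from the exhibit:

```python
import ipaddress

def gateway_on_link(host_ip, mask, gateway):
    """A host can only ARP for (and therefore use) a default
    gateway that falls inside its own subnet."""
    subnet = ipaddress.ip_network(f"{host_ip}/{mask}", strict=False)
    return ipaddress.ip_address(gateway) in subnet

# Misconfigured gateway from the exhibit: off-link, so unusable.
print(gateway_on_link("192.168.10.25", "255.255.255.0", "192.168.20.1"))  # False
# Corrected gateway (the router's LAN interface) is on-link.
print(gateway_on_link("192.168.10.25", "255.255.255.0", "192.168.10.1"))  # True
```

The same check is a useful first step whenever a host reaches local peers but nothing beyond the subnet.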
Topic: Networking Fundamentals
Which statement BEST describes a major improvement introduced with IEEE 802.11ax (Wi‑Fi 6) compared to 802.11ac (Wi‑Fi 5)?
Options:
A. It is the first Wi‑Fi standard to operate only in the 5 GHz band.
B. It uses OFDMA and improved MU-MIMO to increase efficiency in high-density environments.
C. It replaces WPA2 with WPA3 as the required security protocol.
D. It reintroduces DSSS modulation to extend range at the cost of speed.
Best answer: B
Explanation: IEEE 802.11ax, marketed as Wi‑Fi 6, focuses on improving overall network efficiency and performance in environments with many connected devices, such as offices, apartment buildings, and stadiums. Its key enhancements include the use of OFDMA (orthogonal frequency-division multiple access) and improved MU-MIMO (multi-user, multiple-input multiple-output).
OFDMA allows a single channel to be subdivided into smaller subchannels so that multiple clients can be served in parallel within the same time slot. Enhanced MU-MIMO lets the access point communicate with multiple clients at once, both downstream and (in Wi‑Fi 6) upstream. Together, these features significantly improve throughput and reduce contention in high-density deployments, which is the primary advantage of 802.11ax over 802.11ac.
By contrast, 802.11ac’s big step was higher speeds in the 5 GHz band using wider channels and higher-order modulation, not the density-focused scheduling improvements that define Wi‑Fi 6.
Topic: Network Implementations
Which statement BEST describes the primary function of SD-WAN in a modern enterprise network?
Options:
A. A device that balances incoming client requests across several servers to prevent any one server from overloading.
B. A protocol that encrypts traffic between two endpoints over the internet to create a secure tunnel.
C. A LAN technology that tags Ethernet frames so that multiple virtual networks can share the same physical switches.
D. A WAN architecture that uses centralized, application-aware policies to dynamically route traffic over multiple WAN links to improve performance and resiliency.
Best answer: D
Explanation: Software-defined WAN (SD-WAN) is a WAN architecture that decouples traffic control from the underlying transport links. It uses a centralized controller to define application-aware policies and then pushes those policies to edge devices. These edge devices continuously monitor multiple WAN circuits (such as MPLS, broadband, and LTE) and dynamically select the best path for each application flow.
By doing this, SD-WAN can automatically fail over to alternate links when one degrades or fails, which improves redundancy. It can also prioritize critical applications and send them over links with better latency, jitter, or loss characteristics, thereby improving application performance. Traditional WANs typically rely on static routing and per-device configuration, while SD-WAN introduces centralized management and dynamic path selection as core capabilities.
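A toy version of application-aware path selection is sketched below. The link metrics, scoring order, and thresholds are invented for illustration and do not reflect any particular SD-WAN vendor's algorithm:

```python
# Illustrative per-link health metrics, as an SD-WAN edge device
# might gather from continuous probing (values are assumptions).
links = {
    "mpls":      {"latency_ms": 20, "loss_pct": 0.0, "up": True},
    "broadband": {"latency_ms": 45, "loss_pct": 0.5, "up": True},
    "lte":       {"latency_ms": 80, "loss_pct": 1.0, "up": True},
}

def best_path(links, max_latency_ms):
    """Pick the healthy link with the lowest loss (then lowest
    latency) that meets the application's latency requirement."""
    usable = [(name, m) for name, m in links.items()
              if m["up"] and m["latency_ms"] <= max_latency_ms]
    if not usable:
        return None
    return min(usable, key=lambda kv: (kv[1]["loss_pct"],
                                       kv[1]["latency_ms"]))[0]

print(best_path(links, max_latency_ms=50))   # mpls
links["mpls"]["up"] = False                  # simulate circuit failure
print(best_path(links, max_latency_ms=50))   # fails over to broadband
```

The failover at the end is the behavior that distinguishes SD-WAN from static routing: path choice reacts to live link conditions per application policy.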
Topic: Network Troubleshooting
Which of the following statements about using spectrum analyzers and Wi‑Fi analyzers for wireless troubleshooting is NOT correct?
Options:
A. A Wi‑Fi analyzer typically lists nearby SSIDs, their channels, and signal strength values for each detected access point.
B. A spectrum analyzer displays RF energy levels across frequency ranges, helping locate non‑Wi‑Fi interference sources like microwaves or cordless phones.
C. A spectrum analyzer normally decodes Wi‑Fi management frames to display SSID names and security types for nearby wireless networks.
D. A Wi‑Fi analyzer can reveal channel congestion by showing how many networks and clients are active on each Wi‑Fi channel.
Best answer: C
Explanation: Spectrum analyzers and Wi‑Fi analyzers are complementary tools for wireless troubleshooting.
A spectrum analyzer shows radio‑frequency (RF) energy across a range of frequencies. It does not understand Wi‑Fi protocols; instead, it reveals where strong RF signals or interference exist, including from non‑Wi‑Fi devices such as microwave ovens, cordless phones, Bluetooth devices, or wireless cameras. This helps identify interference sources that a Wi‑Fi‑only tool might miss.
A Wi‑Fi analyzer understands 802.11. It discovers SSIDs, signal strengths, channels, and often security types and data rates. This view helps you see coverage gaps, overlapping channels, and congested areas where many APs share the same channel, which can cause contention and reduced throughput.
Because a spectrum analyzer works at the raw RF layer, it cannot decode Wi‑Fi management frames to show SSIDs or security details. That capability belongs to Wi‑Fi analyzers, making the statement that a spectrum analyzer “normally decodes Wi‑Fi management frames to display SSID names and security types” incorrect.
Topic: Networking Fundamentals
Which of the following statements about the purposes of the OSI and TCP/IP models is NOT accurate? (Select TWO.)
Options:
A. They are used as mandatory implementation specifications that all hardware and software must follow exactly when transmitting data.
B. They help technicians isolate and troubleshoot problems by focusing on one layer at a time.
C. They allow complex networking functions to be broken into abstract layers so that changes in one layer do not require redesigning all others.
D. They provide a common language to discuss networking concepts across different vendors and technologies.
E. Their primary goal is to define specific cable pinouts, connector shapes, and radio frequencies for network media.
Correct answers: A and E
Explanation: The OSI and TCP/IP models are reference models that describe how data moves through a network in layered steps. They are mainly used as conceptual tools, not as strict implementation blueprints.
These models serve several key purposes. They give network professionals a common language to describe where protocols operate and where problems occur. They also support layered abstraction, where each layer focuses on specific functions (such as physical signaling, routing, or application services). This separation lets engineers design, update, or troubleshoot one part of the stack with minimal impact on others.
However, the models themselves do not define exact implementation details like device pinouts, radio frequencies, or the exact way vendors must build protocols. Those specifics are handled by protocol and media standards (for example, IEEE 802.3 for Ethernet, 802.11 for Wi‑Fi, RFCs for TCP/UDP/IP).
Topic: Networking Fundamentals
A small office uses a single flat VLAN on an unmanaged switch connected to a router. Users report slowdowns during the day, and a protocol analyzer shows heavy ARP and broadcast traffic on the LAN. The IT team wants to reduce broadcast traffic without changing IP addressing or adding major complexity. Which change at the most appropriate OSI layer would BEST meet these goals?
Options:
A. Replace the router with a higher-throughput model while keeping the single flat network
B. Enable deep packet inspection and content filtering on the edge firewall to block unnecessary web traffic
C. Disable DHCP and assign static IP addresses to all clients to reduce broadcast usage
D. Configure multiple VLANs on a managed switch to segment user groups and route between them on the existing router
Best answer: D
Explanation: The scenario describes a small office with a single flat VLAN experiencing heavy ARP and broadcast traffic, which slows the network. ARP and broadcast frames are handled at the OSI Data Link layer (Layer 2). A single large Layer 2 segment means every broadcast is seen by all devices, which can consume bandwidth and processing resources.
To reduce broadcast traffic effectively, you need to reduce the size of the broadcast domains. VLANs provide logical segmentation at Layer 2, allowing you to create multiple smaller broadcast domains on the same physical switch. When you configure multiple VLANs and use the existing router for inter-VLAN routing, each broadcast (including ARP) stays within its own smaller VLAN, so far fewer devices have to receive and process it and overall LAN congestion drops, all without changing the IP addressing scheme.
Other options that focus on higher OSI layers, router horsepower, or IP address management do not address the root cause: an oversized Layer 2 broadcast domain.
This ties directly to the OSI model learning objective: recognizing that broadcast storms, ARP volume, and VLAN segmentation are primarily Data Link layer concerns, while routing, IP addressing, and content filtering occur at higher layers and solve different problems.
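The arithmetic behind this fix can be sketched in a few lines. This is an illustrative Python sketch (the host counts are made up, not from the question): each broadcast frame must be processed by every other host in the same broadcast domain, so splitting one flat VLAN into several smaller VLANs directly shrinks the per-broadcast audience.

```python
# Illustrative sketch: how VLAN segmentation shrinks the number of
# hosts that must process each broadcast frame. Host counts are
# hypothetical examples.

def broadcast_recipients(hosts_per_vlan):
    """For each VLAN, a broadcast is processed by every other host in it."""
    return {vlan: count - 1 for vlan, count in hosts_per_vlan.items()}

# One flat VLAN with 120 hosts: every broadcast hits 119 other NICs.
flat = broadcast_recipients({"VLAN1": 120})

# The same 120 hosts split across three VLANs of 40 each.
segmented = broadcast_recipients({"VLAN10": 40, "VLAN20": 40, "VLAN30": 40})

print(flat)       # → {'VLAN1': 119}
print(segmented)  # → {'VLAN10': 39, 'VLAN20': 39, 'VLAN30': 39}
```

The model ignores trunking and routing details; the point is only that each host now sees roughly a third of the broadcast load it did before.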
Topic: Network Implementations
A small company is redesigning its switched network. Requirements state that employee, server, and guest traffic must be kept on separate segments and that guest devices must not be able to reach internal resources.
Which proposed VLAN plan is the one the network team should AVOID implementing?
Options:
A. Create VLAN 10 for employees, VLAN 20 for servers, VLAN 30 for guests, and VLAN 99 for network management. Restrict VLAN 30 so it can only access the internet through a firewall.
B. Create VLAN 10 for employees and VLAN 20 for servers. Keep guests on an open, separate wireless SSID mapped to VLAN 30 that is allowed only outbound internet access and no internal routes.
C. Create VLAN 10 for employees and servers together, and VLAN 20 for guests. Allow inter-VLAN routing between VLANs 10 and 20 so guests can reach internal web applications if needed.
D. Create VLAN 10 for employees, VLAN 20 for servers, and VLAN 30 for guests. Use ACLs on the router to allow employees to reach VLAN 20 but block VLAN 30 from reaching VLANs 10 and 20.
Best answer: C
Explanation: VLANs are commonly used to segment different types of traffic, such as users, servers, and guests. Good VLAN design supports least privilege by limiting unnecessary communication paths and reducing the attack surface. In a typical small company, employees and servers should be on separate VLANs, and guests should be fully isolated from internal networks except for tightly controlled exceptions.
The unsafe design is the one that combines sensitive and less-trusted devices in the same VLAN and then allows untrusted guest traffic to route into that VLAN. This effectively removes the protections VLANs are supposed to provide and violates both segmentation and least-privilege principles.
Topic: Network Troubleshooting
You are the on-call network technician when the security team forwards you the following IDS alert summary and asks you to take an immediate containment action. Based only on this information, which action should you take NEXT to contain the incident?
| Time (UTC) | Severity | Src IP | Dst IP | Signature |
|---|---|---|---|---|
| 2025-05-12 14:03 | High | 10.10.5.23 | 185.77.12.9 | Outbound C2 traffic to known botnet host |
| 2025-05-12 14:04 | High | 10.10.5.23 | 185.77.12.11 | Outbound C2 traffic to known botnet host |
| 2025-05-12 14:05 | High | 10.10.5.23 | 45.66.8.200 | Outbound C2 traffic to known botnet host |
Options:
A. Block all inbound traffic from 10.10.5.23 on the perimeter firewall.
B. Restart the IDS sensor to clear the alerts and see if they return.
C. Schedule a full antivirus scan of all workstations during the next maintenance window.
D. Disable the switch port connected to 10.10.5.23 to immediately isolate that workstation from the network.
Best answer: D
Explanation: The exhibit shows three back-to-back High-severity IDS alerts, all sourced from the same internal IP address, 10.10.5.23. Each alert carries the same signature, “Outbound C2 traffic to known botnet host,” but points to a different destination IP. This pattern strongly indicates that the workstation at 10.10.5.23 is already compromised and is attempting to maintain command-and-control connections with botnet infrastructure.
In incident response, the first priority after identification is containment: limit the spread and impact of the compromise as quickly as possible. For an infected endpoint, the most direct containment control a network technician can apply is to isolate the host from the network, typically by disabling its switch port or placing it in a quarantine VLAN. This immediately stops both its outbound C2 traffic and any potential lateral movement on the LAN, buying time for eradication (cleaning/reimaging) and recovery.
The other options either target the wrong traffic direction (inbound instead of outbound), delay action until later, or interfere with monitoring instead of the compromised system. None of those choices meet the requirement for an immediate and effective containment step based on the IDS evidence provided.
Topic: Network Implementations
A regional retailer has its headquarters and eight branches connected via a managed MPLS WAN. The links are reliable and considered secure, but the monthly MPLS bill is significantly higher than comparable business-class broadband quotes. The CFO has mandated at least a 30% reduction in WAN costs, but the security team insists that branch-to-HQ traffic must remain protected against eavesdropping on the public internet.
Which of the following actions/solutions will best address this issue or requirement? (Select TWO.)
Options:
A. Deploy individual LTE hotspots for each POS terminal at every branch and disconnect all WAN circuits to eliminate monthly WAN charges.
B. Increase the MPLS bandwidth for each site so more applications can share the private circuits, improving perceived value instead of reducing WAN spending.
C. Terminate the MPLS service and use low-cost broadband with static routes only, relying on the ISP’s internal network to keep inter-site traffic private without additional encryption.
D. Adopt a hybrid WAN by keeping an MPLS circuit only for the most critical data center traffic while migrating routine branch connectivity to encrypted VPN tunnels over local broadband links.
E. Replace the MPLS links at branch sites with business-class broadband circuits and configure site-to-site IPsec VPN tunnels between each branch firewall and the HQ firewall.
Correct answers: D and E
Explanation: This scenario compares traditional private WAN circuits (such as MPLS) with public internet-based connectivity from a cost and security perspective.
Dedicated private circuits like MPLS are typically more expensive per megabit but offer predictable performance and a provider-managed, logically isolated network. They are often perceived as secure because customer traffic is not mixed at Layer 3 with general internet traffic. However, they can be costly for small and medium enterprises.
Using business-class broadband with site-to-site IPsec VPN tunnels allows organizations to leverage cheaper public internet links while still encrypting and authenticating all inter-site traffic. IPsec provides confidentiality, integrity, and authentication between sites, mitigating the risk of eavesdropping on the public internet. This approach significantly reduces recurring WAN costs while maintaining strong security.
A hybrid WAN approach uses both: retaining private circuits only where their benefits are truly needed (for example, for highly sensitive or latency-critical data-center traffic) and moving less critical or more cost-sensitive branch connections to encrypted VPNs over broadband. This directly addresses the business requirement to reduce costs without sacrificing security for branch-to-HQ communications.
Topic: Network Troubleshooting
A network technician is troubleshooting weak Wi‑Fi coverage in several conference rooms at the far end of a single-story office floor. The existing access point (AP) is mounted on the wall inside the server room at the opposite end of the floor, behind a concrete wall, and signal is already strong near the lobby close to the server room but very weak in the conference rooms.
Several changes are proposed to improve wireless coverage on this floor.
Which TWO proposed changes should you AVOID implementing? (Select TWO.)
Options:
A. Replace the AP’s omnidirectional antennas with a directional panel antenna aimed toward the lobby where signal is already strong.
B. Relocate the existing AP from the server room wall to a central hallway location on the ceiling, in open air.
C. Add a second ceiling-mounted AP near the conference rooms, configured on a non-overlapping channel from the first AP.
D. Raise the AP mounting height so it is above cubicle walls and just below the ceiling tiles, unobstructed by furniture.
E. Move the existing AP inside a locked steel equipment cabinet in the server room to protect it from tampering.
Correct answers: A and E
Explanation: Poor wireless coverage is often caused by physical factors such as distance, obstacles, and antenna placement/orientation. In this scenario, the conference rooms are far from the current AP and separated by a concrete wall and other office structures, so the goal is to reduce obstacles and distance to the weak area and to direct RF energy where it is needed.
Putting an AP inside a metal enclosure or aiming a directional antenna away from the problem area both significantly reduce usable signal where it is needed most. In contrast, centrally locating the AP, raising it above obstructions, or adding another AP near the weak area on a non-overlapping channel are all standard techniques to improve coverage in office environments.
Topic: Network Operations
A network engineer configures a critical core switch with dual hot-swappable power supplies that both run actively. If one power supply fails, the switch continues operating with no reboot and users do not lose their connections. Which business continuity concept BEST describes this design goal?
Options:
A. High availability
B. Disaster recovery
C. Backup
D. Fault tolerance
Best answer: D
Explanation: The scenario describes a core switch with two hot-swappable, active power supplies where a failure of one supply does not interrupt traffic or force users to reconnect. This is a classic example of fault tolerance.
Fault tolerance focuses on designing systems so that a failure of one component does not stop service. Common examples include dual power supplies in a switch, RAID for disks that can survive a drive failure, or fully redundant controllers running in lockstep. The key idea is continuous operation with no visible outage when a single component fails.
By contrast, high availability is about maximizing uptime over time, often through clustering and fast failover. A brief interruption or session reset might occur while a standby node becomes active, as long as total downtime stays very low.
Backup is about making copies of data or configurations (for example, nightly database backups or scheduled switch configuration exports) so they can be restored after data loss, corruption, or a failed upgrade. Backups alone do not keep the system running during a hardware failure.
Disaster recovery is a broader process for restoring services after a major outage or disaster, such as a data center fire, regional power loss, or ransomware attack. It often includes failover to a secondary site, using backups, and following formal recovery plans. There is usually some downtime before full service is restored.
In this question, the emphasis on no interruption and no user reconnects when a component fails matches fault tolerance most precisely.
Topic: Network Implementations
Which statement BEST describes the least-connections load-balancing method?
Options:
A. It always sends new traffic to the server with the highest CPU utilization.
B. It sends each new request to the next server in sequence, regardless of current load.
C. It sends all traffic to a single primary server until it fails, then switches to a backup server.
D. It sends new connections to the backend server that currently has the fewest active sessions.
Best answer: D
Explanation: Least-connections is a dynamic load-balancing method that looks at how many active connections each backend server is currently handling. When a new connection arrives, the load balancer assigns it to the server with the fewest active sessions. This helps keep traffic more evenly distributed when some sessions are long-lived or when different servers experience varying levels of load.
In contrast, round-robin simply cycles through servers in order without checking how many active connections each one already has. Failover or active/passive designs prioritize high availability rather than continuously balancing load, and CPU-based decisions are a different, more advanced type of metric-based load balancing, not the definition of least-connections.
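The selection rule described above can be expressed in a few lines. This is a minimal, hedged sketch (server names and session counts are hypothetical, not from any real load balancer's API): track active sessions per backend and hand each new connection to the backend with the fewest.

```python
# Minimal sketch of the least-connections algorithm described above.
# Server names and counts are hypothetical.

class LeastConnectionsBalancer:
    def __init__(self, servers):
        # Track the number of active sessions per backend server.
        self.active = {server: 0 for server in servers}

    def assign(self):
        # Pick the server with the fewest active sessions.
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        # Called when a session on that server ends.
        self.active[server] -= 1

lb = LeastConnectionsBalancer(["web1", "web2", "web3"])
a = lb.assign()     # all tied at 0; min() takes the first, "web1"
b = lb.assign()     # "web2"
c = lb.assign()     # "web3"
lb.release("web2")  # a long-lived session ends on web2
print(lb.assign())  # → "web2", now the least-loaded server
```

Note how the behavior differs from round-robin: after `release("web2")`, the next connection goes back to web2 because of its current load, not because of its position in a rotation.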
Topic: Network Security
A junior network administrator is documenting common network-based attacks and possible responses. Which of the following statements is INCORRECT and describes a practice the organization should AVOID?
Options:
A. A distributed denial-of-service (DDoS) attack uses many systems to flood a target’s bandwidth or resources; rate limiting and working with the ISP can help mitigate it.
B. IP spoofing occurs when an attacker falsifies a packet’s source address to impersonate another system or hide their identity; ingress and egress filtering can help reduce this risk.
C. To keep critical services available during a DDoS attack, the firewall should be configured to allow all traffic through without inspection so it does not become a bottleneck.
D. In a man-in-the-middle attack, an attacker positions themselves between two endpoints to intercept or alter traffic; using TLS with proper certificate validation helps defend against this.
Best answer: C
Explanation: This question targets understanding of common network-based attacks—DDoS, spoofing, and man-in-the-middle—and appropriate high-level mitigations. DDoS aims to overwhelm bandwidth or server resources; spoofing aims to falsify identity; man-in-the-middle aims to intercept or modify communications.
A secure response should reduce exposure and maintain layered defenses. Any recommendation that disables core security controls, such as a firewall, is an anti-pattern because it violates defense-in-depth and significantly increases risk during an attack.
The unsafe statement is the one that suggests allowing all traffic through the firewall during a DDoS attack. That action removes critical protection right when it is needed most, contradicting standard security practices and the principle of defense in depth.
Topic: Network Security
Which TWO of the following statements about security control types are INCORRECT? (Select TWO.)
Options:
A. Configuring 802.1X on switch ports so that only authenticated devices can connect is a technical control.
B. Documenting a change-management process but not actually following it still makes it an effective administrative control.
C. A written acceptable-use policy that employees must read and sign is an example of an administrative control.
D. Installing a keypad lock on the door to the server room is considered a technical control rather than a physical control.
E. Enabling CCTV cameras to record activity at the data center entrance is a physical control.
Correct answers: B and D
Explanation: Security controls are commonly grouped into three categories: technical, administrative, and physical. Technical controls are implemented using systems, software, or hardware logic, such as firewalls or 802.1X port authentication. Administrative controls are policies, procedures, and processes that guide how people should behave, such as acceptable-use policies or change-management procedures. Physical controls protect the tangible environment, such as doors, locks, fences, and cameras.
In this question, the incorrect statements either misclassify a physical control as technical or claim that a purely documented but unenforced policy is still an effective control. Correct statements properly match the control type and reflect realistic best practices.
Topic: Network Operations
A company is updating its business continuity plan. The network administrator makes the following statements about protecting services and data. Which statement is INCORRECT?
Options:
A. Relying only on nightly off-site backups to keep an e-commerce site online with almost no downtime is an example of high availability.
B. Configuring two core switches in an active/standby pair so one immediately takes over if the other fails is an example of high availability.
C. Scheduling nightly off-site backups of critical databases so they can be restored after accidental deletion or corruption is an example of a backup strategy.
D. Using RAID 1 on a virtualization host so the loss of a single disk does not interrupt running VMs is an example of fault tolerance.
Best answer: A
Explanation: Business continuity planning uses several related but distinct concepts.
High availability (HA) is about keeping services online with minimal interruption by using redundancy and quick failover. Examples include clustered switches, redundant power supplies, and multiple network paths so that if one component fails, another immediately takes over.
Fault tolerance focuses on continuing operation through specific component failures, often by duplicating critical components inside a system. RAID 1 or RAID 5 on a storage array, or dual power supplies in a server, are common examples. The system may degrade in capacity or performance, but it stays running.
Backups create point-in-time copies of data that you can restore after data loss, corruption, or ransomware. They protect data, not uptime. Restoring from backup usually involves downtime and losing any data created after the last backup.
Disaster recovery (DR) is the broader process and plan for restoring IT services after a major outage or disaster, including where to restore, in what order, and acceptable recovery time and data loss (RTO/RPO). DR can use backups, secondary sites, and scripted recoveries, but it is still different from real-time high availability.
The incorrect statement treats backups as a way to provide near-continuous uptime for an e-commerce site. Backups protect data after loss; keeping a service online with almost no downtime requires high availability through redundancy and fast failover, which nightly off-site backups alone cannot deliver.
Topic: Network Operations
Users at a small branch office report very slow internet access. The NOC already sees, via SNMP graphs, that the branch router’s WAN interface is running at 95% utilization, and syslog shows no errors. The network team now needs to identify which internal hosts and applications are consuming most of the WAN bandwidth. Which action should the technician take next?
Options:
A. Set up continuous ping tests from the NOC to the branch router
B. Raise the syslog severity level so the router logs more detailed messages
C. Increase the SNMP polling frequency on the branch router’s WAN interface
D. Configure the router to export NetFlow/IPFIX data to a flow collector
Best answer: D
Explanation: In this scenario, SNMP has already revealed that the branch WAN interface is heavily utilized, and syslog does not show errors. The missing information is who and what is using the bandwidth. That requires flow-level visibility.
Flow-based monitoring such as NetFlow, sFlow, or IPFIX summarizes conversations by source/destination IP, ports, protocol, and byte counts. Exporting this data to a flow collector lets the technician quickly identify top talkers and top applications, which is exactly what is needed to troubleshoot congestion on a saturated link.
SNMP remains useful for interface statistics and thresholds, and syslog for event and error logging, but neither provides per-conversation usage details. Ping helps confirm reachability and latency but does not show traffic composition. Therefore, configuring flow export is the best next troubleshooting action.
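The kind of "top talkers" answer a flow collector produces can be sketched as a simple aggregation. This is an illustrative Python sketch with made-up flow tuples (not real NetFlow/IPFIX export format): sum bytes per source host and rank the results.

```python
# Hedged sketch: summarizing flow records to find "top talkers",
# the kind of report a NetFlow/IPFIX collector provides.
# The flow tuples below are fabricated sample data.
from collections import Counter

# (src_ip, dst_ip, dst_port, bytes) — simplified flow records
flows = [
    ("10.0.1.10", "203.0.113.5", 443, 900_000_000),
    ("10.0.1.22", "198.51.100.7", 80, 120_000_000),
    ("10.0.1.10", "203.0.113.9", 443, 400_000_000),
    ("10.0.1.31", "192.0.2.15", 3389, 50_000_000),
]

bytes_by_host = Counter()
for src, dst, port, nbytes in flows:
    bytes_by_host[src] += nbytes

# Top talkers, largest first
for host, total in bytes_by_host.most_common(2):
    print(host, total)
# → 10.0.1.10 1300000000
# → 10.0.1.22 120000000
```

A real collector does the same grouping across millions of records, also keyed by destination, port, and protocol, which is what makes it the right tool for a saturated-link investigation.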
Topic: Network Operations
A call center’s VoIP SLA requires one-way latency <150ms, jitter <30ms, and packet loss <1%. Users report brief but frequent call-quality issues that current monitoring is missing because thresholds are too loose and some metrics are not being collected.
You are revising monitoring and alerting for VoIP performance.
Which of the following actions should you AVOID? (Select TWO.)
Options:
A. Increase the VoIP packet-loss alert threshold to 10% over a 5-minute average to reduce the number of alerts.
B. Configure alerts when jitter exceeds 25ms for at least 1 minute on interfaces in the VoIP VLAN.
C. Enable syslog-based alerts when VoIP gateways report dropped calls, registration failures, or media-path errors.
D. Schedule synthetic MOS tests every 5 minutes and alert if the MOS score drops below 4.0 on VoIP paths.
E. Disable per-interface utilization monitoring on access switches to reduce SNMP polling load on the monitoring system.
Correct answers: A and E
Explanation: For VoIP, monitoring must align closely with the SLA and the nature of voice traffic. Because voice is sensitive to jitter, packet loss, and congestion, thresholds should be configured near (or slightly tighter than) the SLA values and use short enough time windows to catch brief but impactful issues.
Raising thresholds far above SLA limits or disabling key metrics like interface utilization will hide problems rather than help detect them. In contrast, monitoring jitter near its SLA limit, running synthetic MOS tests, and alerting on VoIP-specific syslog messages all help quickly identify and troubleshoot call-quality problems.
This question targets network monitoring and performance optimization: specifically, choosing appropriate monitoring methods and thresholds to reliably detect VoIP issues without undermining visibility.
Topic: Network Security
A network administrator is updating remote management for several branch routers and switches. Company policy requires that all management traffic be encrypted and that device credentials never cross the network in plaintext. Which of the following configuration changes would NOT comply with this policy and should be avoided?
Options:
A. Require administrators to connect over an IPsec VPN before accessing devices using SSH from their workstations.
B. Allow SNMPv2c access with the community string “public” from any internet address for monitoring.
C. Enable SSH and disable Telnet on all network devices.
D. Restrict the management web interfaces so they accept only HTTPS connections.
Best answer: B
Explanation: Secure management protocols like SSH, HTTPS, and SNMPv3 encrypt management traffic, including usernames, passwords, and configuration data. Encryption helps prevent attackers who can sniff the network from reading or modifying management sessions.
The scenario explicitly requires that no management credentials travel in plaintext and that management traffic be encrypted. Protocols such as Telnet, HTTP, and SNMPv1/v2c use plaintext for authentication or community strings, making them vulnerable to eavesdropping and credential theft. By contrast, SSH, HTTPS, and SNMPv3 provide confidentiality and integrity for management traffic.
Leaving plaintext management protocols exposed, especially to the public internet, is a clear violation of modern security best practices and the stated policy in the scenario.
Topic: Network Security
A 150-user office has a single flat LAN. Any device plugged into a wall jack receives an internal IP address and full access to file servers and internal applications. There is no authentication on switch ports, and users sometimes bring personal laptops from home. Management wants to reduce the risk of unauthorized or non-compliant wired devices connecting to the internal network using network-based controls. Which of the following actions/solutions will best address this issue or requirement? (Select TWO.)
Options:
A. Deploy a separate guest Wi-Fi network and publish a policy asking staff not to plug in personal devices
B. Enable 802.1X port-based authentication on access switches using a centralized RADIUS server
C. Create DHCP reservations for all known devices so that unknown MAC addresses do not receive an IP address
D. Configure switch port security to limit the number of MAC addresses per port and shut down the port on violations
E. Disable all unused switch ports in the wiring closet and leave only currently used ports enabled
Correct answers: B and D
Explanation: The main risk in this scenario is that any device connected to a live Ethernet jack gains full internal access without any check. Effective controls should operate at the network edge (the switch ports) to authenticate or limit devices before they join the production LAN.
802.1X port-based authentication uses a RADIUS server to verify user or device credentials when they connect to a switch port. Until authentication succeeds, the port is placed in a restricted state, preventing unauthorized endpoints from accessing the network.
Switch port security complements this by limiting the number of MAC addresses that can appear on a port and specifying actions (such as shutting down the port or dropping traffic) when a violation occurs. This makes it much harder for users to plug in additional unauthorized devices or hubs behind a single jack.
Other options like DHCP reservations, disabling unused ports, or relying mainly on policies and a guest Wi-Fi network do not sufficiently or directly control who can access the wired LAN from active ports. They are either easy to bypass, only partially address the issue, or focus on wireless instead of wired access.
Topic: Network Troubleshooting
A security analyst is investigating repeated alerts from the company’s VPN gateway about authentication failures for an administrator account. The analyst reviews a portion of the log shown below:
2025-07-12 14:01:03 AUTH-FAIL user=admin src_ip=198.51.100.23 method=VPN
2025-07-12 14:01:04 AUTH-FAIL user=admin src_ip=198.51.100.23 method=VPN
2025-07-12 14:01:04 AUTH-FAIL user=admin src_ip=198.51.100.23 method=VPN
2025-07-12 14:01:05 AUTH-FAIL user=admin src_ip=198.51.100.23 method=VPN
2025-07-12 14:01:06 AUTH-FAIL user=admin src_ip=198.51.100.23 method=VPN
2025-07-12 14:01:06 ACCOUNT-LOCK user=admin src_ip=198.51.100.23
Which type of attack is MOST likely occurring?
Options:
A. Network port scanning
B. ARP spoofing attack
C. Brute-force password attack
D. Distributed denial-of-service (DDoS) attack
Best answer: C
Explanation: The log shows several authentication failures within a few seconds, all targeting the same privileged account (admin) from the same source IP address. This behavior strongly indicates that someone (or an automated tool) is repeatedly trying different passwords to gain access.
A brute-force password attack involves systematically guessing many passwords for a user account until a correct one is found. Many systems generate logs like these and may lock the account after a certain number of failed attempts, which is exactly what happens in the final line: ACCOUNT-LOCK user=admin.
In Network+ Domain 5 (Network Troubleshooting), recognizing these patterns in authentication or firewall logs helps a technician quickly identify that the issue is not a simple user mistake but a security attack that may require blocking the source IP, enforcing MFA, or adjusting lockout policies.
Topic: Networking Fundamentals
A company is redesigning its 5-floor headquarters using a three-tier campus model with access, distribution, and core switches. Each floor has access switches for user and VoIP VLANs, and there will be a small, high-speed core layer that should focus on fast backbone forwarding. To centralize inter-VLAN routing and enforce ACL-based policies without overloading the core, where should the network engineer place the default gateways and most Layer 3 policy functions?
Options:
A. On the WAN edge firewall so that all internal VLANs route through a single security device
B. On the distribution switches that aggregate the access layer and sit below the core
C. On the access switches on each floor so that all routing and policies are pushed to the edge
D. On the core switches so that all routing and security policies are fully centralized
Best answer: B
Explanation: In a three-tier campus design, the access layer connects end devices (PCs, phones, printers, APs), the distribution layer aggregates those access switches and typically provides Layer 3 boundaries, and the core layer offers fast, resilient backbone connectivity.
The scenario explicitly states a requirement to keep the core focused on high-speed backbone forwarding while still centralizing inter-VLAN routing and ACL-based policy. In standard enterprise designs, that combination of requirements points directly to the distribution layer as the right place to host default gateways, inter-VLAN routing, and most user-facing ACLs.
Placing those functions on the access layer would fragment policy across many devices, making it harder to manage consistently. Placing them on the core would overcomplicate the core, which is usually kept as simple and fast as possible. Offloading all internal routing to a WAN edge firewall would create unnecessary latency and a severe bottleneck.
Therefore, using the distribution switches to perform Layer 3 routing between VLANs and to apply ACLs best meets all the design goals described in the scenario.
Topic: Network Troubleshooting
Which statement BEST describes a primary use of a wireless spectrum analyzer when troubleshooting a Wi‑Fi network?
Options:
A. It shows non‑802.11 RF interference and overall energy usage across the wireless frequency bands.
B. It lists nearby SSIDs, security types, and client MAC addresses associated with each access point.
C. It automatically reconfigures access points to optimal channels and power levels across the WLAN.
D. It performs active throughput tests between wireless clients and the default gateway over TCP or UDP.
Best answer: A
Explanation: Wireless troubleshooting often uses two related but distinct tool types: Wi‑Fi analyzers and RF spectrum analyzers.
A Wi‑Fi analyzer understands 802.11 protocols. It can show SSIDs, BSSIDs, channels in use, signal strength (RSSI), security settings, and sometimes client associations and retry rates. This is excellent for seeing how Wi‑Fi networks are configured and how strong their signals are.
A spectrum analyzer, by contrast, looks at raw RF energy across a frequency range (such as 2.4GHz and 5GHz) without caring whether the energy is Wi‑Fi or some other signal. This lets you see non‑802.11 interference sources like microwaves, cordless phones, Bluetooth, baby monitors, and other devices that can disrupt Wi‑Fi even though they are not Wi‑Fi networks.
For the CompTIA Network+ Domain 5 (Troubleshooting), understanding that a spectrum analyzer reveals interference and overall channel congestion at the RF level—including sources that a Wi‑Fi analyzer cannot decode—is the key fact being tested here.
Topic: Network Troubleshooting
A network technician is troubleshooting a user’s inability to access a file server. They have already interviewed the user, verified the IP configuration, confirmed the issue on their own test laptop, and determined that only this user is affected. According to a structured troubleshooting methodology, what should the technician do NEXT?
Options:
A. Develop a theory of probable cause based on the information gathered
B. Escalate the issue to a higher-level engineer or vendor support
C. Document the symptoms and close the ticket as a one-off incident
D. Immediately implement the most likely fix to restore service quickly
Best answer: A
Explanation: A standard network troubleshooting methodology follows a specific order to reduce guesswork and avoid unnecessary changes. After the technician has identified the problem, gathered information, and defined the scope, the next step is to develop a theory of probable cause.
Only after forming a theory should the technician test that theory, plan and implement a fix, verify full functionality, and finally document the findings and actions. Skipping straight to implementation, escalation, or documentation before forming a theory breaks the methodical process and can lead to wasted effort or new issues.
In this scenario, the technician has already completed the initial step—identifying and scoping the problem. Therefore, the correct next action is to establish a likely cause based on the observed symptoms and collected data.
Topic: Network Security
A company is redesigning how it manages administrator and user logins to network devices and services. The security team wants to use centralized AAA with RADIUS and TACACS+ to improve security, authorization control, and accounting.
Which of the following AAA design decisions are NOT recommended and should be avoided? (Select TWO.)
Options:
A. Use a pair of redundant RADIUS servers integrated with the corporate directory to authenticate VPN and Wi‑Fi users.
B. Use TACACS+ for administrator logins to switches and routers so you can centrally control which commands each admin can run and log those actions.
C. Send RADIUS authentication for remote sites across the public internet using PAP without any additional encryption, because the shared secret is sufficient protection.
D. Centralize accounting logs on the RADIUS or TACACS+ server so you can audit which user account made each configuration change or connection attempt.
E. Configure all switches and routers with the same local ‘admin’ account and password, and do not integrate them with RADIUS or TACACS+, to keep management simple.
Correct answers: C and E
Explanation: Authentication, authorization, and accounting (AAA) are often centralized using protocols such as RADIUS and TACACS+. Centralization allows organizations to manage user identities and permissions in one place, enforce consistent policies, and collect detailed logs of who did what and when.
RADIUS is commonly used for network access control, such as Wi‑Fi and VPN logins, and typically integrates with a directory service (like Active Directory) so user account management is centralized. TACACS+ is commonly used for administrative access to network devices (switches, routers, firewalls) because it cleanly separates authentication, authorization, and accounting, allowing per‑command control and detailed logging.
Poor AAA designs either fail to use centralization at all or weaken the security of the authentication process. Using identical local credentials across devices removes user accountability and makes compromise more damaging, while sending cleartext credentials over the internet without encryption exposes them to interception. Both of these choices violate modern security best practices and should be avoided.
Topic: Network Implementations
A company has moved its CRM application to VMs in a public cloud VPC/VNet that is connected to headquarters with a site-to-site IPsec VPN. External customers use https://crm.example.com, which currently resolves in public DNS to a public IP on a cloud load balancer.
Security policy now requires internal users at headquarters to reach the CRM only over the private VPN path, while external customers must continue to use the existing public endpoint without change.
Which configuration change is the MOST appropriate to meet this requirement?
Options:
A. Change the public DNS A record for crm.example.com to point to the private IP address of the cloud load balancer.
B. Manually add hosts file entries on all employee workstations mapping crm.example.com to the private IP of the application server.
C. Configure split-horizon DNS by creating an internal DNS zone that returns a private IP for crm.example.com to on-premises clients.
D. Add a static route on the headquarters firewall so that traffic to the public IP of crm.example.com is forced through the VPN tunnel.
Best answer: C
Explanation: In a hybrid cloud deployment, internal users often access cloud-hosted services over a private connection such as a site-to-site VPN, while external users continue to use public internet endpoints. A common design pattern is to use split-horizon DNS, where internal and external clients receive different IP addresses for the same hostname based on which DNS server they query.
In this scenario, the cloud CRM is already reachable over a site-to-site VPN using private IPs, and external customers successfully use the existing public DNS record. The remaining problem is that internal users still resolve crm.example.com to the public IP, sending traffic over the internet. The cleanest solution is to adjust name resolution for internal clients only, so that they receive a private IP that routes across the VPN, while leaving public DNS untouched for customers.
Implementing an internal DNS zone (or record) that returns a private IP for crm.example.com to on-premises clients accomplishes this. Internal DNS servers answer internal queries with the private IP, and external queries continue to be answered by public DNS with the public IP. This respects the security policy by forcing internal access over the private VPN path and avoids disrupting external users.
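To make the split-horizon behavior concrete, here is a minimal Python sketch of the resolution logic. The hostnames, networks, and IP addresses are illustrative (using documentation ranges), not actual DNS server configuration:

```python
import ipaddress

# Assumed HQ addressing; clients inside this range get the "internal" view.
INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")

# Same hostname, different answer per view (illustrative addresses).
RECORDS = {
    "crm.example.com": {"internal": "10.50.0.25", "external": "203.0.113.10"},
}

def resolve(hostname, client_ip):
    """Return the A record for the view matching the client's source address."""
    view = "internal" if ipaddress.ip_address(client_ip) in INTERNAL_NET else "external"
    return RECORDS[hostname][view]
```

An HQ client resolving crm.example.com receives the private IP and routes across the VPN, while an internet client still receives the public load balancer address, which is exactly the split-horizon outcome the question describes.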
Topic: Network Security
Which of the following statements about least privilege, defense in depth, and Zero Trust are INCORRECT? (Select TWO.)
Options:
A. In a defense-in-depth design, once traffic passes the perimeter firewall it should normally be trusted, so additional internal segmentation is usually unnecessary.
B. Role-based access control (RBAC) is often used to enforce least privilege by assigning permissions to job roles instead of individual users.
C. Zero Trust assumes no implicit trust based solely on network location, so internal services should still require strong authentication and authorization checks.
D. To simplify administration under Zero Trust, it is best to use broad “allow any internal to any internal” rules and rely mainly on endpoint antivirus to stop attacks.
E. Least privilege means granting users only the minimum permissions they need to perform their job tasks, reducing potential damage from compromised accounts.
Correct answers: A and D
Explanation: Least privilege, defense in depth, and Zero Trust all push network designs toward tighter access control and stronger segmentation.
Least privilege focuses on who can access what. Users and systems should receive only the permissions needed to perform their tasks. This limits the damage if an account is compromised or if someone makes a mistake.
Defense in depth focuses on layering controls. Rather than relying on a single perimeter firewall, networks should use multiple security layers: endpoint protections, internal firewalls, VLAN segmentation, access control lists (ACLs), strong authentication, and monitoring. The idea is that if one layer fails, others still provide protection.
Zero Trust extends these ideas by assuming that no network location is inherently trusted, including the “internal” network. Every request should be authenticated, authorized, and ideally continuously evaluated, regardless of whether it originates inside or outside the traditional perimeter.
Because of these principles, modern designs avoid trusting all internal traffic after a single perimeter check and avoid broad “allow any to any” rules. Instead, they use segmentation, RBAC, and fine-grained policies to control access between users, devices, and applications.
Topic: Network Troubleshooting
An office uses controller-managed APs broadcasting a single WPA2-Enterprise SSID, CorpWiFi. Staff report seeing two CorpWiFi networks; one shows a self-signed certificate and an unknown captive portal page. The team confirms only one authorized AP and wants to remove the threat and improve future detection with minimal disruption. Which action is BEST?
Options:
A. Reconfigure CorpWiFi as an open SSID with a captive portal to avoid certificate warnings that confuse users.
B. Increase the transmit power on the authorized APs so their signal is stronger than the unauthorized AP in all office areas.
C. Enable rogue AP detection on the wireless controller, locate the unauthorized AP via signal triangulation, disconnect it from the switch, and configure ongoing rogue-AP alerting.
D. Change the corporate SSID from CorpWiFi to CorpSecure and immediately disable the CorpWiFi SSID on all authorized APs.
Best answer: C
Explanation: The symptoms indicate an evil-twin or rogue AP broadcasting the same SSID (CorpWiFi) but presenting a self-signed certificate and an unknown captive portal. This is a clear wireless security threat: users could be tricked into joining the rogue network and entering credentials or other sensitive data.
In a controller-based WLAN, best practice is to use built-in rogue-AP detection to identify unauthorized radios, locate them physically (for example, by signal strength triangulation or a simple walk-through with a handheld device), and then remove or disconnect them at the switch. After remediation, you should enable ongoing monitoring and alerting so future rogues are quickly detected according to policy.
Simply changing SSIDs, turning up power, or weakening security does not actually address the root cause (an unauthorized AP on or near your network) and may introduce new risks or fail to meet the stated goals of removing the threat and improving detection with minimal disruption.
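The core of rogue-AP detection is comparing every radio seen advertising the corporate SSID against a list of authorized BSSIDs. The following Python sketch shows that comparison with made-up scan data; real controllers do this continuously with their own APIs:

```python
# The one managed AP in this scenario (illustrative MAC address).
AUTHORIZED_BSSIDS = {"aa:bb:cc:00:00:01"}

def find_rogues(scan_results, corp_ssid="CorpWiFi"):
    """Flag any radio advertising the corporate SSID from an unknown BSSID."""
    return [bssid for ssid, bssid, _ in scan_results
            if ssid == corp_ssid and bssid not in AUTHORIZED_BSSIDS]

# Hypothetical scan results: (SSID, BSSID, signal in dBm).
scan = [
    ("CorpWiFi", "aa:bb:cc:00:00:01", -45),  # legitimate AP
    ("CorpWiFi", "de:ad:be:ef:00:99", -52),  # evil twin
    ("GuestNet", "aa:bb:cc:00:00:02", -60),  # different SSID, ignored
]
```

The evil twin is identified by its unknown BSSID even though its SSID matches, which is why controller-based rogue detection keys on the radio's hardware address rather than the network name.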
Topic: Network Implementations
A network technician must run a new horizontal cable from an IDF to a user’s workstation, approximately 80m away. The cable will pass through a shared return-air ceiling space where building code requires plenum-rated cabling. The link must support 1Gbps Ethernet. Which of the following cable and connector combinations is NOT appropriate for this requirement?
Options:
A. Cat6 plenum-rated UTP terminated to RJ45 keystone jacks at the patch panel and wall plate
B. OM3 multimode fiber in a plenum-rated jacket with LC connectors at each end
C. Cat6a plenum-rated UTP terminated to RJ45 keystone jacks on both ends
D. Cat5e PVC-jacketed UTP terminated to RJ45 keystone jacks, run directly through the return-air ceiling
Best answer: D
Explanation: In this scenario, the key constraints are the 1Gbps bandwidth requirement, the 80m distance (within the 100m limit for copper horizontal runs), and—most importantly—the fact that the cable passes through a return-air ceiling space where building code requires plenum-rated cabling.
Plenum spaces must use plenum-rated jackets (often CMP for copper or OFNP/OFCP for fiber), which produce less smoke and lower-toxicity fumes in a fire. Using non-plenum (PVC) cable in these spaces is a code violation and a safety risk, even if the cable meets the electrical performance needed for 1Gbps.
Both Cat6 and Cat6a plenum-rated UTP with RJ45 terminations are fully capable of supporting 1Gbps over 80m, and OM3 multimode fiber in a plenum-rated jacket also far exceeds the performance requirements while satisfying fire code. Only the choice that uses Cat5e with a standard PVC jacket in the plenum space fails to meet the environmental requirement, making it the incorrect option for this scenario.
Topic: Networking Fundamentals
A network administrator is creating firewall rules to allow internal administrators to remotely manage devices over the company VPN. Which of the following proposals is NOT appropriate based on standard port usage and current security best practices?
Options:
A. Allowing HTTPS access to network devices’ web management interfaces over TCP port 443
B. Allowing SSH management access to Linux servers over TCP port 22
C. Allowing Telnet access to core switches over TCP port 23
D. Allowing RDP access to Windows servers over TCP port 3389
Best answer: C
Explanation: This question tests knowledge of common TCP ports and their associated services, along with basic security best practices for remote management.
SSH uses TCP port 22 and provides encrypted remote command-line access. HTTPS uses TCP port 443 and provides encrypted web-based access. RDP uses TCP port 3389 for remote graphical access to Windows systems. These are all standard port assignments and, when restricted to trusted administrators over a VPN, are reasonable choices.
Telnet, however, uses TCP port 23 and transmits all data, including usernames and passwords, in cleartext. Using Telnet to manage core switches exposes critical credentials and configuration data to interception and violates modern security best practices. SSH on port 22 should be used instead for device management.
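The port and encryption facts being tested here can be summarized in a small lookup table. This Python sketch is purely illustrative; the port numbers are the standard assignments from the question:

```python
# Common remote-management protocols, their standard TCP ports, and
# whether the protocol encrypts traffic in transit.
MGMT_PROTOCOLS = {
    "ssh":    {"port": 22,   "encrypted": True},
    "telnet": {"port": 23,   "encrypted": False},  # cleartext, avoid
    "https":  {"port": 443,  "encrypted": True},
    "rdp":    {"port": 3389, "encrypted": True},
}

def insecure_proposals(proposals):
    """Return any proposed protocols that send management traffic in cleartext."""
    return [p for p in proposals if not MGMT_PROTOCOLS[p]["encrypted"]]
```

Running the check against the four proposals flags only Telnet, matching the reasoning above.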
Topic: Network Security
A network team creates a standard hardened configuration template for all routers and switches, defining required services, login settings, and logging destinations. They run automated checks that compare device configurations to this template and report any deviations. Which security concept does this practice BEST illustrate?
Options:
A. Enforcing least privilege for administrative accounts
B. Providing defense in depth for the network perimeter
C. Implementing a security baseline for network devices
D. Performing continuous vulnerability scanning of infrastructure
Best answer: C
Explanation: The scenario describes creating a standard hardened configuration template for routers and switches, then automatically checking running configurations against that template and reporting deviations. This is the classic use of a security baseline or configuration standard.
A security baseline (or configuration standard) defines the minimum acceptable configuration and controls for a class of devices, such as required services, authentication methods, logging, and management protocols. Automated tools then compare current device configs to this baseline to detect drift, enforce consistency, and provide auditable proof of compliance.
This practice directly supports security best practices and policies by ensuring that all network devices remain configured according to the organization’s approved, documented standard instead of being changed ad hoc over time.
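Baseline compliance checking is, at its core, a dictionary diff between the approved template and each device's running settings. Here is a minimal Python sketch; the setting names and values are invented for illustration, not real device syntax:

```python
# Approved hardened template (illustrative settings).
BASELINE = {
    "service telnet":     "disabled",
    "logging host":       "10.0.0.50",
    "login max-attempts": "3",
}

def find_drift(running_config):
    """Report every setting that deviates from the baseline."""
    return {key: {"expected": want, "actual": running_config.get(key)}
            for key, want in BASELINE.items()
            if running_config.get(key) != want}

# A device that has drifted: someone re-enabled Telnet.
device = {"service telnet": "enabled",
          "logging host": "10.0.0.50",
          "login max-attempts": "3"}
```

The report contains only the deviating setting, which is the "detect drift and report deviations" behavior the scenario describes.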
Topic: Networking Fundamentals
A small company is wiring four office buildings on a campus. Management is most concerned about keeping all buildings connected to the network core even if a single cable or switch fails. They are willing to pay for extra links to improve resiliency. Which physical topology would BEST meet this requirement?
Options:
A. Ring topology
B. Mesh topology
C. Bus topology
D. Star topology
Best answer: B
Explanation: The key requirement in this scenario is resiliency to a single cable or switch failure while keeping all office buildings connected to the network core. Physical topology choices differ mainly in how many alternate paths exist if something breaks.
A mesh topology links devices with multiple physical paths. If one link or node fails, traffic can reroute over another path, significantly improving fault tolerance. This makes it the best choice when the organization is willing to invest in extra cabling and ports to avoid outages.
Other common topologies like star, bus, and ring all introduce one or more single points of failure that can disconnect part or all of the network when a single component fails, so they do not meet the stated resiliency requirement as well as a mesh topology does.
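The cost of that extra resiliency is quantifiable: a full mesh of n sites needs n(n-1)/2 point-to-point links, versus one uplink per site in a star. A quick Python check for the four-building campus:

```python
def full_mesh_links(n):
    """Point-to-point links needed to fully mesh n sites: n(n-1)/2."""
    return n * (n - 1) // 2

def star_links(n):
    """A star needs one uplink per site to the central hub."""
    return n
```

Four buildings need six links in a full mesh versus four in a star, which is the "extra links" trade-off management has agreed to pay for.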
Topic: Network Implementations
Which statement BEST describes the primary purpose of using Virtual LANs (VLANs) on a switch?
Options:
A. To automatically assign IP addresses to hosts as they join the network
B. To increase the physical bandwidth of a link by bonding multiple cables together
C. To create separate logical broadcast domains on the same physical switch, improving segmentation and traffic isolation
D. To encrypt user traffic end-to-end across the public Internet between remote sites
Best answer: C
Explanation: A VLAN (Virtual LAN) allows a network administrator to logically divide a single Layer 2 switching infrastructure into multiple, separate broadcast domains. Devices in one VLAN do not receive broadcast traffic from another VLAN unless traffic is specifically routed between them. This segmentation helps contain broadcasts, reduce unnecessary traffic, and improve security by isolating groups such as servers, guest users, and sensitive departments from one another.
Because VLANs provide logical separation on the same physical switches and cabling, organizations can implement security and segmentation policies without needing physically separate switches for every group. Routing or Layer 3 interfaces are then used to control and filter any inter-VLAN communication, typically with access control lists or firewall policies.
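The broadcast-domain behavior can be sketched in a few lines of Python: a broadcast floods only to ports in the sender's VLAN. Port names and VLAN IDs are illustrative:

```python
def broadcast_reach(ports, sender):
    """Return the ports (other than the sender) that receive a broadcast,
    i.e. every other port assigned to the sender's VLAN."""
    vlan = ports[sender]
    return sorted(p for p, v in ports.items() if v == vlan and p != sender)

# One physical switch, two logical broadcast domains.
ports = {"Gi0/1": 10, "Gi0/2": 10, "Gi0/3": 20, "Gi0/4": 20}
```

A broadcast arriving on Gi0/1 is flooded only to Gi0/2; the VLAN 20 ports never see it unless a router or Layer 3 interface forwards traffic between the VLANs.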
Topic: Network Implementations
Which of the following statements about guest Wi‑Fi networks and captive portals is NOT correct?
Options:
A. Guest SSIDs are commonly restricted to internet-only access, with firewalls or ACLs preventing direct access to internal servers and services.
B. Captive portals by themselves provide strong end-to-end encryption for all guest traffic, so an open guest SSID without WPA2/WPA3 is just as secure as an encrypted WLAN.
C. Guest Wi‑Fi networks are typically placed in a separate VLAN or subnet to isolate visitors from internal resources.
D. Captive portals often display an acceptable-use or terms-of-service page and may require basic registration before granting internet access.
Best answer: B
Explanation: Guest Wi‑Fi networks are commonly used to give visitors internet access without exposing the organization’s internal network. The usual design places guest devices on a dedicated VLAN or subnet and uses firewall rules to allow only outbound internet traffic, blocking access to internal servers and management networks.
A captive portal sits on top of this guest network and manages the user onboarding flow. After a user connects to the guest SSID and attempts to browse, they are redirected to a web page where they may see acceptable-use terms, provide basic information, or authenticate. Once they complete this step, the portal allows their MAC or IP address to reach the internet.
However, a captive portal does not provide Wi‑Fi encryption. Over-the-air protection is provided by wireless security modes such as WPA2 or WPA3 using AES. If the SSID is open (no WPA2/WPA3), guest traffic can be captured by anyone within radio range, regardless of whether a captive portal is used. The portal mainly controls access and logging, not link-layer confidentiality.
Topic: Network Operations
A small office has a single 1Gbps switch and a single edge router connecting to a 100Mbps internet circuit. All traffic, including VoIP handsets and web browsing, shares this link, and there is no QoS configured.
Users report that web pages generally load fine, but during busy periods voice calls sound choppy and words are cut off. A recent monitoring report for the WAN link shows:
| Metric | Observed value |
|---|---|
| Bandwidth utilization (busy periods) | 55–65% |
| Average latency | ~35ms (stable) |
| Packet loss | Low |
| Jitter (busy periods) | Spikes of 25–40ms |
Based on these metrics and the goal of improving VoIP quality without adding unnecessary complexity, which change would be the MOST appropriate?
Options:
A. Lower the codec bitrate on all IP phones to use less bandwidth per call
B. Upgrade the internet circuit from 100Mbps to 1Gbps to reduce bandwidth utilization below 10%
C. Configure QoS on the edge router to prioritize VoIP traffic and limit jitter-sensitive traffic delay
D. Deploy a second wireless access point to offload some client traffic from the existing WLAN
Best answer: C
Explanation: The scenario describes a small office with a shared 100Mbps WAN link. Web browsing is acceptable, but VoIP calls are choppy, with words being cut off. The monitoring report shows moderate bandwidth utilization (55–65%), stable latency (about 35ms), low packet loss, but jitter spikes up to 25–40ms during busy periods.
For real-time applications like VoIP, jitter (variation in packet arrival times) is often more harmful than average latency or even moderate bandwidth usage. When jitter is high, voice packets may arrive out of order or too late for the jitter buffer, causing choppy audio and gaps in speech, even if the link is not saturated and latency is within acceptable limits.
Because the utilization is moderate and the main problem is jitter under load, the best solution is to prioritize VoIP traffic using QoS on the edge router. QoS mechanisms like priority queuing or DSCP-based policies ensure that voice packets are transmitted ahead of less time-sensitive traffic when the link is congested, greatly reducing jitter and improving call quality without requiring major hardware changes or a much larger circuit.
Upgrading the circuit, adding access points, or lowering codec bitrates either fails to target the identified metric (jitter) or introduces unnecessary cost and quality trade-offs. Configuring QoS directly addresses the performance metric most closely tied to the user experience in this scenario.
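The effect of priority queuing can be seen in a small simulation: under strict priority, voice packets are always transmitted ahead of whatever else is waiting, regardless of arrival order. This Python sketch uses invented traffic classes, not real DSCP values:

```python
import heapq

# Assumed priority classes: lower number dequeues first.
PRIORITY = {"voice": 0, "web": 1, "bulk": 2}

def drain(queue_contents):
    """Strict-priority scheduler: always transmit the highest-priority
    packet waiting, using arrival order only as a tiebreaker."""
    heap = [(PRIORITY[cls], seq, cls) for seq, cls in enumerate(queue_contents)]
    heapq.heapify(heap)
    return [cls for _, _, cls in (heapq.heappop(heap) for _ in range(len(heap)))]
```

A voice packet that arrives behind a bulk transfer still leaves the queue first, which is how QoS keeps voice packet spacing consistent (low jitter) even when the link is congested.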
Topic: Networking Fundamentals
A company assigns all client devices addresses from the 192.168.0.0/16 range and uses a single public IPv4 address on its internet edge router. When many clients browse the web simultaneously, the router tracks each session using different TCP and UDP port numbers so replies go back to the correct internal host.
Which networking concept BEST describes this behavior?
Options:
A. Port address translation
B. Use of private IPv4 addressing
C. Static one-to-one network address translation
D. Assignment of public IPv4 addresses to each client
Best answer: A
Explanation: The scenario describes many internal hosts that all use private IPv4 addresses from 192.168.0.0/16 and share a single public IPv4 address on the edge router. The key detail is that the router keeps track of each connection using different TCP or UDP source port numbers, so that return traffic can be correctly matched to the original internal host.
That behavior is the definition of port address translation (PAT), sometimes called NAT overload. PAT extends basic NAT by translating not only IP addresses but also source port numbers. This allows dozens or hundreds of private internal hosts to access the internet simultaneously through a single public IPv4 address, which conserves public address space and hides internal addressing from the outside.
Private addressing alone just defines which ranges are not globally routable on the internet. Static NAT defines fixed one-to-one address mappings. Assigning each host a public address would remove the need for PAT entirely. Only PAT matches both the use of a single public address and the per-connection port tracking described in the question.
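The per-connection port tracking that defines PAT can be sketched as a translation table. This Python example is a simplification (no timeouts, no protocol field); the public IP and port range are illustrative:

```python
import itertools

PUBLIC_IP = "203.0.113.1"  # the router's single public address (illustrative)

class PatTable:
    """Minimal sketch of NAT overload: each internal (IP, port) flow gets
    a unique source port on the one shared public IP."""
    def __init__(self):
        self._next_port = itertools.count(40000)  # assumed ephemeral range
        self.outbound = {}   # (inside_ip, inside_port) -> public_port
        self.inbound = {}    # public_port -> (inside_ip, inside_port)

    def translate_out(self, inside_ip, inside_port):
        key = (inside_ip, inside_port)
        if key not in self.outbound:
            port = next(self._next_port)
            self.outbound[key] = port
            self.inbound[port] = key
        return (PUBLIC_IP, self.outbound[key])

    def translate_in(self, public_port):
        """Match a reply to the original internal host by its mapped port."""
        return self.inbound[public_port]
```

Two internal hosts can even use the same source port; the router disambiguates them with distinct translated ports, so replies always return to the correct host.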
Topic: Network Troubleshooting
You are troubleshooting why a Windows laptop cannot access internal file shares or the Internet after connecting to the corporate Wi‑Fi. The user reports they can only open a “remediation portal” web page. You review the NAC system and see the following session details:
| Field | Value |
|---|---|
| Username | jdoe |
| IP address | 10.20.90.27 |
| Auth result | Success (802.1X) |
| Posture status | Non-compliant |
| Non-compliance reason | Host firewall disabled |
| Assigned VLAN | 290-QUARANTINE |
| Access policy | Remediation portal only |
Based on this exhibit, which is the MOST likely cause of the user’s limited network access?
Options:
A. The DHCP server did not provide a default gateway, so the user can only reach local web pages on the same subnet.
B. The RADIUS authentication failed, so the user was dropped into an unauthenticated guest network with no external access.
C. The laptop failed the NAC posture check because the host firewall is disabled and has been placed in a quarantine VLAN.
D. The wireless access point is enforcing bandwidth limits due to a weak signal, preventing access to most resources.
Best answer: C
Explanation: The exhibit comes from a NAC (Network Access Control) system that is enforcing posture checks on devices connecting to the network. In this case, the session details show that the user authenticated successfully via 802.1X, but the posture check flagged the endpoint as non-compliant because the host firewall is disabled.
When a device fails the posture check, NAC commonly assigns it to a quarantine or remediation VLAN. The exhibit confirms this with “Assigned VLAN: 290-QUARANTINE” and an “Access policy: Remediation portal only.” In such a configuration, the user can usually reach just a remediation web portal and a few update servers, but not normal internal resources or the Internet.
Therefore, the most likely cause of the limited access is that NAC has placed the device into a quarantine VLAN due to the disabled host firewall. Restoring compliance (for example, re-enabling the firewall and re-running the posture check) would allow NAC to grant normal network access.
This aligns with Network+ troubleshooting for security-related issues, where NAC and posture checks can intentionally block or limit network connectivity when endpoints do not meet policy requirements.
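The posture-to-VLAN decision in the exhibit follows a simple rule: pass every required check or land in quarantine. This Python sketch mirrors that logic with invented check names; real NAC products evaluate many more attributes:

```python
# Illustrative posture rules a NAC policy might enforce.
REQUIRED = {"firewall_enabled": True, "av_signatures_current": True}

def assign_access(endpoint):
    """Return (vlan, policy, failed_checks) based on posture compliance."""
    failures = [k for k, want in REQUIRED.items() if endpoint.get(k) != want]
    if failures:
        return ("290-QUARANTINE", "Remediation portal only", failures)
    return ("100-CORP", "Full access", [])
```

An endpoint with its host firewall disabled is quarantined even though 802.1X authentication succeeded, matching the exhibit: authentication and posture are evaluated separately.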
Topic: Network Operations
A small IT team wants to start using simple scripts to reduce manual work on their campus network. They are concerned about avoiding outages or security incidents while they learn automation.
Which of the following actions/solutions will best address this issue or requirement? (Select TWO.)
Options:
A. Use a script to automatically push emergency firewall rule changes to the internet edge firewall as soon as a security ticket is created.
B. Use automation to dynamically rewrite QoS policies on WAN routers in real time whenever VoIP traffic spikes are detected.
C. Run a weekly script that uses SNMP to collect interface utilization and error counters from all network devices and generate an emailed summary report.
D. Schedule a nightly script to log in with read-only credentials and back up configurations from all switches and routers to a central server.
E. Deploy a script that automatically upgrades firmware on all core switches whenever a new version is published by the vendor.
F. Configure a nightly script that removes and recreates all VLANs on access switches to match a configuration template from the CMDB.
Correct answers: C and D
Explanation: For early automation in network operations, the best candidates are repetitive, predictable, and low-risk tasks, especially those that are read-only (data collection, reporting, backups). These tasks save time without directly changing live traffic flows or device behavior.
Backing up configurations and collecting performance statistics both meet these criteria. In contrast, tasks that automatically modify firewall rules, firmware versions, VLANs, or QoS policies can have immediate, wide impact on connectivity and security if something goes wrong, so they are not good starting points for automation.
In Network+ Domain 3 (Network Operations), understanding which tasks to automate supports better reliability and safer adoption of automation and scripting basics.
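A read-only reporting script of the kind option C describes reduces, in essence, to summarizing collected counters into a human-readable table. The Python sketch below uses stub data in place of real SNMP polling; device names and the 80% threshold are illustrative:

```python
def utilization_report(samples):
    """Summarize interface utilization samples (a read-only task) into
    the kind of table a weekly emailed report might contain."""
    lines = []
    for device, ifaces in sorted(samples.items()):
        for iface, pct in sorted(ifaces.items()):
            flag = "  <-- investigate" if pct >= 80 else ""
            lines.append(f"{device} {iface}: {pct}% utilized{flag}")
    return "\n".join(lines)

# Stub data standing in for SNMP-collected interface counters.
samples = {"sw1": {"Gi0/1": 23, "Gi0/24": 91}, "rtr1": {"Gi0/0": 40}}
```

Because the script only reads and reports, a bug in it cannot disrupt traffic, which is exactly why data collection and backups are the recommended starting points for network automation.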
Topic: Network Troubleshooting
A small office has a flat IPv4 LAN (192.168.10.0/24), a single edge router to the ISP, and one Windows server (192.168.10.10) that hosts the internal DNS zone corp.local. All clients get their IP settings from a DHCP scope configured on the router.
Users report they can browse the internet, but they cannot access internal resources by name (for example, fileserver.corp.local). From a user PC, the technician runs:
nslookup fileserver.corp.local
Server: 8.8.8.8
Address: 8.8.8.8#53
** server can't find fileserver.corp.local: NXDOMAIN
On the DNS server itself, the technician runs:
nslookup fileserver.corp.local
Server: 192.168.10.10
Address: 192.168.10.10#53
Name: fileserver.corp.local
Address: 192.168.10.50
The goal is to restore reliable internal name resolution for all clients while keeping the design simple and avoiding per-host manual configuration. Which change should the technician implement?
Options:
A. Modify the DHCP scope on the router to hand out 192.168.10.10 as the primary DNS server for all clients, and configure that DNS server to forward internet queries to public resolvers.
B. Manually set each client’s network adapter to use 192.168.10.10 as primary DNS and 8.8.8.8 as secondary DNS.
C. Deploy a logon script that updates the local hosts file on each workstation with entries for all internal servers.
D. Ask the ISP to host the corp.local zone on their public DNS servers and add A records for all internal hosts there.
Best answer: A
Explanation: The nslookup output shows that user PCs are sending DNS queries to a public resolver (8.8.8.8), which knows nothing about the internal corp.local zone, so it returns NXDOMAIN. On the internal DNS server itself, the same name resolves correctly because that server hosts the corp.local zone.
The core problem is not that DNS is broken globally, but that clients are using the wrong DNS server. They should query the internal DNS server for both internal names and, via forwarding, external names. The best way to achieve this in a small, flat network is to fix the DHCP configuration so that all clients automatically receive the correct DNS server address.
By having the router’s DHCP scope hand out the internal DNS server (192.168.10.10) as the DNS server, clients can resolve internal hostnames. If the internal DNS server is configured with forwarders or root hints, it can also resolve internet domains, so there is no need for clients to contact public DNS directly. This centralizes configuration and avoids per-host manual changes, matching the optimization goal.
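The resolution paths before and after the fix can be modeled in a few lines of Python. The stand-in resolvers and addresses below are illustrative, not real DNS behavior of any particular product:

```python
# The zone hosted on the internal DNS server (from the scenario).
INTERNAL_ZONE = {"fileserver.corp.local": "192.168.10.50"}

def public_resolver(name):
    """Stand-in for 8.8.8.8: it knows nothing about corp.local."""
    if name.endswith(".corp.local"):
        return None  # NXDOMAIN, as seen in the nslookup output
    return "198.51.100.80"  # illustrative public answer

def internal_dns(name):
    """Answer from the local zone first; otherwise forward upstream."""
    return INTERNAL_ZONE.get(name) or public_resolver(name)
```

Pointing clients at the internal server (via the DHCP scope) resolves both internal and external names, while clients pointed at the public resolver get NXDOMAIN for internal hosts, reproducing the reported symptom.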
Topic: Network Implementations
Which of the following statements about Wi‑Fi channel planning is NOT correct?
Options:
A. Co‑channel interference occurs when multiple nearby APs use the same channel and must share airtime, reducing effective throughput.
B. Channel reuse means assigning the same non‑overlapping channel to APs that are far enough apart that their coverage areas barely overlap.
C. Using overlapping 2.4GHz channels, such as channels 1 and 4 in the same area, improves total throughput by allowing more simultaneous transmissions.
D. In the 2.4GHz band, channels 1, 6, and 11 are commonly used because they do not overlap with each other in most regions.
Best answer: C
Explanation: Wi‑Fi channel planning aims to minimize interference and maximize usable capacity. In the 2.4GHz band, only a few channels are considered non‑overlapping in most regions: channels 1, 6, and 11. Using just these three in a pattern helps avoid adjacent‑channel interference, which happens when overlapping channels (for example, 1 and 4) are used close together.
Co‑channel interference occurs when nearby APs use the same channel. Their signals do not overlap in frequency, but they must share airtime because Wi‑Fi uses a contention‑based access method. Proper channel reuse places APs that share a channel far enough apart that their coverage areas barely overlap, reducing the amount of contention while still letting you reuse limited spectrum.
Overlapping channels, by contrast, create adjacent‑channel interference, where signals partially overlap in frequency and interfere directly. This usually causes more retransmissions and lower throughput, so overlapping channels like 1 and 4 in the same physical area are avoided in good channel plans.
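The 1/6/11 rule follows from simple arithmetic: 2.4GHz channel centers sit 5MHz apart (2412MHz for channel 1), but each channel is roughly 22MHz wide, so two channels overlap unless their centers are at least five channel numbers apart. A quick Python check:

```python
def center_mhz(channel):
    """Center frequency of a 2.4GHz channel (valid for channels 1-13)."""
    return 2407 + 5 * channel

def channels_overlap(a, b):
    """Two ~22MHz-wide channels overlap when their centers are closer
    together than the channel width."""
    return abs(center_mhz(a) - center_mhz(b)) < 22
```

Channels 1 and 4 are only 15MHz apart and therefore overlap, while 1, 6, and 11 are each 25MHz apart and do not, which is why good 2.4GHz channel plans reuse only those three.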
Topic: Network Implementations
A network technician is preparing a small office telecom closet for a safety inspection. Several Cat6 runs pass through the return-air space above the suspended ceiling, and the existing cable jackets are marked as “CMR”. Inside the closet, patch cables between the switches and patch panels are bent at sharp 90-degree angles and hang loosely with no cable managers. The company wants to improve fire safety and long-term cable reliability without replacing any active network equipment. Which of the following actions/solutions will best address this issue or requirement? (Select TWO.)
Options:
A. Replace the non-plenum Cat6 runs in the return-air ceiling space with plenum-rated (CMP) cable
B. Coil excess patch cable length on top of the switches so that no cabling touches the floor
C. Bundle all horizontal and patch cables tightly with nylon zip ties to keep them straight and compact
D. Install horizontal and vertical cable managers and use hook-and-loop (Velcro) straps to support patch cords with gentle bends
E. Replace the existing Cat6 with riser-rated (CMR) shielded cable but leave it routed through the return-air ceiling space
Correct answers: A and D
Explanation: This scenario highlights two main issues: fire safety in an air-handling (plenum) space and physical cable handling that can impact reliability.
In return-air spaces, such as the area above a suspended ceiling used for HVAC air return, building and safety codes typically require plenum-rated (CMP) cabling. Plenum jackets are designed to produce less smoke and fewer toxic fumes in a fire compared to riser or general-purpose cable. Using riser-rated (CMR) cable in these spaces is not acceptable.
Inside the closet, sharp 90-degree bends and unsupported patch cords violate good bend-radius and cable-management practices. Twisted-pair cables should be supported and routed so that bends are gradual, not kinks, generally with a minimum bend radius of several times the cable diameter. Proper cable management using dedicated managers and hook-and-loop straps reduces strain, preserves cable geometry, and improves reliability.
Therefore, the best responses are to replace the non-plenum cable in the ceiling with plenum-rated cable and to add proper cable management that maintains bend radius using non-crushing fasteners like Velcro straps.
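As a rough illustration of the bend-radius guideline, here is a sketch assuming the common 4× outside-diameter rule of thumb and an illustrative Cat6 jacket diameter of about 6 mm; both values are assumptions, not specifications from any particular standard or vendor.

```python
# Minimum-bend-radius rule of thumb for UTP (a sketch): many
# guidelines use 4x the cable's outside diameter (assumed here).
BEND_MULTIPLIER = 4

def min_bend_radius_mm(outside_diameter_mm: float) -> float:
    return BEND_MULTIPLIER * outside_diameter_mm

# Illustrative Cat6 jacket diameter of ~6 mm (assumed value):
print(min_bend_radius_mm(6.0))  # 24.0 mm -- a gentle curve, not a sharp 90-degree kink
```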
Topic: Networking Fundamentals
A company is migrating to a cloud-managed SD-WAN solution to simplify administration across 30 branch offices. The engineer wants to follow SDN principles such as centralized control and policy-based routing. Which of the following design choices would be the WRONG approach in this context?
Options:
A. Apply standardized configuration templates from the cloud controller so new branch routers automatically receive consistent VLAN, routing, and firewall policies.
B. Use the SD-WAN dashboard’s REST APIs to push updated security policies to all edge devices from a single automation script.
C. Use the cloud controller to define application-aware policies that send real-time voice over a low-latency link and bulk traffic over a cheaper internet link.
D. Continue to log in to each branch router individually over SSH to configure unique static routes and QoS rules by hand.
Best answer: D
Explanation: Software-defined networking (SDN), SD-WAN, and cloud-managed networking all emphasize centralized control, abstraction, and policy-based management. Instead of logging into each router or switch individually, administrators use a controller or cloud dashboard to define high-level intent (such as which apps are high priority) and push that policy out to many devices.
Continuing to configure each branch router separately with its own static routes and QoS rules ignores these principles. It keeps the control logic distributed on each device and relies on manual, error-prone changes. In contrast, using a central controller, templates, and APIs is exactly how SDN/SD-WAN are meant to be used: the controller abstracts the underlying hardware and applies consistent policies everywhere.
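The template-driven approach can be sketched in a few lines. Everything here is hypothetical: the field names and the per-branch addressing scheme are invented for illustration and do not reflect any real SD-WAN controller API.

```python
# Sketch of controller-style templating: one shared policy template
# rendered for every branch, instead of hand-typing config per router.
# All names and fields are illustrative, not a real SD-WAN API.
BASE_TEMPLATE = {
    "vlans": [10, 20, 30],
    "qos": {"voice": "priority", "bulk": "best-effort"},
    "firewall": ["deny inbound any", "permit established"],
}

def render_branch_config(branch_id: int) -> dict:
    """Merge the shared template with per-branch values (here, a subnet)."""
    config = dict(BASE_TEMPLATE)
    config["site"] = f"branch-{branch_id:02d}"
    config["lan_subnet"] = f"10.{branch_id}.0.0/24"  # assumed addressing plan
    return config

configs = [render_branch_config(i) for i in range(1, 31)]  # 30 branches
print(len(configs))               # 30
print(configs[0]["lan_subnet"])   # 10.1.0.0/24
```

A change to `BASE_TEMPLATE` propagates to every rendered config, which is the consistency benefit centralized control provides over per-device SSH sessions.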
Topic: Network Operations
A company is creating a business continuity strategy for its customer-facing web application. Which of the following actions or assumptions should the network team AVOID because they misunderstand high availability, fault tolerance, backup, or disaster recovery? (Select TWO.)
Options:
A. Store backup copies both onsite and in a cloud storage region and test restoring from those backups every quarter.
B. Assume that using RAID 5 on the primary file server means a separate offsite backup system is no longer necessary.
C. Rely only on a nightly full backup of the database servers and skip any clustering or failover configuration for them.
D. Deploy two application servers behind a load balancer in different racks and on separate power circuits to keep the service available if one fails.
E. Document step-by-step procedures to bring critical applications online at a secondary site if the primary data center becomes unavailable, and rehearse this process annually.
Correct answers: B and C
Explanation: Business continuity and disaster recovery planning uses several related but distinct concepts.
High availability focuses on keeping services online through redundancy and quick failover. For example, two application servers behind a load balancer on separate power circuits provide high availability: if one fails, the other continues serving users with minimal downtime.
Fault tolerance aims for continuous operation even when a component fails, often by using redundant hardware that can fail without interrupting service. RAID 5 is an example at the storage level: the array can survive a single disk failure with no immediate outage. However, RAID protects only against disk failure; it does nothing for accidental deletion, corruption, ransomware, or loss of the entire server, which is why it cannot replace a separate offsite backup.
Backups are point-in-time copies of data stored separately from production systems. They are used to restore data after corruption, deletion, or a disaster. Good backup practice includes keeping copies offsite (for example, in cloud storage) and regularly testing restores so you know recovery will work when needed.
Disaster recovery is the set of processes, runbooks, and alternate resources used to restore business services after a major event such as a site loss. This often involves a secondary site or cloud environment and documented, tested procedures to bring critical applications back online.
The actions that should be avoided in this scenario are the ones that confuse these concepts—treating backups as a substitute for high availability or treating RAID as a substitute for backups—because these misunderstandings leave the organization exposed to downtime or data loss.
Topic: Network Security
A company is tightening security around its on-premises finance database server, which stores payroll and tax records. The network already has:
The security team wants to maintain strong segmentation, enforce least-privilege access to the database, and improve monitoring of sensitive traffic.
Which of the following actions should you AVOID? (Select TWO.)
Options:
A. Configure a SPAN/mirror port on the finance VLAN to send a copy of database traffic to an IDS for inspection and alerting.
B. Place the finance database server on its own finance VLAN and apply ACLs so that only the application servers can reach it on the required database ports.
C. Create a firewall rule that allows any internal host to connect to the finance database server over all TCP and UDP ports to simplify application access.
D. Limit outbound internet access from the finance VLAN to only approved patch and update repositories via a secure web proxy.
E. Require DBAs to SSH to the finance database server only from a hardened jump host in an admin subnet and enforce MFA for those SSH logins.
F. Bridge the guest Wi-Fi VLAN directly to the finance VLAN at Layer 2 so vendor laptops can connect to the database without going through the firewall.
Correct answers: C and F
Explanation: Protecting sensitive resources such as finance databases relies on strong segmentation, strict access control, and effective monitoring.
Segmentation is typically implemented with separate VLANs and firewalls or ACLs that restrict which systems and ports can access critical servers. Least-privilege rules should allow only the specific application servers and administrators that truly need access, and only on required ports. Guest or untrusted networks should never be bridged directly into sensitive segments.
Monitoring complements segmentation and access control. Sending a copy of finance VLAN traffic to an IDS or similar tool allows you to detect suspicious patterns without weakening the isolation of that VLAN. Outbound internet access from sensitive systems should also be tightly controlled, usually via proxies and allowlists.
In this scenario, the actions that should be avoided are those that remove or bypass segmentation (such as bridging guest Wi-Fi into the finance VLAN) or that grant overly broad access (such as an any-any rule to the database). The other actions reinforce segmentation, least privilege, and monitoring, which are aligned with best practice for protecting financial data.
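The least-privilege rule described above can be modeled as a simple policy check. The subnet and database port below are illustrative assumptions, not values from the scenario.

```python
# Toy least-privilege check mirroring the VLAN/ACL design above.
# The app-server subnet and DB port (1433) are assumed for illustration.
from ipaddress import ip_address, ip_network

APP_SERVERS = ip_network("10.20.30.0/28")   # assumed app-server subnet
DB_PORTS = {1433}                           # assumed database port

def allowed(src_ip: str, dst_port: int) -> bool:
    """Permit only app servers on required DB ports; implicitly deny all else."""
    return ip_address(src_ip) in APP_SERVERS and dst_port in DB_PORTS

print(allowed("10.20.30.5", 1433))   # True: app server, required port
print(allowed("10.99.1.7", 1433))    # False: not an app server
print(allowed("10.20.30.5", 445))    # False: not a required port
```

An "any internal host, any port" rule would be the equivalent of `return True`, which is exactly what the scenario says to avoid.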
Topic: Network Operations
A company’s e‑commerce site processes customer payments and stores personal data from both U.S. and EU customers. The network team is updating procedures to align with data protection and industry compliance requirements (for example, GDPR-style privacy laws and PCI DSS for card data).
Which of the following network logging practices is INCORRECT and would most likely violate these compliance requirements?
Options:
A. Restricting administrative access to the payment database to a dedicated management VLAN reachable only over a VPN with MFA
B. Sending security and access logs containing minimal personal data to a centralized log server with role-based access control and a defined retention period
C. Configuring application and network devices to capture full, unmasked credit card numbers and user passwords in plain-text debug logs for later analysis
D. Enabling HTTPS/TLS on all web applications that handle personal or payment data and logging only session IDs for troubleshooting
Best answer: C
Explanation: Compliance frameworks and data protection regulations generally require organizations to minimize the collection and storage of sensitive data, protect it with strong security controls, and limit who can access it. Network logs are often in scope because they may contain user identifiers, IP addresses, and sometimes application-level details.
Capturing full credit card numbers and user passwords in plain-text debug logs is a clear violation of these principles. It creates extra copies of highly sensitive data, often outside tightly controlled payment systems, and exposes it to anyone with log access. This directly conflicts with the data minimization and secure handling expectations in GDPR-style privacy laws and industry standards such as PCI DSS, which require protecting cardholder data and prohibit storing sensitive authentication data, such as full track contents, card verification values, or passwords, in readable form.
In contrast, using TLS for data in transit, restricting administrative access via secure network segments and VPN with MFA, and centralizing logs with role-based access and defined retention are all examples of practices that support compliance rather than undermine it.
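As a sketch of data minimization in logging, a pipeline might mask card numbers before anything is written, keeping only the last four digits. The regex below handles only plain 16-digit PANs and is not production-grade.

```python
import re

# Mask 16-digit card numbers before a log line is written, keeping
# only the last four digits (a common data-minimization approach).
PAN_RE = re.compile(r"\b\d{12}(\d{4})\b")

def sanitize(line: str) -> str:
    return PAN_RE.sub(r"************\1", line)

print(sanitize("payment ok card=4111111111111111 user=alice"))
# -> payment ok card=************1111 user=alice
```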
Topic: Networking Fundamentals
Which statement BEST describes multicast traffic in an IPv4 network?
Options:
A. It is traffic sent from one source to a specific single destination host address, such as typical web browsing.
B. It is traffic sent from one source to all hosts in the local broadcast domain, such as a DHCP discovery message.
C. It is traffic automatically replicated by switches to all ports on the switch, regardless of destination address, to ensure delivery.
D. It is traffic sent from one source to a defined group of interested receivers using a special group address, often used for streaming media.
Best answer: D
Explanation: Multicast traffic is designed for efficient one-to-many communication: a single sender transmits to a multicast group address, and only hosts that have joined that group receive the traffic. This is commonly used for applications like streaming media or real-time data feeds, where the same content must be delivered to multiple receivers without sending a separate unicast stream to each one.
By contrast, unicast traffic is one-to-one, such as a user accessing a web server over HTTP/HTTPS. Broadcast traffic is one-to-all in the local subnet, such as a DHCP DISCOVER message sent to find any available DHCP server. Multicast sits between these: one-to-many, but only to interested receivers that join the multicast group, reducing unnecessary bandwidth usage compared to broadcast or multiple unicasts.
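Multicast group addresses occupy a dedicated IPv4 range (224.0.0.0/4), which Python's standard library can classify directly:

```python
from ipaddress import ip_address

# 224.0.0.0/4 is the IPv4 multicast range; anything inside it is a
# group address, not a host address.
for addr in ["239.1.1.10", "224.0.0.251", "192.0.2.15"]:
    print(addr, ip_address(addr).is_multicast)
# 239.1.1.10 True
# 224.0.0.251 True
# 192.0.2.15 False
```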
Topic: Networking Fundamentals
A technician is explaining how a user’s HTTP request from a PC is prepared for transmission to a web server. The technician says that as the data moves down the OSI layers, each layer adds its own header (and sometimes a trailer), creating a series of nested protocol data units before the bits are sent over the wire. Which networking concept is being described?
Options:
A. Multiplexing
B. Encapsulation
C. Segmentation
D. Decapsulation
Best answer: B
Explanation: When a PC sends data to a web server, the application data (for example, an HTTP request) is passed down through the OSI or TCP/IP layers. At each layer, the data is wrapped with that layer’s control information in the form of headers and sometimes trailers. For example, the transport layer adds TCP headers (ports, sequence numbers), the network layer adds IP headers (addresses), and the data link layer adds MAC addresses and a frame check sequence.
This step-by-step wrapping creates nested protocol data units (PDUs): data, segment, packet, frame, and finally bits on the wire. This process of progressively adding headers/trailers as data moves from the application layer down to the physical layer is called encapsulation.
At the receiving end (the web server), the reverse occurs. As the bits are received and move up the stack, each layer reads and removes its own header/trailer, a process called decapsulation. Segmentation and multiplexing are related but distinct concepts and do not describe the per-layer wrapping process in the question.
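The nesting can be illustrated with a toy encapsulation/decapsulation pair; the header fields below are simplified stand-ins, not real packet formats.

```python
# Toy encapsulation: each layer wraps the payload from the layer above
# with its own header. Field names are illustrative only.
def encapsulate(data: str) -> dict:
    segment = {"layer": "TCP", "dst_port": 80, "payload": data}              # transport
    packet = {"layer": "IP", "dst_ip": "203.0.113.10", "payload": segment}   # network
    frame = {"layer": "Ethernet", "dst_mac": "aa:bb:cc:dd:ee:ff", "payload": packet}  # data link
    return frame

def decapsulate(frame: dict) -> str:
    # The receiver strips one header per layer, in reverse order.
    return frame["payload"]["payload"]["payload"]

frame = encapsulate("GET / HTTP/1.1")
print(decapsulate(frame))  # GET / HTTP/1.1
```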
Topic: Network Implementations
Which TWO of the following statements about selecting switches, routers, and firewalls are INCORRECT? (Select TWO.)
Options:
A. Access-layer switches in wiring closets should have enough copper ports at appropriate speeds (for example, 1/2.5/5GbE) to support current and near-future endpoint counts.
B. For branch offices that need inter-VLAN routing and secure Internet access, selecting a router or firewall with sufficient routing performance and VPN support is appropriate.
C. To maximize performance on an Internet-facing firewall, it is best practice to disable inspection features and configure an allow-any rule so traffic is not slowed by security checks.
D. When choosing aggregation or core switches, support for link aggregation (LACP) and redundant uplinks is an important factor for resiliency and performance.
E. Internet edge firewalls should be evaluated for features like stateful inspection and application-aware filtering, sized to handle expected concurrent sessions and traffic volume.
F. For enterprise access layers, unmanaged switches are preferred because they reduce complexity and eliminate the need to configure VLANs or management features.
Correct answers: C and F
Explanation: Hardware selection for switches, routers, and firewalls should balance port counts, speed, PoE needs, routing capacity, and security features with the environment’s requirements. In an enterprise or branch, managed switches are needed for VLANs, QoS, and security controls, while firewalls must provide adequate inspection and policy enforcement without sacrificing availability.
The false statements suggest using unmanaged switches in an enterprise access layer and disabling inspection on edge firewalls to improve performance. Both ideas directly conflict with modern best practices for manageability and security, even if they might seem to simplify configuration or reduce CPU load at first glance.
The accurate statements emphasize realistic selection criteria: sufficient interface counts and speeds on access switches, LACP and redundant uplinks at aggregation, routing and VPN capabilities for branch devices, and appropriate firewall inspection features sized for expected traffic and concurrent sessions.
Topic: Network Troubleshooting
A network technician recently added four new Wi‑Fi 6 access points to a 24‑port PoE+ access switch. After they were connected, several existing IP cameras and phones began randomly rebooting. Switch logs show repeated “PoE power limit reached” messages and some ports being denied power. Which of the following actions/solutions will best address this issue or requirement? (Select TWO.)
Options:
A. Upgrade the access switch’s uplink from 1GbE to 10GbE to increase available power to the PoE ports.
B. Enable LLDP-MED on all switch ports so connected devices can negotiate the exact amount of power they require.
C. Force all PoE access ports to 100Mb/s instead of 1Gb/s so each powered device uses less electrical power.
D. Use individual PoE injectors or local power adapters for some high-draw devices instead of powering them from the PoE switch.
E. Install an additional PoE-capable switch and move some APs and cameras so each switch’s PoE load stays within its power budget.
Correct answers: D and E
Explanation: The symptoms and log messages indicate that the switch’s total Power over Ethernet (PoE) budget is being exceeded after adding several new high-draw Wi‑Fi 6 access points. When the combined power requested by all powered devices goes beyond the switch’s PoE budget, the switch must deny power to some ports or repeatedly drop and reapply power, causing devices like phones and cameras to reboot.
To fix this, the technician must either increase the total PoE power available to those devices or reduce how much power is drawn from this particular switch. Adding another PoE switch and redistributing endpoints, or powering some devices with external injectors or local power bricks, both effectively lower the PoE load on the overloaded switch so it operates within its budget. Changing link speeds, upgrading uplink bandwidth, or enabling LLDP-MED does not change the hard PoE budget of the switch, so those actions will not resolve the underlying power issue.
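The budget math can be sketched directly. All wattages and device counts below are illustrative assumptions (for example, 25.5 W per PoE+ AP), not values from the scenario.

```python
# PoE-budget sketch: compare the switch's total budget to the sum of
# per-device draws. All wattages here are illustrative assumptions.
POE_BUDGET_W = 370  # assumed budget for a 24-port PoE+ switch

devices = (
    [25.5] * 4 +   # four new Wi-Fi 6 APs at full PoE+ draw (assumed)
    [13.0] * 12 +  # IP cameras (assumed)
    [15.4] * 8     # IP phones at PoE class 3 maximum (assumed)
)
demand = sum(devices)
print(len(devices))              # 24 powered ports
print(demand > POE_BUDGET_W)     # True: demand exceeds the budget,
                                 # so the switch denies power to some ports
```

Once total demand exceeds the budget, only lowering the demand (injectors, local power, or a second PoE switch) resolves the reboots; link speed and uplink bandwidth do not enter this equation.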
Topic: Networking Fundamentals
A network technician is investigating complaints that employees must manually reconnect to Wi‑Fi when walking between different areas of the office. The technician runs a wireless survey and captures the following summary.
Exhibit:
| AP | Location | SSID | Channel | Security |
|---|---|---|---|---|
| AP1 | Lobby | CorpNet | 36 | WPA2-Enterprise |
| AP2 | West Wing | CorpNet-2 | 44 | WPA2-Enterprise |
| AP3 | East Wing | CorpNet-East | 1 | WPA2-Enterprise |
Based only on the information in the exhibit, which change would BEST enable seamless roaming for corporate users throughout the office?
Options:
A. Disable SSID broadcast on AP2 and AP3 to hide the network.
B. Set all three APs to use the same RF channel.
C. Configure all three APs to broadcast the same SSID and security settings for the corporate WLAN.
D. Reduce transmit power on AP1 so clients associate with closer APs.
Best answer: C
Explanation: In the exhibit, there are three APs, each advertising a different SSID: CorpNet, CorpNet-2, and CorpNet-East. Even though they all use WPA2-Enterprise, clients see these as three separate wireless networks.
A Basic Service Set (BSS) is defined by a single AP and its associated clients, identified by a unique BSSID (the AP’s MAC address) and an SSID. An Extended Service Set (ESS) is a group of multiple APs that share the same SSID and security settings and are typically bridged onto the same LAN. Within an ESS, clients can roam from one BSS to another while staying on the same logical WLAN, usually without manual reconnection.
Because the APs in the exhibit use different SSIDs, clients must disconnect from one SSID and connect to another when moving between areas, which matches the user complaints. The key change is to configure all corporate APs to use the same SSID and matching security settings so that they form a single ESS for roaming.
Topic: Network Security
Which TWO of the following statements about AAA (authentication, authorization, and accounting) are INCORRECT? (Select TWO.)
Options:
A. Authorization defines what an authenticated user is allowed to do, such as which VLANs, files, or device commands they can access.
B. In AAA, authorization must be completed successfully before any authentication can take place.
C. Accounting primarily occurs before login and is used to decide which permissions a user should receive.
D. Authentication answers “Who are you?” by validating credentials such as a username and password or a certificate before granting network access.
E. RADIUS or TACACS+ servers can provide centralized AAA so that network devices do not need to store individual user accounts locally.
Correct answers: B and C
Explanation: AAA stands for authentication, authorization, and accounting, three related but distinct functions that control and track network access. Authentication always comes first, verifying identity; authorization then determines what the authenticated user may do; and accounting records session activity (such as login time, commands issued, and data usage) during and after access, rather than granting permissions.
In many enterprise networks, devices such as switches, routers, and VPN concentrators use centralized RADIUS or TACACS+ servers to provide AAA. Those servers perform authentication and authorization decisions and record accounting logs, so credentials and policies are not duplicated on every device.
The incorrect statements in this question confuse the order of operations (authorization coming before authentication) and the role of accounting (treating it as a permission‑granting step instead of logging activity).
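The correct ordering can be sketched as a toy login flow, with accounting recording events rather than granting anything; the credentials and permission table are invented for illustration.

```python
# Toy AAA sequence: authenticate first, then authorize, with
# accounting logging activity. Credentials/permissions are made up.
def aaa_login(username: str, password: str, events: list) -> bool:
    if password != "s3cret":                                  # 1. authentication
        return False
    events.append("authenticated")
    permissions = {"alice": {"vlan10"}}.get(username, set())  # 2. authorization
    events.append(f"authorized:{sorted(permissions)}")
    events.append("accounting:session-start")                 # 3. accounting
    return True

log: list = []
print(aaa_login("alice", "s3cret", log))  # True
print(log[0])                             # authenticated
```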
Topic: Network Operations
Which statement BEST describes the purpose of an acceptable use policy (AUP) in an organization’s network environment?
Options:
A. It defines how users are allowed to use network and IT resources, including what activities are prohibited, to protect security and compliance.
B. It documents the steps for creating, modifying, and decommissioning user accounts during hiring, role changes, and terminations.
C. It specifies the technical requirements for password length, complexity, and rotation intervals for all user accounts.
D. It lists the change windows and approval process for updating network devices, software, and configurations.
Best answer: A
Explanation: An acceptable use policy (AUP) is a foundational network-related policy that tells users what they may and may not do when using organizational IT and network resources. It typically covers items such as using corporate Wi‑Fi only for business purposes, prohibiting installation of unauthorized software, blocking peer‑to‑peer file sharing, and forbidding access to inappropriate websites. By defining acceptable behavior, the AUP reduces the risk of malware infections, data leakage, and policy violations, directly supporting network security and compliance.
Other policies—like password policies, onboarding/offboarding procedures, and change management policies—are also important but serve different purposes. Password policies govern credential strength, onboarding/offboarding define how accounts are created and removed, and change management controls how network changes are introduced. None of these alone defines the full scope of allowed and prohibited user activities on the network, which is the role of the AUP.
Topic: Networking Fundamentals
Which of the following descriptions of common network services and their default ports is NOT correct?
Options:
A. HTTPS typically uses TCP port 443 to provide encrypted web browsing.
B. SMTP commonly uses TCP port 25 to transfer email between mail servers.
C. DNS typically uses UDP port 53 for standard hostname-to-IP resolution queries.
D. SSH typically uses TCP port 23 to provide secure remote login to devices.
Best answer: D
Explanation: The question focuses on recognizing the standard ports and purposes of common TCP/IP services, a key part of Network+ fundamentals.
HTTPS uses TCP port 443 by default for encrypted web traffic, DNS typically uses UDP port 53 for normal queries, and SMTP normally uses TCP port 25 for email transfer between mail servers. SSH, however, uses TCP port 22 by default, not port 23. Port 23 is assigned to Telnet, which is an older, unencrypted remote access protocol.
Knowing these well-known ports helps technicians quickly identify services in firewall rules, packet captures, and troubleshooting scenarios.
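The ports discussed above, collected as a quick lookup table:

```python
# Well-known ports from this question, as a simple reference table.
WELL_KNOWN = {
    "HTTPS": ("TCP", 443),
    "SMTP": ("TCP", 25),
    "DNS": ("UDP", 53),
    "SSH": ("TCP", 22),
    "Telnet": ("TCP", 23),  # the unencrypted protocol SSH replaced
}
print(WELL_KNOWN["SSH"])     # ('TCP', 22)
print(WELL_KNOWN["Telnet"])  # ('TCP', 23)
```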
Topic: Network Implementations
An organization connects its headquarters and six branch offices using provider-managed MPLS circuits. Finance reports that recurring WAN charges are about 40% over budget, but management insists that inter-site traffic must remain confidential when it traverses external networks. Each site already has business-grade internet access. Which change would BEST reduce ongoing WAN costs while still meeting the security requirement?
Options:
A. Replace the MPLS circuits with unmanaged consumer broadband links and route traffic over the internet without tunneling to avoid VPN overhead.
B. Downgrade the MPLS circuits to lower bandwidth tiers and rely on the provider’s private backbone instead of encryption.
C. Replace the MPLS circuits with site-to-site IPsec VPN tunnels over the existing business broadband links at each site.
D. Keep the MPLS circuits in place but move all internet access to a single breakout at headquarters to centralize security controls.
Best answer: C
Explanation: MPLS and other dedicated private WAN circuits typically provide predictable performance and logical separation, but they are usually more expensive than consumer or business broadband internet links. They also do not inherently encrypt customer data; confidentiality often depends on the provider’s internal controls and optional add-on services.
A common way to reduce WAN cost while maintaining security is to use site-to-site IPsec VPN tunnels over business-grade broadband. The inexpensive internet links provide the underlying connectivity, while IPsec supplies strong encryption and authentication so that traffic remains confidential even though it traverses the public internet.
In this scenario, the business wants to cut recurring circuit costs but has a clear requirement that inter-site traffic must remain confidential on external networks. Moving from MPLS to encrypted VPNs over existing broadband is the best single change that addresses both the cost and security requirements at once.
Topic: Network Security
Which TWO of the following statements about logging and audit trails for network change tracking and security investigations are INCORRECT and represent poor practice? (Select TWO.)
Options:
A. Log files should be protected from unauthorized access and tampering, with access granted only to authorized staff who need them for operations or investigations.
B. Synchronizing device clocks with a reliable NTP source makes it easier to align timestamps and correlate events during incident analysis.
C. To save storage space, organizations should log only failed login attempts and ignore successful logins and configuration changes.
D. Relying solely on local logs on each switch and router is sufficient; central log collection provides little value for investigations.
E. Storing device logs on a central syslog or SIEM server helps preserve evidence and makes it easier to correlate events across multiple systems.
Correct answers: C and D
Explanation: For effective change tracking and security investigations, organizations should collect detailed logs from network devices, store them centrally, protect their integrity, and keep them long enough to support audits and incident response. Centralized logging (for example, to a syslog or SIEM platform) allows analysts to correlate events across multiple devices and reduces the risk of losing evidence if a single device is compromised or fails.
Good audit trails typically include both successful and failed authentication attempts, configuration changes, administrative access, and security‑relevant events. Protecting log confidentiality and integrity, along with synchronizing timestamps using NTP, ensures investigators can reconstruct accurate timelines and trust that log data has not been tampered with. By contrast, logging only a narrow subset of events or relying solely on local logs undermines the usefulness of logs for compliance and incident response.
Topic: Networking Fundamentals
A network technician is planning several new IPv4 subnets inside a /24 network. Each subnet must have enough usable host addresses for its requirement, with room for all needed devices. Which of the following subnet plans should you AVOID because they do NOT provide enough usable host addresses? (Select TWO.)
Options:
A. Use a /28 subnet for a lab that needs up to 10 usable IP addresses.
B. Use a /27 subnet for a department that needs up to 50 usable IP addresses.
C. Use a /30 subnet for a camera network that needs up to 4 usable IP addresses.
D. Use a /29 subnet for a printer VLAN that needs up to 5 usable IP addresses.
E. Use a /26 subnet for a department that needs up to 40 usable IP addresses.
Correct answers: B and C
Explanation: Subnet sizing in IPv4 depends on the prefix length (CIDR notation). The number of usable host addresses in a subnet is given by \(2^{(32 - \text{prefix})} - 2\), where the 2 addresses subtracted are the network and broadcast addresses.
To decide whether a subnet plan is acceptable, compare the required number of usable IP addresses to the capacity of the proposed prefix. Any plan where the host requirement exceeds the subnet capacity should be avoided, because devices will not all be able to obtain unique addresses.
In this question, the plans to avoid are the /27 for a department that needs 50 addresses (a /27 provides only 30 usable hosts) and the /30 for a camera network that needs 4 (a /30 provides only 2 usable hosts). The remaining plans provide sufficient capacity, and in some cases a small growth margin: a /28 offers 14 usable addresses for the 10-host lab, a /29 offers 6 for the 5-printer VLAN, and a /26 offers 62 for the 40-host department.
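The capacity check for each plan can be verified with a few lines of Python:

```python
# Usable IPv4 hosts for a prefix: 2^(32 - prefix) - 2
# (minus the network and broadcast addresses).
def usable_hosts(prefix: int) -> int:
    return 2 ** (32 - prefix) - 2

# (prefix, hosts required) pairs from the question:
plans = [(28, 10), (27, 50), (30, 4), (29, 5), (26, 40)]
for prefix, needed in plans:
    verdict = "OK" if usable_hosts(prefix) >= needed else "AVOID"
    print(f"/{prefix}: {usable_hosts(prefix)} usable, need {needed} -> {verdict}")
# /28: 14 usable, need 10 -> OK
# /27: 30 usable, need 50 -> AVOID
# /30: 2 usable, need 4 -> AVOID
# /29: 6 usable, need 5 -> OK
# /26: 62 usable, need 40 -> OK
```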
Topic: Networking Fundamentals
Which TWO of the following IPv4 subnetting statements are INCORRECT? (Select TWO.)
Options:
A. Borrowing 3 bits from the host portion of a /24 network creates 8 smaller subnets.
B. An IPv4 /26 subnet provides 64 total addresses and 62 usable host addresses.
C. Using a /30 mask for point-to-point router links is common because it provides exactly 2 usable IP addresses.
D. An IPv4 /29 subnet supports up to 8 usable host addresses.
E. A /24 subnet mask is equivalent to 255.255.255.192.
F. An IPv4 /16 network contains 65,536 total IP addresses, including the network and broadcast addresses.
Correct answers: D and E
Explanation: Subnetting in IPv4 is based on the number of host bits available in the subnet mask or prefix length. The total number of addresses in a subnet is \(2^{\text{host bits}}\). Two of those addresses are typically reserved: one for the network ID and one for the broadcast address, leaving \(2^{\text{host bits}} - 2\) usable host addresses in most unicast subnets. Applying this here: a /29 leaves 3 host bits, so it supports \(2^3 - 2 = 6\) usable hosts, not 8; and a /24 corresponds to 255.255.255.0, while 255.255.255.192 is a /26 mask. Understanding how prefix lengths map to dotted-decimal masks and how many addresses each subnet provides is essential when planning networks and avoiding miscalculations that can lead to too few usable IPs or misconfigured devices.
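These mask and address-count claims are easy to verify with Python's `ipaddress` module:

```python
from ipaddress import ip_network

# Prefix-to-mask mapping and address counts, straight from the stdlib.
print(ip_network("0.0.0.0/24").netmask)            # 255.255.255.0
print(ip_network("0.0.0.0/26").netmask)            # 255.255.255.192
print(ip_network("0.0.0.0/29").num_addresses - 2)  # 6 usable hosts, not 8
print(ip_network("0.0.0.0/16").num_addresses)      # 65536
```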
Topic: Networking Fundamentals
Which of the following statements about virtual switches and virtual NICs in a virtualized environment is NOT correct?
Options:
A. Virtual NICs operate only at OSI Layer 4 because they manage TCP and UDP ports for virtual machines.
B. A virtual switch can connect multiple VMs on the same host so they can communicate without using an external physical switch.
C. A virtual NIC appears to the guest operating system as a normal network adapter that uses standard network drivers.
D. Virtual switches commonly support features such as VLAN tagging, similar to managed physical switches.
Best answer: A
Explanation: In a virtualized environment, hypervisors provide virtual networking components so virtual machines (VMs) can communicate just like physical hosts. Two key building blocks are virtual NICs (vNICs) and virtual switches.
A virtual NIC is presented to the guest operating system as if it were a physical network adapter. The guest OS installs drivers and sends/receives frames through this vNIC, which the hypervisor then maps to the appropriate virtual or physical network.
A virtual switch is a software-based Layer 2 switch running inside the hypervisor. It connects virtual NICs from multiple VMs and may also connect to physical NICs on the host. This allows VMs to communicate with each other and with external networks. Because virtual switches mimic many functions of physical managed switches, they typically support VLAN tagging and similar segmentation features.
NICs, whether physical or virtual, work mainly at Layer 2 of the OSI model, handling MAC addresses and Ethernet frames. TCP and UDP ports are part of Layer 4, which is implemented in the host or guest OS networking stack, not in the NIC hardware or its virtual equivalent. Therefore, any statement claiming a vNIC operates only at Layer 4 is incorrect.
Topic: Network Security
A company’s finance application servers currently share VLAN 10 with general office workstations. A recent malware infection on a sales workstation performed port scans and attempted connections to the finance servers. Management wants to reduce the attack surface against finance systems and increase visibility into suspicious connections, while still allowing authorized finance users to access the application and its database.
Which of the following actions/solutions will best address this issue or requirement? (Select TWO.)
Options:
A. Enable port security on all access switch ports in the office area, limiting each port to a single learned MAC address.
B. Deploy an IDS/IPS sensor on the path between user VLANs and the finance server VLAN to monitor and alert on suspicious connections targeting finance systems.
C. Configure WPA3-Personal on the corporate Wi-Fi network used by employees to access internal resources.
D. Move the finance application servers into a dedicated VLAN and apply Layer 3 ACLs so that only required ports from the finance user subnet and database subnet can reach them.
E. Relocate the finance application servers into the existing internet-facing DMZ that hosts public web servers.
Correct answers: B and D
Explanation: This scenario is about protecting sensitive finance servers by applying proper network segmentation, access control, and monitoring.
The key risks are lateral movement from compromised workstations that share VLAN 10 with the finance servers, and a lack of visibility into suspicious connections targeting those servers.
The best answers therefore combine segmentation and access control (separating finance servers and tightly restricting traffic) with monitoring (observing and alerting on potentially malicious traffic to those servers).
Option “Move the finance application servers into a dedicated VLAN and apply Layer 3 ACLs so that only required ports from the finance user subnet and database subnet can reach them” directly addresses segmentation and access control. A separate VLAN and subnet for finance servers limits which devices can even attempt to reach them. ACLs at the Layer 3 boundary further restrict communication to specific source subnets and required ports.
Option “Deploy an IDS/IPS sensor on the path between user VLANs and the finance server VLAN to monitor and alert on suspicious connections targeting finance systems” satisfies the monitoring requirement. It provides visibility (and possibly prevention) for scans, brute-force attempts, or anomalous access toward finance resources.
The other choices either focus on more generic hardening (port security, Wi-Fi encryption) or actively worsen exposure (putting finance servers into a public DMZ). While they may improve security in other areas, they do not best meet the stated goals of segmenting the finance servers from general users and monitoring access to those servers.
Summary of each option:
- A (port security): limits MAC-based attacks on access ports but does not segment or monitor traffic to the finance servers.
- B (IDS/IPS sensor): provides the required monitoring and alerting on traffic toward finance systems. Correct.
- C (WPA3-Personal): hardens wireless access but is unrelated to segmenting finance servers from wired office workstations.
- D (dedicated VLAN with Layer 3 ACLs): provides the required segmentation and access control. Correct.
- E (public DMZ): increases the finance servers' exposure to internet-facing threats and should be avoided.
Topic: Network Operations
A small e‑commerce company is creating a disaster recovery plan for its primary web application and database. Management states that during a disaster they can tolerate a maximum of 30 minutes of downtime and a maximum of 15 minutes of data loss.
Which of the following continuity strategies should the company AVOID because it does NOT meet these requirements?
Options:
A. Take nightly full backups to local disk with copies written to tape weekly, planning to rebuild servers and restore from the most recent backup during a disaster.
B. Maintain a warm standby environment in a public cloud, replicate databases every 5 minutes, and use scripted DNS failover that restores service within 20 minutes.
C. Use virtual machine replication to a secondary site with 10‑minute application snapshots and documented runbooks to bring services online at the secondary site within 25 minutes.
D. Use an active/passive cluster in the primary data center with synchronous database replication and automatic failover tested to recover service in under 5 minutes.
Best answer: A
Explanation: This scenario is about matching a disaster recovery strategy to specific business requirements: a maximum of 30 minutes of downtime (RTO) and a maximum of 15 minutes of data loss (RPO). Any acceptable option must keep both downtime and data loss within those thresholds.
Nightly full backups with weekly tape copies are a traditional backup approach but provide a poor recovery time and recovery point. If a disaster occurs late in the business day, the last backup is likely many hours old, violating the 15‑minute RPO. Rebuilding servers and restoring from backup, especially from tape, typically takes hours, which far exceeds the 30‑minute RTO.
In contrast, options that use clustering, warm standby environments, or VM replication with frequent snapshots are aligned with tighter RTO and RPO requirements because they keep a near‑current copy of data available and have a pre‑planned, scripted failover process to bring services back quickly.
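The evaluation in this explanation is a straightforward comparison of each strategy's worst-case recovery time and data loss against the stated RTO and RPO. A minimal sketch, with strategy names and the nightly-backup worst-case figures chosen for illustration:

```python
RTO_MAX_MIN = 30   # maximum tolerable downtime, in minutes
RPO_MAX_MIN = 15   # maximum tolerable data loss, in minutes

# name: (worst-case recovery time, worst-case data loss), both in minutes.
# The nightly-backup figures are illustrative: hours to restore, up to a
# day of lost transactions.
strategies = {
    "nightly backups + weekly tape":  (240, 24 * 60),
    "warm cloud standby, 5-min repl": (20, 5),
    "VM replication, 10-min snaps":   (25, 10),
    "active/passive cluster, sync":   (5, 0),
}

acceptable = {name for name, (rto, rpo) in strategies.items()
              if rto <= RTO_MAX_MIN and rpo <= RPO_MAX_MIN}
# Only the nightly-backup strategy fails both thresholds.
```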
Topic: Network Operations
Which TWO statements about network performance metrics and their impact on user experience are INCORRECT? (Select TWO.)
Options:
A. Bandwidth utilization close to 100% on a shared WAN link can cause web pages to load slowly for users.
B. Small amounts of packet loss are often masked for web browsing because TCP retransmits missing segments, but this can make downloads take longer.
C. High jitter on a VoIP call can cause choppy or out-of-order audio even when overall bandwidth utilization is low.
D. Latency primarily measures how much data can be sent over a link per second; higher latency always means higher throughput for large file transfers.
E. Packet loss is usually not noticeable for real-time apps like video calls because they automatically retransmit every lost packet without impact on quality.
Correct answers: D and E
Explanation: Network performance metrics each describe a different aspect of how traffic flows and how users experience applications.
Bandwidth (and bandwidth utilization) is about capacity: how much data can be carried per second and how much of that capacity is currently in use. When utilization is very high, queues build up and users see slow web pages, delayed file transfers, or lag in interactive apps.
Latency is about delay: the time it takes a packet to go from one end to the other. Even on high-bandwidth links, high latency makes applications feel sluggish, especially interactive apps (remote desktops, web apps) and large TCP transfers over long distances.
Jitter is the variation in delay between packets. Real-time applications like VoIP and video conferencing are very sensitive to jitter because the media stream expects packets to arrive at a steady pace.
Packet loss is when packets never arrive at the destination. For real-time apps using UDP, loss directly translates to missing audio or video. For TCP-based apps like web browsing, loss causes retransmissions, which usually preserve correctness but slow down transfers.
The false statements in this question confuse latency with bandwidth and misunderstand how real-time applications handle packet loss. Accurate understanding of these metrics helps explain why users experience choppy calls, slow web pages, or laggy applications even when some metrics look fine on a graph.
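The latency-versus-bandwidth confusion in statement D can be made concrete with a toy transfer-time model. This deliberately ignores TCP windowing and loss (which make high latency even worse in practice); it only shows that added delay never increases throughput:

```python
def transfer_time_s(size_mb: float, bandwidth_mbps: float, latency_ms: float) -> float:
    """Simplified transfer time: serialization time plus one-way delay.

    Toy model only: real TCP transfers are also window- and loss-limited,
    so high latency hurts more than this suggests, never less.
    """
    return (size_mb * 8) / bandwidth_mbps + latency_ms / 1000

low_latency = transfer_time_s(100, 100, 10)     # same link, 10 ms delay
high_latency = transfer_time_s(100, 100, 500)   # same link, 500 ms delay
# Higher latency lengthens the transfer; it never raises throughput.
```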
Topic: Network Operations
A company recently had an outage after a misconfiguration on a core switch. The networking manager is implementing stricter change control with configuration backups, templates, and versioning to improve disaster recovery and rollback. Which of the following practices should you AVOID? (Select TWO.)
Options:
A. Using a standardized, tested baseline template when deploying new switches and documenting any intentional deviations
B. Manually deleting all but the most recent configuration backup for each device to save storage space, without any defined retention policy
C. Making urgent CLI changes directly on production devices and not updating the configuration repository as long as the network appears stable
D. Linking each committed configuration version in the repository to an approved change ticket and brief change description
E. Saving daily configuration backups from all network devices to a centralized repository with timestamps and device identifiers
Correct answers: B and C
Explanation: Configuration backups, templates, and versioning are key parts of network operations and change control. Backups enable disaster recovery when hardware fails or a bad change is applied. Versioning tracks how configurations evolve over time so that you can roll back to a known-good state. Templates provide consistent, repeatable baselines that reduce configuration drift and human error across many devices.
Practices that bypass version control or aggressively delete older backups directly undermine these benefits. Without an accurate, historical record of configurations, rollback and root-cause analysis become much harder, and the organization is more vulnerable to outages and misconfigurations. In contrast, centralizing backups, using tested templates, and tying versions to change tickets all strengthen operational reliability and security.
Topic: Network Troubleshooting
A technician is troubleshooting complaints of slow 2.4GHz Wi‑Fi in an open office. The office AP (“Office-AP”) is on channel 6. A Wi‑Fi analyzer shows the following nearby networks:
| SSID | Channel | Signal (dBm) |
|---|---|---|
| Office-AP | 6 | -48 |
| Neighbor1 | 5 | -50 |
| Neighbor2 | 7 | -52 |
| Lobby-AP | 1 | -72 |
Based on this information, which type of interference is MOST likely causing the performance problem?
Options:
A. Non-Wi‑Fi interference, such as a microwave oven
B. Adjacent-channel interference from overlapping 2.4GHz channels
C. Free-space path loss due to excessive distance from the AP
D. Co-channel interference from multiple APs sharing channel 6
Best answer: B
Explanation: In the 2.4GHz band, channel center frequencies are only 5MHz apart while each transmission occupies roughly 22MHz, so most channels overlap significantly; only channels spaced at least five apart, such as 1, 6, and 11, do not overlap. When nearby APs use channels that partially overlap, their transmissions interfere with each other, especially when signal strengths are similar.
In this scenario, the office AP is on channel 6 with a strong signal around -48dBm. Two neighboring APs are on channels 5 and 7 with similarly strong signals. Channels 5 and 7 overlap heavily with channel 6, so their frames collide in the shared RF space and cause retransmissions and reduced throughput. This pattern is classic adjacent-channel interference.
If several strong APs were all on channel 6, co-channel interference would be the concern. If signal were weak, distance or obstacles would be more likely. If a non-Wi‑Fi device like a microwave were the cause, you would not see distinct strong Wi‑Fi SSIDs on adjacent channels, but rather broad-band noise affecting many channels during device operation.
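The overlap logic follows directly from the 2.4GHz channel plan: channel n is centered at 2407 + 5n MHz, and two transmissions interfere when their roughly 22MHz-wide signals share spectrum. A minimal sketch (the 22MHz width is the classic DSSS figure; OFDM channels are about 20MHz):

```python
def center_mhz(channel: int) -> int:
    # 2.4GHz channels 1-13 are spaced 5 MHz apart, starting at 2412 MHz
    return 2407 + 5 * channel

def overlaps(ch_a: int, ch_b: int, width_mhz: int = 22) -> bool:
    # Two channels interfere when their centers are closer than one channel width
    return abs(center_mhz(ch_a) - center_mhz(ch_b)) < width_mhz

assert overlaps(6, 5) and overlaps(6, 7)   # Neighbor1 and Neighbor2 overlap Office-AP
assert not overlaps(6, 1)                  # Lobby-AP on channel 1 does not overlap
assert not overlaps(6, 11)                 # 1, 6, 11 form the non-overlapping set
```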
Use the CompTIA Network+ N10-009 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Try CompTIA Network+ N10-009 on Web View CompTIA Network+ N10-009 Practice Test
Read the CompTIA Network+ N10-009 Cheat Sheet on Tech Exam Lexicon for concept review before another timed run.