Free CompTIA Security+ SY0-701 Full-Length Practice Exam: 90 Questions

Try 90 free CompTIA Security+ SY0-701 questions across the exam domains, with explanations, then continue with full IT Mastery practice.

This free full-length CompTIA Security+ SY0-701 practice exam includes 90 original IT Mastery questions across the exam domains.

These questions are for self-assessment. They are not official exam questions and do not imply affiliation with the exam sponsor.

Count note: this page uses the full-length practice count maintained in the Mastery exam catalog. Some certification vendors publish total questions, scored questions, duration, or unscored/pretest-item rules differently; always confirm exam-day rules with the sponsor.

Need concept review first? Read the CompTIA Security+ SY0-701 Cheat Sheet on Tech Exam Lexicon, then return here for timed mocks and full IT Mastery practice.

Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Try CompTIA Security+ SY0-701 on Web
View full CompTIA Security+ SY0-701 practice page

Exam snapshot

  • Exam route: CompTIA Security+ SY0-701
  • Practice-set question count: 90
  • Time limit: 90 minutes
  • Practice style: mixed-domain diagnostic run with answer explanations

Full-length exam mix

  • General Security Concepts: 12%
  • Threats, Vulnerabilities, and Mitigations: 22%
  • Security Architecture: 18%
  • Security Operations: 28%
  • Security Program Management and Oversight: 20%

Use this as one diagnostic run. IT Mastery gives you timed mocks, topic drills, analytics, code-reading practice where relevant, and full practice.

Practice questions

Questions 1-25

Question 1

Topic: Security Program Management and Oversight

Which of the following statements about security audits and assessments is NOT correct?

Options:

  • A. The primary purpose of security audits is to identify individuals to punish for violating security policies.

  • B. Audit and assessment findings are usually documented, risk-rated, and tracked until management remediates them or formally accepts the risk.

  • C. Internal audits are typically performed by personnel from within the organization who are independent of the process being reviewed.

  • D. External audits are often conducted by regulators or independent third-party firms to provide assurance about compliance and controls.

Best answer: A

Explanation: Security audits and assessments are governance tools used to verify that controls are designed and operating effectively and that the organization complies with relevant policies, standards, and regulations. They focus on processes, controls, and evidence, not on punishing individuals.

Internal and external audits both examine how well controls work in practice, compare them against defined requirements, and document any gaps as findings. Those findings are then risk-rated and tracked until they are either remediated or formally accepted as residual risk by management. While an audit may uncover violations or weaknesses, its core purpose is to improve the control environment and demonstrate compliance, not to act as a disciplinary mechanism.


Question 2

Topic: Security Architecture

Which application deployment model is characterized by many small, independently deployable components that expose network APIs, increasing the number of entry points but improving isolation between functions?

Options:

  • A. Monolithic application

  • B. Microservices architecture

  • C. N-tier application

  • D. Two-tier client–server application

Best answer: B

Explanation: The described model focuses on small, independently deployable components that each expose their own network APIs. This is the defining feature of a microservices architecture, where each service implements a narrow function, can be updated separately, and communicates with others over lightweight protocols such as HTTP/HTTPS or messaging.

From a security perspective, microservices increase the application’s attack surface because there are more network-exposed endpoints, authentication points, and configurations to manage. However, they also offer better isolation and blast-radius reduction: a compromise of one service does not automatically expose all application functionality or data, especially if each service has its own datastore and least-privilege permissions. This contrasts with monolithic and simple client–server models, where a single compromise can more easily affect the entire application.

N-tier architectures focus on separating layers (presentation, business logic, data) to control access paths, which can help with segmentation but do not inherently break the system into many small services. Two-tier client–server designs are even more coarse-grained, typically with a single client layer and a single back-end, and therefore do not match the description of many independently deployable components.


Question 3

Topic: Threats, Vulnerabilities, and Mitigations

A junior security analyst at a mid-sized company has been asked by the IT manager to “run some penetration tests” against the production web application this week. There is no existing security testing policy, no documented scope, and no written approval from system or data owners. The analyst is concerned about potential legal and business impacts but still wants to improve the organization’s security posture.

Which of the following is the MOST appropriate action for the analyst to take before starting any testing?

Options:

  • A. Obtain written authorization that defines the test scope, targets, schedule, and allowed techniques, signed by appropriate system and data owners

  • B. Engage a third-party penetration testing firm immediately and let them determine the appropriate scope and rules

  • C. Proceed with a limited vulnerability scan during off-hours based only on the IT manager’s verbal request

  • D. Update the incident response plan to include penetration testing as a detection method, then begin testing as requested

Best answer: A

Explanation: Before conducting any security testing—such as vulnerability scanning or penetration testing—an organization must have clear written authorization and well-defined rules of engagement. This protects both the tester and the organization from legal issues, misunderstandings, and unintended outages.

Written authorization should identify who is approving the testing, who owns the systems and data, and who is responsible if something goes wrong. Rules of engagement typically define the scope (what is in and out of bounds), targets (which systems and applications), schedule (when testing can occur), and allowed techniques (for example, no social engineering or denial-of-service).

In this scenario, there is currently no policy, no documented scope, and no written approval. The safest and most professional next step is to pause testing and obtain a formal, signed authorization that clearly spells out scope and rules before performing any tests.


Question 4

Topic: Threats, Vulnerabilities, and Mitigations

A company hosts a public web application on virtual machines in a public cloud. The current security group allows all inbound ports from any IP, and developers all have full administrator access to the cloud account. Compliance now requires that only HTTPS be exposed to the internet, administrative access be restricted, and changes to cloud resources be auditable. Which of the following is the BEST action to meet these requirements?

Options:

  • A. Disable SSH and RDP on the security group entirely, keep developers as full administrators, and rely on periodic manual reviews of the cloud console to detect configuration changes.

  • B. Deploy a web application firewall in front of the application and leave the existing security group and developer permissions unchanged, relying on the WAF logs for auditing.

  • C. Install host-based firewalls on the virtual machines to block all ports except HTTPS, while continuing to allow any source in the security group and keeping developers as full administrators.

  • D. Update the security group to allow only HTTPS from the internet, restrict admin ports to the corporate IP range, assign least-privilege IAM roles to developers, and enable centralized cloud logging for IAM and network changes.

Best answer: D

Explanation: This scenario targets cloud security basics: tightening network exposure with security groups or network security groups (NSGs), enforcing least-privilege IAM, and enabling logging for accountability. The application currently has an overly permissive security group and overprivileged developers, and there is a new compliance requirement for restricted exposure (only HTTPS from the internet), restricted administrative access, and auditable changes.

The best answer must therefore: 1) restrict inbound traffic at the cloud network layer so only HTTPS is exposed globally and admin ports are limited; 2) apply least-privilege IAM so developers do not all have full administrator rights; and 3) enable centralized logging of IAM and configuration changes so they can be audited and sent to a SIEM or log management system. Combining these controls directly mitigates the described risks and satisfies the compliance requirement.
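
To make the network-layer portion concrete, here is a minimal Python sketch of the kind of security group audit the correct answer describes. The rule format, corporate IP range, and admin port list are illustrative assumptions, not any particular cloud provider's API.

import ipaddress

# Hypothetical corporate range and admin ports; adjust to the real environment.
CORP_RANGE = ipaddress.ip_network("203.0.113.0/24")
ADMIN_PORTS = {22, 3389}

def audit_rules(rules):
    findings = []
    for rule in rules:
        source = ipaddress.ip_network(rule["source"])
        port = rule["port"]
        world_open = source.prefixlen == 0  # 0.0.0.0/0, i.e., the whole internet
        if world_open and port != 443:
            findings.append(f"port {port} exposed to the internet; only HTTPS (443) allowed")
        if port in ADMIN_PORTS and not source.subnet_of(CORP_RANGE):
            findings.append(f"admin port {port} not restricted to the corporate range")
    return findings

rules = [
    {"source": "0.0.0.0/0", "port": 443},  # compliant: HTTPS to the world
    {"source": "0.0.0.0/0", "port": 22},   # violates both requirements
]
print(audit_rules(rules))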


Question 5

Topic: Security Operations

After a ransomware incident is fully contained and systems are restored, the security team holds a lessons-learned meeting. Compared with earlier incident response phases, what is the PRIMARY purpose of this meeting?

Options:

  • A. Identify breakdowns in processes and controls and define improvements to reduce likelihood and impact of future incidents

  • B. Determine which employee or team is at fault and document disciplinary actions

  • C. Verify that backups and restorations worked correctly and schedule the next restore test

  • D. Finalize public-relations messaging and notify external stakeholders

Best answer: A

Explanation: Post-incident reviews and lessons-learned meetings occur after an incident has been contained, eradicated, and systems have recovered. At this stage, the organization is no longer trying to stop the active threat; instead, it is trying to learn from the event.

The core purpose is continuous improvement: to examine what went well, what did not, and where processes, controls, communications, and training can be updated to better prevent, detect, respond to, and recover from future incidents. This can include updating runbooks, refining escalation criteria, improving monitoring rules, adjusting access controls, clarifying roles, and planning additional training.

This is different from using the meeting as a blame session or a narrow check that a single control, such as backups, worked. A Security+-level practitioner should recognize that modern incident response emphasizes learning and process improvement rather than punishment or box-checking compliance alone.


Question 6

Topic: Threats, Vulnerabilities, and Mitigations

A regional bank’s SOC observes repeated automated scans and brute-force login attempts against its public web portal from many home-ISP IP addresses. The tools used are freely available, and successful compromises result only in website defacement with bragging text; there is no evidence of data theft or financial fraud. Management wants cost‑effective controls focused on this low‑skill, opportunistic threat rather than nation‑state–level capabilities. Which of the following actions/controls will BEST meet these requirements? (Select TWO.)

Options:

  • A. Purchase a 24/7 managed threat hunting service specializing in detecting zero‑day kernel‑level rootkits

  • B. Develop and deploy custom quantum‑resistant cryptography for all online banking communications

  • C. Implement MFA and stricter password policies for all external user and admin logins to the web portal

  • D. Harden all internet-facing servers with secure configuration baselines, prompt patching, and removal of unnecessary services

  • E. Schedule an annual red‑team exercise specifically emulating a nation‑state adversary

Correct answers: C and D

Explanation: The scenario describes a threat actor using freely available tools, performing automated scans and brute‑force attempts, and defacing a website for bragging rights with no evidence of data theft or financial fraud. This behavior is characteristic of script kiddies: low‑skill, opportunistic attackers motivated by reputation or curiosity rather than sophisticated, targeted espionage or large‑scale financial gain.

Because management explicitly wants cost‑effective defenses focused on this low‑skill threat, the best responses are controls that close off easy attack paths: secure baselines and patching on internet‑facing systems, and stronger authentication to resist brute‑force and credential attacks. Advanced, high‑cost capabilities designed for nation‑state‑level threats or exotic cryptographic risks exceed what is needed for this actor profile.

Recognizing threat actor type (motivations and capabilities) helps prioritize controls appropriately: for script kiddies, basic hygiene and strong authentication provide excellent risk reduction without the expense of specialized, high‑end defenses more suitable for nation‑states or organized crime groups.


Question 7

Topic: General Security Concepts

A security administrator is standardizing TLS for all internal web portals using the organization’s PKI. Which of the following practices related to certificates and trust should the administrator AVOID? (Select TWO.)

Options:

  • A. Keeping the root CA offline with its private key protected in a hardware security module (HSM) and using subordinate issuing CAs for day‑to‑day certificate issuance

  • B. Distributing the organization’s internal root CA certificate to managed endpoints so browsers trust certificates issued by that CA

  • C. Configuring web servers to validate certificate chains and use OCSP stapling or CRL checks to verify certificate revocation status

  • D. Reusing the same wildcard certificate and private key on many servers and emailing the private key to administrators so they can install it easily

  • E. Telling users to ignore browser certificate warnings for internal sites and proceed as long as the URL looks correct

Correct answers: D and E

Explanation: Public key infrastructure (PKI) relies on trusted certificate authorities (CAs) to issue digital certificates that bind identities (such as hostnames or users) to public keys. Endpoints trust certificates when they can build a valid chain from the certificate through intermediate CAs to a trusted root CA, and when the certificate details (name, validity period, revocation status) are acceptable.

Because the trust model depends on the integrity of CA private keys and on clients correctly validating certificates and warnings, certain practices are particularly dangerous. Sharing private keys widely or distributing them through insecure channels makes it much easier for attackers to impersonate systems. Similarly, training users to ignore certificate warnings removes an important defense against spoofed or man‑in‑the‑middle connections.

In this scenario, the task is to identify which practices should be avoided when deploying TLS using an organization’s PKI. The unsafe choices are the ones that break basic PKI trust principles: protecting private keys and respecting validation and warnings. The other options describe standard or even recommended PKI practices that support strong trust relationships.
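
To illustrate the validation side of this trust model, the minimal Python sketch below connects to an internal portal and verifies the server's chain against a distributed internal root. The CA file name and hostname are hypothetical; the point is that a failed chain or name check raises an error rather than being ignored.

import socket
import ssl

# Trust anchor: the internal root CA certificate distributed to managed endpoints.
ctx = ssl.create_default_context(cafile="internal-root.pem")
ctx.check_hostname = True              # already the default; shown for clarity
ctx.verify_mode = ssl.CERT_REQUIRED

with socket.create_connection(("portal.corp.example", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="portal.corp.example") as tls:
        # Raises ssl.SSLCertVerificationError if the chain does not lead to the
        # trusted root or the name does not match: the warning users should
        # never be told to click through.
        print(tls.getpeercert()["subject"])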


Question 8

Topic: Security Operations

A security engineer is updating procedures for provisioning and deprovisioning Windows laptops using a standardized image. Which of the following steps should the team NOT include in the secure provisioning and deprovisioning process?

Options:

  • A. Use the same local administrator account and password on all imaged laptops to simplify access during support and deprovisioning.

  • B. Configure the base image so that, on first boot, each laptop automatically applies the latest OS patches and antivirus definitions.

  • C. Include full-disk encryption and a host-based firewall configuration as part of the standard baseline image.

  • D. As part of deprovisioning, perform an approved drive wipe or sanitization before reassigning or disposing of the laptop.

Best answer: A

Explanation: Secure provisioning and deprovisioning ensure that systems start and end their lifecycle in a controlled, hardened state. During provisioning, organizations typically use a standardized image (gold image) that embeds a secure baseline: current patches, endpoint protection, encryption, and core configuration settings. Deprovisioning focuses on removing access, sanitizing data, updating inventory, and safely retiring or reassigning assets.

Using a single shared local administrator account and password across all imaged endpoints is a classic anti-pattern. If that credential is ever disclosed or guessed, an attacker can move laterally to every device that shares it. It also makes individual accountability impossible because multiple people can use the same account. Secure operations practices instead favor unique credentials, centralized management, and least privilege.

By contrast, building in automatic patching and antivirus updates, enforcing full-disk encryption, enabling host firewalls, and securely wiping drives at retirement are all aligned with secure provisioning and deprovisioning processes described in Security+ Domain 4.
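
The remedy for the shared-credential anti-pattern is a unique, randomly generated local administrator password per device, in the spirit of LAPS-style tooling. Below is a minimal Python sketch; the device names are hypothetical, and a real deployment would store and rotate these secrets in a managed vault or directory rather than printing them.

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def unique_password(length=20):
    # secrets (not random) provides cryptographically strong choices.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

inventory = ["LAPTOP-0001", "LAPTOP-0002", "LAPTOP-0003"]  # hypothetical assets
vault = {device: unique_password() for device in inventory}
for device, password in vault.items():
    print(device, password)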


Question 9

Topic: Security Operations

A security team is adopting infrastructure as code (IaC) for its cloud environment. Their main goal is to quickly detect and alert on unauthorized manual changes (configuration drift) to production servers. Which approach BEST takes advantage of IaC to meet this requirement?

Options:

  • A. Schedule monthly vulnerability scans of all production servers and review the findings for unexpected configuration issues.

  • B. Store infrastructure definitions in version control and run automated checks that compare the live environment to the approved templates on every change.

  • C. Enable detailed audit logging in the cloud provider console and review logs weekly for unexpected changes.

  • D. Require administrators to document any infrastructure changes in a ticketing system before making updates.

Best answer: B

Explanation: Infrastructure as code (IaC) means defining infrastructure (such as networks, servers, and security settings) in version-controlled, machine-readable files. These files become the source of truth for how environments should look.

For security operations, a major benefit of IaC is the ability to detect configuration drift—when the actual deployed environment no longer matches the approved definitions. By continuously or periodically comparing the live cloud resources to the IaC templates, teams can automatically flag unauthorized manual changes and enforce policies.

The option that stores infrastructure definitions in version control and runs automated checks against the live environment directly uses IaC for drift detection and policy enforcement. The other choices improve documentation, logging, or vulnerability detection, but they do not leverage the core IaC capability of comparing real state to codified, version-controlled desired state.
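
A minimal Python sketch of that comparison follows. The resource shapes and the fetch function are hypothetical stand-ins for a real IaC tool and cloud API; in practice the desired state is parsed from the templates in version control.

def fetch_live_state():
    # Stand-in for a cloud API call; here someone has opened port 22 by hand.
    return {"web-sg": {"ingress_ports": [443, 22]}}

desired = {"web-sg": {"ingress_ports": [443]}}  # from the approved template

live = fetch_live_state()
for name, spec in desired.items():
    if live.get(name) != spec:
        print(f"DRIFT on {name}: live={live.get(name)} approved={spec}")
        # A real pipeline would alert the SOC or automatically revert the change.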


Question 10

Topic: Threats, Vulnerabilities, and Mitigations

Which TWO statements about an organization’s attack surface are MOST accurate? (Select TWO.)

Options:

  • A. The attack surface is simply the list of vulnerabilities found in the latest vulnerability scan, regardless of which assets are actually exposed.

  • B. Internal administrative interfaces on a dedicated management network are never part of the attack surface and can be safely ignored during threat analysis.

  • C. Offline encrypted backup tapes stored in a vault are part of the network attack surface because attackers can reach them directly over the internet.

  • D. Reducing or hiding exposed services, for example by closing unused ports or placing systems behind firewalls, can significantly reduce the attack surface.

  • E. An application’s attack surface includes all externally reachable interfaces such as web interfaces, APIs, and open network ports.

Correct answers: D and E

Explanation: An organization’s attack surface is the collection of all ways an attacker could attempt to interact with or enter a system across its trust boundaries. This includes exposed services, user interfaces, APIs, open ports, and other externally reachable entry points on assets such as servers, applications, and cloud services. The larger and more complex the set of exposed entry points, the more opportunities an attacker has to find and exploit a weakness.

Reducing the attack surface is a core mitigation strategy in Domain 2 (Threats, vulnerabilities, and mitigations). By closing unnecessary ports, disabling unused services, and restricting access across network segments and trust boundaries (for example, using firewalls or API gateways), defenders shrink the number of paths an attacker can try. This complements vulnerability management: vulnerability scans help find weaknesses, but the attack surface concept is about where an attacker can actually reach and interact.

Not all assets are equally exposed. Highly reachable interfaces in DMZs or on the public internet are a critical part of the attack surface, while offline media, such as vaulted backup tapes, are not directly reachable over the network. However, internal admin interfaces, even on separate management networks, still count as part of the attack surface because a threat actor who reaches that network could target them. Understanding what is exposed, where trust boundaries lie, and how services are published is key to classifying threat vectors and prioritizing defenses.
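
To see what "reachable entry points" means in practice, the short Python sketch below probes which TCP ports on a host accept connections, one crude measure of network attack surface. The host and port list are hypothetical, and probing should only ever be run against systems you are authorized to test.

import socket

def exposed_ports(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the TCP connect succeeded
                open_ports.append(port)
    return open_ports

print(exposed_ports("192.0.2.10", [22, 80, 443, 3389, 8080]))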


Question 11

Topic: Security Operations

Which of the following statements about monitoring and oversight controls for third-party or outsourced services are NOT appropriate? (Select TWO.)

Options:

  • A. Security expectations for the provider should be documented in SLAs, including reporting frequency for uptime, incidents, and key security metrics.

  • B. The customer can use independent third-party audit reports, such as SOC 2 Type II or ISO 27001 certifications, as part of assessing the provider’s control effectiveness.

  • C. It is acceptable to rely solely on the provider’s public status page for security visibility instead of defining formal SLAs and log-sharing requirements.

  • D. The customer should negotiate rights to access security and access logs, or receive regular log exports, so provider activity can be correlated in the customer’s SIEM.

  • E. Contracts should require the provider to promptly notify the customer of incidents that affect the customer’s data or services and to share relevant incident details.

  • F. The organization should allow the provider to classify all security-related information as proprietary and avoid any right-to-audit or reporting clauses to preserve the vendor relationship.

Correct answers: C and F

Explanation: Monitoring and oversight of third-party or outsourced services rely on clearly defined, enforceable controls rather than informal trust. Effective oversight typically includes contractual SLAs with security metrics, rights to obtain logs and reports, and requirements for timely incident notification and independent assurance (such as SOC 2 or ISO 27001 reports). Relying only on public status pages or allowing a provider to withhold all security information removes the ability to monitor risk and undermines due diligence, making those practices inappropriate as oversight controls.


Question 12

Topic: Threats, Vulnerabilities, and Mitigations

A manufacturing plant has several legacy PLCs and IP cameras on its OT network. Many still use vendor default passwords, and their management interfaces only support unencrypted protocols. The devices have limited resources and cannot run endpoint agents, and maintenance windows are rare. The security team wants to reduce risk from weak authentication and insecure communication, especially for remote vendor access. Which of the following actions/controls will BEST meet these requirements? (Select TWO.)

Options:

  • A. Move all PLCs and cameras to a dedicated OT VLAN behind a firewall that only allows required ports from the corporate network.

  • B. Require vendors to connect through an MFA‑protected VPN that terminates on a tightly controlled jump host before accessing the OT network.

  • C. Install full-featured EDR agents on each PLC and camera to block unauthorized logins and insecure network traffic.

  • D. Run aggressive, authenticated vulnerability scans directly against the OT devices during production hours to identify missing patches.

  • E. Replace all default credentials on the PLCs and cameras with unique, complex passwords and disable any unused accounts or services.

Correct answers: B and E

Explanation: This scenario focuses on common IoT/OT weaknesses: default or weak credentials, unencrypted management protocols, limited device resources, and long lifetimes with few maintenance windows. Because these devices cannot easily be upgraded or run agents, compensating controls must strengthen authentication and protect communications without installing heavy software on the endpoints.

Strengthening device authentication by replacing default passwords with strong, unique credentials reduces the likelihood of successful unauthorized logins, especially when default credentials are widely known. For remote access, wrapping insecure legacy protocols in a secure, MFA‑protected VPN and forcing all vendor access through a monitored jump host provides encryption and strong identity verification while minimizing changes to the devices themselves.

Other controls like segmentation, EDR, and aggressive scanning may be useful in some contexts, but they either do not directly address the stated problems (weak authentication and insecure communication) or are impractical for constrained OT equipment and fragile production environments.


Question 13

Topic: Threats, Vulnerabilities, and Mitigations

Which of the following statements about an application’s attack surface is NOT correct?

Options:

  • A. Disabling unused features, closing unnecessary ports, and removing default accounts are common ways to reduce an application’s attack surface.

  • B. Any network-exposed service or port that can be reached from an untrusted network segment contributes to the application’s attack surface.

  • C. Public web interfaces and APIs that accept input from external users are considered part of an application’s attack surface.

  • D. An application’s attack surface includes only internet-facing components; internal admin portals on the corporate network are not part of the attack surface.

Best answer: D

Explanation: The attack surface is the total set of ways an attacker can interact with a system and attempt to compromise its assets. This includes exposed services, web interfaces, APIs, authentication mechanisms, and management portals—whether they are internet-facing or internal—whenever they can be reached across a trust boundary.

An internal admin portal is still a reachable entry point into sensitive functionality, especially if a compromised internal host or malicious insider can access it. Limiting the definition of attack surface to only internet-facing components ignores significant real-world risks and does not reflect standard security practice.

Effective security work aims to identify all such entry points and then reduce or harden them, for example by closing unused ports, disabling unnecessary features, and securing default accounts and configurations.


Question 14

Topic: Security Operations

In a standard incident response process, which phase focuses on immediately limiting the spread and impact of an incident, such as by isolating affected systems, while the root cause is still being investigated?

Options:

  • A. Identification

  • B. Containment

  • C. Recovery

  • D. Eradication

Best answer: B

Explanation: In the incident response lifecycle, containment is the phase where responders act quickly to limit the spread and impact of an incident, for example by isolating affected endpoints, blocking malicious IPs, or disabling compromised accounts. Identification is about recognizing and confirming that an incident is occurring, whereas containment is the follow-up phase where you stop it from getting worse while you continue deeper analysis.


Question 15

Topic: Security Program Management and Oversight

An organization creates one plan that defines how critical business processes will continue operating at an acceptable level during a major disruption, and another plan that details how to restore IT systems, applications, and data afterward. Which statement BEST describes how these plans relate to each other?

Options:

  • A. The disaster recovery plan defines how all business units will continue operations, and the business continuity plan defines only the steps technology staff take to restore systems.

  • B. The business continuity plan handles short, minor incidents, whereas the disaster recovery plan is used only for long-term strategic planning and compliance reporting.

  • C. The business continuity plan and disaster recovery plan are interchangeable terms that both describe restoring IT infrastructure after a major outage.

  • D. The business continuity plan focuses on keeping essential business functions running, while the disaster recovery plan focuses on restoring IT services and data to support those functions.

Best answer: D

Explanation: Business continuity planning (BCP) and disaster recovery (DR) are closely related but distinct concepts within security program management.

A business continuity plan describes how an organization will continue its most critical business functions at a minimum acceptable level when facing a major disruption (for example, a data center outage, natural disaster, or loss of a key site). It focuses on the business processes: what must keep running, in what order of priority, and with what workarounds or alternate locations.

A disaster recovery plan is more narrowly focused on the technical recovery of IT systems, applications, and data. It specifies how to restore servers, networks, storage, and critical applications, and how to meet defined recovery time objectives (RTOs) and recovery point objectives (RPOs).

The two work together: business continuity defines what the business needs to keep doing and at what level; disaster recovery defines how IT will be brought back to a state that supports those needs. DR is usually considered a subset or supporting component of the overall BCP.

Therefore, the statement that the business continuity plan keeps essential business functions running, while the disaster recovery plan restores IT services and data to support those functions, best reflects how BCP and DR relate.


Question 16

Topic: Security Operations

During a major data breach, the incident response team is revising its communication plan. Leadership wants to reduce legal and reputational risk from inconsistent messages to customers and the media. Which approach to external communication should the team define in the plan?

Options:

  • A. Allow IT and security staff to answer external questions individually as long as they share the latest technical details.

  • B. Delay all communication to customers and media until the incident is fully investigated, even if regulations require earlier notification.

  • C. Have the SOC post detailed, real-time forensic updates on a public status page to show transparency.

  • D. Require that all public statements about the incident come from a single designated spokesperson working with legal and executive leadership.

Best answer: D

Explanation: This question focuses on one key attribute of effective incident communication plans: centralized, coordinated external messaging. During a security incident, especially a data breach, organizations must manage what is said to customers, regulators, and the media so that it is accurate, consistent, and compliant with legal and contractual obligations.

A well-designed incident communication plan distinguishes between internal communications (often more detailed and technical, shared among responders and leadership) and external communications (carefully worded, higher level, and vetted by legal, compliance, and communications teams). For external audiences, the plan typically designates a single spokesperson or a very small, authorized group responsible for all public statements. This reduces the risk of conflicting stories, premature conclusions, or disclosure of sensitive investigative details.

In Security+ Domain 4 (Security operations), incident response procedures emphasize not only containment and technical analysis, but also communication and coordination. Good communication planning helps avoid additional harm to the organization and affected parties, while supporting regulatory notification requirements and overall trust.


Question 17

Topic: General Security Concepts

Which of the following statements about separation of duties and job rotation is NOT correct?

Options:

  • A. Job rotation can reduce insider threat risk by preventing one employee from maintaining exclusive control over the same sensitive process for a long period of time.

  • B. Separation of duties and job rotation are primarily used to optimize employee performance and have little impact on preventing fraud or insider misuse.

  • C. Job rotation can help detect misuse because a different employee may notice irregularities left behind by a predecessor when they take over the role.

  • D. Separation of duties reduces the risk of a single employee committing undetected fraud by requiring more than one person to complete a critical task.

Best answer: B

Explanation: Separation of duties and job rotation are fundamental administrative controls used to manage insider threat risk, especially fraud and misuse in business processes.

With separation of duties, critical tasks are split so that at least two people must participate. For example, in an accounts payable process, one person enters invoices, another approves them, and a different person releases payments. This makes it much harder for a single insider to both commit and hide fraudulent activity.

Job rotation requires employees to periodically switch roles or responsibilities. When someone new takes over a role—such as reconciling a bank account or reviewing access logs—they may notice irregularities that the original person either missed or intentionally hid. Rotation also discourages long-term schemes, since an attacker cannot assume indefinite control of a function.

The incorrect statement is the one that describes these practices as mainly performance-optimization measures with little impact on fraud or misuse. In reality, their primary security value is exactly in fraud prevention and early detection of insider abuse.


Question 18

Topic: Threats, Vulnerabilities, and Mitigations

Which of the following statements about common malware types are NOT accurate? (Select TWO.)

Options:

  • A. Spyware is designed to covertly monitor users or systems and exfiltrate data such as browsing history, credentials, or other sensitive information.

  • B. A worm can spread across multiple systems without user interaction by exploiting operating system or network vulnerabilities.

  • C. Ransomware typically renders data or systems unusable until a payment is demanded, often by encrypting files it finds.

  • D. A keylogger’s primary behavior is to spread automatically across networks; capturing keystrokes is only a minor side effect.

  • E. A Trojan appears to be legitimate software but secretly performs malicious actions once installed by the user.

  • F. A classic file-infecting virus does not need a host program or file and instead runs entirely from network memory.

Correct answers: D and F

Explanation: This question focuses on distinguishing common malware types by their behavior rather than by signatures or specific delivery mechanisms. A virus needs a host file or program to infect; it does not exist independently in memory by definition. Worms self-propagate over networks without user action. Trojans masquerade as legitimate software while performing hidden malicious activity. Ransomware typically encrypts or otherwise locks data and demands payment. Spyware focuses on covertly gathering and exfiltrating information. Keyloggers specifically record keystrokes; while they may be distributed in various ways, self-propagation is not what defines them.

Understanding these behavioral traits helps analysts quickly infer likely malware types from observable effects such as rapid propagation, encryption of files, or stealthy data theft, which is central to Security+ Domain 2 on threats, vulnerabilities, and mitigations.


Question 19

Topic: Security Operations

A company uses a cloud-based identity provider (IdP) to provide SSO and MFA for all SaaS applications. The SOC receives an alert that a user account successfully authenticated from an overseas IP address only three minutes after a successful login from the user’s home city. Storage on the forensic evidence share is limited, so the analyst must prioritize collecting the most relevant data source to quickly determine whether the account was compromised and what IPs and locations were involved.

Which data source should the analyst export and preserve FIRST to support this investigation?

Options:

  • A. Full packet captures from the on-premises firewall for the entire day of the alert

  • B. Web access logs from the company’s public-facing corporate website for the week of the alert

  • C. Authentication logs from the cloud identity provider for the affected account over the relevant time window

  • D. Endpoint EDR telemetry from the user’s corporate laptop for the past 30 days

Best answer: C

Explanation: This scenario focuses on choosing the most appropriate data source to support a suspicious login / account-takeover investigation. In digital forensics, analysts should collect targeted, high-value evidence first, especially when storage or time is limited.

Because the suspicious behavior involves unusual authentication activity (logins from two distant locations within minutes), the most relevant evidence is the identity provider’s authentication logs. These logs record:

  • Timestamps of login attempts
  • Success/failure status
  • Source IP addresses and geolocation
  • MFA prompts and results
  • Application and device details

With that data, the analyst can quickly confirm whether the same account authenticated from physically impossible locations, whether MFA was bypassed or not used, and which IPs and devices were involved. This is precisely the information needed for an initial triage of a possible account compromise.

Other data sources like packet captures, EDR telemetry, or unrelated web logs may be useful later, but they are either too broad, too large relative to storage constraints, or not directly tied to the IdP authentication events in question. For Task 4.5 in Domain 4 (basic digital forensics concepts), this question highlights the importance of matching the data source to the type of investigation and the specific questions the analyst needs to answer.
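
As an illustration of the triage logic, here is a minimal Python sketch that flags "impossible travel" between two authentication events. The event fields, coordinates, and speed threshold are illustrative assumptions; real IdP logs supply geolocation per source IP.

from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(e1, e2, max_kmh=900):  # roughly airliner speed
    hours = abs((e2["time"] - e1["time"]).total_seconds()) / 3600
    distance = km_between(e1["lat"], e1["lon"], e2["lat"], e2["lon"])
    return hours == 0 or distance / hours > max_kmh

home = {"time": datetime(2025, 8, 12, 14, 0), "lat": 40.71, "lon": -74.01}
overseas = {"time": datetime(2025, 8, 12, 14, 3), "lat": 51.51, "lon": -0.13}
print(impossible_travel(home, overseas))  # True: ~5,570 km in three minutes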


Question 20

Topic: Security Operations

A SOC analyst is reviewing web application authentication logs to distinguish between credential stuffing and brute-force activity. Which of the following patterns MOST clearly indicates a credential stuffing attack rather than a traditional brute-force attempt?

Options:

  • A. A single public IP address attempts logins for hundreds of different user accounts over a few minutes, with only one or two attempts per account before moving on to the next.

  • B. Several failed login attempts to an administrator account occur from internal IP addresses during a shift change window each day.

  • C. A single user account shows multiple failed logins from various geographic locations over several hours, followed by a successful login from a familiar location.

  • D. Hundreds of rapid login attempts against a single user account from one IP address, each using a different password.

Best answer: A

Explanation: Credential stuffing and brute-force attacks are both password-guessing techniques, but they leave different patterns in authentication logs.

A brute-force attack usually targets one or a few accounts and makes many password guesses per account, often rapidly. Logs will show a single username with a high number of failed attempts from one or a few source IP addresses.

A credential stuffing attack typically uses previously compromised username/password pairs from breaches and tries them across many different accounts, often with very few attempts per account. Logs will show many different usernames, each with one or two failed attempts (and possibly some successes), often from the same source IP or a small set of IPs.

Recognizing the “many accounts, few attempts each” pattern is the key discriminating factor that tells you the logs most likely represent credential stuffing rather than a traditional brute-force attack.
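
A minimal Python sketch of that discriminator follows. The log tuples and thresholds are hypothetical; the point is the "many accounts, few attempts each" versus "one account, many attempts" split.

from collections import Counter

def classify(failed_logins):
    # failed_logins: hypothetical (source_ip, username) pairs from auth logs.
    per_account = Counter(user for _, user in failed_logins)
    accounts = len(per_account)
    avg_attempts = len(failed_logins) / accounts
    if accounts > 50 and avg_attempts <= 2:
        return "likely credential stuffing (many accounts, few attempts each)"
    if accounts <= 3 and avg_attempts > 20:
        return "likely brute force (few accounts, many attempts each)"
    return "inconclusive"

stuffing = [("198.51.100.7", f"user{i}") for i in range(300)]
brute = [("198.51.100.9", "admin")] * 400
print(classify(stuffing))
print(classify(brute))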


Question 21

Topic: Threats, Vulnerabilities, and Mitigations

Which term BEST describes an attack in which an adversary uses large lists of previously compromised username–password pairs from other breaches to attempt logins against many different web applications?

Options:

  • A. Password spraying

  • B. Dictionary attack

  • C. Credential stuffing

  • D. Brute-force attack

Best answer: C

Explanation: The scenario describes an attacker who has lists of real username–password pairs from previous data breaches and then tries those same combinations on many other unrelated sites or applications. This pattern is called credential stuffing and is a common cause of account takeover when users reuse passwords across services.

Brute-force and dictionary attacks are about guessing passwords for one or a few accounts using large keyspaces or wordlists. Password spraying is about using a small set of popular passwords against many accounts on the same service to avoid lockouts. Credential stuffing is distinct because it uses already-known valid credentials from other compromised services and applies them at scale to different targets, which matches the description in the question.

This concept falls under Domain 2 (Threats, vulnerabilities, and mitigations), specifically recognizing different password attack types so that appropriate defenses like MFA, anomaly detection, and user education on password reuse can be applied.


Question 22

Topic: Security Architecture

Which of the following statements about mobile device management (MDM) and mobile application management (MAM) is NOT correct?

Options:

  • A. Containerization separates corporate applications and data from personal content on the same device, often using an encrypted workspace.

  • B. Remote wipe allows an administrator to erase data on a managed mobile device if it is lost or stolen.

  • C. MAM solutions can only manage corporate apps on devices that are fully enrolled in corporate MDM and cannot be used with BYOD devices.

  • D. Application allow-lists are used to restrict devices so that only approved apps can be installed or run.

Best answer: C

Explanation: Mobile device management (MDM) and mobile application management (MAM) are key components of endpoint and mobile security. MDM focuses on the entire device, enforcing policies like encryption, passcodes, and remote wipe. MAM focuses on controlling specific corporate applications and their data, which is especially important for bring-your-own-device (BYOD) environments.

Remote wipe is a standard MDM capability that lets administrators erase data on a lost or stolen device to protect organizational information. Application allow-lists limit which apps can be installed or run, reducing exposure to malicious or unapproved software. Containerization separates corporate data and apps from personal content, often in an encrypted workspace, allowing organizations to manage or wipe corporate data without affecting the user’s personal data.

It is incorrect to claim that MAM can only manage apps on devices fully enrolled in MDM and cannot be used with BYOD. Many modern MAM solutions are designed specifically to support BYOD by controlling only the corporate apps and their data, without taking full control of the personal device.


Question 23

Topic: Threats, Vulnerabilities, and Mitigations

A company’s finance staff have recently fallen for several phishing and phone scams where attackers impersonated the CFO, used urgent language about “overdue wire transfers,” and hinted at possible disciplinary action for delays. Some employees say they trusted the messages because they appeared to come from a senior executive and sounded extremely urgent.

Leadership wants low‑cost controls this quarter that specifically reduce the effectiveness of social engineering that leverages trust in authority, urgency, and fear, rather than purely technical exploits.

Which of the following actions/controls will BEST meet these requirements? (Select TWO.)

Options:

  • A. Require all staff to digitally sign and encrypt internal email messages as the default configuration

  • B. Deploy additional perimeter firewall rules to block traffic from IP ranges associated with high-risk countries

  • C. Roll out recurring, role-based phishing and vishing simulations for finance staff that highlight authority, urgency, and fear tactics, followed by targeted coaching

  • D. Implement a mandatory out-of-band verification process (such as a call-back to a known number) for any urgent payment, bank detail change, or bonus request from executives

  • E. Increase password complexity and shorten password expiration intervals for all finance user accounts

Correct answers: C and D

Explanation: This scenario focuses on social engineering attacks that exploit trust in authority, urgency, and fear of consequences. The attackers are not breaking technical controls so much as persuading employees to bypass normal caution.

The best mitigations are controls that change human behavior and supporting processes: training that explicitly teaches staff to recognize these psychological levers, and procedures that force verification of unusual or high-risk requests. Technical controls like stronger passwords or more firewall rules do little to slow someone who has already convinced an employee to act.

Role-based phishing/vishing simulations with coaching make employees more aware of these tactics and give them safe practice saying “no” or verifying before acting. A mandatory out-of-band verification step for sensitive transactions forces employees to pause and confirm, breaking the attacker’s manufactured urgency and authority pressure.

Stronger passwords, added firewall rules, or default email encryption all have their place in a security program, but they do not directly counter the human factors the attackers are exploiting in this scenario.


Question 24

Topic: Threats, Vulnerabilities, and Mitigations

A security analyst is investigating reports of strange errors on the company’s public login page. The analyst pulls the following application log snippet.

2025-08-12T14:22:01Z ERROR /login DB error:
  SELECT * FROM users WHERE username = 'jsmith'
  AND password = '' OR 1=1--';

2025-08-12T14:22:02Z ERROR /login DB error:
  SELECT * FROM users WHERE username = 'admin'
  AND password = '' OR 'x'='x';

Based on the exhibit, which change would BEST mitigate the underlying issue?

Options:

  • A. Enable client-side JavaScript to block special characters in the password field.

  • B. Rewrite the login code to use parameterized queries for all database access.

  • C. Enforce a minimum 14-character password length for all user accounts.

  • D. Configure account lockout after five consecutive failed login attempts.

Best answer: B

Explanation: The exhibit shows SQL statements in the error log where user-supplied values are directly concatenated into the query string, such as AND password = '' OR 1=1--'; and AND password = '' OR 'x'='x';. These patterns are classic SQL injection payloads that change the logic of the WHERE clause so it always evaluates to true.

The core problem is unsafe query construction: user input is being treated as executable SQL code. The correct mitigation at the design level is to use parameterized (prepared) queries and proper server-side input handling so that any user input is always treated as data, never as SQL instructions. This aligns with web application protections covered under Security+ Domain 2, such as secure coding and input handling to prevent injection attacks.

Password policy and account lockout are valuable controls, but they address credential guessing, not the injection vulnerability shown in the log. Client-side checks are insufficient because the attacker can send requests directly to the server, bypassing any browser-based controls. The most appropriate fix must change how the application builds and executes SQL queries on the backend.
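
To show the difference concretely, the sketch below reproduces the exhibit's vulnerable string concatenation and the parameterized fix, using Python's sqlite3 module as a stand-in for the production database.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('jsmith', 'CorrectHorse')")

username, password = "jsmith", "' OR 1=1--"  # attacker-supplied input

# VULNERABLE: input is spliced into the SQL text, so the payload becomes code.
bad = f"SELECT * FROM users WHERE username = '{username}' AND password = '{password}'"
print(conn.execute(bad).fetchall())  # returns the row despite a wrong password

# SAFE: placeholders keep input as data; the payload is just an odd password.
good = "SELECT * FROM users WHERE username = ? AND password = ?"
print(conn.execute(good, (username, password)).fetchall())  # returns []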


Question 25

Topic: Security Program Management and Oversight

A company is negotiating an SLA with a new SaaS HR provider that will store employee PII. The security team reviews the following excerpt from the draft SLA.

Exhibit: Draft SLA Security-Related Clauses

  • Availability: 99.5% monthly uptime; service credits for downtime (applies to production only)
  • Incident notification: notify customer within 48 hours of a confirmed breach (email notice to admin)
  • Data protection: encrypt data at rest and in transit; role-based access (provider-managed keys)
  • Data retention/return: export customer data and delete within 30 days of contract end (written confirmation only)

Based ONLY on this exhibit, which additional contract provision should the security team prioritize to ensure ongoing oversight of the provider’s security practices?

Options:

  • A. Shorten the post-termination data deletion period from 30 days to 7 days to further reduce retention of PII.

  • B. Reduce the uptime commitment from 99.5% to 98% in exchange for a lower subscription cost.

  • C. Add a contractual right for the customer to perform or commission security audits of the provider, such as reviewing independent assessment reports.

  • D. Allow the provider to reuse anonymized HR data for analytics and marketing to improve the service.

Best answer: C

Explanation: This question targets vendor and third-party risk management, specifically what should be included in contracts and SLAs to support security and privacy oversight.

The exhibit already shows that the draft SLA covers several important points: availability (99.5% uptime), incident notification within 48 hours of a confirmed breach, data protection through encryption and role-based access, and data retention/return with deletion within 30 days after contract end. However, nothing in the table mentions how the customer can verify or monitor the provider’s ongoing security posture over time.

In third-party risk management, contracts and SLAs should not only state technical and procedural security obligations but also give the customer mechanisms for assurance and verification. A right-to-audit clause or an explicit right to receive independent security reports (such as SOC 2-type reports, penetration test summaries, or compliance attestations) is a common way to enable ongoing oversight of a provider’s security controls.

Therefore, the most appropriate additional provision is one that ensures the customer can review or audit the provider’s security practices, beyond just trusting the documented commitments in the SLA excerpt shown.


Questions 26-50

Question 26

Topic: Security Architecture

Which of the following statements about secure routing and switching controls are MOST accurate? (Select TWO.)

Options:

  • A. Port security is designed to provide end‑to‑end encryption for all traffic entering or leaving a switch access port.

  • B. Loop protection features are primarily used to encrypt traffic on switch uplinks to prevent eavesdropping between network segments.

  • C. Disabling all spanning tree and loop protection mechanisms is recommended to eliminate unnecessary control traffic and reduce the attack surface.

  • D. Port security can restrict an access port to learned or statically assigned MAC address values, blocking frames from unauthorized devices.

  • E. Access control lists (ACLs) on routers or Layer 3 switches inspect packet headers and permit or deny traffic based on IP addresses, protocols, and ports.

Correct answers: D and E

Explanation: This question focuses on secure routing and switching concepts, particularly ACLs, port security, and loop protection, which are key parts of secure network design.

Port security is a Layer 2 feature that restricts which MAC addresses can send traffic on a given switch port. By allowing only specific MAC addresses (dynamically learned or statically configured), it helps prevent unauthorized devices from connecting or moving to another port unnoticed.

Access control lists (ACLs) are applied on routers or Layer 3 switches. They examine packet header fields such as source and destination IP addresses, protocol numbers, and TCP/UDP ports. Based on these criteria, ACL rules either permit or deny the packet, providing a basic traffic-filtering mechanism that enforces security policy at network boundaries or between internal segments.

Loop protection, often implemented via spanning tree and related features, exists to detect and prevent Layer 2 loops that can cause broadcast storms and take networks down. It is not an encryption mechanism and should not be disabled in switched environments where redundant links may exist.
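
A minimal Python sketch of first-match ACL evaluation follows. The rule list and sample packet are hypothetical; the final implicit deny mirrors how real router ACLs behave.

import ipaddress

ACL = [
    {"action": "permit", "src": "10.0.0.0/8", "proto": "tcp", "dport": 443},
    {"action": "deny",   "src": "0.0.0.0/0", "proto": "tcp", "dport": 22},
    {"action": "permit", "src": "0.0.0.0/0", "proto": "tcp", "dport": 80},
]

def evaluate(packet):
    source = ipaddress.ip_address(packet["src"])
    for rule in ACL:  # rules are checked top-down; first match wins
        if (source in ipaddress.ip_network(rule["src"])
                and packet["proto"] == rule["proto"]
                and packet["dport"] == rule["dport"]):
            return rule["action"]
    return "deny"  # implicit deny at the end of every ACL

print(evaluate({"src": "203.0.113.5", "proto": "tcp", "dport": 22}))  # deny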


Question 27

Topic: Threats, Vulnerabilities, and Mitigations

Which statement BEST describes a credentialed internal vulnerability scan?

Options:

  • A. A scan run from inside the network that only probes open ports and banners without logging into hosts

  • B. A scan run from the internet without authentication to identify externally exposed services and ports

  • C. A simulated attack where testers attempt to exploit vulnerabilities and pivot through the network

  • D. A scan run from inside the organization’s network using valid host credentials to identify missing patches and misconfigurations

Best answer: D

Explanation: A credentialed internal vulnerability scan combines two dimensions of scan configuration: location (internal vs external) and access level (credentialed vs non-credentialed).

An internal scan is initiated from inside the organization’s network or VPN, seeing systems much like an internal user or compromised endpoint would. A credentialed scan uses valid credentials for target systems (for example, local admin or read-only accounts) so the scanner can log in, read configuration details, inspect installed software, and check for missing patches and insecure settings. This provides deeper visibility and usually fewer false positives than a non-credentialed scan.

Therefore, the best description is a scan that runs from inside the network and uses valid host credentials to identify issues such as missing patches and misconfigurations.


Question 28

Topic: Security Architecture

A security analyst is reviewing a report from a new cloud-native security service. The report shows the following findings across the organization’s cloud environment.

  • CC-101, vm-prod-01: Public IP not associated with load balancer (Medium)
  • CC-204, db-payroll: Storage volume not encrypted at rest (High)
  • CC-305, sg-web-tier: Inbound rule allows 0.0.0.0/0 on port 22 (High)
  • CC-410, obj-bucket-logs: Bucket policy allows public read access (Medium)

Based on the information in the exhibit, which type of cloud-native security service is MOST likely generating this report?

Options:

  • A. An endpoint detection and response platform that analyzes host telemetry and process behavior

  • B. A cloud security posture management service that continuously evaluates cloud configurations against best practices and policies

  • C. A cloud access security broker that monitors and controls user access to third-party SaaS applications

  • D. A cloud-native firewall that inspects and blocks network packets at the perimeter of the virtual network

Best answer: B

Explanation: The exhibit lists several findings across different types of cloud resources: a virtual machine with a public IP not behind a load balancer, a database storage volume without encryption at rest, an overly permissive security group (0.0.0.0/0 on port 22), and an object storage bucket with a public read policy. All of these are configuration and compliance issues, not live traffic events or endpoint behavior.

A cloud security posture management (CSPM) service is designed to continuously evaluate cloud environments for exactly these kinds of misconfigurations and policy violations. CSPM tools scan cloud accounts, inventory resources, compare their settings against security baselines and best practices, and then generate reports like the one in the exhibit.

Other cloud-native services such as CASB, cloud firewalls, and EDR address different layers: CASB handles SaaS usage and data, cloud firewalls handle traffic, and EDR handles endpoint activity. None of those would naturally produce a cross-resource misconfiguration report like the one shown, which clearly points to CSPM as the most appropriate answer.
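
As a minimal illustration of how a CSPM-style check works, the Python sketch below evaluates resource configuration attributes against policy rules and emits findings like those in the exhibit. The resource records are hypothetical.

```python
# Minimal sketch of CSPM-style checks: inspect configuration, not live traffic.
resources = [
    {"name": "db-payroll", "type": "volume", "encrypted_at_rest": False},
    {"name": "sg-web-tier", "type": "security_group",
     "inbound": [{"cidr": "0.0.0.0/0", "port": 22}]},
]

findings = []
for r in resources:
    if r["type"] == "volume" and not r.get("encrypted_at_rest"):
        findings.append((r["name"], "Storage volume not encrypted at rest", "High"))
    for rule in r.get("inbound", []):
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] == 22:
            findings.append((r["name"], "Inbound rule allows 0.0.0.0/0 on port 22", "High"))

for name, finding, severity in findings:
    print(f"{name}: {finding} ({severity})")
```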


Question 29

Topic: Threats, Vulnerabilities, and Mitigations

A development team updates their e‑commerce application so that every user-supplied form field is checked on the server for expected data type, length, and format, and any unexpected values are rejected before processing. Which web application protection does this BEST describe?

Options:

  • A. Parameterized queries

  • B. Input validation

  • C. Web application firewall (WAF)

  • D. Code signing

Best answer: B

Explanation: This question targets basic web application protections within Domain 2 (Threats, Vulnerabilities, and Mitigations), specifically the concept of input validation.

The scenario states that the application now checks every user-supplied form field on the server for expected data type, length, and format, and rejects values that do not match these rules. This is exactly what input validation does: it enforces rules about what is considered acceptable input before that input is processed, stored, or passed to downstream components (such as databases or APIs).

Proper input validation is a key mitigation against many attacks, especially injection (SQL injection, command injection) and some types of cross-site scripting. By enforcing strict rules on input, the application reduces the chance that malicious payloads will ever reach sensitive logic or back-end systems.

The other options are important security techniques, but they address different aspects of application security and do not match the behavior described in the stem as directly as input validation does.
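
A minimal server-side sketch in Python, assuming illustrative field rules: each field must match an expected format before the request is processed.

```python
# Minimal sketch of server-side input validation: enforce expected type, length,
# and format, and reject anything that does not match. Field rules are illustrative.
import re

RULES = {
    "email":    re.compile(r"[^@\s]{1,64}@[^@\s]{1,255}"),
    "quantity": re.compile(r"[0-9]{1,3}"),   # digits only, 1-3 characters
    "zip_code": re.compile(r"[0-9]{5}"),
}

def validate(form: dict) -> list:
    """Return a list of validation errors; an empty list means the input is accepted."""
    errors = []
    for field, pattern in RULES.items():
        value = form.get(field, "")
        if not pattern.fullmatch(value):   # fullmatch: the entire value must conform
            errors.append(f"{field}: unexpected value rejected")
    return errors

print(validate({"email": "a@example.com", "quantity": "2", "zip_code": "12345"}))   # []
print(validate({"email": "x", "quantity": "2; DROP TABLE", "zip_code": "12345"}))   # two errors
```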


Question 30

Topic: Security Program Management and Oversight

A company is redesigning its online form for users to register for a free webinar. The security team wants the change to reflect the privacy principles of data minimization and purpose limitation.

Which proposed change BEST aligns with these principles?

Options:

  • A. Remove optional fields that are not needed to deliver the webinar and clearly state that the remaining data will be used only to send the access link and reminders.

  • B. Require users to create a full customer profile, including mailing address and payment card details, even though the webinar is free.

  • C. Encrypt all fields in transit and at rest and extend data retention from one year to three years for analytics.

  • D. Add a checkbox that gives consent to receive marketing emails and to share data with selected partners, pre‑checked by default.

Best answer: A

Explanation: Data minimization and purpose limitation are core privacy principles. Data minimization means collecting and processing only the personal data that is necessary to achieve a clearly defined purpose. Purpose limitation means defining that purpose up front and not using the data for new, incompatible purposes without additional safeguards (such as new consent).

In the scenario, the main goal is to allow users to register for a free webinar and receive access information and reminders. The option that removes unnecessary fields and clearly states that the remaining data will be used only to send access details and reminders is the only one that both reduces the amount of data collected and restricts how it will be used. The other options deal with security, marketing, or future sales but do not limit collection and use to what is strictly necessary for webinar registration.


Question 31

Topic: Security Operations

Which option BEST represents the approximate order of volatility for common digital evidence sources, from MOST volatile to LEAST volatile?

Options:

  • A. Archived backups → disk drives → RAM → CPU registers and cache

  • B. CPU registers and cache → RAM → disk drives → archived backups

  • C. Disk drives → RAM → CPU registers and cache → archived backups

  • D. RAM → CPU registers and cache → disk drives → archived backups

Best answer: B

Explanation: The order of volatility concept guides forensic investigators to collect the most short-lived (volatile) data first, because it disappears the quickest. In modern systems, CPU registers and cache are the most volatile, often lost as soon as power or a process state changes; next is system and process data in RAM, which is cleared on reboot. Disk drives provide comparatively persistent storage that survives reboots, and archived or offsite backups are the least volatile, designed for long-term retention. Collecting evidence in this approximate order maximizes the amount of transient data preserved for analysis.

In practice, investigators balance this ideal order with legal, operational, and safety constraints, but the underlying principle remains: capture CPU and memory information before relying on disk and long-term backups, which can typically be accessed later without immediate risk of loss.


Question 32

Topic: General Security Concepts

Which TWO statements accurately describe how digital signatures work and what security properties they provide? (Select TWO.)

Options:

  • A. The sender creates a hash of the message and encrypts that hash with the sender’s private key to form the digital signature.

  • B. Digital signatures use only symmetric keys, which both sender and receiver share, to authenticate the message source.

  • C. Anyone with the sender’s public key can verify the signature, confirming the message has not been altered since it was signed.

  • D. If the sender later denies sending the message, the recipient can change the hash inside the signature to prove non-repudiation.

  • E. Digital signatures primarily provide confidentiality by encrypting the entire message with the recipient’s public key.

Correct answers: A and C

Explanation: Digital signatures use asymmetric cryptography to provide integrity, authentication, and non-repudiation. In a typical flow, the sender first computes a hash (message digest) of the data. Instead of signing the entire message (which is inefficient), the sender uses their private key to encrypt this hash, creating a digital signature.

When the recipient gets the message and the signature, they recompute the hash of the received data and use the sender’s public key to decrypt the signature back into the original hash value. If the two hashes match, the data has not changed in transit, so integrity is preserved. Because the public key successfully verifies the signature, the recipient knows the signature could only have been produced by the corresponding private key, which authenticates the sender. This same property also supports non-repudiation: the sender cannot credibly deny having signed the message if their public key verifies the signature.

Digital signatures themselves do not provide confidentiality. To keep data secret, encryption with the recipient’s public key (or a symmetric key wrapped by that public key) is used in addition to any signature. Also, signatures are built on asymmetric key pairs, not on symmetric keys shared by both parties.
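
For readers who want to see the flow end to end, here is a minimal sign-then-verify sketch using an RSA key pair. It assumes the third-party cryptography package is installed (pip install cryptography); the message is illustrative.

```python
# Minimal sketch: hash-and-sign with the private key, verify with the public key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
message = b"wire $10,000 to account 12345"

# Sign: the library hashes the message and applies the private key (PSS padding, SHA-256).
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verify: anyone holding the public key can check integrity and origin.
try:
    public_key.verify(
        signature,
        message,  # change one byte here and verification fails
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid: message unaltered and signed by the key holder")
except InvalidSignature:
    print("signature invalid: message altered or wrong key")
```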


Question 33

Topic: Security Program Management and Oversight

Which TWO statements accurately describe the roles of different lines of defense in an organization’s governance model? (Select TWO.)

Options:

  • A. The first line of defense consists of operational management that owns and manages risks in day-to-day activities.

  • B. The first line of defense functions as an independent internal audit group that reports directly to the board to provide assurance.

  • C. The third line of defense is mainly responsible for running business processes and making front-line decisions.

  • D. The third line of defense is responsible for designing and owning all operational controls used in business processes.

  • E. The second line of defense typically includes risk management and compliance functions that monitor risk and support policy enforcement.

Correct answers: A and E

Explanation: Governance models that use multiple lines of defense separate the responsibilities for owning and managing risk, overseeing and coordinating risk and compliance activities, and independently assessing how effective those activities are. At a high level, the first line is operations and management, the second line is risk management and compliance, and the third line is internal audit.

The first line of defense is made up of operational management and staff who own risks in their areas. They design and operate controls within business processes and are accountable for how those processes are run.

The second line of defense consists of functions such as enterprise risk management, compliance, and sometimes information security governance. These roles provide oversight, define policies and standards, monitor adherence, and report on risk, but they usually do not run day-to-day operations.

The third line of defense is internal audit, which should be independent from management. It provides objective assurance on the effectiveness of governance, risk management, and control activities, typically reporting its findings to senior leadership and the board.


Question 34

Topic: Security Operations

A SOC analyst is triaging several alerts for a user’s Windows laptop. They want to focus first on the log entry that is most indicative of an active malware infection on the endpoint itself, rather than a normal event or a simple misconfiguration. Which observation should the analyst prioritize for investigation?

Options:

  • A. A new executable located in the user’s temp directory repeatedly spawning PowerShell, making outbound connections to unknown external IP addresses, and driving CPU usage close to 100%.

  • B. An EDR alert indicating that the laptop’s antivirus signatures are 5 days out of date due to missed updates.

  • C. A spike in network traffic from the svchost process as Windows Update runs during the organization’s scheduled weekly patch window.

  • D. A domain user account generating 20 failed RDP logon attempts within 5 minutes from the same external IP address.

Best answer: A

Explanation: This question targets the ability to distinguish true indicators of an active malware infection on an endpoint from other important but different security issues such as brute-force login attempts, normal system activity, or misconfigurations. In Security+ terms (Domain 4, Security operations), the focus is on recognizing indicators of compromise (IOCs) that suggest malware is already running.

The key discriminating factor is behavior that clearly matches typical malware activity on the host: an unexpected executable in a suspicious location, abnormal process behavior, unusual script execution, strange outbound connections, and heavy resource usage. Other events may indicate attempts to compromise, or weaknesses that make compromise more likely, but they do not by themselves show that malware is currently executing on the asset.

By prioritizing the clear host-based behavioral IOC, the analyst can quickly investigate and contain an active threat before it spreads or causes further damage.


Question 35

Topic: General Security Concepts

Which TWO of the following statements about authentication, authorization, and accounting/auditing are INCORRECT? (Select TWO.)

Options:

  • A. Using multi-factor authentication strengthens the authentication step but does not replace the need for proper authorization checks and activity logging.

  • B. Accounting/auditing focuses on recording and reviewing user and system activities, such as logins and resource access, to support investigations, compliance, and usage tracking.

  • C. Authentication is the process of verifying that a user or system is who they claim to be, typically using credentials such as passwords, tokens, or biometrics.

  • D. Authorization determines what actions or resources an authenticated user or system is allowed to access, usually based on permissions or roles.

  • E. Authorization must occur before authentication so the system can decide what resources a user can access before verifying their identity.

  • F. If strong authentication is implemented, separate authorization and accounting/auditing controls are usually unnecessary in most secure environments.

Correct answers: E and F

Explanation: Authentication, authorization, and accounting/auditing (often called AAA) work together to enforce access control. Authentication verifies identity, such as checking a username and password or multi-factor authentication. Authorization then determines what that authenticated identity is allowed to do, typically through permissions or roles. Accounting/auditing logs and reviews user and system actions, enabling traceability, investigations, compliance checks, and sometimes billing.

These functions are complementary, not interchangeable. A secure system authenticates first, then authorizes access based on policy, and continuously records activity for later review. Strong authentication does not eliminate the need for granular authorization or for good logging and auditing practices.


Question 36

Topic: Security Operations

A user reports that contacts are receiving spam that appears to come from their corporate email address. A SOC analyst reviews recent telemetry in the cloud email logs and must determine whether the account has likely already been taken over via phishing.

Which of the following findings BEST indicates a successful mailbox compromise in this scenario?

Options:

  • A. A single login to webmail from the user’s usual city using a known corporate-managed laptop

  • B. Dozens of failed login attempts from a foreign IP address followed by no further activity

  • C. A newly created mailbox rule that automatically forwards all incoming messages to an external Gmail address

  • D. Several phishing emails to the user that were automatically quarantined by the email security gateway

Best answer: C

Explanation: This scenario focuses on distinguishing attempted or blocked phishing activity from evidence that a mailbox has already been compromised and is under attacker control.

When attackers successfully phish a user and steal credentials or tokens, they commonly log in to the mailbox and then establish persistence and data exfiltration mechanisms. One frequent technique is to create or modify mailbox rules that automatically forward or hide messages. Forwarding to an external personal account is particularly suspicious because it lets the attacker monitor conversations, harvest more information, and potentially bypass internal monitoring.

By contrast, indicators such as blocked phishing emails, failed logins, or normal sign-ins from expected devices and locations do not, by themselves, prove the attacker actually took control of the mailbox. They can show attempts, security controls doing their job, or routine activity, but they lack the behavior change or configuration change that signals a successful takeover.
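
As a minimal illustration of how an analyst might hunt for this indicator at scale, the sketch below flags mailbox rules that forward mail externally. The rule records and domain list are hypothetical.

```python
# Minimal sketch: flag auto-forward rules whose target is outside corporate domains.
CORPORATE_DOMAINS = {"example.com"}

rules = [
    {"mailbox": "jdoe@example.com", "action": "forward", "target": "attacker123@gmail.com"},
    {"mailbox": "asmith@example.com", "action": "move", "target": "Archive"},
]

for rule in rules:
    if rule["action"] == "forward":
        domain = rule["target"].rsplit("@", 1)[-1]
        if domain not in CORPORATE_DOMAINS:
            print(f"ALERT: {rule['mailbox']} forwards mail externally to {rule['target']}")
```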


Question 37

Topic: Security Architecture

Which identity and access management concept allows a user to authenticate once and then access multiple related applications without needing to log in again to each one?

Options:

  • A. Role-based access control (RBAC)

  • B. Multi-factor authentication (MFA)

  • C. Single sign-on (SSO)

  • D. Federation

Best answer: C

Explanation: The scenario describes a user authenticating once and then transparently accessing multiple applications without repeated logins. This is the classic definition of single sign-on (SSO), where a central identity provider issues tokens or assertions that other applications trust.

SSO improves user experience and can enhance security by centralizing authentication and allowing consistent policies (such as MFA and password complexity) to be enforced at a single point. While related concepts like federation, MFA, and RBAC are often used alongside SSO, they serve different purposes and do not, by themselves, enable one-time sign-in across multiple apps.


Question 38

Topic: Threats, Vulnerabilities, and Mitigations

Employees in several departments have started using unsanctioned cloud file‑sharing tools to collaborate with external partners, bypassing the company’s approved collaboration platform and identity provider. From a security perspective, which concept BEST explains why this shadow IT behavior significantly increases organizational risk?

Options:

  • A. Implementation of defense in depth by adding additional independent storage providers

  • B. Application of least privilege by limiting IT administrators’ ability to view user data

  • C. Expanded attack surface caused by loss of centralized visibility and control over data and access

  • D. Use of separation of duties by splitting responsibilities across internal and external systems

Best answer: C

Explanation: Shadow IT occurs when users adopt tools, especially cloud or SaaS services, without approval or integration into the organization’s security controls. When employees use unsanctioned file‑sharing or collaboration apps, security teams lose centralized visibility into where data is stored, who can access it, and how it is protected.

This directly expands the organization’s attack surface: there are more external accounts, services, and data repositories that attackers can target, but they are outside normal monitoring, logging, DLP, and identity and access management. Because these shadow services are not governed by corporate policies, data governance and access control break down. Sensitive files might be shared with personal accounts, weak passwords might be used, and MFA or SSO may not be enforced.

In the Security+ Domain 2 context, this scenario illustrates how shadow IT and unsanctioned SaaS usage increase risk by creating new, uncontrolled threat vectors and blind spots, not by improving layered security or applying core principles such as least privilege or separation of duties.


Question 39

Topic: Security Operations

During a malware incident, a junior analyst quickly grabs a few obvious malicious file hashes and then recommends reimaging the affected workstation. The incident handler instead wants to understand exactly how the attacker got in, what they did, and whether similar activity occurred on other systems. Which action would BEST improve the team’s ability to reconstruct the sequence of events using proper digital forensics practices?

Options:

  • A. Ask each affected user to write a narrative of what they remember happening and rely on these statements as the primary event sequence

  • B. Immediately isolate and reimage the affected workstation, then restore from backup and document the recovery steps in the ticket

  • C. Enable verbose logging on all servers going forward and increase log retention to at least five years for future investigations

  • D. Aggregate relevant host and network logs, file metadata, and other artifacts into a single, time‑ordered view showing events before, during, and after the compromise

Best answer: D

Explanation: This scenario targets digital forensics timeline analysis, which is about reconstructing what happened, in what order, and over what duration using objective data. In an incident, that typically means gathering events from multiple sources—such as endpoint logs, server logs, authentication records, network telemetry, and file system metadata—and arranging them chronologically.

Building a consolidated, time‑ordered view allows analysts to see the attacker’s initial access, privilege escalation, lateral movement, data access, and cleanup attempts as a coherent chain rather than isolated events. This helps answer questions like “How did they get in?”, “What did they touch?”, and “Did they hit other systems?”, which are central to incident scoping and containment decisions.

Simply reimaging systems or enabling more logging for the future does not reconstruct the current attack path. Nor does relying primarily on user memory, which can be incomplete or inaccurate. The best improvement for understanding the incident’s sequence is to systematically combine and order existing forensic artifacts into a unified timeline.
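
A minimal sketch of the timeline-building step: merge events from several hypothetical sources and sort them chronologically into one view.

```python
# Minimal sketch of forensic timeline construction: combine artifacts from
# multiple sources into a single time-ordered sequence. Events are hypothetical.
from datetime import datetime

endpoint_log = [("2024-05-01T09:02:11", "endpoint", "temp-dir executable spawned PowerShell")]
auth_log     = [("2024-05-01T08:57:40", "auth", "successful VPN login from new country")]
proxy_log    = [("2024-05-01T09:03:05", "proxy", "outbound connection to unknown IP")]

events = endpoint_log + auth_log + proxy_log
events.sort(key=lambda e: datetime.fromisoformat(e[0]))  # one time-ordered view

for ts, source, detail in events:
    print(f"{ts}  [{source:8}] {detail}")
```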


Question 40

Topic: Security Architecture

Which statement BEST describes the primary role of a directory service or cloud-based identity provider in an enterprise environment?

Options:

  • A. It centrally stores digital identities and enforces authentication and access policies for users and resources.

  • B. It aggregates security logs from multiple systems and correlates events for incident detection.

  • C. It establishes encrypted tunnels so remote users can securely connect to the internal network.

  • D. It inspects network traffic for threats and blocks malicious connections at the perimeter.

Best answer: A

Explanation: A directory service or identity provider (IdP) is a core component of identity and access management. It acts as a centralized repository for user accounts, groups, credentials, and related attributes, and it applies authentication and authorization policies when users attempt to access systems or applications. By centralizing identity, organizations can implement consistent password policies, multifactor authentication, group-based access, and single sign-on across on-premises and cloud resources.

Technologies like IDS/IPS, VPN concentrators, and SIEM platforms may integrate with this identity data, but their primary purposes are different: detecting and blocking network threats, providing secure remote connectivity, and aggregating/analyzing logs, respectively. The unique, defining role of directory services and identity providers is to manage who users are and what they are allowed to access, in a centralized, policy-driven way.


Question 41

Topic: Security Operations

A SOC manager is evaluating tasks for their first SOAR playbook to reduce analyst workload without increasing operational risk. The team currently handles incidents manually as shown.

Exhibit:

| Task ID | Incident type | Monthly volume | Avg handling time (min) | Current pain point |
| --- | --- | --- | --- | --- |
| T1 | Phishing email triage | 420 | 18 | Analysts manually copy URLs/IPs into threat intel tools for lookup. |
| T2 | VPN account lockout review | 35 | 25 | Requires analyst judgment and user verification before unlocking. |
| T3 | Suspected privileged compromise | 3 | 90 | High-impact; requires manager approval for containment actions. |
| T4 | Web app SQL injection alerts | 60 | 30 | Analysts confirm WAF logs, then decide whether to block source IPs. |

Based on the exhibit, which of the following is the most appropriate first playbook to implement?

Options:

  • A. Create a playbook that automatically blocks all IP addresses that trigger SQL injection alerts (T4) at the WAF.

  • B. Create a playbook that automatically unlocks VPN user accounts (T2) once failed logins stop for five minutes.

  • C. Create a playbook that automatically disables privileged accounts (T3) whenever a suspected compromise alert is generated.

  • D. Create a playbook that automatically enriches phishing alerts (T1) with URL and IP reputation data and attaches the results to the ticket.

Best answer: D

Explanation: This scenario tests how to identify good candidates for initial security automation in a SOC environment. A solid principle is to start by automating repetitive, high-volume, low-risk tasks, usually focused on data collection and enrichment rather than high-impact containment decisions.

In the exhibit, phishing triage (T1) has the highest monthly volume and a clearly defined, manual enrichment step: copying URLs and IP addresses into threat intelligence tools to obtain reputation data. That step is standardized, repetitive, and does not by itself change production systems. Automating this kind of enrichment is precisely what SOAR tools are well suited for, and it reduces analyst workload while leaving final decisions to humans.

By contrast, the other tasks (T2, T3, and T4) involve actions that can directly affect user access or application availability: unlocking accounts, disabling privileged identities, or blocking source IPs. These are higher-risk containment actions that, if automated too aggressively, can cause outages or help an attacker, so they should not typically be the first candidates for automation. Instead, automation around those tasks usually starts with notifications, enrichment, or standardized checks, keeping the final decision with an analyst or manager.
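
A minimal sketch of what such an enrichment-only playbook might look like; lookup_reputation and attach_to_ticket are hypothetical stand-ins for a SOAR platform's threat-intel and ticketing integrations.

```python
# Minimal sketch of an enrichment-only playbook: gather context, attach it to the
# ticket, and leave the verdict to an analyst. No blocking or containment occurs.

def lookup_reputation(indicator: str) -> dict:
    # Hypothetical stand-in for a threat-intel API call; returns canned data here.
    return {"indicator": indicator, "score": 87, "verdict": "suspicious"}

def attach_to_ticket(ticket_id: str, enrichment: list) -> None:
    # Hypothetical stand-in for the ticketing integration; just prints here.
    print(f"ticket {ticket_id}: attached {len(enrichment)} enrichment records")

def enrich_phishing_alert(ticket_id: str, indicators: list) -> None:
    results = [lookup_reputation(i) for i in indicators]
    attach_to_ticket(ticket_id, results)  # final decision stays with the analyst

enrich_phishing_alert("INC-1042", ["hxxp://login-example[.]test", "203.0.113.50"])
```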


Question 42

Topic: Security Operations

Which TWO of the following statements about patch management are INCORRECT and should NOT be followed as best practice? (Select TWO.)

Options:

  • A. An effective patch management process maintains an up‑to‑date inventory of hardware and software so that relevant patches can be identified and applied.

  • B. Security patches for vulnerabilities that are actively being exploited should be prioritized for faster deployment, even if that means handling them outside the normal patch cycle.

  • C. Risk‑based patching considers factors such as system criticality, data sensitivity, exposure to the internet, and exploit availability when deciding patch priority.

  • D. Once an initial patch baseline is established, ongoing vulnerability scanning is unnecessary unless a major system change occurs.

  • E. Patches should be deployed directly to production as soon as they are released, without prior testing, to minimize the attack window.

Correct answers: D and E

Explanation: Patch management is an ongoing, structured process for identifying, testing, approving, and deploying updates to systems and applications. A strong program starts with an accurate inventory of hardware and software so you know what needs to be patched. It then uses vulnerability information and risk factors—such as system criticality, data sensitivity, internet exposure, and whether exploits exist in the wild—to prioritize which patches must be applied fastest.

Even when speed is important, patches should normally be tested in a lab or staging environment before production deployment to reduce the chance of outages or new security issues. Emergency changes for actively exploited vulnerabilities may be accelerated, but organizations should still perform at least minimal validation and use change management. Finally, patching is not a “one‑and‑done” activity: regular vulnerability scanning is needed to identify new vulnerabilities, missed patches, and configuration drift over time.
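
As a minimal illustration of risk-based prioritization, the sketch below scores hypothetical vulnerabilities on the factors named above and sorts them; the weights are arbitrary assumptions, not a standard scoring model.

```python
# Minimal sketch: rank patches by criticality, exposure, and active exploitation.
vulns = [
    {"cve": "CVE-A", "criticality": 3, "internet_facing": True,  "exploited": True},
    {"cve": "CVE-B", "criticality": 2, "internet_facing": False, "exploited": False},
    {"cve": "CVE-C", "criticality": 3, "internet_facing": True,  "exploited": False},
]

def priority(v: dict) -> int:
    score = v["criticality"]
    score += 3 if v["exploited"] else 0        # active exploitation weighs heaviest
    score += 1 if v["internet_facing"] else 0
    return score

for v in sorted(vulns, key=priority, reverse=True):
    print(v["cve"], "priority score:", priority(v))
```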


Question 43

Topic: Threats, Vulnerabilities, and Mitigations

A security analyst in a small e-commerce company receives multiple alerts that the public website is intermittently unreachable. Users report timeouts, but no credential prompts or certificate warnings. The SIEM shows a sudden spike in inbound TCP SYN packets to port 443 from thousands of different source IP addresses, while application logs show no errors.

The analyst suspects a network-based attack.

Which of the following response actions should the analyst AVOID? (Select TWO.)

Options:

  • A. Turn off or severely reduce logging on the firewall and web server to conserve CPU and disk resources until the abnormal traffic subsides.

  • B. Apply temporary rate limits and connection thresholds on inbound HTTPS traffic at the NGFW to reduce the impact of the flood while monitoring for collateral damage.

  • C. Temporarily route traffic for the website through a pre-approved cloud-based DDoS protection service as defined in the emergency change procedure.

  • D. Disable the perimeter firewall entirely so traffic can flow without inspection, reducing load on the device and restoring connectivity more quickly.

  • E. Contact the organization’s ISP to enable upstream DDoS scrubbing or filtering for the affected IP address range, using the existing incident-response escalation process.

Correct answers: A and D

Explanation: The described symptoms—users seeing timeouts, a sudden spike in inbound TCP SYN packets to port 443 from many different IP addresses, and no application errors—are classic indicators of a network-based denial-of-service (DoS) or distributed denial-of-service (DDoS) attack, specifically a SYN flood against the HTTPS service.

In this situation, the goal is to preserve availability while maintaining security controls and good operational practices. Appropriate responses include engaging upstream DDoS mitigation, applying rate limits, or routing traffic through a cloud-based DDoS protection service following established change-control procedures. Actions that disable key security controls or eliminate logging are unsafe and should be avoided, because they increase exposure and destroy important evidence needed for investigation and tuning defenses.

The unsafe options are the one that disables the perimeter firewall entirely and the one that turns off or severely reduces logging. Both directly violate core security practices such as defense in depth and sound incident handling, even if they appear to offer short-term performance relief.
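
To illustrate the rate-limiting idea (not an NGFW implementation), here is a minimal per-source connection threshold sketch; the limit and addresses are illustrative, and a real limiter would reset counts each interval.

```python
# Minimal sketch of a per-source SYN threshold within one time window.
from collections import Counter

LIMIT_PER_MINUTE = 100
syn_counts = Counter()

def admit_syn(src_ip: str) -> bool:
    """Allow the SYN only if the source stays under its per-minute budget."""
    syn_counts[src_ip] += 1
    return syn_counts[src_ip] <= LIMIT_PER_MINUTE

# Simulate one flooding source and one normal client in the same window.
for _ in range(150):
    admit_syn("198.51.100.7")
print(admit_syn("198.51.100.7"))  # False: over threshold, dropped
print(admit_syn("203.0.113.9"))   # True: normal client unaffected
```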


Question 44

Topic: General Security Concepts

A CISO has adopted a recognized security framework (such as NIST CSF or ISO 27001) and completed an initial gap assessment. The goal is to use the framework to create consistent, repeatable security processes and show ongoing compliance to senior leadership.

Based on the exhibit, which action should the CISO prioritize to best achieve that goal?

Exhibit:

| Framework control | Current state | Evidence | Gap |
| --- | --- | --- | --- |
| Asset inventory maintained | Ad hoc | Team spreadsheets | No organization-wide standard |
| Access control policy defined | Documented | Policy SEC-AC-01 | No enforcement monitoring |
| Security awareness training | Annual | Completion reports | No ongoing phishing simulations |
| Risk assessment process | One-time 2022 | External consultant report | No recurring schedule or process owner |

Options:

  • A. Assign process owners and define recurring, organization-wide procedures for each control area, including standardized inventories, scheduled risk assessments, and monitoring of policy compliance.

  • B. Focus only on tightening technical access-control configurations, since a documented access-control policy already exists for the other areas.

  • C. Treat the framework gap assessment as a one-time project and plan to reassess only after a major security incident occurs.

  • D. Purchase new tools for asset management and phishing simulations before changing any governance or process structure.

Best answer: A

Explanation: Security frameworks such as NIST CSF or ISO 27001 are primarily governance tools. They organize controls into a structured set of expectations so organizations can define standard policies, assign responsibility, and build repeatable processes that can be measured and improved over time.

In the exhibit, several gaps point to weak governance and repeatability rather than pure technical failures: the asset inventory is described as “Ad hoc” with “No organization-wide standard,” and the risk assessment process is listed as “One-time 2022” with “No recurring schedule or process owner.” These phrases show that activities may occur, but they are not standardized or owned in a way that supports consistency and ongoing compliance.

By assigning process owners for each control area and defining recurring, organization-wide procedures (for inventories, risk assessments, and monitoring adherence to access-control policies), the CISO uses the framework to anchor a security program. This turns one-off efforts into managed processes with accountability and scheduled review cycles, enabling continuous improvement and easier compliance reporting to leadership.


Question 45

Topic: Security Operations

Which of the following statements about the use of threat intelligence feeds and information‑sharing communities in identifying and correlating indicators of compromise (IOCs) are TRUE? (Select TWO.)

Options:

  • A. They are designed exclusively for government and critical‑infrastructure entities and should not be used by small or medium‑sized organizations.

  • B. They provide curated lists of suspicious IP addresses, domains, file hashes, and attacker techniques that analysts can correlate with local logs to spot potential compromises.

  • C. They eliminate the need for an organization to collect and analyze its own security logs because external feeds already contain all necessary indicators.

  • D. They enable organizations to share anonymized incident details and IOCs with peers, improving early warning about emerging threats across the community.

  • E. They guarantee that all provided indicators are free of false positives and can always be blocked automatically without human review or tuning.

Correct answers: B and D

Explanation: Threat intelligence feeds and information‑sharing communities play an important role in identifying and correlating indicators of compromise (IOCs) during security operations. Feeds typically provide structured data such as malicious IP addresses, domains, file hashes, and descriptions of attacker tactics, techniques, and procedures (TTPs). Analysts ingest this data into tools like SIEMs and EDR platforms, then correlate it with local logs and alerts to discover whether known malicious indicators have been seen in their environment.

Information‑sharing communities (such as sector‑based groups or regional security forums) allow organizations to contribute and receive IOCs plus narrative context about real incidents. This collaboration improves collective visibility into new campaigns, enables earlier detection of similar attacks at other organizations, and supports better prioritization because shared context clarifies which threats are currently active and impactful.

However, threat intelligence does not remove the need for internal logging or human analysis. Feeds can contain false positives or indicators that are only malicious in certain contexts. Effective use requires validation, tuning, and integration with internal processes and tools rather than blind automatic blocking.
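
A minimal sketch of the correlation step: intersect feed indicators with values observed in local logs. All indicator values are hypothetical.

```python
# Minimal sketch of IOC correlation via set intersection.
feed_iocs = {"198.51.100.23", "bad-domain.test", "aa11bb22cc33dd44ee55ff6677889900"}

local_log_values = {
    "10.0.0.5", "198.51.100.23",          # destination IPs from proxy logs
    "intranet.example.com",
    "aa11bb22cc33dd44ee55ff6677889900",   # file hashes from EDR telemetry
}

matches = feed_iocs & local_log_values
for ioc in sorted(matches):
    print(f"possible compromise: known-bad indicator {ioc} observed locally")
```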


Question 46

Topic: Security Program Management and Oversight

A mid-sized healthcare provider is migrating its patient records to a public cloud. The cloud provider operates data centers in multiple countries.

Regulatory guidance and internal policy state that:

  • Identifiable patient data must remain stored in Country A.
  • The organization wants its global analytics team (in multiple countries) to access business metrics while minimizing cross-border privacy risk.

Which approach is the MOST appropriate to meet these requirements while supporting analytics needs?

Options:

  • A. Allow patient records to be stored in any region as long as they are encrypted with strong algorithms and the encryption keys are kept only in Country A.

  • B. Block all access to patient-related data from outside Country A, including any de-identified reports, to ensure no information ever crosses borders.

  • C. Store all patient records only in the Country A region, generate de-identified or aggregated datasets locally, and allow only those non-identifiable datasets to be replicated and analyzed in other countries.

  • D. Mirror the full patient database to at least three different geographic regions to improve availability and rely on the cloud provider’s standard privacy policy for protection.

Best answer: C

Explanation: This scenario focuses on data residency and cross-border data transfers. The requirement is that identifiable patient data must stay in Country A, while still enabling global analytics. This is a classic problem of balancing compliance with business needs.

The best pattern is to strictly control where identifiable (personal) data is stored, while using de-identification or aggregation techniques to create lower-risk datasets that can be shared and processed across borders. This reduces exposure to foreign jurisdiction laws and government access while preserving the utility of analytics.

Simply encrypting data stored abroad does not eliminate cross-border transfer or storage; regulators and policies often care about the location of data, not just its technical protections. Likewise, blocking all cross-border analytics is unnecessarily restrictive when privacy-preserving techniques can reduce risk and still allow the organization to gain insights from its data.

By storing raw patient records only in Country A and exporting only de-identified or aggregated datasets, the organization respects local residency requirements while minimizing the legal and privacy risks associated with foreign jurisdictions accessing personal data.
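
As a minimal illustration of the export pattern (real de-identification requires far more care, including guarding against re-identification), the sketch below aggregates hypothetical records so only identifier-free counts leave the region.

```python
# Minimal sketch: raw records stay in Country A; only aggregates are exported.
from collections import Counter

raw_records = [  # stored only in the Country A region
    {"patient_id": "P-001", "region": "north", "visit_type": "checkup"},
    {"patient_id": "P-002", "region": "north", "visit_type": "imaging"},
    {"patient_id": "P-003", "region": "south", "visit_type": "checkup"},
]

# Aggregate counts carry no identifiers and can be replicated for global analytics.
export = Counter((r["region"], r["visit_type"]) for r in raw_records)
print(dict(export))
```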


Question 47

Topic: Threats, Vulnerabilities, and Mitigations

A small SaaS company recently discovered that a developer had unknowingly added a malicious third-party library from a public package repository into a production microservice. Currently, any developer can add new dependencies directly from public repositories during builds. Management wants to reduce software supply chain and dependency risk while:

  • Allowing developers to keep using open-source libraries,
  • Minimizing manual review overhead,
  • Standardizing how dependencies are obtained.

Which approach is the BEST way to meet these requirements?

Options:

  • A. Deploy an internal package repository that proxies public sources, require builds to pull only from this trusted repository, and integrate automated dependency vulnerability scanning into the CI/CD pipeline.

  • B. Enable MFA on the CI/CD platform and require code signing for internal releases but make no changes to how external dependencies are downloaded.

  • C. Require developers to manually review the full source code of every new third-party library they add and keep downloading dependencies directly from public repositories.

  • D. Block all outbound access to public package repositories so developers can only use the language’s standard library and internally developed code.

Best answer: A

Explanation: This scenario focuses on software supply chain and dependency risks from untrusted third-party libraries. The organization currently lets developers pull arbitrary packages directly from public repositories, which exposes them to malicious or vulnerable components.

The best Security+‑level mitigation is to introduce dependency management and trusted repositories. By standing up an internal package repository (for example, a proxy/cache) and forcing builds to pull dependencies only from that internal source, the organization centralizes control over which packages and versions are allowed. When this is combined with automated vulnerability scanning in the CI/CD pipeline, it both reduces supply chain risk and keeps developer workflows efficient.

This approach meets all the requirements:

  • Reduces risk from malicious/vulnerable libraries (only vetted or scanned packages are available),
  • Minimizes manual review effort (scanning is automated, approvals can be policy-driven),
  • Standardizes how dependencies are obtained (single internal trusted repository used by all builds).
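
A minimal sketch of the automated CI gate under these assumptions; the package names, versions, and advisory data are hypothetical.

```python
# Minimal sketch: fail the build if any locked dependency appears in an advisory list.
import sys

lockfile = {"acme-utils": "1.3.0", "acme-http": "2.8.1"}        # name -> pinned version
advisories = {("acme-utils", "1.3.0"): "known-malicious release"}

bad = [(pkg, ver, advisories[(pkg, ver)])
       for pkg, ver in lockfile.items() if (pkg, ver) in advisories]

for pkg, ver, reason in bad:
    print(f"BLOCKED: {pkg}=={ver}: {reason}")

sys.exit(1 if bad else 0)   # nonzero exit fails the pipeline stage
```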

Question 48

Topic: Security Architecture

Which TWO statements about secure boot and device attestation are TRUE? (Select TWO.)

Options:

  • A. Secure boot’s primary purpose is to encrypt all data stored on the device so that it remains confidential if the device is stolen.

  • B. Because device attestation validates boot components, it eliminates the need for regular patching and vulnerability management on the device.

  • C. Remote device attestation allows a server or management system to verify that a device booted with expected, untampered firmware and configuration.

  • D. Secure boot verifies the digital signatures of firmware, bootloaders, and OS components before allowing them to execute.

  • E. Secure boot and device attestation are types of network firewalls that filter malicious traffic before it reaches the operating system.

  • F. Device attestation works by scanning user files for malware after login to ensure that no malicious code is present on the system.

Correct answers: C and D

Explanation: Secure boot and device attestation are integrity-focused mechanisms that help ensure only trusted code runs on a device, particularly during the earliest stages of startup.

Secure boot establishes a chain of trust beginning in firmware. Each component in the boot sequence (firmware, bootloader, kernel, and sometimes drivers) is digitally signed. The firmware verifies the signature of the next component before handing over control. If a component is missing, modified, or unsigned, the device can block or warn about the boot, helping to prevent bootkits and other low-level malware from loading.

Device attestation builds on these integrity checks by allowing a remote verifier (such as an MDM server or access gateway) to confirm that a device started in a known-good state. The device reports cryptographic measurements of its boot components, and the verifier compares them with trusted reference values. If the measurements match, the device is considered trustworthy enough to receive certain credentials, network access, or application access.

Neither secure boot nor attestation encrypts user data or replaces endpoint protection and patching. They are part of a broader defense-in-depth strategy to ensure devices are in a trusted state before they are allowed to interact with sensitive networks and data.
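
A minimal sketch of the verifier's comparison step, with hypothetical measurement values standing in for real cryptographic digests.

```python
# Minimal sketch: compare reported boot measurements against known-good references.
import hmac  # hmac.compare_digest gives constant-time comparison

known_good = {
    "firmware":   "9b3a-example-1",
    "bootloader": "c47d-example-2",
    "kernel":     "51f0-example-3",
}

reported = {
    "firmware":   "9b3a-example-1",
    "bootloader": "c47d-example-2",
    "kernel":     "deadbeef-tampered",
}

trusted = all(
    hmac.compare_digest(known_good[c], reported.get(c, "")) for c in known_good
)
print("device trusted" if trusted else "attestation failed: deny access")
```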


Question 49

Topic: Security Program Management and Oversight

A startup offers a cloud‑hosted budgeting app. During signup, the app collects full name, personal email, home address, phone number, date of birth, employer, and annual income. Product management confirms that only an email address and approximate income range are needed for account creation and core features. The privacy officer wants to lower privacy risk and better align with the principle of collecting and using only what is necessary for the stated service.

Which action BEST applies data minimization and purpose limitation in this situation?

Options:

  • A. Export existing user profiles to a separate data warehouse and keep detailed records for at least seven years to demonstrate compliance if regulators ask.

  • B. Keep collecting all current fields but encrypt every column in the database and restrict access to administrators only.

  • C. Redesign the signup flow and backend so that only the email and an income range are collected and stored for new users, and remove or deprecate fields that are not required for the budgeting features.

  • D. Update the privacy notice to say that all collected user data may be used for any future product features and third‑party analytics partners.

Best answer: C

Explanation: This scenario is about applying data minimization and purpose limitation as privacy principles. The budgeting app’s business need for signup is limited: it only requires an email address to identify the user and an approximate income range to tailor budgeting features. Collecting additional personal information such as full home address, employer, and full date of birth is unnecessary for the described purpose.

Data minimization means collecting and retaining the smallest amount of personal data necessary to achieve a clearly defined purpose. Purpose limitation means using that data only for specific, legitimate purposes that are communicated to the user and not expanding use arbitrarily.

The best response, therefore, is to change the process so that the app no longer collects unneeded personal fields and only gathers what is required for the service, directly implementing both principles. Controls like encryption and access restriction are still important, but they do not by themselves minimize or limit the data being collected and processed.


Question 50

Topic: Security Architecture

A security engineer is designing access controls for a sensitive internal web app. They want the access gateway to verify that each corporate laptop’s boot process has not been tampered with, based on cryptographic measurements sent from the device, before allowing a connection. Which security feature BEST meets this requirement?

Options:

  • A. Full disk encryption

  • B. Signature-based antivirus scanning

  • C. Device attestation

  • D. Secure boot

Best answer: C

Explanation: This question focuses on how to ensure device integrity from the perspective of a remote system, such as an access gateway or network controller. The key requirement is that the gateway can verify that a laptop’s boot process has not been tampered with, using cryptographic measurements sent from the device.

Device attestation provides exactly this. During or after boot, the device generates measurements of critical components (such as firmware, bootloaders, and sometimes key OS files) and sends a signed report to a remote verifier. The verifier compares these measurements against a known-good baseline. If they match, the device is considered trustworthy and can be allowed to connect.

By contrast, secure boot enforces that only trusted code runs during startup but does so locally on the device. It does not inherently send a proof to a remote system. Other controls like full disk encryption and traditional antivirus help with confidentiality or malware detection, but they do not provide remote, cryptographic assurance that the boot chain has remained intact.

Together, secure boot and device attestation help ensure that only trusted code runs and that other systems can verify this state before granting access, which is central to modern zero-trust and strong endpoint security architectures.


Questions 51-75

Question 51

Topic: Security Architecture

A security engineer is designing a three-tier web application with web, application, and database servers. The company will deploy a DMZ between the internet and the internal network. Which of the following placements is NOT an appropriate use of a DMZ for this design?

Options:

  • A. Place web, application, and database servers together in the DMZ to simplify firewall rules, and allow internet clients to connect directly to the web and application servers, with the database reachable from both tiers on open ports.

  • B. Place public web servers in the DMZ, with application and database servers on internal subnets. Allow only HTTPS from the internet to the web tier and tightly scoped traffic from web to application and database tiers.

  • C. Place web servers in the DMZ and application servers on an internal subnet behind another firewall, with databases on a more restricted internal subnet reachable only from the application tier.

  • D. Place web servers in the DMZ while keeping application and database servers on internal segments that are not directly routable from the internet, blocking any direct client access to those internal servers.

Best answer: A

Explanation: A DMZ (demilitarized zone) is a network segment placed between an untrusted network (such as the internet) and trusted internal networks. Its primary purpose is to host public-facing services (for example, web servers, reverse proxies) so that they can be accessed from the internet while still being separated from sensitive internal resources.

In a secure three-tier web application, the web tier that directly serves client requests typically resides in the DMZ. The application and database tiers, which often process and store sensitive business logic and data, should remain on internal network segments behind additional layers of firewalling. Only specific, tightly controlled traffic should be allowed from the web tier into the internal tiers, following defense-in-depth and least privilege.

Placing sensitive components such as databases directly in the DMZ or allowing broad, direct internet access to them undermines the DMZ’s purpose and creates an unsafe architecture.
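
To make the allowed-traffic idea concrete, here is a minimal sketch of a tier-to-tier flow policy; the zones and ports are illustrative.

```python
# Minimal sketch: only explicitly allowed tier-to-tier flows pass; everything
# else is denied, so internet clients can never reach the database directly.
ALLOWED_FLOWS = {
    ("internet", "dmz-web", 443),   # HTTPS from clients to the web tier only
    ("dmz-web", "app", 8443),       # tightly scoped web -> application traffic
    ("app", "db", 5432),            # only the app tier reaches the database
}

def is_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS

print(is_allowed("internet", "dmz-web", 443))  # True
print(is_allowed("internet", "db", 5432))      # False: no direct path to data
```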


Question 52

Topic: Security Program Management and Oversight

A startup is redesigning its consumer mobile app and wants to “collect more data for analytics and future product ideas.” The security team is responsible for aligning the design with privacy principles of data minimization and purpose limitation.

Which TWO proposed actions should the security team AVOID to remain compliant with these principles? (Select TWO.)

Options:

  • A. Reuse the customer email list, originally collected for account notifications, to run unrelated advertising campaigns for new partner products without obtaining additional consent or updating notices.

  • B. Provide an in-app setting that lets users opt out of non-essential analytics while still allowing strictly necessary processing for core app functions and security.

  • C. Clearly document in the privacy notice what categories of telemetry are collected, why they are needed, and how long they are retained.

  • D. Delete raw telemetry and detailed session logs after a short, predefined retention period unless a longer period is required for security or legal obligations.

  • E. Limit analytics events to pseudonymous user IDs, coarse city-level location, and feature-usage counts needed to improve existing workflows.

  • F. Configure the app to upload the user’s entire contact list and message history so the company can explore potential future social features.

Correct answers: A and F

Explanation: Data minimization means collecting and retaining only the minimum amount of personal data necessary to achieve a clearly defined purpose. Purpose limitation means using personal data only for the specific, legitimate purposes that were communicated when it was collected, not for unrelated or vaguely defined future uses.

In this scenario, the security team is trying to prevent the app from gathering excessive data or repurposing existing data in ways that were not clearly justified or communicated. Practices that collect broad, sensitive data “just in case” or that reuse data for unrelated marketing without updated notice or consent clearly violate these principles.

Good designs focus analytics on data that is directly needed to improve or operate the app, keep data only as long as necessary, and are transparent about what is collected and why. They also provide users with reasonable control over optional data collection, especially when it is not required for core functionality or security.


Question 53

Topic: Threats, Vulnerabilities, and Mitigations

A security analyst reviews an incident in which a highly skilled, well-funded group used multiple zero-day exploits to quietly exfiltrate sensitive research data over several months. Investigators believe the goal was to gain strategic advantage for a foreign government. Which type of threat actor does this scenario MOST likely describe?

Options:

  • A. Script kiddie

  • B. Insider

  • C. Nation-state

  • D. Hacktivist

Best answer: C

Explanation: This question focuses on identifying the correct threat actor type based on motivation and capabilities, which is a key part of understanding the threat landscape in Domain 2.

Nation-state threat actors are typically sponsored or directly supported by governments. They have significant funding, time, and expertise. Because of these resources, they can develop or purchase zero-day exploits and run long-term, stealthy campaigns focused on espionage and strategic advantage rather than quick financial gain or publicity.

In the scenario, the attackers use multiple zero-day exploits, act over several months without being detected, and are believed to be working to gain strategic advantage for a foreign government. These are hallmark traits of nation-state activity rather than low-skill or purely financially motivated attackers.


Question 54

Topic: General Security Concepts

Which TWO of the following statements about separation of duties and job rotation are NOT accurate from a security perspective? (Select TWO.)

Options:

  • A. Separation of duties is mainly about speeding up workflows by allowing one person to complete all steps without interruption.

  • B. Separation of duties divides high-risk tasks (such as creating and approving payments) between different people to limit opportunities for fraud.

  • C. Job rotation has no security benefit; it is only used for employee development and has nothing to do with detecting misuse.

  • D. Both separation of duties and job rotation reduce the risk that a single insider can carry out and hide fraudulent activity over a long period.

  • E. Job rotation can help uncover inappropriate activities because a new employee may question unusual records, shortcuts, or exceptions left by a predecessor.

Correct answers: A and C

Explanation: Separation of duties and job rotation are fundamental security principles used to reduce insider fraud and detect misuse. Separation of duties requires that no single individual controls all critical steps in a high-risk process, such as creating a new vendor and approving payment to that vendor, or requesting and approving user access. By forcing multiple people to participate, it becomes much harder for one person to commit and conceal fraud.

Job rotation periodically moves employees between roles or duties. Besides operational and training benefits, this exposes records, logs, and workflows to fresh eyes. A new person in a role is more likely to question unusual patterns, undocumented shortcuts, or suspicious transactions the previous person might have been ignoring or deliberately hiding. Together, these controls reduce the likelihood and duration of undetected insider abuse.

The incorrect statements in this question downplay or misstate the security purpose of these principles, treating them as performance or purely HR tools instead of fraud-prevention and detection mechanisms.


Question 55

Topic: Security Architecture

A security engineer divides the corporate network into separate VLANs for user workstations, application servers, and management systems, then uses firewalls to strictly control and log all traffic between these segments. Which security design principle does this practice BEST illustrate?

Options:

  • A. Applying least privilege to user accounts and permissions

  • B. Implementing network segmentation and isolation to limit lateral movement

  • C. Using defense in depth by adding multiple independent security layers

  • D. Providing redundancy and fault tolerance for critical systems

Best answer: B

Explanation: The scenario describes splitting a corporate network into multiple VLANs (for users, application servers, and management systems) and then enforcing strict controls on traffic between those segments using firewalls. This is a clear example of network segmentation and isolation, a core secure network design principle.

Network segmentation uses VLANs, subnets, and firewalls to break a flat network into smaller, controlled zones. By allowing only necessary traffic between segments, segmentation limits lateral movement if an attacker compromises one part of the network. An incident in the user VLAN is less likely to spread to critical server or management networks, thereby reducing overall risk and blast radius.

This directly aligns with Security+ Domain 3.1, where segmentation and isolation using VLANs, subnets, and firewalls are used to limit the spread of attacks and contain compromises within smaller network areas.


Question 56

Topic: General Security Concepts

A security engineer reviews all externally reachable servers and disables unused network services, uninstalls unnecessary applications, and closes listening ports that are not required for business operations. Which fundamental security concept does this activity BEST represent?

Options:

  • A. Enforcing separation of duties

  • B. Providing non-repudiation

  • C. Implementing least privilege

  • D. Reducing the system’s attack surface

Best answer: D

Explanation: The scenario describes a hardening activity: turning off unnecessary services, uninstalling unneeded applications, and closing ports that do not need to be exposed. All of these steps reduce the number of ways an attacker can interact with or reach a system.

This is the essence of attack surface reduction. The attack surface is the set of all reachable and exploitable entry points (services, interfaces, APIs, web endpoints, management ports, etc.). By removing or disabling those that are not required for business use, defenders lower the chance that a vulnerability in one of them can be exploited.

This differs from principles like least privilege or separation of duties, which govern who can do what; attack surface reduction instead limits how many technical paths an attacker has into the system.
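
To make the hardening workflow concrete, here is a minimal Python sketch of an attack-surface audit that lists listening ports and flags any not on an approved list. The allowlist contents are illustrative, and the sketch assumes the third-party psutil package is installed (some platforms require admin rights to see owning processes).

import psutil

APPROVED_PORTS = {22, 443}  # hypothetical ports with a documented business need

for conn in psutil.net_connections(kind="inet"):
    if conn.status != psutil.CONN_LISTEN:
        continue
    port = conn.laddr.port
    if port not in APPROVED_PORTS:
        proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        # Each hit is a hardening candidate: disable, uninstall, or firewall it.
        print(f"Unapproved listener: port {port} ({proc})")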


Question 57

Topic: General Security Concepts

An internal audit at a mid-sized company finds that a single accounts payable clerk can add new vendors to the finance system and also approve and release payments. Management is concerned about fraud and wants a control that both prevents a single employee from hiding misuse and increases the chance that any wrongdoing is detected over time, while still keeping the team cross-trained.

Which approach BEST meets these requirements?

Options:

  • A. Retain the current responsibilities but enable detailed logging of all vendor and payment changes for later forensic review.

  • B. Keep the current single-clerk process but enforce strong passwords and MFA for access to the finance system.

  • C. Split the vendor-creation and payment-approval duties between different employees and require periodic job rotation between those roles.

  • D. Assign all vendor and payment responsibilities to the most senior accountant and perform an external audit once a year.

Best answer: C

Explanation: This scenario targets two related principles: separation of duties and job rotation.

Separation of duties means that no single individual should have full control over a critical business process from start to finish. In accounts payable, this typically means different people should be responsible for adding or changing vendors, approving invoices, and releasing or reconciling payments. This makes it harder for one insider to commit and conceal fraud because they would need collusion from others.

Job rotation means periodically moving employees between roles so that no one remains in a sensitive position indefinitely. Rotation provides cross-training (so others can cover key tasks) and also acts as a detective control: when a new person takes over a role, they may spot irregularities, suspicious patterns, or shortcuts the previous person was using.

In the scenario, management wants to prevent a single clerk from hiding fraud, increase the chance of detecting misuse over time, and keep the team cross-trained. The best answer is the one that explicitly splits the critical steps between different people and periodically rotates those people between roles. This directly applies separation of duties and job rotation to the invoice payment process, aligning with core Security+ principles for preventing and detecting fraud in business operations.
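
As a rough illustration of how separation of duties can be checked programmatically, the following Python sketch flags payments where the same person both created the vendor and approved the payment. The Payment structure and field names are hypothetical, not from any real finance system.

from dataclasses import dataclass

@dataclass
class Payment:
    vendor_created_by: str
    approved_by: str

def violates_sod(p: Payment) -> bool:
    # One person controlled both critical steps of the process.
    return p.vendor_created_by == p.approved_by

payments = [Payment("alice", "alice"), Payment("alice", "bob")]
for p in payments:
    if violates_sod(p):
        print(f"SoD violation: {p.vendor_created_by} created the vendor and approved the payment")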


Question 58

Topic: Threats, Vulnerabilities, and Mitigations

An organization configures its vulnerability scanner to run internal credentialed scans against production servers during a weekly maintenance window. Which primary security objective does this approach BEST achieve compared to external non-credentialed scans?

Options:

  • A. Maximizing visibility into host-level vulnerabilities and misconfigurations while reducing the risk of service disruption from aggressive probing

  • B. Primarily testing the effectiveness and responsiveness of the incident response team during a realistic attack scenario

  • C. Accurately simulating how an unauthenticated attacker on the public internet would discover and exploit exposed services

  • D. Eliminating the need for any additional penetration testing by fully proving exploitability of all discovered weaknesses

Best answer: A

Explanation: This question targets vulnerability assessment concepts in Domain 2, specifically the trade-offs between credentialed vs non-credentialed and internal vs external scans.

An internal credentialed vulnerability scan runs from inside the network and authenticates to systems (for example, with OS or domain credentials). Because it can log in, it gains detailed information on installed software, patch levels, configuration settings, and local security controls. Most scanners perform these checks in a “safe” way, reducing the likelihood of crashing services.

In contrast, an external non-credentialed scan runs from outside the network (often from an internet-facing location) and probes targets without logging in. This better simulates an unauthenticated attacker but has limited visibility into host internals; it sees primarily open ports, banners, and some remotely detectable vulnerabilities.

Therefore, internal credentialed scans primarily support the goal of maximizing vulnerability visibility with lower operational risk to production services, while external non-credentialed scans focus on the attacker’s outside-in perspective and exposure of internet-facing services.


Question 59

Topic: Security Operations

A mid-sized company recently deployed a SIEM. During the first week, analysts receive thousands of alerts on CPU spikes, login failures, and brief network traffic surges that investigations show are caused by normal batch jobs and user logins. The CISO wants to (1) quickly spot truly unusual behavior, (2) reduce false positives, and (3) avoid turning off useful detections. Which action should the security team take next to BEST meet these goals?

Options:

  • A. Collect several weeks of normal system and user activity to establish baselines, then tune SIEM thresholds and correlation rules around those patterns.

  • B. Forward only critical server logs to the SIEM and stop ingesting workstation and network device logs.

  • C. Disable all alerts related to CPU, logins, and network usage and rely only on malware signatures from the endpoint protection platform.

  • D. Increase log retention from 30 to 365 days so analysts can search a longer history of events.

Best answer: A

Explanation: The scenario describes a SIEM generating many alerts that correspond to normal operations, such as scheduled batch jobs, common login patterns, and short network spikes. This is a classic case where the organization has not yet defined what normal behavior looks like, so the SIEM treats many routine events as suspicious.

Baselining normal behavior means observing and documenting typical patterns of activity over time: usual login times and locations, expected resource usage during batch jobs, normal network volumes per application, and so on. Once these baselines are understood, the security team can tune thresholds, correlation rules, and anomaly-detection logic so that alerts fire when activity deviates meaningfully from those norms.

This approach directly supports the CISO’s goals:

  • It helps quickly spot truly unusual behavior because anomalies stand out more clearly against a known baseline.
  • It reduces false positives by ensuring the SIEM does not alert on routine, predictable behavior.
  • It avoids turning off entire categories of detections, preserving valuable visibility and defense in depth.

Without baselines, teams often either drown in noise or overcorrect by disabling useful alerts, both of which weaken security monitoring.
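
The following Python sketch shows the baseline-and-threshold idea in miniature: hourly login-failure counts are compared against several weeks of observed history, and only large deviations alert. The numbers and the three-sigma threshold are illustrative; real SIEM tuning uses richer correlation logic.

import statistics

history = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]  # failures/hour during baselining
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(count: int, sigmas: float = 3.0) -> bool:
    # Alert only when activity deviates meaningfully from the baseline.
    return abs(count - mean) > sigmas * stdev

for observed in (13, 58):
    print(observed, "anomalous" if is_anomalous(observed) else "within baseline")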


Question 60

Topic: Threats, Vulnerabilities, and Mitigations

A development team currently lets each developer download open-source libraries from any public site and commit them directly into application repositories. Security wants to reduce software supply-chain risk from untrusted or vulnerable dependencies while still allowing a smooth development workflow. Which change is MOST appropriate?

Options:

  • A. Require all third-party libraries to be obtained only from a centrally managed, approved repository that enforces version control and vulnerability scanning.

  • B. Encrypt all source code repositories that contain third-party libraries to protect them at rest.

  • C. Allow developers to download libraries from any site but require them to email security a list of dependencies before each release.

  • D. Prohibit the use of all third-party libraries, allowing only internally developed code in applications.

Best answer: A

Explanation: This scenario focuses on software supply-chain and dependency risk, particularly the danger of developers pulling libraries from arbitrary, unvetted sources. The best practice at a Security+ level is to control where dependencies come from and monitor them for known vulnerabilities, while keeping development efficient.

Using a centrally managed, approved repository (such as an internal artifact repository or a vetted, authenticated external repository) enforces that all third-party libraries flow through a controlled point. Security can apply vulnerability scanning, version pinning, and approval workflows there, reducing the chance of malicious or outdated packages being used. Developers still obtain dependencies easily, but from a trusted repository that supports good dependency management practices.

Other options either block normal development, fail to control the source and quality of dependencies, or address a different security property (such as confidentiality) rather than the supply-chain integrity risk described in the scenario.
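
One small piece of that controlled pipeline can be sketched in Python: a check that every entry in a requirements file is pinned to an exact version, so it can be resolved through the approved repository. The file name and pinning policy are illustrative; real artifact proxies and software composition analysis tools do far more (hash verification, vulnerability scanning, approval workflows).

import re

pinned = re.compile(r"^[A-Za-z0-9._-]+==[\w.]+")

with open("requirements.txt") as f:  # assumes the file exists in the working directory
    for line in f:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if not pinned.match(line):
            print(f"Not pinned to an approved version: {line}")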


Question 61

Topic: Security Architecture

A security architect is responsible for several IaaS environments hosted in a public cloud provider. They want a cloud-native security service that will continuously review the configuration of resources (such as security groups, storage bucket permissions, and IAM roles) across all accounts, compare them to best practices, and alert on misconfigurations. The service must operate via cloud APIs and not sit inline with user traffic. Which type of cloud-native security service BEST meets this requirement?

Options:

  • A. Cloud access security broker (CASB)

  • B. Web application firewall (WAF)

  • C. Next-generation cloud firewall

  • D. Cloud security posture management (CSPM)

Best answer: D

Explanation: The scenario emphasizes a need to continuously assess the configuration state of cloud resources (security groups, storage bucket permissions, IAM roles) across multiple accounts, compare them to best practices, and alert on misconfigurations. It also explicitly states that the service should operate via cloud APIs and not be inline with user traffic. This is the core design goal of cloud security posture management (CSPM).

CSPM platforms integrate with cloud providers’ control planes through APIs, inventorying resources and evaluating their settings against security baselines, organizational policies, and regulatory standards. They specialize in detecting issues like publicly exposed storage, overly permissive security groups, and risky IAM policies. This is different from tools that focus on monitoring or filtering user traffic, such as CASBs, firewalls, or WAFs.

Understanding the distinction between control-plane configuration monitoring (CSPM) and data-plane traffic control (firewalls, WAF, some CASB modes) is key for Security+ candidates when selecting the right cloud-native control type for a given requirement.
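
A minimal CSPM-style check can be sketched against one provider's control-plane API. The following Python example, assuming the boto3 library and configured AWS credentials, flags security groups that allow inbound traffic from anywhere; note it reads configuration via API calls and never touches user traffic. (Pagination and multi-account handling are omitted for brevity.)

import boto3

ec2 = boto3.client("ec2")
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg["IpPermissions"]:
        for cidr in rule.get("IpRanges", []):
            if cidr.get("CidrIp") == "0.0.0.0/0":
                print(f"Open to the internet: {sg['GroupId']} ({sg['GroupName']})")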


Question 62

Topic: Security Operations

Which statement BEST explains why access to centralized security logs should be tightly controlled?

Options:

  • A. Logs often contain detailed information about systems and users that attackers could use for reconnaissance or to cover their tracks.

  • B. Logs are automatically encrypted at rest, so only availability needs to be protected with strict access controls.

  • C. Logs only record high-level system status messages, so strict access control is needed solely to avoid confusing non-technical staff.

  • D. Logs are not important for normal operations, so restricting access prevents unnecessary use of storage and CPU resources.

Best answer: A

Explanation: Security log management is a core part of secure operations. Centralized logs support monitoring, forensics, and compliance, but they also become a high-value target. Logs can include usernames, internal IP addresses, application error messages, configuration details, and sometimes even fragments of sensitive data.

Because of this, logs must be treated as sensitive. Tightly controlling access helps preserve confidentiality (only authorized staff can view them) and integrity (only authorized processes or roles can modify or delete them). If an attacker gains log access, they can learn about the environment and potentially alter or delete entries to hide their activity, undermining incident response and audits.


Question 63

Topic: Security Operations

An incident response team has contained a suspected data breach on an internet-facing application server by isolating it from the network. Legal counsel indicates that regulatory reporting and potential litigation are likely, so preserving digital evidence is critical. During eradication and recovery, which action should the team NOT perform?

Options:

  • A. Keep the server disconnected from the production network while malware scans and forensic analysis are performed.

  • B. Rotate all credentials, API keys, and certificates associated with the compromised server as part of the recovery process.

  • C. Capture a full disk image and memory snapshot of the server before rebuilding it from a known-good baseline.

  • D. Delete all system and application log files older than seven days to free disk space before creating any forensic images.

Best answer: D

Explanation: This scenario focuses on the eradication and recovery phases of incident response when a breach may lead to regulatory reporting or litigation. In such cases, preserving logs, disk images, and other digital evidence is essential to support a thorough investigation, root-cause analysis, and potential legal proceedings.

A core principle of incident response in high-stakes investigations is: do not modify or destroy potential evidence before it is collected and preserved. That includes system and application logs, disk contents, and volatile memory. Proper practice is to isolate the affected system, capture forensically sound images (disk and memory), maintain chain of custody, and only then proceed with eradication (removing malware, closing vulnerabilities) and recovery (rebuilding from known-good baselines, rotating credentials, and returning to production).

Any action that deletes or alters logs before imaging undermines the ability to reconstruct the attack, validate its scope, and support regulatory or legal requirements. By contrast, actions like imaging systems, keeping them isolated while analyzed, and rotating credentials are consistent with best practices for eradication and recovery while preserving evidence.
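
One concrete chain-of-custody step is recording a cryptographic digest of each captured image so its integrity can be demonstrated later. Here is a minimal Python sketch; the evidence path is hypothetical, and real investigations also record who captured what, when, and with which write-blocking tools.

import hashlib

def sha256_of(path: str, chunk: int = 1024 * 1024) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

digest = sha256_of("evidence/server01.dd")  # illustrative image file
print(f"server01.dd sha256={digest}")       # log this in the evidence record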


Question 64

Topic: Security Architecture

A security analyst is reviewing the internal network layout. The goal is to limit the spread of malware from user workstations to the internal database server. Only administrator workstations should be able to connect to the database.

Exhibit:

[Internet]
    |
[Edge Firewall]
    |
------------------ Core Switch (L3) ------------------
| VLAN 10: 10.10.10.0/24                              |
|   - User PCs                                       |
|   - Database Server                                |
------------------------------------------------------
| VLAN 20: 10.10.20.0/24                              |
|   - Web Server (DMZ)                               |
------------------------------------------------------
| VLAN 30: 10.10.30.0/24                              |
|   - Admin Workstations                             |
------------------------------------------------------
(Inter-VLAN routing enabled on core switch; no ACLs)

Based on the diagram, which network change would BEST improve segmentation to protect the database server from compromised user PCs?

Options:

  • A. Add an additional perimeter firewall between the Internet and the web server in VLAN 20

  • B. Enable port security on all VLAN 10 switch ports to limit each port to a single MAC address

  • C. Move the database server to its own VLAN/subnet and apply firewall/ACL rules so only VLAN 30 can access it

  • D. Enable WPA3-Enterprise on the wireless network that connects to VLAN 10

Best answer: C

Explanation: The exhibit shows the database server placed in VLAN 10 alongside user PCs, with inter-VLAN routing enabled on the core switch and no ACLs. This means malware on any compromised user workstation in VLAN 10 can directly communicate with the database server at Layer 3 with no segmentation barrier.

A core principle of secure network design is segmentation and isolation: high-value assets like database servers should be placed in their own VLANs/subnets and protected by filtering devices (firewalls or ACLs) that restrict which other segments can communicate with them, and on which ports.

By moving the database server into its own VLAN/subnet and then configuring firewall or switch ACL rules to allow connections only from the administrator VLAN (VLAN 30), the organization creates a strong containment boundary. Compromised user PCs in VLAN 10 would no longer have direct Layer-3 reachability to the database, significantly limiting lateral movement and the spread of attacks.
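
The intended post-change policy can be modeled in a few lines of Python using the standard ipaddress module. The admin subnet follows the exhibit; the dedicated database subnet (10.10.40.0/24) is a hypothetical new assignment used only for illustration.

from ipaddress import ip_address, ip_network

ADMIN_NET = ip_network("10.10.30.0/24")  # VLAN 30 (admin workstations)
DB_NET = ip_network("10.10.40.0/24")     # hypothetical new database VLAN

def allowed(src: str, dst: str) -> bool:
    src_ip, dst_ip = ip_address(src), ip_address(dst)
    if dst_ip in DB_NET:
        return src_ip in ADMIN_NET  # default deny for every other segment
    return True  # other flows governed by separate rules

print(allowed("10.10.30.5", "10.10.40.10"))  # True: admin to database
print(allowed("10.10.10.7", "10.10.40.10"))  # False: user PC to database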


Question 65

Topic: Security Program Management and Oversight

Which of the following statements about backup and recovery strategies in support of business continuity and disaster recovery is NOT accurate? (Select TWO.)

Options:

  • A. Storing all backup copies in the same data center as production is sufficient for disaster recovery, as long as the backups are encrypted.

  • B. Recovery point objective (RPO) defines the maximum acceptable amount of data loss, usually measured as the time between the last good backup and a disruption.

  • C. Keeping offsite backups in a geographically separate region helps protect against regional disasters that could take down the primary site.

  • D. Regularly testing backup restores helps validate that data can be recovered within required RTOs and supports confidence in the DR plan.

  • E. Synchronous replication to a secondary site eliminates the need for separate backups because you can always fail over to the replica if something goes wrong.

Correct answers: A and E

Explanation: Business continuity and disaster recovery rely heavily on well-designed backup and replication strategies. Backups are used to meet recovery point objectives (RPOs) and recovery time objectives (RTOs) by providing restorable copies of data and systems. However, where and how backups are stored, and how they are tested, are critical factors in whether a plan will actually work during a major outage or disaster.

Offsite and geographically diverse backups protect against facility- or region-wide incidents, while replication mainly improves availability and short recovery times for active systems. Replication does not replace traditional backups, because it can mirror destructive events as well as normal changes. Regular restore testing is essential to ensure that backups are complete, consistent, and restorable within the time constraints in the DR plan.
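
A quick worked example of the RPO arithmetic, in Python with illustrative numbers: nightly backups taken every 24 hours mean the worst-case data loss is the full interval, which would violate a hypothetical 4-hour RPO.

backup_interval_hours = 24
rpo_hours = 4

worst_case_loss = backup_interval_hours  # failure occurs just before the next backup
print(f"Worst-case data loss: {worst_case_loss}h; RPO met: {worst_case_loss <= rpo_hours}")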


Question 66

Topic: Security Program Management and Oversight

Which of the following statements about ongoing security monitoring of third‑party vendors is NOT correct?

Options:

  • A. If a vendor provides a recent independent SOC report, the customer can safely skip any further security review during the life of the contract.

  • B. Periodic security reviews help confirm that a vendor continues to meet the organization’s security requirements as technologies, threats, or the vendor’s environment change.

  • C. Major changes at a vendor, such as mergers, new data centers, or significant incidents, should trigger an out‑of‑cycle security review by the customer.

  • D. Attestations and security questionnaires can be used between formal audits to gather updated information about a vendor’s controls and practices.

Best answer: A

Explanation: Ongoing vendor and supply chain risk management requires continuous visibility into how third‑party providers protect your data and services. A single assessment at onboarding is not enough because a vendor’s environment, controls, and risk exposure can change over time.

Independent audit reports, such as SOC reports, are valuable inputs but they are only one part of a broader monitoring program. Organizations should also perform periodic security reviews, request updated questionnaires or attestations, track issues identified in prior assessments, and initiate additional reviews when major changes or incidents occur at a vendor.

The incorrect statement is the one that claims a recent SOC report allows the customer to skip any further security review during the contract. That view misunderstands both the point‑in‑time nature of such reports and the need for continuous oversight in vendor risk management.


Question 67

Topic: Security Architecture

A company provides a cloud-based ticketing application that its business partner’s employees need to access. The partner wants its users to sign in with their existing corporate credentials on the partner’s own login page, and the ticketing provider does not want to store or sync the partner’s passwords. Which identity approach BEST meets this requirement?

Options:

  • A. Migrate both organizations’ users into a single shared Active Directory domain and use internal SSO

  • B. Create individual local accounts in the ticketing application and enforce a strong password policy

  • C. Configure a federation trust between the two organizations using SAML-based single sign-on

  • D. Provide a shared service account stored in a password vault for the partner to use

Best answer: C

Explanation: This scenario describes classic cross-organization single sign-on (SSO). The ticketing provider wants partner employees to log in using their existing corporate credentials at the partner’s own login page, while the provider avoids storing or syncing those passwords.

The way to do this is with identity federation, typically using standards such as SAML or OpenID Connect. In a federated setup, the two organizations establish a trust relationship between their identity providers (IdPs) and service providers (SPs). The partner’s IdP authenticates its users and sends a signed assertion or token to the ticketing provider, which trusts that assertion without ever seeing the users’ passwords.

This maintains separate identity boundaries while delivering a smooth SSO experience: users click something like “Sign in with PartnerCorp,” get redirected to their familiar login page, and then are returned to the application already authenticated.
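
To make the SP-initiated flow less abstract, here is a Python sketch of the SAML HTTP-Redirect binding: the service provider deflates, Base64-encodes, and URL-encodes an AuthnRequest before redirecting the browser to the partner's IdP. The endpoint URL and request contents are hypothetical, and a real deployment would build and sign requests with a vetted SAML library rather than by hand.

import base64
import urllib.parse
import zlib

IDP_SSO_URL = "https://idp.partnercorp.example/sso"  # hypothetical IdP endpoint

# A skeletal AuthnRequest; real requests are generated by a SAML library.
authn_request = (
    b'<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
    b'ID="_abc123" Version="2.0" IssueInstant="2024-01-01T00:00:00Z" '
    b'AssertionConsumerServiceURL="https://tickets.example.com/saml/acs"/>'
)

# HTTP-Redirect binding: raw DEFLATE, then Base64, then URL-encode.
co = zlib.compressobj(9, zlib.DEFLATED, -15)  # -15 = raw DEFLATE, no zlib header
deflated = co.compress(authn_request) + co.flush()
saml_request = urllib.parse.quote(base64.b64encode(deflated))

print(f"{IDP_SSO_URL}?SAMLRequest={saml_request}")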


Question 68

Topic: Security Operations

A mid-sized enterprise SOC has just deployed a SOAR platform to improve its phishing response. Management specifies that phishing handling must: 1) be fast and consistent for common, low-risk phishing emails, 2) still allow human judgment for complex or high-impact cases, and 3) be documented so new analysts can follow the process. Which approach BEST meets these requirements?

Options:

  • A. Build an automated SOAR playbook that handles routine phishing end-to-end and create a separate runbook with manual steps for analysts to follow in complex cases.

  • B. Configure a SOAR playbook that fully automates every phishing alert, including approving password resets and closing tickets, with no analyst review.

  • C. Document a detailed runbook for all phishing scenarios and require analysts to follow it manually without any SOAR automation.

  • D. Rely on the SIEM to send email alerts for phishing and let analysts choose their own response steps based on experience.

Best answer: A

Explanation: This scenario is about applying security automation and orchestration concepts correctly, specifically differentiating between SOAR playbooks and runbooks.

A playbook in a SOAR platform is a machine-executable workflow: it can automatically perform predefined steps such as enriching indicators, quarantining emails, blocking senders, and opening or updating tickets. Playbooks are ideal for repetitive, well-understood tasks where you want speed and consistency.

A runbook is a human-oriented, step-by-step procedure that an analyst follows manually. It may list the tools to use, the checks to perform, decision points, and escalation paths. Runbooks are ideal for complex, high-risk, or ambiguous scenarios where human judgment is required.

In the scenario, management wants: 1) fast, consistent handling of routine phishing (good fit for automated playbooks), 2) human judgment preserved for complex/high-impact cases (good fit for runbooks or human approval steps), and 3) clear documentation for new analysts (runbooks and documented playbooks). The best approach is to use an automated playbook for low-risk, common phishing tasks and a separate runbook for complex cases, combining both automation and human-guided procedures.
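
The hybrid model can be sketched in Python: an automated path for routine, low-risk phishing and an escalation path that points the analyst to a documented runbook. The alert fields, thresholds, and the runbook identifier are illustrative, not a real SOAR product API.

def quarantine_email(alert: dict) -> None:
    print(f"Quarantined message {alert['message_id']}")

def block_sender(alert: dict) -> None:
    print(f"Blocked sender {alert['sender']}")

def handle_phishing_alert(alert: dict) -> str:
    if alert["risk"] == "low" and alert["targets"] <= 5:
        # Playbook path: fast, consistent, fully automated.
        quarantine_email(alert)
        block_sender(alert)
        return "auto-closed by playbook"
    # Runbook path: documented manual steps preserving analyst judgment.
    return "escalated to analyst with runbook PHISH-02"

alert = {"risk": "low", "targets": 2, "message_id": "m-42", "sender": "bad@example.net"}
print(handle_phishing_alert(alert))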


Question 69

Topic: Security Program Management and Oversight

A company is migrating its on-premises customer support ticketing system to a multi-tenant SaaS platform. The application will store customer PII and detailed incident notes. Regulations and internal policy require at least 1-year retention of security-relevant logs and the ability to investigate suspicious activity. The company also wants to minimize the cloud provider’s standing access to its tenant.

Which TWO actions should the security team AVOID? (Select TWO.)

Options:

  • A. Include in the contract a shared responsibility matrix that specifies who handles identity management, data backups, and security monitoring.

  • B. Require that all provider support access to the tenant use just-in-time elevation with MFA and be recorded in audit logs.

  • C. Allow the provider’s operations staff permanent super-admin access to the tenant so they can troubleshoot issues more quickly, without customer approval workflows.

  • D. Perform annual reviews of the provider’s security reports (such as SOC 2) and document any shared control gaps that require internal compensating controls.

  • E. Rely solely on the provider’s default 7-day log retention and not request longer retention or log export capabilities.

Correct answers: C and E

Explanation: This scenario focuses on managing cloud service provider risk for a SaaS platform that stores sensitive customer PII. Effective vendor risk management requires clarifying shared responsibilities, ensuring sufficient logging for investigations and compliance, and tightly controlling the provider’s administrative access.

Relying on minimal default logging or granting uncontrolled, permanent super-admin access to the provider undermines these goals. In contrast, documenting shared responsibilities, enforcing just-in-time and MFA-protected admin access, and reviewing security reports all help address the unique risks of using a cloud provider for sensitive data.


Question 70

Topic: Security Operations

Which of the following statements about using metrics, key performance indicators (KPIs), and maturity models for continuous improvement in security operations are TRUE? (Select TWO.)

Options:

  • A. Focusing only on counts of blocked attacks, without linking them to goals or risk reduction, is an example of effective KPI design.

  • B. Continuous improvement in security operations is primarily achieved by changing tools frequently rather than measuring existing processes.

  • C. KPIs translate security objectives into specific, measurable targets that can be tracked over time.

  • D. Once a security operations maturity level is assessed, it should remain fixed so that metrics are not affected by organizational changes.

  • E. Good security metrics should be as numerous and detailed as possible, even if most stakeholders cannot interpret them.

  • F. A maturity model helps an organization understand its current security operations capabilities and plan stepwise improvements.

Correct answers: C and F

Explanation: Continuous improvement in security operations relies on using meaningful metrics, KPIs, and maturity models to understand current performance and guide change. KPIs are a subset of metrics that directly support strategic or operational goals, such as reducing mean time to detect (MTTD) or increasing patch compliance. Maturity models describe how advanced and consistent an organization’s processes are, helping prioritize improvements from ad hoc practices to more standardized and optimized operations.

Rather than buying new tools constantly, mature programs measure what they do, analyze gaps, and improve processes iteratively. Good metrics are limited in number, clearly defined, and understandable by decision-makers, so they can drive action. Maturity levels and KPIs should be revisited regularly as the environment, threats, and business priorities evolve.


Question 71

Topic: General Security Concepts

A 120-person software-as-a-service startup has never had a formal security program. The new security lead is asked to quickly harden laptops, servers, and cloud accounts using a prioritized list of concrete technical controls that a small team can implement and measure. Management is not currently seeking formal certification, only practical, step-by-step hardening guidance.

Which framework or standard would be the BEST primary starting point for this organization’s needs?

Options:

  • A. ISO/IEC 27001

  • B. NIST Cybersecurity Framework (CSF)

  • C. CIS Critical Security Controls

  • D. SOC 2 Trust Services Criteria

Best answer: C

Explanation: This scenario focuses on choosing the most appropriate security framework given specific needs: a small SaaS startup, no formal program, limited staff, and a desire for a prioritized, concrete list of technical controls to quickly harden systems.

The CIS Critical Security Controls are designed exactly for this use case. They provide a practical, implementation-oriented set of safeguards, grouped into Implementation Groups (IG1 through IG3), that help organizations focus on the most effective technical and operational controls first. They are widely used by small and mid-sized organizations to improve baseline security without immediately adopting a full management-system or certification effort.

In contrast, ISO/IEC 27001 and the NIST Cybersecurity Framework are more about establishing governance and risk management structures (an ISMS or a high-level cyber risk lifecycle) than about providing a ready-made, prioritized checklist of technical controls. SOC 2 is primarily an attestation/reporting framework, not a prescriptive hardening guide.

This aligns with Domain 1, Task 1.1 of Security+: understanding and differentiating the purposes of common frameworks such as NIST CSF, ISO/IEC 27001, and CIS Controls at a conceptual level.


Question 72

Topic: Threats, Vulnerabilities, and Mitigations

Which TWO of the following statements about tailgating and piggybacking as social engineering techniques are TRUE? (Select TWO.)

Options:

  • A. Piggybacking usually occurs when an authorized employee knowingly holds the door open and allows someone else to enter without verifying their identity or credentials.

  • B. Both tailgating and piggybacking are primarily conducted over email to trick users into clicking malicious links.

  • C. Tailgating typically involves an unauthorized person slipping through a secure door behind an employee without the employee’s knowledge.

  • D. Tailgating and piggybacking are ineffective against organizations that use physical access badges.

  • E. Piggybacking always requires the attacker to steal or clone a victim’s access card before approaching the door.

Correct answers: A and C

Explanation: Tailgating and piggybacking are physical social engineering techniques used to bypass access controls at doors or turnstiles. Both rely on human behavior rather than hacking the access control system itself.

In tailgating, an unauthorized person follows closely behind an authorized user through a secure access point without the authorized person realizing they have allowed someone in. In piggybacking, the authorized person is aware of the other individual but, often out of courtesy or misplaced trust, intentionally holds the door open or allows them to enter without properly verifying their authorization.

Because these are physical attacks, they are not carried out via email or links, and they can remain effective even in environments that use badges or card readers if employees do not consistently challenge unauthorized individuals and follow access policies.


Question 73

Topic: Security Architecture

A security administrator at a mid-sized office has reported two issues:

  • Several employee laptops have been stolen from open-plan desks after business hours.
  • Portable backup drives used for nightly backups are left on shelves in the small server room when not in use, and management is worried they could be removed or accessed by unauthorized staff.

The company wants simple, low-cost physical controls that reduce the chance of theft and unauthorized physical access to these devices without changing building access systems or renovating the office.

Which of the following actions will BEST meet these requirements? (Select TWO.)

Options:

  • A. Enable full-disk encryption on all laptops and backup drives.

  • B. Implement a stricter password-complexity policy for all user accounts.

  • C. Attach cable locks to laptops and secure them to desks in open areas.

  • D. Store portable backup drives in a locked safe in the server room when they are not in use.

  • E. Install a motion-activated camera in the hallway outside the office entrance.

Correct answers: C and D

Explanation: This scenario focuses on physical security controls for devices in an office and small server room. The main risks are theft of laptops from open desks and unauthorized physical access to portable backup drives. Logical controls like passwords and encryption help protect data, but they do not physically prevent someone from picking up and walking away with a device.

Physical device controls such as cable locks and safes are designed to prevent or significantly hinder theft and unauthorized handling. In contrast, controls like encryption, password policies, and general hallway cameras either address different aspects of security (data confidentiality, logical access) or protect the facility rather than directly securing specific devices.

The best answers are the options that directly apply device-level physical security to the laptops and backup drives described in the scenario.


Question 74

Topic: Security Program Management and Oversight

A company is negotiating a contract with a cloud-based HR and payroll provider that will store employee PII. Security leadership wants the agreement to clearly define how data must be protected and how security incidents will be reported and verified. Which of the following actions/controls will BEST meet these requirements? (Select TWO.)

Options:

  • A. Allow the provider to freely subcontract services to other vendors to ensure faster scaling and more flexible service delivery.

  • B. Shorten the overall contract term from three years to one year to make it easier to switch providers if problems arise.

  • C. Increase the financial penalties for failing to meet uptime and performance SLAs related to system availability.

  • D. Add a security incident clause that requires timely breach notification (for example, within a defined number of hours), detailed incident reports, and gives the customer the right to review security audit reports or perform its own audits.

  • E. Define minimum security and privacy requirements in the contract, such as encryption of PII, MFA for admin access, secure development practices, and alignment with an accepted security standard.

Correct answers: D and E

Explanation: This question focuses on how to embed security and privacy expectations into contracts and SLAs with third‑party providers, a key part of vendor and supply chain risk management in Security+ Domain 5.

When an organization outsources functions such as HR and payroll, it must ensure the provider is contractually obligated to protect sensitive data like PII. This goes beyond generic service availability SLAs. The contract should define specific security controls the provider must maintain, how quickly it must notify the customer of incidents, what information must be shared, and how the customer can verify that the provider is actually doing what it claims (for example, via audits or third‑party reports).

Defining minimum security requirements ensures there is a clear baseline for how data is protected. Adding explicit incident notification and right‑to‑audit language ensures the customer will be informed promptly when something goes wrong and can independently verify that appropriate controls remain in place over time.


Question 75

Topic: Security Architecture

Which TWO statements BEST describe zero trust network design principles such as microsegmentation and least-privilege access between services? (Select TWO.)

Options:

  • A. Microsegmentation limits communication so that each service or workload can talk only to the specific services it needs, enforcing least-privilege network access.

  • B. Zero trust emphasizes building a hardened perimeter so that systems inside the trusted network can communicate without restrictions.

  • C. Network access decisions are based on identity, context, and device posture rather than trusting anything solely because it is on the internal network.

  • D. Zero trust network design eliminates the need for logging and monitoring because all unauthorized traffic is blocked by default.

  • E. Once a device passes initial authentication to the corporate VPN, zero trust assumes it remains trusted and does not require further verification.

Correct answers: A and C

Explanation: Zero trust network design is based on the idea of “never trust, always verify.” Instead of assuming that anything on the internal network is trustworthy, zero trust continuously evaluates identity, context, and device posture before granting or maintaining access.

Microsegmentation is a key zero trust technique. By breaking the network into many small segments or policy zones, each service or workload is allowed to communicate only with the specific peers it needs. This enforces least-privilege network access between services and dramatically reduces the impact of a compromise, because an attacker cannot easily move laterally.

Zero trust does not rely on a single perimeter, a one-time VPN login, or a “trusted inside/untrusted outside” model. It also increases, rather than reduces, the importance of logging and monitoring so that access decisions and anomalous behavior can be continuously evaluated.
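
A microsegmentation policy decision can be sketched in Python: a service-to-service call is allowed only when an explicit flow rule exists AND the caller's device posture passes. The service names, flow table, and posture fields are all illustrative of the principle, not a specific product.

ALLOWED_FLOWS = {("web-frontend", "orders-api"), ("orders-api", "orders-db")}

def allow(src: str, dst: str, posture: dict) -> bool:
    healthy = posture.get("patched") and posture.get("edr_running")
    # Never trust by default: require both an explicit rule and healthy posture.
    return (src, dst) in ALLOWED_FLOWS and bool(healthy)

print(allow("web-frontend", "orders-api", {"patched": True, "edr_running": True}))  # True
print(allow("web-frontend", "orders-db", {"patched": True, "edr_running": True}))   # False: no rule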


Questions 76-90

Question 76

Topic: Security Program Management and Oversight

Which statement BEST explains why confidential reporting channels are a critical part of an insider threat awareness program?

Options:

  • A. They replace the need for technical controls such as access logging and data loss prevention tools.

  • B. They allow managers to monitor which employees are most likely to become insider threats based on personal characteristics.

  • C. They ensure that only security staff can submit insider threat reports, reducing the risk of inaccurate information.

  • D. They encourage employees to report concerning behaviors early without fear of retaliation or embarrassment.

Best answer: D

Explanation: Insider threat awareness programs emphasize that anyone, including otherwise trusted employees or contractors, can engage in risky or malicious behavior. Because coworkers are often the first to notice concerning behaviors or policy violations, organizations need reporting channels that feel safe to use.

Clear, confidential reporting channels—such as hotlines, web portals, or designated contacts—reduce fear of retaliation, embarrassment, or damaging workplace relationships. When employees trust these channels, they are more likely to report early behavioral warning signs (for example, repeated policy violations, unusual data access, or signs of stress), allowing security and HR to intervene before an incident escalates. This approach is behavior-focused and avoids stigmatizing individuals based on stereotypes or personal characteristics.

Technical controls like logging and DLP remain important, but they work best when combined with a culture where employees understand what to look for and how to report it safely and confidentially.


Question 77

Topic: Security Operations

You are a SOC analyst investigating a SIEM alert about a 60GB outbound transfer from a finance workstation at 02:30 to an unknown public IP address over TCP 443. Which of the following should you AVOID while analyzing this potential data exfiltration incident? (Select TWO.)

Options:

  • A. Confirm whether the destination IP or domain is associated with an approved cloud storage provider or business partner.

  • B. Delete historical firewall and proxy logs older than one day to improve SIEM performance before continuing the investigation.

  • C. Correlate the firewall alert with DLP, proxy, and endpoint logs to determine what files or data types were transferred.

  • D. Search for other large, off-hours outbound transfers from the same workstation or user account to unfamiliar destinations.

  • E. Mark the SIEM alert as a false positive and close the ticket because HTTPS traffic on port 443 is common for web browsing.

Correct answers: B and E

Explanation: The scenario describes classic indicators of potential data exfiltration: a very large (60GB) outbound transfer, occurring at 02:30 (off-hours), from a finance workstation, to an unknown public IP over HTTPS. Even though TCP 443 and HTTPS are common, attackers frequently use them to hide exfiltration in normal-looking traffic.

During such an investigation, an analyst should preserve logs and evidence, correlate data from multiple sources (firewall, proxy, DLP, endpoint), and validate whether the destination is trusted. Actions that prematurely dismiss alerts or destroy logs directly undermine the ability to confirm or scope the exfiltration and violate incident response best practices.
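
A first-pass triage filter for this pattern can be sketched in Python: flag flows that are large, off-hours, and destined for hosts not on an approved list. The flow records, approved-destination list, and thresholds are illustrative.

from datetime import datetime

APPROVED_DESTS = {"198.51.100.10"}  # hypothetical cloud/partner endpoints
THRESHOLD_BYTES = 5 * 2**30         # 5 GB

flows = [
    {"ts": "2024-05-01T02:30:00", "dst": "203.0.113.23", "bytes": 60 * 2**30},
    {"ts": "2024-05-01T14:05:00", "dst": "198.51.100.10", "bytes": 8 * 2**30},
]

for f in flows:
    hour = datetime.fromisoformat(f["ts"]).hour
    off_hours = hour < 6 or hour >= 22
    suspicious = (f["bytes"] > THRESHOLD_BYTES
                  and f["dst"] not in APPROVED_DESTS
                  and off_hours)
    if suspicious:
        print(f"Investigate: {f['bytes'] // 2**30} GB to {f['dst']} at {f['ts']}")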


Question 78

Topic: General Security Concepts

A new security analyst is updating the organization’s control inventory. Management specifically asks for an example of an administrative control that helps reduce the risk of data leakage.

Which of the following controls BEST fits this request?

Options:

  • A. A host-based firewall rule that blocks outbound connections on unauthorized ports

  • B. A written acceptable use policy that defines how employees may handle company data

  • C. A video surveillance system monitoring entrances and exits

  • D. A card badge reader installed on the server room door

Best answer: B

Explanation: This question focuses on distinguishing administrative, technical, and physical controls based on how they are implemented.

Administrative controls are management-driven and primarily documented as policies, standards, guidelines, and procedures. They define what users and administrators are allowed or required to do but do not directly enforce those rules by themselves.

Technical (logical) controls are implemented and enforced through technology, such as software, operating system settings, firewalls, and authentication systems. Physical controls use tangible mechanisms like locks, badge readers, and cameras to restrict or monitor access to facilities and hardware.

In this scenario, management asks for an administrative control. The control that is expressed as a written rule created by management, rather than enforced by hardware or software, is the best fit.


Question 79

Topic: Security Program Management and Oversight

A new security manager is reviewing part of the organization’s resilience planning summary shown in the following exhibit.

Exhibit:

ID | Activity summary | Primary focus
1 | Stand up application servers at a cloud DR site within 2 hours of data center loss | Restore IT infrastructure
2 | Route calls to a third-party contact center during any office outage | Maintain critical customer service
3 | Maintain paper-based order forms for up to 3 days if the ordering system is offline | Continue core business processes
4 | Rebuild damaged on-premises data center hardware after a fire | Repair physical facilities

Based on this information, which statement BEST describes how business continuity planning (BCP) and disaster recovery (DR) are being used together?

Options:

  • A. Items 1 and 4 are BCP activities, while items 2 and 3 are DR activities, because BCP focuses on restoring infrastructure and DR on keeping the business running.

  • B. All four items are DR activities, because BCP would be documented separately and does not address specific technical or process steps.

  • C. Only item 4 is BCP, and items 1–3 are DR activities, because BCP deals with long-term facility replacement and DR covers short-term IT and process workarounds.

  • D. Items 1 and 4 are DR activities, while items 2 and 3 are BCP activities that ensure the business can continue operating during the disruption.

Best answer: D

Explanation: This question tests understanding of the difference between business continuity planning (BCP) and disaster recovery (DR) and how they complement each other.

BCP is about keeping critical business functions operating during and immediately after a disruption, even if normal systems or locations are unavailable. Typical BCP measures include workarounds, alternative processes, or temporary service arrangements so the organization can still serve customers and perform core activities.

DR is about restoring IT systems, data, and facilities after a disruptive event so the organization can return to normal operations. DR addresses tasks such as activating a secondary data center, restoring backups, and repairing or replacing damaged equipment.

In the exhibit, item 1 (standing up application servers at a cloud DR site within 2 hours) and item 4 (rebuilding damaged on-premises data center hardware after a fire) are classic DR actions because they restore infrastructure and facilities. Items 2 (routing calls to a third-party contact center during outages) and 3 (using paper-based order forms when the ordering system is offline) are BCP measures, because they allow customer service and core business processes to continue while IT and facilities are being recovered.

Together, BCP (items 2 and 3) keeps the business running at an acceptable level, while DR (items 1 and 4) focuses on bringing primary systems and locations back online. Effective resilience planning uses both: BCP for continuity during the disruption, DR for restoration afterward.


Question 80

Topic: Security Architecture

Which statement BEST describes a key difference between an endpoint protection platform (EPP) and endpoint detection and response (EDR)?

Options:

  • A. EPP analyzes network traffic at the perimeter, while EDR is limited to scanning files on endpoints for known malware signatures.

  • B. EPP is only used on servers, while EDR is only used on user workstations and mobile devices.

  • C. EPP focuses on preventing threats from executing on endpoints, while EDR focuses on detecting suspicious activity and enabling investigation and response to potential compromises.

  • D. EPP provides historical forensics and threat‑hunting capabilities, while EDR is focused on basic real‑time antivirus scanning and blocking.

Best answer: C

Explanation: Endpoint protection platforms (EPP) are primarily prevention‑focused tools: they provide antivirus/anti‑malware, exploit prevention, application control, and other controls that try to stop malicious code from running on the endpoint in the first place.

Endpoint detection and response (EDR) is primarily detection and response‑focused: it continuously collects endpoint telemetry (processes, connections, behaviors), detects suspicious or malicious activity, and supports investigation, threat hunting, and automated or guided containment and remediation when a threat gets past preventive controls.

In a modern security architecture, organizations often deploy both: EPP to reduce the number of successful attacks, and EDR to quickly detect, investigate, and respond to the attacks that still occur.


Question 81

Topic: Security Operations

Which of the following statements about time synchronization and log retention for security monitoring are TRUE? (Select TWO.)

Options:

  • A. Regulatory and business requirements may mandate that certain logs be retained for many months or years, even if this increases storage usage.

  • B. Shortening log retention to only a few days generally improves forensic investigations by reducing the amount of data to review.

  • C. All security-relevant systems should synchronize their clocks to a common, trusted time source (for example, internal NTP servers) to support accurate event correlation.

  • D. It is best practice to disable logging on low-risk systems to avoid overwhelming SIEM storage and processing capacity.

  • E. Using unsynchronized local system clocks is acceptable as long as each system records timestamps in Coordinated Universal Time (UTC).

Correct answers: A and C

Explanation: Time synchronization and log retention are core parts of effective security monitoring. If different systems have unsynchronized clocks, their log timestamps will not line up, making it very difficult to reconstruct attack timelines or correlate events in a SIEM. Organizations typically use one or more trusted NTP servers and configure all servers, network devices, and security tools to synchronize with them.

Log retention must balance storage cost, investigation needs, and compliance. Many regulations and internal policies require retaining specific log types for defined periods (for example, 1–7 years). Retaining logs long enough to cover the full incident detection window and audit requirements is far more important than minimizing storage at the expense of losing critical evidence.
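
Clock drift against a trusted source can be spot-checked with a minimal SNTP query (RFC 4330), sketched below in Python. The server name is illustrative, and this is a diagnostic sketch only; production hosts should synchronize continuously via configured NTP clients, not ad hoc scripts.

import socket, struct, time

NTP_TO_UNIX = 2208988800  # seconds between the 1900 and 1970 epochs

def ntp_time(server: str = "pool.ntp.org") -> float:
    pkt = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(pkt, (server, 123))
        data, _ = s.recvfrom(48)
    secs = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, seconds field
    return secs - NTP_TO_UNIX

drift = ntp_time() - time.time()
print(f"Local clock drift: {drift:+.2f} s")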


Question 82

Topic: Security Program Management and Oversight

A company is moving its customer data into a new cloud-based CRM platform. The CISO tells senior management they must show due care, not just due diligence, in managing the security risks of this system. Which action BEST demonstrates due care in this context?

Options:

  • A. Approving and funding the rollout of MFA and data loss prevention controls for CRM users, and ensuring they are implemented according to policy

  • B. Commissioning an independent risk assessment of the CRM provider’s environment every year

  • C. Comparing multiple CRM vendors’ security features as part of the request-for-proposal (RFP) process

  • D. Reviewing the cloud provider’s SOC 2 report and security questionnaire before signing the contract

Best answer: A

Explanation: This question focuses on the difference between due diligence and due care in security program management.

At a Security+ level, you can keep the distinction simple:

  • Due diligence is about deciding: investigating, analyzing, and understanding the risks so you can choose appropriate safeguards.
  • Due care is about doing: actually implementing and enforcing the safeguards you decided were necessary.

In the scenario, the CISO explicitly says management must show due care for the new cloud CRM. That means they need to move beyond just reviewing documents and making decisions, and ensure that protective controls are actually put in place and operated effectively.

The option that describes approving, funding, and ensuring implementation of MFA and DLP on the CRM is the one that clearly represents this “doing” side of risk management: putting specific controls into action to protect customer data.


Question 83

Topic: Security Operations

Which TWO of the following statements about applying least privilege and need-to-know in daily administration are INCORRECT and represent unsafe practices? (Select TWO.)

Options:

  • A. To simplify troubleshooting, all administrators should share a common local administrator password on workstations so any admin can log in as needed.

  • B. Operational runbooks should document which roles or groups may perform specific tasks so that sensitive actions, such as modifying backup jobs or firewall rules, are limited to those with a business need.

  • C. It is acceptable to make a widely used service account a member of the Domain Admins group so that it never encounters permission errors during maintenance windows.

  • D. Help desk staff should perform routine user support using standard user accounts, requesting temporary elevation or remote-assistance tools only when additional privileges are necessary.

  • E. Service accounts should be granted only the minimum permissions required for the specific application or service and should not be used for interactive logins.

Correct answers: A and C

Explanation: This question focuses on how least privilege and need-to-know apply to service accounts, local administrator access, and everyday operational tasks.

Least privilege means every account—user, admin, or service—should have only the permissions needed to perform its required tasks, and no more. Need-to-know applies the same idea to access to information and specific operational actions: people should only be able to see and do what their role and business duties require.

For service accounts, best practice is to scope them to a specific application or service, avoid interactive logins, and never grant broad rights like Domain Admin unless there is an extraordinary, well-justified and tightly controlled reason. Over-privileged service accounts are a common path to complete domain compromise.

For local administrator access, organizations should avoid shared local admin passwords and instead use unique, managed credentials and separate privileged accounts, with elevation only when necessary. This limits lateral movement and preserves accountability.

Operational runbooks and role definitions help embed least privilege by clearly stating which roles can perform sensitive tasks, such as changing firewall rules, modifying backup configurations, or managing production databases, ensuring that only those with a business need can perform these actions.


Question 84

Topic: Security Program Management and Oversight

A CISO is onboarding new security analysts and uses the following slide to explain how governance, risk management, and compliance relate to the security program.

Exhibit:

Function | Main focus | Example security activity
Governance | Decide direction and expectations | Approve security policy; set acceptable risk levels
Risk management | Identify, analyze, and treat threats and impacts | Perform risk assessments; choose risk responses
Compliance | Verify adherence to required rules and standards | Conduct audits; map controls to laws and regulations

Based on the exhibit, which planned activity BEST aligns with the governance function?

Options:

  • A. The SOC tunes SIEM correlation rules to alert on violations of the current security policy and reports weekly statistics to management.

  • B. The board formally approves the enterprise security policy and defines the organization’s tolerance for data loss incidents.

  • C. The risk team calculates the likelihood and impact of phishing attacks and recommends specific new controls.

  • D. The compliance team maps existing technical controls to privacy regulations and prepares documentation for an upcoming audit.

Best answer: B

Explanation: The exhibit distinguishes three related but different functions: governance, risk management, and compliance.

Governance is about direction and expectations. It is typically performed by senior leadership or a governing body, and includes decisions such as approving the overall security policy and defining what levels of risk the organization is willing to accept. These high-level decisions guide how the security program operates.

Risk management is about identifying, analyzing, and treating risks. Teams performing risk assessments, estimating likelihood and impact, and choosing mitigations are doing risk management, guided by the risk appetite and policies defined by governance.

Compliance is about checking whether the organization is following required rules and standards, such as laws, regulations, and internal policies. Activities such as audits and mapping controls to regulatory requirements belong here.

Because the exhibit explicitly lists governance example activities as “Approve security policy; set acceptable risk levels,” the activity where the board approves the enterprise security policy and defines tolerance for data loss is the clearest match to governance.


Question 85

Topic: Threats, Vulnerabilities, and Mitigations

Which security control is specifically designed to monitor endpoint behavior in real time and automatically detect and contain malware, including fileless attacks, on workstations and servers?

Options:

  • A. Remote-access VPN

  • B. Traditional network firewall

  • C. Endpoint detection and response (EDR)

  • D. Full-disk encryption

Best answer: C

Explanation: This question targets Domain 2 (Threats, Vulnerabilities, and Mitigations) and focuses on selecting the most appropriate technical control to detect and contain malware on endpoints.

Endpoint detection and response (EDR) tools are designed to gather detailed telemetry from endpoints (such as process creation, memory use, file access, and network connections), analyze that behavior, and identify patterns consistent with malware, including fileless attacks that may never drop a traditional executable to disk. EDR can alert analysts and, in many deployments, automatically isolate or remediate compromised endpoints.

By contrast, controls like full-disk encryption, traditional firewalls, and VPNs provide important protection for data at rest or in transit, or for network boundaries, but they do not focus on real-time behavioral monitoring and response on the endpoint itself, which is what the question is asking for.
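
To make the behavioral angle concrete, here is a minimal sketch of one EDR-style detection rule: alerting when a document reader spawns a shell or script interpreter. The event shape and process lists are illustrative, not any real EDR product's API; actual platforms evaluate far richer telemetry.

```python
# Minimal sketch of one EDR-style behavioral rule: a document reader
# spawning a shell or script interpreter is rarely legitimate.
DOCUMENT_READERS = {"acrord32.exe", "winword.exe", "excel.exe"}
SCRIPT_HOSTS = {"powershell.exe", "cmd.exe", "wscript.exe"}

def suspicious_spawn(event: dict) -> bool:
    """Flag a process-creation event whose parent is a document reader
    and whose child is a script host."""
    return (
        event["parent"].lower() in DOCUMENT_READERS
        and event["child"].lower() in SCRIPT_HOSTS
    )

event = {"parent": "AcroRd32.exe", "child": "powershell.exe"}
if suspicious_spawn(event):
    print(f"ALERT: {event['parent']} spawned {event['child']}")
```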


Question 86

Topic: Security Operations

You are reconstructing an incident on a user workstation. The EDR timeline shows the following sequence of events around the suspected compromise. Based only on this exhibit, which conclusion about the attack chain is BEST supported?

Time (UTC)      Event
10:17:03        outlook.exe saved attachment invoice.pdf
10:17:07        AcroRd32.exe opened C:\Users\user\Downloads\invoice.pdf
10:17:12        AcroRd32.exe spawned powershell.exe (hidden window)
10:17:15        powershell.exe downloaded payload.exe from 203.0.113.23
10:17:19        payload.exe created Run key
                HKCU\Software\Microsoft\Windows\CurrentVersion\Run\Updater

Options:

  • A. A preexisting malicious PowerShell script initiated the attack, launching both outlook.exe and AcroRd32.exe to disguise its activity as normal user actions.

  • B. A phishing PDF attachment was opened, which caused AcroRd32.exe to spawn PowerShell, download a payload, and then create a persistence Run key.

  • C. The attacker first used payload.exe to create a persistence Run key, then used PowerShell to send a phishing email from outlook.exe.

  • D. The user directly downloaded payload.exe from a malicious website, then opened it with AcroRd32.exe, which later saved invoice.pdf as an email attachment.

Best answer: B

Explanation: This question focuses on timeline analysis in digital forensics: reading time-ordered events to reconstruct the most likely sequence of attacker actions.

In the exhibit, each log line has a timestamp and an event. At 10:17:03, outlook.exe saves an attachment, suggesting a user received and saved a file from email. At 10:17:07, AcroRd32.exe opens that PDF. Shortly after, at 10:17:12, AcroRd32.exe spawns powershell.exe with a hidden window, which is suspicious because PDF readers do not normally launch PowerShell. At 10:17:15, powershell.exe downloads payload.exe from an external IP address, indicating the actual malicious code is being retrieved. Finally, at 10:17:19, payload.exe creates a Run key under HKCU, a classic persistence mechanism.

A proper timeline reconstruction sticks closely to what the timestamps and process relationships actually show: a user opens a malicious PDF attachment, which triggers PowerShell, which downloads malware, which then establishes persistence. Any conclusion that reverses this order or adds extra behavior (like sending phishing emails) is not supported by the evidence in the exhibit.
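
As a small illustration of the technique, this sketch sorts the exhibit's raw events by timestamp to recover the chronological chain. The parsing assumes the simple time-plus-description layout shown above; real forensic tooling normalizes much messier log formats.

```python
# Minimal sketch: sort raw timeline entries by timestamp to reconstruct
# the chain of events, regardless of the order in which they were collected.
from datetime import datetime

raw_events = [
    ("10:17:15", "powershell.exe downloaded payload.exe from 203.0.113.23"),
    ("10:17:03", "outlook.exe saved attachment invoice.pdf"),
    ("10:17:19", "payload.exe created Run key "
                 "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\\Updater"),
    ("10:17:07", "AcroRd32.exe opened invoice.pdf"),
    ("10:17:12", "AcroRd32.exe spawned powershell.exe (hidden window)"),
]

for ts, desc in sorted(raw_events, key=lambda e: datetime.strptime(e[0], "%H:%M:%S")):
    print(ts, desc)
```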


Question 87

Topic: Security Operations

A junior SOC analyst seizes a USB drive suspected of containing exfiltrated data. Company policy requires that any evidence be handled as though it might later be used in legal proceedings. The analyst wants to ensure proper chain of custody is maintained as the drive is passed to the incident response lead and then to law enforcement.

Which action BEST satisfies this requirement?

Options:

  • A. Record each transfer of the USB drive, including date, time, names, and signatures of individuals handing off and receiving the evidence.

  • B. Store the USB drive in a locked drawer and verbally inform the incident response lead where it is located.

  • C. Seal the USB drive in an evidence bag and ship it directly to law enforcement using a tracked courier service.

  • D. Create a forensic image of the USB drive and save the hash value in a case notes file without recording who handled the media.

Best answer: A

Explanation: Chain of custody is the documented, chronological record of who has had control of a piece of evidence from the moment it is collected until it is presented in court or the investigation concludes. Its purpose is to show that the evidence has not been altered, substituted, or tampered with and therefore remains reliable.

To maintain chain of custody, every transfer of evidence must be documented, including when the transfer occurred, who released the evidence, and who received it. This documentation supports the integrity and admissibility of the evidence in legal or disciplinary proceedings. Without a clear, written history of control, opposing parties can argue that the evidence may have been compromised.
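
As an illustration only, the sketch below models a custody log as an append-only list of documented transfers. The record fields are hypothetical; in practice, organizations use signed paper or electronic custody forms, often paired with a hash of any forensic image.

```python
# Minimal sketch: a chain-of-custody log as an append-only list of
# documented transfers. Field names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class CustodyTransfer:
    evidence_id: str
    released_by: str
    received_by: str
    timestamp: str
    purpose: str

custody_log: list[CustodyTransfer] = []

def record_transfer(evidence_id: str, released_by: str,
                    received_by: str, purpose: str) -> None:
    """Append one documented hand-off; every transfer must be captured."""
    custody_log.append(CustodyTransfer(
        evidence_id=evidence_id,
        released_by=released_by,
        received_by=received_by,
        timestamp=datetime.now(timezone.utc).isoformat(),
        purpose=purpose,
    ))

record_transfer("USB-0042", "J. Analyst", "IR Lead", "escalation to incident response")
record_transfer("USB-0042", "IR Lead", "Det. Smith (law enforcement)", "criminal referral")
for transfer in custody_log:
    print(transfer)
```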


Question 88

Topic: General Security Concepts

A company’s marketing staff all have local administrator rights on their Windows laptops so they can install any tools they want. Several systems were recently infected with malware after users installed unapproved browser toolbars and file-sharing apps. Management wants to reduce this risk while still allowing staff to easily install approved software without constantly calling the help desk.

Which of the following changes would BEST apply the principle of least privilege in this situation?

Options:

  • A. Remove local administrator rights from marketing users and provide a self-service software portal where they can install only preapproved applications as standard users.

  • B. Require all marketing users to connect via VPN with MFA before accessing company resources, but leave their local administrator rights unchanged.

  • C. Keep local administrator rights for all marketing users but deploy an advanced endpoint detection and response (EDR) solution to automatically block known malware.

  • D. Allow only a few “power users” in marketing to retain local administrator rights so they can install software for the rest of the team as needed.

Best answer: A

Explanation: The principle of least privilege states that users should be granted only the minimum level of access and permissions necessary to perform their job functions. In this scenario, marketing staff do not need full local administrator rights to do their normal work; they mainly need access to approved tools.

By removing local administrator rights and giving users a controlled way to install only preapproved applications, the organization significantly reduces the attack surface. Users can no longer freely install arbitrary, potentially malicious software, but they still maintain enough access to remain productive without constant IT intervention. This both enforces least privilege and reduces malware risk from unapproved installs.

Other options, such as deploying EDR or requiring VPN with MFA, can strengthen security in other areas, but they do not address the core issue of excessive privileges on these endpoints, which is exactly what least privilege is designed to fix.
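
For illustration, the core of such a self-service portal is a simple allowlist check, sketched below with a hypothetical catalog; a production portal would add packaging, signing, and install logging on top of this.

```python
# Minimal sketch: the allowlist check at the core of a self-service
# install portal. Catalog contents are illustrative.
APPROVED_CATALOG = {"slack", "vlc", "7zip"}

def can_install(app_name: str) -> bool:
    """Permit installation only for preapproved applications."""
    return app_name.lower() in APPROVED_CATALOG

for request in ("VLC", "random-toolbar"):
    verdict = "approved" if can_install(request) else "denied (not in catalog)"
    print(f"{request}: {verdict}")
```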


Question 89

Topic: Security Program Management and Oversight

A mid-sized company uses several third-party and open-source components in its customer portal. During a recent high-profile library vulnerability, the security team spent days emailing vendors and manually reviewing code to determine whether the portal was affected. Leadership asks the security team to update procurement and SDLC requirements so they can quickly identify which applications use a vulnerable component and verify that vendors are managing these risks transparently.

Which requirement should the security team add to BEST meet this goal?

Options:

  • A. Require vendors to place a copy of their source code in escrow so it can be accessed if the vendor goes out of business or stops supporting the product.

  • B. Require each software vendor (and internal development team) to provide and maintain a software bill of materials that lists all components and versions used, and to update it when components change or critical vulnerabilities are disclosed.

  • C. Require strict service-level agreements (SLAs) stating that vendors will patch critical vulnerabilities within 30 days of disclosure.

  • D. Require each vendor to provide an annual penetration test report for the application and attest that no critical findings remain open.

Best answer: B

Explanation: This scenario focuses on software supply chain transparency and how a software bill of materials (SBOM) helps manage software risk. When a new vulnerability is announced in a widely used component, organizations need to quickly determine which products include that component and which versions are affected.

An SBOM is essentially an inventory of all software components that make up an application, including third-party libraries and their versions. By requiring vendors and internal teams to maintain an SBOM and keep it up to date, the organization can quickly search for a vulnerable component across its portfolio. This greatly reduces the time spent manually contacting vendors or reviewing code when new vulnerabilities are disclosed.

Other controls in the options (such as penetration tests, patch SLAs, or source code escrow) can be useful, but they do not provide the component-level visibility needed for rapid impact analysis when supply chain vulnerabilities arise. The key concept being tested is that SBOMs and supply chain transparency give organizations clearer insight into what is inside the software they rely on, enabling more effective risk management.
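
As a minimal illustration of why SBOMs speed up impact analysis, the sketch below searches a directory of simplified JSON SBOMs for a vulnerable component and version. The file layout here is hypothetical; real SBOMs follow standard formats such as CycloneDX or SPDX.

```python
# Minimal sketch: scan a directory of simplified SBOM files for a
# vulnerable component. Assumes a made-up JSON layout:
# {"application": ..., "components": [{"name": ..., "version": ...}]}
import json
from pathlib import Path

def find_affected(sbom_dir: str, component: str, bad_versions: set[str]):
    """Return applications whose SBOM lists the component at a bad version."""
    affected = []
    for sbom_path in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(sbom_path.read_text())
        for comp in sbom.get("components", []):
            if comp["name"] == component and comp["version"] in bad_versions:
                affected.append(sbom.get("application", sbom_path.stem))
    return affected

# Example: which applications bundle a vulnerable log4j build?
print(find_affected("./sboms", "log4j-core", {"2.14.1", "2.15.0"}))
```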


Question 90

Topic: Security Program Management and Oversight

A new security manager is reviewing how the company handles security risks. The manager wants to ensure the organization is practicing both due diligence (investigating and deciding how to handle risks) and due care (implementing and maintaining appropriate controls). Which of the following actions is NOT an example of due care or due diligence in managing security risks?

Options:

  • A. Performing an annual enterprise risk assessment to identify and prioritize major threats and vulnerabilities

  • B. Reviewing cloud providers’ security questionnaires and audit reports before signing a hosting contract

  • C. Ignoring repeated critical vulnerability alerts on production servers because scheduling a maintenance window would be inconvenient

  • D. Implementing MFA and updating access-control policies after a risk assessment identifies account takeover as a key risk

Best answer: C

Explanation: Due diligence and due care are complementary concepts in security risk management.

Due diligence is about investigating and deciding: performing risk assessments, reviewing vendor security, and analyzing threats and vulnerabilities so leadership can make informed decisions.

Due care is about doing and maintaining: implementing, operating, and updating reasonable security controls once risks are understood, and not ignoring known issues. An organization that practices due care responds to identified risks with appropriate safeguards and follows its own policies and procedures.

In this scenario, any action that thoughtfully evaluates risk or implements reasonable controls demonstrates due diligence or due care. The action that ignores known, critical vulnerabilities simply for convenience fails both concepts and is therefore the one that does not demonstrate due care or due diligence.


Continue with full practice

Use the CompTIA Security+ SY0-701 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Try CompTIA Security+ SY0-701 on Web View CompTIA Security+ SY0-701 Practice Test

Free review resource

Read the CompTIA Security+ SY0-701 Cheat Sheet on Tech Exam Lexicon for concept review before another timed run.

Revised on Thursday, May 14, 2026