Free CPA ISC Full-Length Practice Exam: 82 Questions

Try 82 free Certified Public Accountant Information Systems and Controls (CPA ISC) questions across the ISC blueprint areas, with answers and explanations, then continue in Mastery Exam Prep.

This free full-length CPA ISC multiple-choice question (MCQ) diagnostic includes 82 original Mastery Exam Prep questions spanning all three ISC blueprint areas.

The CPA ISC section also involves task-based simulations and exhibit-heavy work, so use this page as an MCQ diagnostic rather than a complete simulation of every item type. The questions are original practice questions and are not official exam questions.

Practice count note: exam sponsors can describe total questions, scored questions, task-based simulations, duration, or unscored/pretest-item rules differently. Always confirm current exam-day rules with the sponsor.

For concept review before or after this diagnostic, use the CPA ISC guide on CPAExamsMastery.com.

Before you start

CPA means Certified Public Accountant. ISC means Information Systems and Controls. This page is useful when you want one uninterrupted ISC multiple-choice diagnostic before returning to systems, data, security, privacy, and SOC drills.

Use the score as a diagnostic signal, not as a guarantee. ISC also involves task-based simulations and exhibit-heavy work, so a high score here should be paired with continued review of systems exhibits, control objectives, report scope, and data-reliability judgment.

How to use your result

  • Below 70%: Return to topic drills. Start with the topic that produced the most misses, then retake mixed sets after the explanations make sense.
  • 70-79%: Review every miss and classify it as systems/data, security/privacy, or SOC engagements. Drill the weak category before another timed attempt.
  • 80%+: Move to timed mixed practice and focus on process boundaries, control objectives, and careful exhibit reading.
  • Repeated 75%+ on unseen timed attempts: Schedule or proceed when you can explain the risk, control, and report-scope logic behind each best answer.

Miss pattern to next drill

  • Process flow, data management, reports, or interfaces: Information systems and data management questions. Trace source, input, processing, output, and use.
  • Access, privacy, confidentiality, or safeguards: Security, confidentiality, and privacy questions. Identify the objective and control type.
  • SOC reports, report type, complementary controls, or reliance: System and organization controls questions. Decide who relies on the report and why.
  • Timing pressure or repeated recognition of familiar stems: Timed mixed practice in the full route. Use larger unseen sets so practice builds control judgment instead of answer memorization.

Use the CPA ISC practice route for timed mocks, topic drills, progress tracking, explanations, and full practice.

Exam snapshot

  • Issuer: American Institute of Certified Public Accountants (AICPA)
  • Exam route: CPA ISC
  • Official exam name: CPA ISC — Information Systems and Controls
  • Full-length set on this page: 82 questions
  • Exam time: 240 minutes
  • Topic areas represented: 3

Full-length exam mix

  • Information Systems and Data Management: approximately 40% official weight; 33 questions used
  • Security, Confidentiality and Privacy: approximately 40% official weight; 33 questions used
  • Considerations for System and Organization Controls Engagements: approximately 20% official weight; 16 questions used

Practice questions

Questions 1-25

Question 1

Topic: Information Systems and Data Management

A company uses an ERP system for order-to-cash processing. The system includes:

  • Customer records with payment terms, credit limits, and shipping addresses
  • An hourly API transfer of approved web orders into the ERP
  • Sales order posting that updates inventory and creates accounts receivable entries
  • A month-end sales dashboard showing revenue and margin by product line

Which item is best classified as master data?

  • A. A month-end sales dashboard showing revenue and margin by product line
  • B. An hourly API transfer of approved web orders into the ERP
  • C. Sales order posting that updates inventory and creates accounts receivable entries
  • D. Customer records with payment terms, credit limits, and shipping addresses

Best answer: D

What this tests: Information Systems and Data Management

Explanation: Customer records are master data because they are standing records used repeatedly to process many transactions. The API transfer is an interface, the sales order posting is a transaction flow, and the dashboard is a reporting output.

In an ERP or accounting information system, master data refers to core reference records that remain relatively stable and support ongoing processing. Examples include customer, vendor, employee, item, and chart-of-accounts records. Here, the customer file holds attributes such as payment terms, credit limits, and shipping addresses that are reused whenever orders are entered and processed. By contrast, the hourly API transfer describes how data moves between systems, so it is an interface. The sales order posting is an operational transaction flow because it records business activity and updates accounts. The month-end dashboard is a reporting output because it summarizes processed data for review and decision-making.

  • Customer records are the best answer because they are reusable reference data, not a one-time event.
  • The hourly API transfer is an interface since it moves order data from one system component to another.
  • Sales order posting is transaction processing because it records and updates business events such as inventory and receivables.
  • The month-end dashboard is a reporting output because it presents summarized information after processing.

Master data consists of relatively stable reference records used repeatedly in processing transactions across the system.


Question 2

Topic: Information Systems and Data Management

In a SOC 2 engagement, a payroll processor states that a key processing integrity control is a daily reconciliation of employee hours imported from client files to hours posted in the payroll application. The reconciliation report is generated automatically, and a payroll supervisor is required to review and resolve any differences before payroll is processed.

During tests of operating effectiveness for 25 business days, the practitioner finds that the reconciliation report was generated each day, but on 3 days there was no evidence of supervisor review before payroll processing. On one of those days, an import error caused 18 employee time records to be omitted until the next payroll cycle.

How should this finding be characterized?

  • A. A security operating effectiveness deviation
  • B. A processing integrity design deficiency
  • C. A complementary user entity control deviation
  • D. A processing integrity operating effectiveness deviation

Best answer: D

What this tests: Information Systems and Data Management

Explanation: This is an operating effectiveness issue within processing integrity because the reconciliation control existed and was generated daily, but required supervisory review was not performed consistently. The omitted time records show the lapse affected complete and timely processing, which is central to processing integrity.

In SOC 2, processing integrity addresses whether system processing is complete, valid, accurate, timely, and authorized. Here, the control is designed to detect import differences before payroll runs: the report is generated automatically and a supervisor must review and resolve exceptions. Because that control existed but was not performed as required on 3 of 25 days, the issue is a deviation in operating effectiveness, not a design deficiency. It is also a processing integrity matter, not a security matter, because the problem involves incomplete and delayed payroll processing rather than unauthorized access. It is not a complementary user entity control issue because the facts describe a control the service organization itself is responsible for performing.

  • A processing integrity design deficiency would apply if the control were missing or incapable of detecting import errors; here, the control was established but not consistently performed.
  • A security operating effectiveness deviation would focus on protecting systems from unauthorized access or use, which is not the primary issue in these facts.
  • A complementary user entity control deviation would involve a control the customer must perform; the scenario assigns the review to the payroll processor’s supervisor.

The control was appropriately designed and existed, but it did not operate as prescribed on some days, affecting completeness and timeliness of processing.


Question 3

Topic: Considerations for System and Organization Controls Engagements

A CPA is planning a SOC 2 examination for ApexPay using the carve-out method for subservice organizations.

Current facts:

  • Management’s draft system description identifies ApexPay’s payment application, internal database, employees, and change-management procedures.
  • User authentication is performed by an outsourced identity provider.
  • Production servers are hosted by an outsourced cloud provider.
  • The draft description does not mention either outsourced provider, the carve-out method, or any complementary subservice organization controls.

What should the CPA do next?

  • A. Obtain written representations now that the draft description is complete and accurate, then continue the examination as planned.
  • B. Begin testing authentication and hosting controls at the outsourced providers to determine whether the omitted activities can still be relied on.
  • C. Compare the draft to the description criteria and ask management to revise the description to disclose the system boundaries, the outsourced providers’ roles, the carve-out method, and related complementary subservice organization controls.
  • D. Conclude now that the description criteria are not met and plan a modified report without first requesting changes from management.

Best answer: C

What this tests: Considerations for System and Organization Controls Engagements

Explanation: The draft is missing key elements needed for a complete system description, so the CPA should first compare it to the description criteria and have management correct the omissions. In a SOC examination, testing and reporting decisions come after the system description is complete and properly bounded.

When management’s draft system description appears incomplete, the practitioner’s next step is to compare it to the applicable description criteria and identify gaps. Here, outsourced authentication and cloud hosting affect the system boundary, so they cannot be ignored simply because the carve-out method is used. Under carve-out, management still needs to identify the subservice organizations, describe the nature of their services, and address relevant complementary subservice organization controls. Only after the description is complete and aligned with the criteria should the CPA move to testing or reporting conclusions. Written representations are concluding evidence, not a substitute for comparing the draft to the criteria, and an immediate modified report would be premature before management has a chance to revise the description.

  • Testing outsourced authentication and hosting controls first is out of sequence because the CPA must first determine whether management’s description is complete and properly defines system boundaries.
  • Obtaining written representations now is inappropriate because representations support the final conclusion; they do not cure an incomplete draft description.
  • Planning a modified report immediately is premature because management should first be asked to revise the description to meet the description criteria.

Because the draft omits required boundary and subservice-organization disclosures, the CPA should first compare it to the description criteria and resolve the gaps with management.


Question 4

Topic: Considerations for System and Organization Controls Engagements

A CPA firm is evaluating acceptance of a SOC 2 examination for AtlasPay, a service organization. AtlasPay uses VaultCo, a separate hosting provider, as a subservice organization. AtlasPay has not yet decided whether VaultCo will be presented using the inclusive method or the carve-out method. The CPA firm is independent of AtlasPay, but another practice unit of the firm performs bookkeeping services for VaultCo. What should the engagement partner do next?

  • A. Accept the engagement now because independence only needs to be considered with respect to AtlasPay.
  • B. Ask AtlasPay to provide a representation about VaultCo’s controls, then begin planning control tests.
  • C. Determine whether VaultCo will be included under the inclusive or carve-out method, then evaluate whether the firm’s relationship with VaultCo affects independence.
  • D. Require AtlasPay to use the carve-out method so the firm’s relationship with VaultCo does not matter.

Best answer: C

What this tests: Considerations for System and Organization Controls Engagements

Explanation: The service auditor must always be independent of the service organization. When a subservice organization may be included using the inclusive method, the auditor must also consider whether relationships with that subservice organization affect independence, so the reporting method should be resolved first.

In a SOC engagement, independence is always required with respect to the service organization. A subservice organization adds an extra independence consideration when its services and controls are included in the scope through the inclusive method. Under the carve-out method, the subservice organization’s controls are excluded from the service auditor’s opinion, so the firm’s relationship with that subservice organization is not evaluated the same way for the examination scope. Here, the firm is already independent of AtlasPay, but it has a relationship with VaultCo and AtlasPay has not yet chosen inclusive or carve-out presentation. Therefore, the proper next step is to determine the intended method for VaultCo and then assess the firm’s independence implications before accepting or continuing planning.

  • Accepting immediately is premature because an included subservice organization can create an independence issue under the inclusive method.
  • A management representation about VaultCo’s controls does not resolve the auditor’s own independence or the scope decision.
  • Requiring the carve-out method skips management’s responsibility for the system description and is not the auditor’s first step.

Because the firm is already independent of the service organization, the next planning step is to determine whether the subservice organization will be included in scope, which drives whether the VaultCo relationship creates an independence issue.


Question 5

Topic: Security, Confidentiality and Privacy

An online payroll processor uses a cloud-hosted payroll portal. Employees access the admin console through single sign-on from company laptops, and customers access the portal from the internet.

Recent findings:

  • A phished employee password was used to sign in to the admin console from an unfamiliar IP address. No second factor was required.
  • A critical patch for the internet-facing web server was 30 days overdue.
  • A mass download of payroll files was discovered during a weekly log review, not when it occurred.
  • After malware corrupted shared payroll files, operations were restored from a monthly backup, causing significant rework.

Which control mix best reflects defense-in-depth for this environment?

  • A. Encrypt data at rest and in transit, obtain and review the cloud provider’s SOC 2 report, recertify user access semiannually, and block logins from foreign IP addresses.
  • B. Strengthen password complexity rules, provide quarterly phishing training, perform annual penetration testing, and keep monthly backups for file recovery.
  • C. Segment the payroll environment from the corporate network, disable USB storage on laptops, rotate privileged passwords quarterly, and continue weekly manual log reviews.
  • D. Require MFA for portal access, enforce prompt patching of the internet-facing server, use centralized real-time alerting for unusual downloads, and maintain tested daily immutable backups with an incident response playbook.

Best answer: D

What this tests: Security, Confidentiality and Privacy

Explanation: Defense in depth uses complementary controls across multiple layers and control types rather than relying on one safeguard. The combination of MFA, prompt patching, real-time monitoring, and tested immutable backups directly addresses the credential compromise, exposed server, delayed detection, and weak recovery described in the scenario.

Defense in depth means building overlapping security layers so that if one control fails, others still reduce impact. In this scenario, the weaknesses appear at several points: identity security, system hardening, detection, and recovery. MFA is a preventive control that limits misuse of stolen passwords. Prompt patching of the internet-facing server reduces the chance that known vulnerabilities can be exploited. Centralized monitoring with real-time alerts adds a detective layer so unusual bulk downloads are identified quickly instead of days later. Tested daily immutable backups and an incident response playbook are corrective and recovery measures that help restore operations after malware or file corruption. The best answer is the one that addresses all four observed gaps with coordinated preventive, detective, and corrective controls.

  • Password complexity, phishing training, and annual testing help, but they rely too much on periodic or user-dependent prevention and still leave weak detection and weak recovery.
  • Encryption, SOC 2 review, and semiannual access recertification are useful, but they do not directly close the missing MFA, overdue patching, and delayed detection gaps in the scenario.
  • Segmentation, USB blocking, and password rotation are partial safeguards, yet they do not address the specific stolen-credential path, the overdue internet-facing patch, or the need for stronger recovery.

This option layers preventive, detective, and corrective controls across identity, system, monitoring, and recovery points that match the scenario’s specific failures.


Question 6

Topic: Considerations for System and Organization Controls Engagements

A CPA is preparing training materials for new SOC 2 staff. The CPA wants support for this conclusion:

“In a SOC 2 examination, the Trust Services Criteria are the benchmarks used to evaluate controls over a system. They are organized around COSO-aligned common criteria, with supplemental criteria for certain subject matters and additional specific criteria for privacy.”

Which source best supports that conclusion?

  • A. Vendor-service summary: “The cloud provider encrypts stored data, performs daily backups, and commits to 99.95% monthly uptime.”
  • B. SOC report excerpt: “The examination used the Trust Services Criteria to evaluate the service organization’s controls. Common criteria aligned with COSO apply across categories; availability, processing integrity, and confidentiality add supplemental criteria, and privacy adds additional specific criteria when in scope.”
  • C. Control test result: “Of 30 terminated users selected, 30 were removed from active directories within 24 hours of separation.”
  • D. Security assessment finding: “Two privileged accounts did not require multifactor authentication, increasing the risk of unauthorized access to production systems.”

Best answer: B

What this tests: Considerations for System and Organization Controls Engagements

Explanation: The SOC report excerpt is the only source that directly addresses both what the Trust Services Criteria are used for and how they are structured. The other sources discuss individual controls or service features, not the framework organization of the criteria.

In a SOC 2 examination, the Trust Services Criteria serve as the control criteria against which the service organization’s system is evaluated. Their organization matters: the common criteria apply broadly and are aligned with COSO concepts, while certain subject matters use supplemental criteria, and privacy has additional specific criteria when that category is included. A source that best supports this conclusion must explicitly describe both the purpose of the criteria and their structure. Evidence about one access weakness, one vendor capability, or one successful control test may support a conclusion about a specific control or service commitment, but it does not explain how the Trust Services Criteria are organized or why they are used in the examination.

  • The SOC report excerpt is best because it directly ties the criteria to the examination purpose and identifies common, supplemental, and privacy-specific criteria.
  • The security assessment finding supports a conclusion about a security deficiency, not the organization of the Trust Services Criteria.
  • The vendor-service summary supports conclusions about service features such as encryption or uptime, not COSO alignment or criteria structure.
  • The control test result supports operating effectiveness of a termination control, not the purpose or organization of the criteria framework.

This excerpt directly states both the purpose of the Trust Services Criteria and their organization into COSO-aligned common, supplemental, and privacy-specific criteria.


Question 7

Topic: Security, Confidentiality and Privacy

An entity’s Severity 1 incident response plan and actual timeline are shown below.

Plan requirements:

  • Escalate to incident manager: within 15 minutes after the analyst confirms the incident
  • Isolate affected production host: within 30 minutes after Severity 1 classification
  • Notify privacy officer: within 1 hour after the team determines regulated personal data may be involved
  • Preserve evidence: capture a forensic image before reimaging any compromised host

Actual timeline:

  • 08:10 — SIEM alert flags unusual outbound traffic from payroll application server
  • 08:18 — Analyst confirms unauthorized access and classifies incident as Severity 1
  • 08:27 — Incident manager notified
  • 08:40 — Payroll application server isolated from network
  • 08:50 — Log review indicates exfiltrated file may contain employee SSNs
  • 09:35 — Privacy officer notified
  • 09:50 — Server reimaged for restoration
  • 10:20 — Forensic image captured after reimage completed

Based on the exhibit, which response step was inconsistent with the plan?

  • A. Preservation of forensic evidence was inconsistent.
  • B. Escalation to the incident manager was late.
  • C. Notification to the privacy officer was late.
  • D. Isolation of the affected server was late.

Best answer: A

What this tests: Security, Confidentiality and Privacy

Explanation: The timeline shows that escalation, isolation, and privacy notification all met the plan’s stated trigger points and deadlines. The only mismatch is evidence preservation, because the host was reimaged before the forensic image was captured.

To evaluate an incident timeline, compare each event to the plan’s stated trigger and required deadline or sequence. Here, the analyst confirmed the incident at 08:18 and notified the incident manager at 08:27, so escalation occurred within 15 minutes. The host was isolated at 08:40, which is within 30 minutes of Severity 1 classification. The privacy officer was notified at 09:35, which is 45 minutes after the team determined at 08:50 that employee SSNs might be involved, so that step was timely. The only violation is the evidence-preservation requirement. The plan explicitly says to capture a forensic image before reimaging a compromised host, but the server was reimaged at 09:50 and imaged only afterward at 10:20. That sequence can destroy or alter evidence needed for investigation.

  • Escalation to the incident manager was timely because 08:27 is 9 minutes after confirmation at 08:18.
  • Isolation of the affected server was timely because 08:40 is 22 minutes after Severity 1 classification.
  • Notification to the privacy officer was timely because it occurred 45 minutes after the team identified possible SSN exposure.
  • Preservation of forensic evidence failed because reimaging occurred before the required forensic image was captured.

The plan requires a forensic image before reimaging, but the server was reimaged at 09:50 and the image was not captured until 10:20.
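The deadline checks in this question can be done mechanically: compare each action time to its trigger time and allowed window, and verify the required ordering of the evidence steps. Here is a minimal sketch, using only the times and rules from the exhibit (the list structure and variable names are invented for illustration):

```python
# Sketch: check the exhibit's timeline against the plan's time rules.
from datetime import datetime

def minutes_between(start, end):
    """Elapsed minutes between two same-day HH:MM timestamps."""
    fmt = "%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).seconds // 60

# (step, trigger time, action time, allowed minutes) -- from the exhibit
timed_steps = [
    ("escalate to incident manager", "08:18", "08:27", 15),
    ("isolate affected host",        "08:18", "08:40", 30),
    ("notify privacy officer",       "08:50", "09:35", 60),
]

violations = [step for step, trigger, action, limit in timed_steps
              if minutes_between(trigger, action) > limit]

# Sequence rule: the forensic image must be captured BEFORE reimaging.
reimaged_at, imaged_at = "09:50", "10:20"
if imaged_at > reimaged_at:  # zero-padded HH:MM strings compare correctly
    violations.append("preserve evidence before reimaging")

print(violations)  # → ['preserve evidence before reimaging']
```

Running this confirms the explanation above: every timed step lands inside its window (9, 22, and 45 minutes respectively), and only the evidence-preservation sequence fails.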


Question 8

Topic: Information Systems and Data Management

A service organization sends approved customer rate changes from its billing platform to its invoicing system each night. Management says a billing supervisor reviews a daily rejected-record report and resolves errors before invoices are issued. During a walkthrough, the CPA learns that the interface does not generate any rejected-record report, and no other reconciliation compares rate changes sent to rate changes posted. Several approved rate changes were later found missing from invoices.

Which remediation best addresses this deficiency?

  • A. Implement an automated rejected-record or source-to-target reconciliation report and require timely review and resolution before invoicing.
  • B. Suspend all invoicing and manually reperform a full year of rate changes before resuming processing.
  • C. Require quarterly user access recertifications for the billing platform and invoicing system.
  • D. Retrain the billing supervisor to perform and document the daily review of the existing rejected-record report more consistently.

Best answer: A

What this tests: Information Systems and Data Management

Explanation: The problem is not a one-time failure to perform an existing control. The needed processing integrity control was never actually in place, so the best response is to design and implement a control that detects rejected or missing interface records and requires timely follow-up.

A design deficiency exists when a control, as designed or actually implemented, cannot prevent or detect errors on a timely basis. Here, management describes a daily review control, but the interface does not produce the rejected-record report and no alternative reconciliation exists. That means completeness and accuracy of transferred rate changes are not being monitored at all, making this a design deficiency. An operating deviation would be different: for example, if a valid exception report existed and the supervisor failed to review it on a particular day. In this scenario, the proper remediation is to add or redesign the control itself, such as an automated exception report or source-to-target reconciliation with timely review before invoicing.

  • Retraining the supervisor assumes an existing control was not performed, but the report does not exist, so this does not fix the missing control design.
  • Quarterly access recertifications address authorization and security, not whether interface data is complete and accurate.
  • Suspending all invoicing and manually reperforming a full year of rate changes is an excessive response; the key need is to implement a targeted processing integrity control and remediate affected transactions.

Because no exception report or reconciliation exists, the issue is a design deficiency, so the control itself must be implemented rather than merely enforced.
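A source-to-target reconciliation of the kind the best answer calls for is conceptually simple: compare what the source system sent against what the target system posted and report every difference. The sketch below is hypothetical; the change IDs, rate values, and function name are invented for illustration:

```python
# Sketch: source-to-target reconciliation of rate changes, keyed by change ID.

def reconcile(sent, posted):
    """Return exceptions between rate changes sent (source) and posted (target)."""
    sent_ids, posted_ids = set(sent), set(posted)
    return {
        # Sent but never posted: the completeness failure in the scenario
        "missing_in_target": sorted(sent_ids - posted_ids),
        # Posted with no matching source record: possible unauthorized change
        "unexpected_in_target": sorted(posted_ids - sent_ids),
        # Present in both but with different values: accuracy failure
        "value_mismatch": sorted(
            cid for cid in sent_ids & posted_ids if sent[cid] != posted[cid]
        ),
    }

sent = {"RC-101": 19.50, "RC-102": 22.00, "RC-103": 31.25}
posted = {"RC-101": 19.50, "RC-103": 30.00}

report = reconcile(sent, posted)
print(report)
# → {'missing_in_target': ['RC-102'], 'unexpected_in_target': [],
#    'value_mismatch': ['RC-103']}
```

The control is only complete when a supervisor is required to review this exception output and resolve every item before invoicing, which is the "timely review and resolution" half of the best answer.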


Question 9

Topic: Security, Confidentiality and Privacy

A company’s SOC manager concluded that recent suspicious VPN activity was a password spraying attack against employee accounts. Which source would best support that conclusion?

  • A. An authentication log extract showing one external IP trying the password “Welcome2026!” against 480 different employee usernames in 9 minutes, with no more than one attempt per account
  • B. An incident record showing the SOC blocked the source IP and forced password resets after detection
  • C. A threat intelligence report linking the source IP address to a financially motivated criminal group
  • D. A security assessment finding stating that VPN access does not require MFA for 35 contractor accounts

Best answer: A

What this tests: Security, Confidentiality and Privacy

Explanation: The authentication log is the best support because it shows the actual attack pattern: one common password attempted across many usernames. That is direct evidence of password spraying, unlike evidence about the attacker, a control weakness, or the company’s response.

To identify an attack type, the strongest support is evidence of the behavior itself. Password spraying typically uses one or a few common passwords across many accounts to avoid repeated failures on a single account and reduce lockouts. The authentication log showing one external IP trying the same password once against hundreds of usernames directly supports that conclusion. By contrast, a threat intelligence report helps identify the threat agent, not the attack technique. A finding that MFA is missing describes a vulnerability or control weakness that could allow the attack to succeed, but it does not prove the attack occurred. An incident record showing IP blocking and password resets describes the control response after detection. Evidence of reconnaissance or account enumeration would indicate an earlier attack stage, not the spraying pattern itself.

  • The threat intelligence report identifies who may be behind the activity, but it does not show the specific attack technique used.
  • The MFA finding describes a vulnerability that increases risk, but a weakness alone is not proof of password spraying.
  • The incident record shows how the company responded after detection, not the underlying pattern that defines the attack type.

A single common password attempted across many accounts is the defining pattern of password spraying.
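The log pattern in the best answer can be surfaced programmatically: group authentication failures by source IP and flag any IP that touches many distinct accounts with very few attempts per account. This is a hedged sketch; the log shape, IP address, and thresholds are invented for illustration:

```python
# Sketch: flag a password-spraying signature in (ip, username) auth-failure events.
from collections import defaultdict

def looks_like_spraying(events, min_accounts=50, max_tries_per_account=2):
    """One source IP hitting many distinct accounts, with at most a
    couple of attempts per account, is the classic spraying signature
    (it avoids per-account lockout thresholds)."""
    by_ip = defaultdict(lambda: defaultdict(int))
    for ip, user in events:
        by_ip[ip][user] += 1
    return [
        ip for ip, users in by_ip.items()
        if len(users) >= min_accounts
        and max(users.values()) <= max_tries_per_account
    ]

# Mirrors the exhibit: one IP, 480 usernames, one attempt each
events = [("203.0.113.9", f"user{n}") for n in range(480)]
print(looks_like_spraying(events))  # → ['203.0.113.9']
```

Contrast this with brute forcing, where the same detector would see many attempts against few accounts and the `max_tries_per_account` condition would fail.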


Question 10

Topic: Considerations for System and Organization Controls Engagements

A CPA is reviewing a SOC 2 scoping worksheet for a cloud payroll platform.

Criterion referenceControl summary
CC6Multifactor authentication is required for privileged remote access.
A1Recovery procedures are tested against system availability commitments.
C1Confidential customer files are encrypted and access is limited to approved personnel.
CC9Management evaluates third-party vendors before onboarding.

Which conclusion about the Trust Services Criteria is supported by the exhibit?

  • A. The recovery testing control is a common criterion that applies to every subject matter.
  • B. The confidential file encryption and restricted-access control is an additional confidentiality criterion.
  • C. The multifactor authentication control is an additional availability criterion.
  • D. The vendor evaluation control is an additional privacy criterion.

Best answer: B

What this tests: Considerations for System and Organization Controls Engagements

Explanation: In the Trust Services Criteria, references beginning with CC are common criteria, while subject-specific references such as C are additional criteria. Because the control over encrypting and restricting access to confidential files is mapped to C1, it is the only option that correctly identifies an additional confidentiality criterion.

SOC 2 uses common criteria, labeled CC, across all engagements and subject matters. Additional criteria are used only when the engagement includes availability, processing integrity, confidentiality, or privacy. Those subject-specific criteria are identified by prefixes such as A, PI, C, and P. In the exhibit, CC6 and CC9 are common criteria, so they are not additional criteria. A1 is an additional availability criterion, not a common one. C1 is an additional confidentiality criterion because it addresses the protection of confidential information. This is why the control over encrypting confidential files and limiting access is the best-supported conclusion from the worksheet.

  • The multifactor authentication control is mapped to CC6, so it is a common criterion related to logical access, not an availability-only criterion.
  • The recovery testing control is mapped to A1, which is an additional availability criterion rather than a common criterion.
  • The vendor evaluation control is mapped to CC9, so it is a common criterion about risk mitigation and third parties, not an additional privacy criterion.

Criteria labeled C are additional criteria for confidentiality, while CC criteria are common criteria.
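The prefix rule above can be sketched as a small classifier. The parsing and category strings here are illustrative, not an official AICPA mapping:

```python
# Classify a Trust Services Criteria reference by its prefix.
# CC = common criteria (apply to every SOC 2 engagement); A, PI, C, P mark
# additional criteria for availability, processing integrity,
# confidentiality, and privacy.

ADDITIONAL = {
    "A": "availability",
    "PI": "processing integrity",
    "C": "confidentiality",
    "P": "privacy",
}

def classify(ref: str) -> str:
    """Return 'common', 'additional (<category>)', or 'unknown' for a reference like 'CC6' or 'C1'."""
    prefix = ref.rstrip("0123456789.")  # 'CC6' -> 'CC', 'A1' -> 'A'
    if prefix == "CC":
        return "common"
    if prefix in ADDITIONAL:
        return "additional (" + ADDITIONAL[prefix] + ")"
    return "unknown"

print(classify("CC6"))  # common
print(classify("C1"))   # additional (confidentiality)
```

Applied to the exhibit, CC6 and CC9 classify as common, A1 as additional availability, and C1 as additional confidentiality, which is the logic behind answer B.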


Question 11

Topic: Information Systems and Data Management

An entity’s ERP environment has the following conditions for application and server changes:

  • Developers and system administrators can move changes directly to production.
  • Formal approval is not required before deployment.
  • Evidence of testing is not retained.
  • Emergency fixes are usually not reviewed after implementation.

Which risk most directly reflects weak change management rather than a separate access, monitoring, or recovery weakness?

  • A. Systems could fail to recover within required time frames after a disruption.
  • B. Security events could go undetected because log review and alerting are ineffective.
  • C. Inactive or terminated users could retain system access and perform unauthorized transactions.
  • D. Unauthorized or insufficiently tested changes could enter production and cause processing errors or outages.

Best answer: D

What this tests: Information Systems and Data Management

Explanation: The described conditions point to a classic change management weakness: changes can reach production without approval, testing evidence, or post-implementation review. That most directly increases the risk of unauthorized, erroneous, or incompatible changes causing bad processing results or downtime.

Change management controls are designed to ensure application and infrastructure changes are authorized, tested, approved, documented, and migrated to production in a controlled way. When developers or administrators can move changes directly to production without formal approval or retained test evidence, the main risk is that untested, unauthorized, or poorly understood changes will affect live processing. That can lead to system outages, failed integrations, inaccurate data processing, or unintended configuration changes. Emergency changes may be necessary, but they still should be documented and reviewed afterward. The other choices describe important risks in adjacent control areas, but they are more directly tied to logical access administration, security monitoring, or disaster recovery than to change management.

  • Retained access for inactive or terminated users is mainly a logical access provisioning and deprovisioning problem, not the core risk created by weak change approval and testing.
  • Undetected security events point to weak monitoring, logging, or incident detection controls rather than the release of unapproved or untested changes.
  • Missing recovery time targets indicates a business continuity or disaster recovery weakness, which is different from controlling how changes move into production.

Weak change controls primarily increase the chance that unauthorized or untested code or configuration changes are promoted to production and disrupt operations.
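A detective control over this gap can be sketched as a simple match of deployed changes to approved tickets. The change IDs, field names, and ticket statuses below are hypothetical:

```python
# Detective change-management check: every change deployed to production
# should trace to an approved change ticket. Field names are illustrative.

def unapproved_changes(deployed, approved_tickets):
    """Return deployed change IDs that have no matching approved ticket."""
    approved = {t["change_id"] for t in approved_tickets if t["status"] == "approved"}
    return [c for c in deployed if c not in approved]

deployments = ["CHG-101", "CHG-102", "CHG-103"]   # pulled from deployment logs
tickets = [
    {"change_id": "CHG-101", "status": "approved"},
    {"change_id": "CHG-103", "status": "pending"},  # deployed before approval
]

print(unapproved_changes(deployments, tickets))  # ['CHG-102', 'CHG-103']
```

A check like this is detective, not preventive; the scenario's core fix is still restricting direct production access and requiring approval before deployment.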


Question 12

Topic: Information Systems and Data Management

A manufacturer uses a CRM system for order entry and an ERP system for shipping, invoicing, and the general ledger. Approved customer orders are exported nightly from CRM to ERP.

Process facts:

  • CRM assigns each approved order a sequential order number.
  • The interface file includes order number, customer ID, item code, quantity, and price.
  • ERP rejects any record with an invalid item code and posts all other records.
  • Rejected records are written to an IT operations log, but no business user receives an exception report.
  • No one compares the count of approved orders in CRM to the count of orders successfully loaded into ERP.
  • Accounting found several shipped orders that were never invoiced because their records were rejected during import.

Which correction is the most appropriate response to this processing integrity issue?

  • A. Perform a daily reconciliation of CRM approved-order counts and sequence numbers to ERP successful-load counts, and send rejected records to order management for prompt correction and reprocessing.
  • B. Replace the nightly batch export with a real-time API integration between CRM and ERP.
  • C. Encrypt the nightly interface file and restrict the transfer folder so only IT operations personnel can access it.
  • D. Require dual approval for all changes to the ERP item master before those changes are migrated to production.

Best answer: A

What this tests: Information Systems and Data Management

Explanation: The main issue is not security or system architecture; it is that rejected interface records are not being reconciled or resolved. A daily control-total and sequence reconciliation, combined with exception handling for rejected orders, is the best correction because it addresses missing transactions before shipping and invoicing differences persist.

Processing integrity in an ERP interface depends on transactions being complete, accurate, valid, and processed as intended. Here, approved CRM orders can fail ERP import, yet the process continues without a business-side exception report or reconciliation. That means invalid records are silently excluded from downstream invoicing, creating a completeness failure. The best correction is to reconcile source transactions to loaded transactions using record counts and sequential order numbers, then route rejected items to the responsible business users for timely correction and reprocessing. That response directly targets the broken interface control. Strengthening item-master change approval could help reduce some invalid-code errors, but it would not detect or resolve rejected orders already missing from ERP. Encryption and API replacement do not by themselves fix the missing reconciliation control.

  • Daily reconciliation with exception routing is correct because it detects omitted orders and supports correction of rejected transactions.
  • Dual approval for item-master changes may improve change control, but it does not address the immediate completeness gap between CRM and ERP.
  • Encrypting the file and limiting folder access improves security, not whether all approved orders are successfully processed.
  • Replacing the batch process with a real-time API is an unnecessary redesign; the missing control can be corrected within the current process.

This directly addresses completeness and processing integrity by detecting missing orders and ensuring rejected transactions are corrected and re-entered.
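The reconciliation in the correct answer can be sketched as a comparison of CRM sequence numbers to ERP-loaded order numbers. The order numbers and record layout below are illustrative:

```python
# Completeness reconciliation for the nightly CRM -> ERP interface:
# compare approved CRM order numbers to orders successfully loaded into ERP,
# then route the gap to order management as exceptions. Data is illustrative.

def reconcile(crm_orders, erp_loaded):
    """Return control totals and the approved orders missing from ERP."""
    missing = sorted(set(crm_orders) - set(erp_loaded))
    return {
        "crm_count": len(crm_orders),
        "erp_count": len(erp_loaded),
        "missing_orders": missing,   # exceptions for correction and reprocessing
    }

crm = [1001, 1002, 1003, 1004, 1005]   # sequential numbers assigned by CRM
erp = [1001, 1002, 1004, 1005]         # 1003 rejected on an invalid item code

result = reconcile(crm, erp)
print(result["missing_orders"])  # [1003]
```

Because CRM assigns sequential order numbers, a gap in the loaded sequence is itself a red flag even before counts are compared.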


Question 13

Topic: Considerations for System and Organization Controls Engagements

A CPA firm issued a SOC 2 Type 2 report on May 15 covering January 1 through March 31. On May 25, the service auditor learns from a completed internal investigation that, during March, terminated administrators retained privileged access for several days because an access-deprovisioning script failed. The system description in the report states that privileged access for terminated personnel is removed within 24 hours, and the control was concluded to operate effectively throughout the period.

What should the service auditor do next?

  • A. Retest the current deprovisioning control for a new sample and issue an updated conclusion only for the current month
  • B. Immediately withdraw the report from all users because any post-issuance control problem automatically invalidates the SOC 2 report
  • C. Discuss the newly discovered facts with service organization management, determine whether the facts are reliable and affect the system description or conclusion, and assess whether the report needs revision or other action
  • D. Ignore the matter because the investigation was completed after the report date, so it is outside the examination period

Best answer: C

What this tests: Considerations for System and Organization Controls Engagements

Explanation: The correct next step is to evaluate the newly discovered information with management to determine whether it is reliable and whether it affects the system description or the service auditor’s conclusion. Only after that assessment would the auditor decide whether revision, disclosure, or further action is necessary.

In a SOC engagement, if the service auditor becomes aware after report issuance of facts that existed at the report date and those facts might have affected the report, the auditor should not jump straight to withdrawal, ignore the matter, or switch to testing a new period. The proper response is to discuss the matter with service organization management, assess the reliability of the information, and determine whether the system description or the opinion on design or operating effectiveness is affected. Here, the newly discovered deprovisioning failure occurred during the covered period and directly contradicts both the stated control operation and the conclusion that the control operated effectively throughout the period. If the matter is confirmed and is material, the auditor would then consider revising the report and taking appropriate steps regarding report users.

  • Immediately withdrawing the report is premature; the auditor must first verify the facts and evaluate their effect on the issued report.
  • Ignoring the matter is inappropriate because the access failure occurred during March, which is inside the examination period, even though the investigation finished later.
  • Retesting the current month addresses a different period and does not resolve whether the issued report about January through March is now misstated.
  • The correct response focuses first on evaluating the subsequent facts and their impact on the system description and conclusion.

When facts existing at the report date are discovered after issuance, the service auditor first evaluates their reliability and effect on the report with management before deciding on revision or user notification.


Question 14

Topic: Information Systems and Data Management

A company uses a SaaS billing application hosted by a cloud provider. Relevant facts:

  • The provider contract states that the provider manages physical data center security, network perimeter security, server operating system patching, application updates, and nightly production database backups.
  • The customer organization configures user roles, approves single sign-on group mappings, classifies uploaded customer data, and performs quarterly user access reviews.
  • An internal audit noted that several terminated employees still had active access to the billing application for 10 days after termination.

Under this shared-responsibility arrangement, which responsibility remains with the customer organization?

  • A. Applying security patches to the server operating system supporting the application
  • B. Provisioning, modifying, and removing end-user access to the billing application
  • C. Maintaining physical entry controls and environmental safeguards at the provider’s data center
  • D. Performing nightly backups of the production billing database

Best answer: B

What this tests: Information Systems and Data Management

Explanation: The customer retains responsibility for logical access administration in the SaaS application because the facts say it configures roles, approves SSO mappings, and performs access reviews. The delayed removal of terminated users is therefore a customer control failure, not a provider infrastructure failure.

In a shared-responsibility model, the exact split depends on the service model and the contract. Here, the arrangement is SaaS, and the stated provider responsibilities cover the underlying environment: data center security, network perimeter, operating system patching, application updates, and database backups. The customer still controls how its own users access the application, including role setup, SSO group mapping, periodic access review, and timely deprovisioning of terminated employees. Because the audit issue involves former employees retaining access, the problem falls within the customer’s retained responsibility for logical access governance. A common mistake is assuming the provider is responsible for all security in SaaS, but customers still own important user-access and data-governance controls.

  • Provisioning, modifying, and removing end-user access is correct because the contract assigns user-role and access-review duties to the customer.
  • Applying server operating system patches is a provider task here because the contract explicitly assigns OS patching to the provider.
  • Maintaining physical and environmental controls at the data center is part of the provider’s infrastructure responsibility.
  • Performing nightly production database backups is not retained by the customer because the contract says the provider performs those backups.

Logical user access within a SaaS application remains a customer responsibility when the contract assigns role configuration and access reviews to the customer.
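The customer-side deprovisioning check behind this finding can be sketched as a comparison of HR termination dates to active application accounts. The user names and dates are hypothetical:

```python
from datetime import date

# Customer-retained SaaS control: flag application accounts that remain
# active after the user's termination date. Names and dates are illustrative.

def stale_accounts(active_accounts, terminations, as_of):
    """Return (user, days_since_termination) for terminated users still active."""
    flagged = []
    for user, term_date in terminations.items():
        if user in active_accounts and as_of > term_date:
            flagged.append((user, (as_of - term_date).days))
    return flagged

active = {"jdoe", "asmith", "bkim"}                      # billing app accounts
terms = {"jdoe": date(2024, 3, 1), "bkim": date(2024, 3, 8)}  # HR feed

print(stale_accounts(active, terms, as_of=date(2024, 3, 11)))
# [('jdoe', 10), ('bkim', 3)]
```

Running a check like this daily, rather than only in the quarterly access review, would have caught the 10-day gap noted by internal audit.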


Question 15

Topic: Information Systems and Data Management

A CPA is reviewing documentation for a sales reporting mart. Management concludes the mart uses a snowflake schema rather than a star schema because a central fact table stores measures and foreign keys, while descriptive product data is further normalized into related dimension tables. Which source best supports management’s conclusion?

  • A. Data dictionary excerpt: FactSales(date_key, product_key, customer_key, units_sold, net_sales); DimProduct(product_key, product_name, brand_name, category_name).
  • B. SQL result showing monthly net sales by category and region for the last quarter.
  • C. Data dictionary excerpt: FactSales(date_key, product_key, customer_key, units_sold, net_sales); DimProduct(product_key, product_name, brand_key, category_key); DimBrand(brand_key, brand_name); DimCategory(category_key, category_name).
  • D. Process narrative stating the ETL job extracts sales from the ERP nightly and loads the reporting mart before 6:00 a.m.

Best answer: C

What this tests: Information Systems and Data Management

Explanation: The best support is the data dictionary showing FactSales linked to DimProduct, with DimProduct further linked to DimBrand and DimCategory. That layout shows a central fact table plus normalized dimension tables, which identifies a snowflake schema.

In both star and snowflake schemas, the fact table holds numeric measures and foreign keys used for reporting. The difference is how dimension data is organized. A star schema keeps descriptive attributes together in a single denormalized dimension table, while a snowflake schema normalizes that dimension into related tables, such as separate product, brand, and category tables. Because the conclusion is about schema structure, the strongest evidence is documentation that shows table names, keys, and relationships. The data dictionary excerpt with FactSales, DimProduct, DimBrand, and DimCategory directly shows normalized dimensions branching from the fact table, so it best supports the snowflake conclusion.

  • The data dictionary with separate DimBrand and DimCategory is best because it shows normalized dimension tables around a central fact table.
  • The data dictionary with brand_name and category_name inside one DimProduct supports a star schema instead, not a snowflake schema.
  • The SQL result shows reporting output, but results alone do not reveal whether dimensions are denormalized or normalized.
  • The ETL process narrative explains data movement and timing, not how fact and dimension tables are structured.

It directly shows a central fact table and a product dimension split into related brand and category tables, which is the defining snowflake pattern.
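The extra joins that define the snowflake layout can be illustrated with the exhibit's tables. The row values are made up; in a star schema, brand_name and category_name would sit directly in DimProduct and no outrigger lookups would be needed:

```python
# Snowflake schema: the product dimension is normalized, so rebuilding the
# descriptive view requires resolving keys through DimBrand and DimCategory.
# Table contents are illustrative.

dim_brand    = {10: "Acme"}
dim_category = {20: "Tools"}
dim_product  = {500: {"product_name": "Drill", "brand_key": 10, "category_key": 20}}
fact_sales   = [{"product_key": 500, "units_sold": 3, "net_sales": 270.00}]

def denormalize(fact_sales, dim_product, dim_brand, dim_category):
    """Join the fact table out through the normalized dimension tables."""
    rows = []
    for row in fact_sales:
        p = dim_product[row["product_key"]]
        rows.append((p["product_name"],
                     dim_brand[p["brand_key"]],       # extra snowflake join
                     dim_category[p["category_key"]], # extra snowflake join
                     row["net_sales"]))
    return rows

print(denormalize(fact_sales, dim_product, dim_brand, dim_category))
```

The two key lookups through DimBrand and DimCategory are exactly the normalization the data dictionary in option C documents.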


Question 16

Topic: Information Systems and Data Management

A CPA is documenting how a company’s ERP affects the sales, cash collections, and reporting processes before deciding what controls to test.

  • Customer orders are entered in the sales module.
  • When warehouse staff confirm shipment, the ERP automatically creates the invoice and records accounts receivable and revenue.
  • Customer payments are received by a bank lockbox provider; a daily file posts cash receipts to the accounts receivable subledger.
  • Each night, the ERP posts summary journal entries from the accounts receivable subledger to the general ledger.
  • Management’s daily sales report is generated from the invoice table, not from the general ledger.

At month-end, several shipments made on the last day of the month appeared in accounts receivable and the general ledger on the next day, and those shipments were missing from the month-end daily sales report.

What should the CPA do next to understand how the AIS affects the sales and reporting process?

  • A. Recalculate the month-end lockbox reconciliation and compare deposits to cash receipts postings.
  • B. Review role-based access for warehouse and billing users and identify segregation-of-duties conflicts.
  • C. Confirm selected month-end customer balances and investigate any exceptions.
  • D. Walk through one last-day-of-month sale from shipment confirmation to invoice creation, subledger posting, general ledger posting, and daily sales report output.

Best answer: D

What this tests: Information Systems and Data Management

Explanation: The right next step is to trace a month-end sale through the automated ERP flow and into the report source. Because the issue is a timing difference between shipment, invoicing, subledger and general ledger posting, and report output, a walkthrough shows exactly where the AIS affects recognition and reporting.

To determine how an accounting information system affects a business process, the CPA should first map the transaction through its trigger, automated processing steps, interfaces, and report source. In this scenario, shipment confirmation triggers invoicing and the receivable/revenue entry, cash receipts are posted later from a lockbox file, nightly summaries update the general ledger, and the daily sales report comes from the invoice table. Since the symptom involves last-day shipments appearing on the next day in both accounting records and the sales report, the most useful next step is an end-to-end walkthrough of a month-end sale, including the report logic or data source. That identifies whether the timing issue arises at shipment confirmation, invoice creation, batch posting, or report generation before moving to control testing or substantive procedures.

  • Walking one last-day-of-month sale through shipment, invoicing, posting, and report output directly addresses how the ERP affects sales and reporting.
  • Recalculating the lockbox reconciliation focuses on cash collections, which occur after the sales transaction and do not explain the month-end sales timing issue.
  • Confirming customer balances is a later substantive step; it does not build the needed understanding of transaction flow first.
  • Reviewing access rights may matter for security or segregation of duties, but it does not identify where the transaction timing changes in the process.

An end-to-end walkthrough is the next step because it reveals where the transaction timing changes across the integrated sales, subledger, general ledger, and reporting flow.


Question 17

Topic: Security, Confidentiality and Privacy

An ISC practitioner reviews the following source material for a retail company:

Policy and notice excerpts
- Customer personal information may be shared with third parties only for purposes described in the privacy notice and supported by recorded consent when consent is required.
- Restricted data must be encrypted at rest and in transit.
- Financing-application SSNs must be deleted 90 days after the credit decision.

Incident facts
- Marketing sent an analytics vendor a file containing customer names, email addresses, purchase history, and financing-application status.
- The file did not include SSNs or payment card data.
- The vendor stored the file unencrypted in a shared workspace for 10 days.
- The vendor used the file to build targeted advertising audiences.
- The company had no recorded customer consent for using purchase-history data for targeted advertising.

Which characterization is most supported by the source material?

  • A. It is primarily a confidentiality issue because the encryption requirement governs all customer-data handling, making consent irrelevant to the conclusion.
  • B. It is primarily an availability issue because storing the file in a shared workspace affects whether vendor personnel can access the data when needed.
  • C. It is primarily a security issue because privacy concerns arise only when an attacker gains unauthorized access through a malicious intrusion.
  • D. It is primarily a privacy issue because personal information was disclosed and used beyond documented consent, while the unencrypted storage is a separate confidentiality weakness.

Best answer: D

What this tests: Security, Confidentiality and Privacy

Explanation: The decisive distinction is privacy, not just confidentiality. The source material shows customer personal information was shared and used for targeted advertising without the recorded consent required by policy, while the unencrypted storage separately indicates a confidentiality control failure.

Privacy focuses on whether personal information is collected, used, retained, and disclosed in line with stated notice, consent, and policy obligations. Confidentiality focuses on protecting information from unauthorized disclosure, often through controls such as encryption. Here, the source material explicitly says third-party sharing must align with the privacy notice and recorded consent when required. The vendor received identifiable customer data and used it for targeted advertising without recorded consent, so the core issue is an improper use and disclosure of personal information. The fact that the file was stored unencrypted is also important, but that fact supports a separate confidentiality weakness rather than replacing the privacy conclusion. Nothing in the scenario suggests availability is the main concern, and privacy issues do not require an external cyberattack to exist.

  • Treating the incident as primarily a confidentiality matter overlooks the documented consent requirement; encryption failure matters, but it does not resolve whether the data use was permitted.
  • Treating it as primarily a security matter is too narrow; privacy violations can occur through improper internal or vendor use even without a malicious intrusion.
  • Treating it as an availability matter misstates the issue because the facts do not concern uptime or timely access to systems.
  • Treating it as a privacy matter fits both the data-handling facts and the policy language governing third-party sharing and consent.

Policy focuses on permitted use and disclosure of personal information, so missing required consent makes privacy the decisive issue even though unencrypted storage also weakens confidentiality.


Question 18

Topic: Security, Confidentiality and Privacy

During a SOC 2 confidentiality walkthrough, the CPA notes the following control test result:

Item | Observation
Data | Vendor bank account and routing numbers submitted through the onboarding portal
Current handling | A nightly CSV extract is copied to a shared network folder for exception review
Access | The folder is readable by all AP clerks, 3 interns, and 8 IT developers
Protection | The folder is not encrypted at rest
Retention | Files are retained indefinitely
Policy | Confidential banking data must be encrypted, accessible only to designated AP exception reviewers, and deleted after 90 days

Which corrective response best addresses the deficiency shown in the exhibit?

  • A. Perform a weekly reconciliation of portal uploads to the vendor master file.
  • B. Move the exception files to an encrypted repository, restrict access to designated AP exception reviewers, and automatically delete the files after 90 days.
  • C. Require all AP staff and interns to complete annual confidentiality awareness training.
  • D. Increase the backup frequency for the shared network folder from nightly to hourly.

Best answer: B

What this tests: Security, Confidentiality and Privacy

Explanation: The best response is the one that fixes the actual confidentiality exposure in the exhibit. The files contain sensitive banking data, are broadly accessible, unencrypted, and kept too long, so the corrective action must tighten access, protect the data at rest, and enforce the retention limit.

A corrective response should directly address the control deficiency shown, not just add a general control around it. Here, confidential vendor banking data is stored in an unencrypted shared folder, accessible to users without a business need, and retained indefinitely even though policy requires encryption, least-privilege access, and deletion after 90 days. The strongest remediation is therefore to move the files to an encrypted location, limit access to the designated reviewers only, and automate retention. Training may improve awareness, but it does not remove the inappropriate access or indefinite retention. More frequent backups address availability, not confidentiality. A reconciliation helps completeness or accuracy of processing, not protection of sensitive data.

  • Annual awareness training is helpful but does not by itself remove excessive access, add encryption, or enforce the 90-day retention rule.
  • More frequent backups improve recovery capability and may even create more copies of exposed data; they do not solve the confidentiality deficiency.
  • A weekly reconciliation addresses data completeness or accuracy, not unauthorized exposure of confidential banking information.

This directly remediates the confidentiality gap by aligning storage, access, and retention with the stated policy for sensitive banking data.
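The retention piece of that remediation can be sketched as an automated purge job. The folder path, file pattern, and modified-time rule below are assumptions; in practice this would run against the encrypted, access-restricted repository, with encryption and least-privilege access handled by that repository's own controls:

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Automated retention control: delete exception files older than the policy's
# 90-day limit. Path and file pattern are hypothetical.

RETENTION_DAYS = 90

def purge_expired(folder: Path, now=None):
    """Delete CSV files older than RETENTION_DAYS; return the deleted names."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    removed = []
    for f in folder.glob("*.csv"):
        modified = datetime.fromtimestamp(f.stat().st_mtime, tz=timezone.utc)
        if modified < cutoff:
            f.unlink()
            removed.append(f.name)
    return sorted(removed)
```

Scheduling a job like this removes reliance on anyone remembering to clean up, which is why automatic deletion is part of the best-answer remediation.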


Question 19

Topic: Security, Confidentiality and Privacy

Accounting staff access the company’s ERP remotely through a VPN. During a recent phishing campaign, several employees disclosed their usernames and passwords on a fake login page. The incident investigation found successful VPN logins from unusual foreign IP addresses using those valid credentials; no unpatched-system exploit or malware execution was identified. Management concludes that requiring multifactor authentication for VPN access is the most appropriate preventive control.

Which source best supports management’s conclusion?

  • A. An incident record showing successful VPN logins with phished credentials when the VPN required only username and password.
  • B. A VPN access listing showing several inactive contractor accounts remained enabled after their engagement end dates.
  • C. A firewall log extract showing repeated blocked inbound port scans against the company’s public IP addresses.
  • D. A security assessment finding showing employee laptops were missing recent operating system and browser patches.

Best answer: A

What this tests: Security, Confidentiality and Privacy

Explanation: The best support is evidence that attackers successfully used stolen passwords to access a password-only VPN. That directly supports multifactor authentication as the most appropriate preventive control for this remote-access risk.

Multifactor authentication is a preventive control that is especially effective when the attack path involves stolen or guessed credentials. Here, the facts show a phishing event, successful remote logins using valid usernames and passwords, and a VPN that relied only on single-factor authentication. That combination directly supports adding MFA to remote access. By contrast, missing patches would support vulnerability management, inactive contractor accounts would support deprovisioning and access review controls, and blocked port scans would support perimeter monitoring or firewall hardening. Those controls may still matter, but they do not best address the specific risk of attackers authenticating with compromised user credentials.

  • The incident record is the strongest support because it matches the exact attack path: stolen passwords were enough to gain VPN access.
  • Missing operating system and browser patches point to endpoint hardening and patch management, a different control response.
  • Enabled inactive contractor accounts indicate a deprovisioning weakness, which is important but not the main risk described for active users’ stolen credentials.
  • Blocked port scans relate to perimeter defense and monitoring; they do not show that stronger remote-user authentication is the key mitigation.

It directly ties the unauthorized access to stolen passwords on a single-factor remote-access process, which multifactor authentication is designed to mitigate.


Question 20

Topic: Security, Confidentiality and Privacy

The company’s incident response plan states:

  • A security event is any observable activity noted in logs or alerts.
  • A cybersecurity incident is a security event that results in or is reasonably likely to result in unauthorized access, data exposure, malware execution, or material service disruption.
  • When an incident is identified, relevant evidence must be preserved and the matter must be escalated immediately to the incident response manager and legal/compliance.
  • External notice is considered only after the investigation determines a reportable breach.

At 7:40 a.m., a SOC analyst sees a successful VPN login to a recently terminated employee’s account from an unfamiliar IP address. Five minutes later, database logs show that account ran an export query against a table containing customer Social Security numbers. The analyst does not yet know whether the export file left the network.

What should the analyst do next?

  • A. Classify the activity as a cybersecurity incident, preserve relevant evidence, and escalate under the incident response plan.
  • B. Record the alert as a routine security event because no service disruption or malware has been confirmed.
  • C. Keep the matter within normal SOC monitoring until outbound logs prove the data file left the network.
  • D. Send immediate breach notices to affected customers because the query involved Social Security numbers.

Best answer: A

What this tests: Security, Confidentiality and Privacy

Explanation: This is more than a routine security event. A successful login to a terminated employee account followed by an export query against Social Security number data makes unauthorized access and possible exposure reasonably likely, so the analyst should preserve evidence and escalate as a cybersecurity incident before considering external notification.

A security event is any observable occurrence, such as an alert, login, or log entry. A cybersecurity incident is a security event that actually causes or is reasonably likely to cause unauthorized access, data exposure, malware execution, or material disruption. Here, the successful use of a terminated employee’s account strongly suggests unauthorized access, and the subsequent export query against customer SSN data raises a clear confidentiality risk. That is enough to meet the incident threshold under the stated plan, even though exfiltration is not yet confirmed. The proper next step is to preserve relevant evidence and escalate internally under the incident response plan. Customer or regulatory notification may follow later, but only after investigation and legal/compliance review determine that a reportable breach occurred.

  • Waiting for proof that the file left the network is too late; incident escalation begins when unauthorized access or likely data exposure is identified.
  • Immediate customer notice is premature because external reporting depends on breach determination, not just suspicious access to sensitive data.
  • Treating the matter as a routine event ignores the key fact that a terminated account successfully accessed SSN data.
  • A service outage or malware alert is not required; confidentiality risk alone can make an event a cybersecurity incident.

A terminated employee account successfully accessing SSN data makes unauthorized access reasonably likely, so incident escalation and evidence preservation are required before any external reporting decision.


Question 21

Topic: Information Systems and Data Management

A CPA is testing a control over configuration parameters in an acquired billing application. Production parameters are maintained in an admin console outside the CI/CD pipeline and can affect invoice approval thresholds, automatic write-off limits, and posting dates.

Management states the key control is: each Friday, the application manager reviews a system-generated report of all production parameter changes for the week and agrees each change to an approved ticket.

The CPA has already:

  • obtained the policy for requesting and approving parameter changes,
  • confirmed that system administrators can change parameters directly in production, and
  • obtained a Q2 spreadsheet export of parameter changes that was manually saved by the application manager.

What should the CPA do next?

  • A. Reconcile the Q2 parameter-change export to independent system audit logs to establish the completeness and accuracy of the population before selecting samples.
  • B. Select a sample from the Q2 export and inspect whether each sampled change had an approved ticket and evidence of Friday review.
  • C. Reperform one approved parameter change in a test environment to confirm that the application accepts the configured value.
  • D. Review whether developers can move code into production through the CI/CD pipeline without approval.

Best answer: A

What this tests: Information Systems and Data Management

Explanation: The best next step is to validate the completeness and accuracy of the parameter-change report before using it for control testing. If the population is incomplete or altered, any sample drawn from it would not provide reliable evidence about whether all production parameter changes were reviewed.

When a control relies on a system-generated report, the CPA should first determine whether that report is complete and accurate enough for the intended test. Here, the key control is the weekly review of all production parameter changes, and the application manager’s spreadsheet export is the population that would likely be used for sampling. Because configuration parameters can be changed directly in production outside the CI/CD pipeline, those changes may not appear in normal code-deployment records. Reconciling the export to independent audit logs helps confirm that the export captures all relevant parameter changes and that no changes have been omitted or altered. Only after establishing that reliability should the CPA select samples to inspect approvals and review evidence.

  • Selecting samples immediately is premature because the CPA has not yet established that the report used for sampling includes all production parameter changes.
  • Reviewing CI/CD code-promotion access addresses software deployment controls, not the specific control over production configuration-parameter changes outside the pipeline.
  • Reperforming a parameter change may provide functional insight, but it does not first resolve whether the population for testing the weekly review control is reliable.

Because the control depends on a report of all parameter changes, the CPA must first verify that the report population is complete and accurate.
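The completeness-and-accuracy check in answer A can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical `change_id` key field; a real reconciliation would also agree dates, parameter names, and values between the export and the independent audit log.

```python
# Sketch of reconciling the manually saved export to independent system
# audit logs. Field name "change_id" is hypothetical, for illustration only.

def reconcile_population(export_rows, audit_log_rows, key="change_id"):
    """Return items in one population but not the other."""
    export_ids = {row[key] for row in export_rows}
    log_ids = {row[key] for row in audit_log_rows}
    return {
        "missing_from_export": sorted(log_ids - export_ids),  # possible omissions
        "not_in_audit_log": sorted(export_ids - log_ids),     # possible alterations
    }

export = [{"change_id": "CHG-101"}, {"change_id": "CHG-102"}]
audit_log = [{"change_id": "CHG-101"}, {"change_id": "CHG-102"},
             {"change_id": "CHG-103"}]

result = reconcile_population(export, audit_log)
print(result["missing_from_export"])  # ['CHG-103']: the export omits a change
```

Any item in `missing_from_export` means the population is incomplete, and sampling from the export before resolving it would not support a conclusion about all production parameter changes.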


Question 22

Topic: Security, Confidentiality and Privacy

An online retailer allows customers to post product reviews. After one review was submitted, multiple users reported that opening the product page caused their browsers to run unexpected JavaScript and redirect them to a fake sign-in page. The security analyst concluded the site experienced a stored cross-site scripting attack.

Which evidence best supports that conclusion?

  • A. A web application firewall log showing a login request with admin' OR '1'='1' -- in the username field followed by a database error.
  • B. An incident record showing the review text stored <script>window.location='https://acct-verify.example'</script> in the database and the page template later displayed that review to other users without output encoding.
  • C. A VPN log extract showing the same authenticated API request and token were captured once and accepted again 14 seconds later.
  • D. A crash dump summary showing a very long input string overwrote adjacent memory and changed the process return address.

Best answer: B

What this tests: Security, Confidentiality and Privacy

Explanation: The best support is the incident record showing malicious JavaScript stored in the review field and later served to other users without output encoding. That is the defining pattern of stored cross-site scripting, where untrusted input is persisted and then executed in victims’ browsers.

Cross-site scripting occurs when an application includes untrusted input in a web page in a way that lets the browser execute it as code. In a stored XSS attack, the malicious payload is saved by the application, such as in a review, comment, or profile field, and then delivered to later visitors. The strongest supporting evidence therefore shows both persistence of the script and unsafe rendering to users. The incident record does exactly that by showing a <script> payload stored in the database and displayed without output encoding. By contrast, a quote-based condition like OR '1'='1' points to SQL injection, repeated acceptance of the same authenticated request points to a replay attack, and overwritten memory with a changed return address points to buffer overflow or return-oriented exploitation.

  • The incident record with stored <script> content supports stored XSS because it shows script code persisted and executed in users’ browsers.
  • The login request containing OR '1'='1' is classic SQL injection evidence because it targets database query logic, not browser-side script execution.
  • The repeated authenticated request and token acceptance indicate a replay attack because a previously valid message was resent and accepted.
  • The overwritten memory and changed return address indicate buffer overflow or return-oriented exploitation at the process level, not XSS.

Stored script content that is later rendered in other users’ browsers without output encoding is direct evidence of stored cross-site scripting.
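The stored XSS pattern, and the output encoding that prevents it, can be shown with a short sketch. This is illustrative only, using Python's standard-library `html.escape` as a stand-in for whatever encoding the page template should apply.

```python
import html

# The payload from the incident record, persisted in the review field.
stored_review = "<script>window.location='https://acct-verify.example'</script>"

# Unsafe rendering: the stored payload is inserted into the page verbatim,
# so later visitors' browsers would execute it (the stored XSS pattern).
unsafe_html = "<div class='review'>" + stored_review + "</div>"

# Output encoding: special characters are escaped, so the browser displays
# the review as inert text instead of executing it as script.
safe_html = "<div class='review'>" + html.escape(stored_review) + "</div>"

print("<script>" in unsafe_html)  # True: executable script reaches the page
print("<script>" in safe_html)    # False: payload is neutralized
```

The evidence in answer B matches the unsafe branch: the payload was both persisted and rendered without encoding, which is exactly what stored XSS requires.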


Question 23

Topic: Security, Confidentiality and Privacy

A company uses a SaaS vendor to process customer billing data.

Vendor file summary:

  • The vendor provided a SOC 2 Type 2 report covering Security and Availability for the last 12 months.
  • The report uses the carve-out method for the cloud hosting subservice organization.
  • The report lists these complementary user entity controls (CUECs):
    • Review user access for company personnel with vendor-admin rights at least monthly.
    • Notify the vendor within 24 hours when authorized company users terminate or change roles.
  • Procurement reviewed the SOC report at onboarding but has not obtained an updated report since then.
  • The company does not perform monthly access reviews and has not formally notified the vendor when employees leave.

Which response is the best correction to this issue?

  • A. Implement the listed CUECs by performing monthly reviews of company personnel with vendor-admin rights, notify the vendor promptly of terminations or role changes, and obtain updated vendor assurance reports as part of ongoing monitoring.
  • B. Stop using the vendor until it issues a SOC 1 Type 2 report that includes every subservice organization under the inclusive method.
  • C. Keep the current process but add stronger encryption for billing data sent to the vendor to reduce third-party risk.
  • D. Require the vendor to absorb all access-review responsibilities so the company no longer needs to perform any user-side access controls.

Best answer: A

What this tests: Security, Confidentiality and Privacy

Explanation: The best correction is to perform the CUECs the SOC 2 report assigns to the user entity and to monitor the vendor on an ongoing basis. The problem is not solved by shifting all responsibility to the vendor or by demanding a different report that is not required by the facts.

When a service provider’s SOC report identifies complementary user entity controls, the user entity must implement those controls for the overall control environment to work as intended. Here, the company failed to review vendor-admin access monthly and failed to notify the vendor when users terminated or changed roles, creating an access-control weakness. In addition, relying only on the onboarding review is insufficient vendor due diligence; ongoing monitoring should include obtaining updated assurance reports and evaluating any carve-out implications for the subservice organization. The most appropriate remediation is therefore to implement the required user-side access controls and strengthen periodic vendor monitoring, rather than overreacting by stopping service or demanding a different report type.

  • Requiring the vendor to absorb all access-review duties ignores that the SOC report explicitly assigns certain controls to the user entity.
  • Stopping use until a SOC 1 Type 2 inclusive report is issued overstates the response and focuses on the wrong report objective based on the facts given.
  • Adding stronger encryption may be useful generally, but it does not correct the specific access and monitoring gaps identified in the scenario.

The gap is the company’s failure to perform user-entity responsibilities and ongoing vendor monitoring, not a missing provider-side control.


Question 24

Topic: Information Systems and Data Management

A company’s accounting environment includes staff laptops, a centralized ERP server, an operating system installed on that server, and switches and routers connecting users to shared resources. Which statement best distinguishes the primary purpose of these IT architecture components?

  • A. The operating system is the physical machine hosting the ERP, the server manages memory and files, network infrastructure stores the accounting database, and end-user devices authenticate network traffic.
  • B. The operating system manages hardware and software resources, the server provides shared processing or storage, network infrastructure carries data between devices, and end-user devices let staff access the accounting system.
  • C. The operating system routes packets between offices, the server is the laptop each accountant uses, network infrastructure runs the general ledger application, and end-user devices provide centralized storage.
  • D. The operating system is mainly the screen where accountants enter transactions, the server is the communication path between locations, network infrastructure allocates CPU and memory, and end-user devices deliver shared application processing.

Best answer: B

What this tests: Information Systems and Data Management

Explanation: The correct choice assigns each component to its actual function in a typical accounting environment. Operating systems manage a device’s resources, servers provide shared services, network infrastructure connects devices and carries traffic, and end-user devices are the tools employees use to interact with the system.

In an accounting environment, these components work together but serve different purposes. An operating system is software that manages a device’s hardware resources, files, memory, and processes so applications can run. A server is the computer or virtual instance that provides shared services such as ERP processing, database access, or file storage to multiple users. Network infrastructure, including switches and routers, enables communication between devices and systems. End-user devices, such as laptops and desktops, are the machines accountants use to enter, review, and approve transactions. Distinguishing these roles is important because control, security, and availability issues often depend on which component is actually responsible for a function.

  • Treating the operating system as the physical machine confuses software with the server hardware or virtual instance it runs on.
  • Assigning database storage or application processing to network infrastructure mixes connectivity functions with server functions.
  • Describing the server as each accountant’s laptop confuses centralized shared resources with end-user devices.
  • Saying end-user devices or network infrastructure allocate CPU and memory reverses responsibilities that belong primarily to the operating system.

This option correctly matches each component to its primary role in an accounting environment.


Question 25

Topic: Security, Confidentiality and Privacy

A CPA is reviewing privileged access for Orion Co.’s billing database. Company policy states:

  • Named privileged users must have unique IDs and multi-factor authentication (MFA).
  • Shared privileged accounts are prohibited except one approved emergency break-glass account per system.
  • Any break-glass account must be stored in a password vault, used only with an incident ticket, and reviewed after each use.

Current privileged access listing:

Account | Assigned to | Role | MFA | Last review | Notes
DBA-MBrown | M. Brown | Database admin | Enabled | 18 days ago | Normal admin
SQLADMIN | DBA team | Database admin | Disabled | No review on file | Used for faster troubleshooting
AD-JLee | J. Lee | Domain admin | Enabled | 25 days ago | Normal admin

Based on these facts, what should the CPA do next?

  • A. Inspect whether SQLADMIN is an approved break-glass account with password-vault control, incident-ticket support, and post-use review.
  • B. Conclude privileged-access controls are effective because the named administrators have MFA and recent reviews.
  • C. Recommend disabling SQLADMIN immediately without first determining whether it is a documented emergency exception.
  • D. Expand testing to physical badge access for the data center before completing the logical-access review.

Best answer: A

What this tests: Security, Confidentiality and Privacy

Explanation: The shared SQLADMIN account is the only item that departs from normal privileged-access rules, but the policy allows a narrow break-glass exception. The CPA should first inspect whether that exception was formally approved and whether the required compensating controls exist before concluding there is a deficiency or recommending removal.

When a privileged-access listing shows a shared admin account, the next step is to compare it to policy and determine whether it qualifies for an approved exception. Unique named IDs with MFA are the normal standard because they preserve accountability for privileged actions. Here, the policy permits one emergency break-glass account, but only if it is password-vaulted, tied to incident tickets, and reviewed after each use. The exhibit shows no such supporting evidence for SQLADMIN, and the note “used for faster troubleshooting” raises concern that it may be ordinary shared access rather than emergency access. The CPA should therefore inspect the exception evidence first. Only after evaluating that evidence should the CPA conclude whether there is a design or operating deficiency.

  • Inspecting break-glass evidence is appropriate because the policy permits a narrow exception, so the exhibit alone does not prove whether the account is authorized.
  • Concluding controls are effective ignores the unresolved shared privileged account and the missing review evidence.
  • Disabling the account immediately is premature because the CPA should first determine whether it is a valid emergency-access exception.
  • Testing physical badge access addresses a different control area and does not resolve the logical privileged-access issue shown in the exhibit.

The shared privileged account may be an allowed exception only if required emergency-access controls are documented and operating.

Questions 26-50

Question 26

Topic: Security, Confidentiality and Privacy

A company allows a logistics partner to connect through a site-to-site VPN.

Facts:

  • The partner’s business need is limited to sending shipment updates to one DMZ-hosted shipping API over HTTPS.
  • The security manager concludes that this partner connection creates unnecessary third-party cybersecurity risk because a compromise at the partner could be used for lateral movement into the company’s internal environment.

Which source would best support that conclusion?

  • A. A change ticket showing the shipping API’s input-validation routine was updated to reject malformed tracking numbers
  • B. An incident record showing one night’s shipment-status file was delayed because the partner’s batch job failed
  • C. A SOC 2 report excerpt from the partner stating that multifactor authentication and endpoint encryption are used for partner employees
  • D. A firewall rule extract showing the partner VPN subnet can connect to the DMZ shipping API, an internal domain controller, an ERP database server, and a finance file server

Best answer: D

What this tests: Security, Confidentiality and Privacy

Explanation: The firewall rule extract is the best support because it directly shows the partner connection has broader access than the stated business need. That overbroad access creates a credible path for unauthorized access or lateral movement if the partner environment is compromised.

For third-party connections, the strongest evidence of cybersecurity threat is the artifact that shows the actual trust boundary or access scope. Here, the partner only needs HTTPS access to one DMZ-hosted API, so evidence that the VPN subnet can also reach internal systems such as a domain controller, ERP database, and file server directly supports the conclusion that the connection is overprivileged. That is a classic third-party risk: compromise of the partner could be leveraged to move deeper into the company’s environment. By contrast, a partner SOC report describes the partner’s controls, a delayed file incident shows an operational issue, and an API change ticket addresses application logic. None of those proves the company exposed unnecessary internal network access through the partner connection.

  • A partner SOC 2 excerpt may describe the partner’s control environment, but it does not show what the company’s VPN rules actually allow.
  • A delayed shipment-status file supports an availability or processing issue, not excessive access from the partner connection.
  • An API input-validation change ticket relates to application integrity, not whether the partner can traverse the internal network.
  • The firewall rule extract directly evidences access beyond the required DMZ interface, matching the conclusion being evaluated.

It directly shows the partner VPN can reach internal systems beyond the single DMZ API required, evidencing unnecessary lateral-movement exposure.


Question 27

Topic: Security, Confidentiality and Privacy

Blue Harbor Co. discovered unauthorized access to its e-commerce customer database. Management is preparing a memo for the audit committee concluding that the incident is likely to create significant financial and operational impact through response costs, required notifications, service disruption, and reputational harm.

Which source material would BEST support that conclusion?

  • A. An incident record summarizing that 58,400 customer records containing names, SSNs, and bank account numbers were exfiltrated; counsel concluded notification is required in 11 states; approved response spending is $240,000 for forensics, mailings, call-center support, and credit monitoring; the order portal was offline 36 hours; and customer cancellations increased 12% after the incident notice.
  • B. An access listing showing the compromised administrator account had database read privileges, export rights, and had remained active for 45 days after the employee transferred departments.
  • C. A firewall log extract showing 14 GB of outbound traffic from the customer database server to an unfamiliar external IP address over a 22-minute period late at night.
  • D. A change record showing a critical security patch for the web application was postponed twice because user-acceptance testing had not been completed.

Best answer: A

What this tests: Security, Confidentiality and Privacy

Explanation: The incident record is the strongest support because it connects the breach to concrete business consequences: notification obligations, external response spending, downtime, and customer cancellations. Those facts directly substantiate financial, operational, reporting, and reputational effects rather than merely showing how the breach occurred.

To support a conclusion about the implications of a data breach, the best evidence should address business impact, not just technical cause or occurrence. Strong support includes the number and sensitivity of records affected, whether reporting or notification is required, expected response costs, operational disruption, and signs of customer or market reaction. The incident record does all of that by linking exfiltration to legal notification requirements, approved spending for remediation and customer support, system downtime, and increased cancellations. By contrast, an access listing, a firewall log, and a delayed patch record may help explain how the breach happened or confirm unauthorized activity, but they do not by themselves support the broader conclusion about financial and operational consequences.

  • The incident record is best because it combines evidence of exposed data, notification obligations, direct response costs, downtime, and customer fallout.
  • The access listing supports a control weakness and excessive access issue, but it does not show the breach’s financial or operational effects.
  • The firewall log helps confirm possible exfiltration, but it does not establish reporting requirements, response costs, or business disruption.
  • The delayed patch change record may indicate root cause or poor change management, not the resulting impact of the breach.

It directly ties the breach to reporting obligations, measurable response costs, operational downtime, and customer fallout.


Question 28

Topic: Information Systems and Data Management

Delta Sports stores customer acquisition channel in its CRM, customer invoices in its ERP, and cash receipts in a treasury application. The CFO concludes, “During Q1, customers acquired through trade shows took longer on average to pay invoices than customers acquired through the website.” Which source would best support this conclusion?

  • A. A SQL result that joins CRM customers to ERP invoices and treasury cash receipts, then groups by acquisition channel and calculates average days from invoice date to full payment for Q1
  • B. A CRM report showing the number of customers by acquisition channel and each customer’s credit limit for Q1
  • C. A data dictionary showing that customer_id appears in both the CRM and ERP and invoice_id appears in the ERP and treasury application
  • D. An ERP accounts receivable aging report showing open invoice balances by customer as of March 31

Best answer: A

What this tests: Information Systems and Data Management

Explanation: The best support is an integrated SQL result that combines acquisition channel, invoice dates, and receipt dates across the CRM, ERP, and treasury systems. That source directly measures average payment time by channel, which matches the CFO’s conclusion.

To support a conclusion about payment speed by acquisition channel, the evidence must combine data from multiple sources and calculate the stated metric. The CRM provides acquisition channel, the ERP provides invoice dates, and the treasury application provides receipt dates. A joined SQL result can connect these records using common keys and compute average days from invoice date to full payment for each channel during Q1. The other sources are weaker because they either show only one part of the needed information or only document that integration is possible. Good support for financial or operational analysis should be both relevant to the conclusion and complete enough to reproduce the analysis from underlying data.

  • The CRM report includes acquisition channel but does not show actual payment timing, so it cannot support the conclusion.
  • The ERP aging report relates to receivables status at one date, not average days to pay by acquisition channel.
  • The data dictionary helps identify join fields, but it does not provide analyzed results or evidence that the conclusion is true.

This source directly integrates the needed fields across systems and computes the exact measure used in the conclusion.
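The joined query in answer A can be sketched end to end. The schema below is hypothetical (table and column names such as `crm_customers` and `paid_in_full_date` are invented for illustration), using SQLite's `julianday` function to compute days from invoice date to full payment.

```python
import sqlite3

# Hypothetical CRM / ERP / treasury tables; real schemas would differ.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE crm_customers (customer_id TEXT, channel TEXT);
CREATE TABLE erp_invoices (invoice_id TEXT, customer_id TEXT, invoice_date TEXT);
CREATE TABLE treasury_receipts (invoice_id TEXT, paid_in_full_date TEXT);

INSERT INTO crm_customers VALUES ('C1', 'trade show'), ('C2', 'website');
INSERT INTO erp_invoices VALUES ('I1', 'C1', '2024-01-10'),
                                ('I2', 'C2', '2024-01-10');
INSERT INTO treasury_receipts VALUES ('I1', '2024-02-24'),
                                     ('I2', '2024-01-30');
""")

# Join channel (CRM) to invoice dates (ERP) to receipt dates (treasury),
# then average days-to-pay by acquisition channel for Q1.
rows = con.execute("""
SELECT c.channel,
       AVG(julianday(t.paid_in_full_date) - julianday(i.invoice_date))
         AS avg_days_to_pay
FROM crm_customers c
JOIN erp_invoices i      ON i.customer_id = c.customer_id
JOIN treasury_receipts t ON t.invoice_id = i.invoice_id
WHERE i.invoice_date BETWEEN '2024-01-01' AND '2024-03-31'
GROUP BY c.channel
ORDER BY c.channel
""").fetchall()

for channel, avg_days in rows:
    print(channel, avg_days)  # trade show 45.0, then website 20.0
```

The common keys (`customer_id`, `invoice_id`) are exactly what the data dictionary in option C documents; the dictionary shows the join is possible, while only the executed query produces evidence for the conclusion.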


Question 29

Topic: Security, Confidentiality and Privacy

A CPA performs a walkthrough of the employee termination access-removal process.

Documented policy:

  • HR must enter a termination ticket immediately after a termination is approved.
  • IT operations must disable the employee’s network and VPN access within 24 hours of the HR ticket timestamp.
  • Security reviews a weekly exception report of any terminations not completed within 24 hours.

Walkthrough observation for one terminated employee:

  • HR entered the termination ticket at 9:00 a.m. Monday.
  • IT operations disabled network and VPN access at 4:00 p.m. Thursday.
  • The weekly exception report identified the late removal on Friday.

How should the CPA best characterize this walkthrough observation?

  • A. An operating deviation from a documented preventive access-removal control
  • B. A security incident because the former employee retained access beyond the required time
  • C. A compensating control because the weekly exception report offsets the delayed removal
  • D. A design deficiency because no detective control exists to identify delayed terminations

Best answer: A

What this tests: Security, Confidentiality and Privacy

Explanation: The documented policy clearly requires access removal within 24 hours, and the walkthrough showed removal occurring well after that deadline. Because the policy and related detective monitoring exist, the issue is best classified as a control operation failure rather than a design problem.

In a walkthrough, the CPA compares what actually happened to what documented policy requires. Here, the policy requires IT operations to disable network and VPN access within 24 hours of the HR ticket. The observed removal took until Thursday afternoon after a Monday morning ticket, so the control did not operate as required. That makes the exception an operating deviation in a preventive access-removal control. It is not primarily a design deficiency because the process includes both a required timely disablement step and a weekly exception report. It is also not automatically a security incident, because the facts show delayed deprovisioning, not confirmed unauthorized use or harm. The weekly exception report is detective and may help identify exceptions, but it does not convert the late disablement into compliance.

  • Operating deviation from a documented preventive access-removal control is correct because the 24-hour requirement existed but was not followed in practice.
  • Design deficiency because no detective control exists is incorrect because the weekly exception report is a detective control already built into the process.
  • Security incident because the former employee retained access is incorrect because delayed access removal alone does not prove unauthorized use or an actual incident.
  • Compensating control because the weekly exception report offsets the delay is incorrect because detection after the deadline does not replace timely preventive removal.

The policy is designed and documented, but the observed execution failed to meet the 24-hour requirement, making this an operating deviation.
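The deadline arithmetic behind the deviation can be made explicit. Using hypothetical calendar dates consistent with the Monday/Thursday facts in the walkthrough:

```python
from datetime import datetime

# Hypothetical dates matching the walkthrough's Monday/Thursday facts.
ticket_entered  = datetime(2024, 4, 1, 9, 0)    # Monday, 9:00 a.m. HR ticket
access_disabled = datetime(2024, 4, 4, 16, 0)   # Thursday, 4:00 p.m. removal

elapsed_hours = (access_disabled - ticket_entered).total_seconds() / 3600
print(elapsed_hours)        # 79.0 hours elapsed
print(elapsed_hours <= 24)  # False: the 24-hour preventive requirement failed
```

Roughly 79 hours against a 24-hour requirement: the control existed as designed but did not operate, which is why answer A characterizes this as an operating deviation.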


Question 30

Topic: Considerations for System and Organization Controls Engagements

A CPA firm is engaged as the service auditor for a payroll processor’s SOC 2 examination. The payroll processor’s system description will use the inclusive method to include a cloud hosting provider as a subservice organization. During the examination period, the CPA firm also designed and implemented the hosting provider’s privileged-access approval workflow. The firm performed no nonattest services for the payroll processor itself.

How should this relationship be characterized for independence purposes?

  • A. Independence is impaired only if the subservice organization is carved out rather than included.
  • B. Independence is not impaired because work performed for the subservice organization is outside the scope of an inclusive presentation.
  • C. Independence is not impaired because only the service organization, not the subservice organization, is relevant to the engagement.
  • D. Independence is impaired because the service auditor must be independent of both the service organization and an included subservice organization.

Best answer: D

What this tests: Considerations for System and Organization Controls Engagements

Explanation: The inclusive method brings the subservice organization into the SOC examination’s subject matter. Because the CPA firm designed and implemented a control for that included subservice organization, the relationship should be characterized as an independence impairment.

In a SOC engagement, the service auditor must be independent of the parties whose system and controls are included in the examination. When the inclusive method is used, the subservice organization’s services and relevant controls are incorporated into the system description and the report’s subject matter. That means the service auditor must maintain independence with respect to both the service organization and the included subservice organization. Here, the CPA firm designed and implemented the hosting provider’s privileged-access approval workflow, which is management-type or nonattest involvement with the included subservice organization. That impairs independence for the SOC examination, even though the firm did not perform nonattest services for the payroll processor itself. By contrast, under a carve-out presentation, the subservice organization’s controls are excluded from the service auditor’s direct opinion.

  • The idea that only the contracting service organization matters is incorrect; an included subservice organization also matters for independence.
  • Saying work for the hosting provider is outside the scope reverses the inclusive method, which specifically brings that subservice organization into scope.
  • Claiming impairment exists only when the subservice organization is carved out is backwards; the added independence concern here arises because the subservice organization is included.

Under the inclusive method, the subservice organization is part of the subject matter, so designing and implementing its control impairs independence.


Question 31

Topic: Considerations for System and Organization Controls Engagements

A manufacturer that is not acting as a service organization wants an independent report it can share with investors, lenders, major customers, and regulators to communicate its entity-wide cybersecurity risk management program and the effectiveness of controls within that program. Which source best supports the conclusion that a SOC for Cybersecurity report is the appropriate report?

  • A. A report excerpt stating that a service organization may distribute a general-use report on selected Trust Services Criteria without detailed testing results.
  • B. A report excerpt stating that the service auditor opines on controls at a service organization relevant to user entities’ internal control over financial reporting, with use restricted to management, user entities, and user auditors.
  • C. A report excerpt stating that management of a service organization describes its system and the suitability and operating effectiveness of controls to meet Trust Services Criteria, with use restricted to knowledgeable parties.
  • D. A report excerpt stating that management describes the entity’s cybersecurity risk management program, asserts whether controls were effective to achieve the entity’s cybersecurity objectives, and the practitioner’s opinion is intended for general use by a broad range of users.

Best answer: D

What this tests: Considerations for System and Organization Controls Engagements

Explanation: The best support is the excerpt describing an entity’s cybersecurity risk management program and a report intended for general use. That combination is the defining purpose and audience of a SOC for Cybersecurity report.

A SOC for Cybersecurity report is designed to help an entity communicate information about its cybersecurity risk management program to a broad range of users, such as investors, customers, business partners, and regulators. It is not limited to service organizations, and it is a general-use report. The report includes management’s description of the program, management’s assertion, and the practitioner’s opinion on whether the description is presented in accordance with the description criteria and whether the controls were effective to achieve the entity’s cybersecurity objectives. By contrast, SOC 1 focuses on controls relevant to user entities’ financial reporting, SOC 2 focuses on a service organization’s system and Trust Services Criteria for restricted users, and SOC 3 is general use but still relates to a service organization’s system rather than an entity-wide cybersecurity risk management program.

  • The internal-control-over-financial-reporting excerpt describes SOC 1, which is for user entities and user auditors, not broad cybersecurity communication.
  • The Trust Services Criteria excerpt with restricted use describes SOC 2, which is for knowledgeable users of a service organization report.
  • The general-use Trust Services Criteria excerpt describes SOC 3, which is tempting because of general use, but it is still about a service organization’s system rather than a SOC for Cybersecurity report.
  • The excerpt about the entity’s cybersecurity risk management program is the only one that matches both the purpose and intended users of SOC for Cybersecurity.

This excerpt matches SOC for Cybersecurity because it addresses the entity’s cybersecurity risk management program and is intended for broad, general-use distribution.


Question 32

Topic: Considerations for System and Organization Controls Engagements

A CPA is reviewing a SOC 2 Type 2 report for a cloud payroll processor. Which excerpt most clearly belongs in the service auditor’s tests of controls and results section, rather than in management’s assertion, the system description, complementary user entity controls (CUECs), complementary subservice organization controls (CSOCs), or service commitments?

  • A. For a sample of 40 terminated-user tickets, the auditor inspected evidence of access removal within 24 hours and noted one removal completed after three days.
  • B. Management asserts that the system description is fairly presented and that controls were suitably designed and operated effectively throughout the period.
  • C. User entities must promptly notify the provider of terminated employees and review daily payroll exception reports.
  • D. The payroll platform receives employer files through an encrypted portal, validates file format, and stores accepted data in a cloud database.

Best answer: A

What this tests: Considerations for System and Organization Controls Engagements

Explanation: The excerpt about sampling terminated-user tickets and noting one late removal is the only choice that reports the service auditor’s procedure and the result of that procedure. Management assertions, system descriptions, CUECs, CSOCs, and service commitments describe claims, system facts, responsibilities, or promises, not audit test results.

In a SOC 2 Type 2 report, the tests of controls and results section explains what the service auditor actually tested and what was found. Common clues are the control tested, the procedure performed, the sample or items examined, and any exceptions or deviations noted. Management’s assertion is management’s claim about fair presentation, suitable design, and operating effectiveness. The system description explains the service organization’s system and processing environment. CUECs and CSOCs identify complementary controls expected at user entities or carved-out subservice organizations. Service commitments are promises such as availability, security, or processing expectations. Those items help users understand the system and responsibilities, but they are not the service auditor’s testing results.

  • The management assertion is management’s own claim, not the auditor’s testing work or findings.
  • The encrypted-portal and cloud-database statement describes how the system operates, so it fits the system description.
  • The requirement for user entities to notify terminations and review exception reports is a CUEC, meaning a responsibility outside the service organization’s tested controls.
  • CSOCs and service commitments would also be descriptive responsibilities or promises, not a report of the auditor’s sample, procedure, and exception.

This excerpt describes the auditor’s test procedure, sample, and exception noted, which is what appears in tests of controls results.


Question 33

Topic: Information Systems and Data Management

A company is updating its continuity program. In workshops with process owners, management identifies payroll and order entry as critical processes, documents each process’s application, network, and third-party dependencies, sets payroll at an RTO of 4 hours and an RPO of 30 minutes, and ranks payroll ahead of expense reimbursement for recovery. How should this work be characterized?

  • A. A business impact analysis used to identify critical processes, dependencies, and recovery priorities
  • B. A disaster recovery test focused on validating restoration procedures
  • C. A risk assessment focused on estimating threat likelihood and control gaps
  • D. An incident response analysis focused on containment and eradication steps

Best answer: A

What this tests: Information Systems and Data Management

Explanation: The scenario describes classic business impact analysis activities: identifying critical business processes, mapping dependencies, and assigning recovery objectives and priorities. Those steps determine availability requirements and recovery order rather than evaluating threats, testing recovery, or managing a live incident.

A business impact analysis (BIA) helps an organization determine which business processes are most critical, what systems and external services those processes depend on, how quickly they must be restored, and how much data loss is tolerable. Typical BIA outputs include process criticality rankings, dependency mapping, recovery priorities, and availability requirements such as RTO and RPO. In this scenario, management is working with process owners, identifying critical functions, documenting dependencies, and setting recovery targets, all of which are hallmark BIA activities. By contrast, a risk assessment emphasizes threats, vulnerabilities, and likelihood; a disaster recovery test checks whether restoration procedures actually work; and incident response focuses on detecting, containing, and recovering from a specific event.

  • A risk assessment is different because it evaluates threats, vulnerabilities, and potential control weaknesses rather than setting process recovery priorities and availability targets.
  • A disaster recovery test would involve executing or simulating restoration procedures to confirm systems can be recovered within targets already established.
  • An incident response analysis deals with a current or suspected security event, emphasizing containment, eradication, and recovery rather than business process prioritization.

This work defines business criticality, supporting dependencies, and recovery targets, which are core outputs of a business impact analysis.
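
The BIA outputs described above can be pictured as a simple data structure. This is a minimal illustrative sketch, not a real BIA tool; all process names, ranks, and dependency lists are hypothetical examples consistent with the scenario.

```python
# Illustrative sketch of BIA outputs: process criticality, dependencies,
# and recovery objectives (RTO/RPO). Names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class BiaRecord:
    process: str
    criticality_rank: int            # 1 = recover first
    rto_hours: int                   # maximum tolerable downtime
    rpo_minutes: int                 # maximum tolerable data loss
    dependencies: list = field(default_factory=list)

records = [
    BiaRecord("Expense reimbursement", 3, 72, 240, ["ERP"]),
    BiaRecord("Payroll", 1, 4, 30, ["Payroll app", "Bank SFTP", "HR system"]),
    BiaRecord("Order entry", 2, 8, 60, ["Order app", "Network", "Card processor"]),
]

# Recovery order follows the criticality ranking, not system or alphabetical order.
recovery_order = sorted(records, key=lambda r: r.criticality_rank)
for r in recovery_order:
    print(f"{r.criticality_rank}. {r.process}: RTO {r.rto_hours}h, RPO {r.rpo_minutes}m")
```

Note that the sketch captures exactly the scenario's facts: payroll ranked ahead of expense reimbursement, with an RTO of 4 hours and an RPO of 30 minutes.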


Question 34

Topic: Information Systems and Data Management

During a walkthrough of the billing application, a CPA learns:

  • Code releases go through the CI/CD pipeline with approval and testing.
  • Tax-rate and discount-threshold configuration parameters are edited directly in the production admin console by senior developers.
  • Those parameter changes take effect immediately and do not require a ticket, approval, testing, or automated retention of prior values.

Which control response best addresses this change management gap?

  • A. Require monthly management review of billing exception reports to identify unusual pricing results.
  • B. Require senior developers to document each production parameter change in the ticketing system after implementation.
  • C. Require parameter changes to be version controlled, approved, tested in nonproduction, and deployed through CI/CD with automated logging of prior and new values.
  • D. Require customer service personnel to have read-only access to billing configuration screens.

Best answer: C

What this tests: Information Systems and Data Management

Explanation: Configuration parameters can change transaction processing just like code can. The best response is to bring those parameter changes under formal change management with approval, testing, controlled deployment, and automated audit logging before they affect production.

The gap is that production configuration changes are bypassing formal change control. Even though code moves through CI/CD, key billing parameters are being changed directly in production without approval, testing, or a retained history of what changed. That creates risk of unauthorized, erroneous, or undocumented changes affecting billing results. The strongest control response is to treat these parameters as controlled configuration items: store them under version control, require approved change requests, test changes in nonproduction, deploy them through the controlled release process, and keep automated before-and-after logs. That combination addresses both design and implementation weaknesses by adding preventive and traceability controls. After-the-fact documentation or periodic review may help detect issues, but they do not adequately govern the change before it affects production.

  • Documenting changes after implementation improves recordkeeping but still allows unapproved and untested production changes to occur first.
  • Reviewing billing exception reports is mainly detective and delayed; it does not establish proper change authorization or deployment control.
  • Giving customer service read-only access may be appropriate, but it does not address developers making direct production parameter changes.

This response directly adds authorization, testing, controlled deployment, and an audit trail for production configuration changes.
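
The control response above can be sketched as a gate in the deployment path: a parameter change is rejected unless it was approved and tested, and the prior value is logged automatically. This is a simplified illustration under assumed field names (ticket, approved, tested_in_nonprod), not any specific CI/CD product's behavior.

```python
# Sketch: configuration parameters brought under formal change control.
# A change deploys only if approved and tested, with automated
# before-and-after logging. All names and fields are illustrative.
audit_log = []

def deploy_parameter_change(config, change):
    """Apply a parameter change only if it passed approval and testing."""
    if not change.get("approved") or not change.get("tested_in_nonprod"):
        raise PermissionError("change must be approved and tested before deployment")
    key = change["parameter"]
    audit_log.append({                      # automated retention of prior values
        "parameter": key,
        "prior_value": config.get(key),
        "new_value": change["new_value"],
        "ticket": change["ticket"],
    })
    config[key] = change["new_value"]
    return config

config = {"tax_rate": 0.070}
deploy_parameter_change(config, {
    "parameter": "tax_rate", "new_value": 0.0725,
    "ticket": "CHG-1042", "approved": True, "tested_in_nonprod": True,
})
```

An unapproved direct edit raises an error instead of silently changing production, which is the preventive behavior the detective alternatives in the other answer choices lack.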


Question 35

Topic: Security, Confidentiality and Privacy

A company’s incident response plan includes:

  • The service desk opens a ticket and routes suspected security events to the security analyst.
  • The security analyst must preserve relevant logs, perform initial triage, and classify severity within 30 minutes.
  • If the event likely involves a privileged account or regulated customer data, the analyst must notify the Incident Response Manager immediately after classification.
  • The system owner decides whether systems should be taken offline.
  • Legal and the Privacy Officer assess external notification requirements after containment.

At 9:05 a.m., monitoring alerts show a successful login to the production database using an administrator account from an unapproved country. At 9:08 a.m., the service desk opens a ticket and notifies the security analyst. At 9:14 a.m., the analyst confirms that a 2 GB file containing customer records was downloaded during the session.

According to the plan, what should the security analyst do next?

  • A. Assign internal audit to review privileged-access controls before the incident is escalated.
  • B. Complete initial triage by preserving the relevant evidence, classifying the event as high severity, and notifying the Incident Response Manager.
  • C. Take the production database offline immediately and inform the system owner after the shutdown.
  • D. Begin external notification planning with Legal and the Privacy Officer for affected customers.

Best answer: B

What this tests: Security, Confidentiality and Privacy

Explanation: The plan assigns the security analyst the immediate tasks of preserving evidence, classifying severity, and escalating likely privileged-account or customer-data incidents within the initial response timeline. External notifications, shutdown decisions, and audit review belong to other roles or later phases.

A well-designed incident response plan defines who does what, in what order, and by when. In this scenario, the analyst has already received the ticket and confirmed suspicious activity involving both a privileged account and customer data, which fits the plan’s escalation criteria. The next step is to preserve relevant evidence, complete initial triage and severity classification within the 30-minute window, and notify the Incident Response Manager immediately after classification. That sequence supports investigation integrity and timely coordination. The plan also clearly separates responsibilities: the system owner decides on taking systems offline, while Legal and the Privacy Officer evaluate external notifications after containment. Internal audit review is not the immediate next response step.

  • Beginning external notification planning is premature because the plan assigns that work to Legal and the Privacy Officer after containment.
  • Taking the database offline immediately skips the plan’s role assignment because the system owner, not the analyst, decides whether systems are removed from service.
  • Assigning internal audit to review privileged-access controls addresses a later control-evaluation activity, not the immediate incident-response sequence.

The plan requires the analyst to preserve evidence, classify likely privileged-account and customer-data events within the response window, and then escalate them immediately to the Incident Response Manager.
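
The plan's triage-and-escalate sequence can be sketched as a small decision helper. The criteria and labels below are hypothetical simplifications of the plan in the scenario, not a real incident response system.

```python
# Illustrative triage helper: classify severity and decide whether
# immediate escalation to the Incident Response Manager is required.
# Criteria and labels are simplified from the scenario's plan.

def triage(event):
    """Return (severity, escalate_now) for a suspected security event."""
    high_risk = bool(event.get("privileged_account") or event.get("regulated_data"))
    severity = "high" if high_risk else "medium"
    # Per the plan: likely privileged-account or regulated-data events
    # are escalated immediately after classification.
    return severity, high_risk

severity, escalate = triage({
    "privileged_account": True,   # admin login from an unapproved country
    "regulated_data": True,       # customer records downloaded
})
```

Both escalation criteria are met in the scenario, so the helper classifies the event as high severity with immediate escalation; taking systems offline or planning external notification stays with the system owner and Legal/Privacy, outside this function.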


Question 36

Topic: Security, Confidentiality and Privacy

A CPA is testing a company’s response to a malware incident. The approved incident response plan requires responders to assign a severity level within 1 hour, preserve logs, and notify the privacy officer when a system containing customer PII is involved. In the incident file, the team isolated the affected file server and restored operations, but no severity rating or privacy-officer notification was documented. The incident manager says the team used an informal shortcut because production had to resume quickly.

Which documentation or follow-up is most appropriate?

  • A. Treat the issue as incomplete paperwork and ask IT to add the missing severity rating and notification entries after the fact.
  • B. Update the approved incident response plan immediately to match the informal process used during the incident.
  • C. Accept the shortcut because systems were restored quickly and no confirmed customer data loss was reported.
  • D. Document the variance from the approved plan, obtain management’s explanation and support for the alternate actions, and assess whether remediation or plan revision is needed.

Best answer: D

What this tests: Security, Confidentiality and Privacy

Explanation: The incident file shows required plan steps were skipped, so the difference must be treated as an exception, not ignored as a minor documentation gap. The proper follow-up is to document the deviation, understand why it occurred, and determine whether the problem was noncompliance with the plan or a plan that needs revision.

When actual incident response procedures do not match the approved plan, the reviewer should document the deviation and investigate it. That means obtaining management’s explanation, confirming what actions were actually performed, and evaluating the control impact. The key follow-up is to determine whether the approved plan was adequate but not followed, which indicates an operating issue, or whether the plan itself is outdated or incomplete, which indicates a design issue. That conclusion drives remediation such as training, escalation, corrective action, or formal plan updates. Simply accepting fast recovery, filling in missing records after the fact, or changing the plan to match an unapproved shortcut would weaken governance and leave the organization unable to show that incidents are handled consistently and according to approved requirements.

  • Accepting the shortcut because recovery was timely ignores the missing required steps and the governance risk from not following the approved plan.
  • Backfilling the incident ticket after the fact treats the issue as clerical, but the real issue is why required procedures were skipped.
  • Updating the plan immediately to match the shortcut is premature because the alternate process must first be evaluated and formally approved, not assumed to be acceptable.

A mismatch between actual incident handling and the approved plan should be documented as an exception and evaluated to determine whether the issue is poor execution or an outdated plan.


Question 37

Topic: Security, Confidentiality and Privacy

A CPA is evaluating an event under the company’s documented data-handling policy.

Data handling policy
- Tax ID numbers and bank account numbers are classified as Restricted Personal Data.
- Restricted Personal Data may be shared with approved processors only for contracted services and only through the company's approved SFTP portal.
- Email attachments may not be used to transmit Restricted Personal Data, even if password protected.

Incident summary
- PayPro LLC is an approved payroll processor under a signed data processing agreement.
- During scheduled SFTP maintenance, an HR analyst emailed a password-protected spreadsheet to PayPro.
- The spreadsheet contained employee names, tax ID numbers, and bank account numbers.
- PayPro downloaded the file and then deleted the email.

Which conclusion is best supported by the exhibit?

  • A. A privacy notice violation occurred because restricted personal data cannot be shared with any third-party processor.
  • B. An availability control failure is the best conclusion because the SFTP maintenance delayed delivery.
  • C. No control exception occurred because the recipient was approved and the file was password protected.
  • D. A confidentiality control violation occurred because restricted personal data was sent by email rather than the approved SFTP portal.

Best answer: D

What this tests: Security, Confidentiality and Privacy

Explanation: The exhibit shows that the recipient was authorized, but the transmission method was not. Restricted personal data had to be sent through the approved SFTP portal, and the policy explicitly says email attachments are not allowed even if password protected.

The correct conclusion comes from reading the source material in sequence: classify the data, confirm whether the recipient is permitted, and then verify whether the handling method complies with policy. Tax ID numbers and bank account numbers are Restricted Personal Data. PayPro is an approved processor under contract, so sharing for payroll services is allowed in principle. However, the policy limits transmission of that data to the company’s approved SFTP portal and specifically forbids email attachments, even when password protected. Therefore, the facts support a confidentiality/data-handling control violation. The temporary SFTP outage may explain the analyst’s action, but it does not make the transmission compliant, and the policy does not prohibit all sharing with approved processors.

  • The “no control exception” view fails because an approved recipient and password protection do not override the SFTP-only transmission requirement.
  • The availability conclusion is not the best supported one; maintenance caused a delay, but the documented control failure is the unauthorized transmission method.
  • The privacy-prohibition conclusion fails because the policy expressly allows sharing restricted personal data with approved processors for contracted services.

The policy expressly prohibits emailing restricted personal data, so using email violated the required transmission control even though the processor was approved.
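
The classify-recipient-method reading sequence can be sketched as a short policy check. The policy sets and return strings below are illustrative stand-ins for the exhibit's rules, not an actual compliance tool.

```python
# Sketch of the evaluation sequence: classify the data, confirm the
# recipient, then verify the transmission method. Values are illustrative.
RESTRICTED_FIELDS = {"tax_id", "bank_account"}
APPROVED_PROCESSORS = {"PayPro LLC"}
APPROVED_CHANNELS = {"sftp_portal"}      # email is never approved, even encrypted

def evaluate_transfer(fields, recipient, channel):
    if not RESTRICTED_FIELDS & set(fields):
        return "no exception"            # data is not Restricted Personal Data
    if recipient not in APPROVED_PROCESSORS:
        return "exception: unapproved recipient"
    if channel not in APPROVED_CHANNELS:
        return "exception: unapproved transmission method"
    return "no exception"

result = evaluate_transfer(
    {"name", "tax_id", "bank_account"}, "PayPro LLC", "email_attachment"
)
```

The check fails on the third step, matching the best answer: the recipient was authorized, but the method was not.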


Question 38

Topic: Security, Confidentiality and Privacy

A company is reviewing how it reduces exposure of sensitive data across several processes.

Process and exhibit fact:

  • Payment processing: The internal order system stores a surrogate value for each card number. Only the payment processor can map the surrogate back to the actual card number in a separate secured vault.
  • Analytics reporting: A weekly customer file sent to analysts replaces customer names with random IDs and shows only the birth year instead of the full date of birth.
  • Outbound communications: The email gateway scans messages and file uploads for patterns matching SSNs and payment card numbers and blocks transmission unless an exception is approved by security.

Which conclusion is best supported by the exhibit?

  • A. The payment process uses encryption, the analytics file uses tokenization, and the email gateway uses multifactor authentication.
  • B. The payment process uses DLP, the analytics file uses encryption, and the email gateway uses anti-malware filtering.
  • C. The payment process uses tokenization, the analytics file uses data obfuscation, and the email gateway uses DLP.
  • D. The payment process uses data obfuscation, the analytics file uses hashing, and the email gateway uses tokenization.

Best answer: C

What this tests: Security, Confidentiality and Privacy

Explanation: The exhibit shows three different data protection concepts serving different purposes. The payment system replaces card numbers with surrogate values stored through a separate mapping vault, which is tokenization; the analyst file alters identifying details, which is data obfuscation; and the email gateway detects and blocks outbound sensitive data, which is DLP.

Tokenization reduces exposure by replacing sensitive data, such as a card number, with a non-sensitive surrogate and keeping the original value separate in a secured token vault. Data obfuscation reduces exposure by masking, generalizing, or otherwise altering data so users can work with it without seeing the full sensitive values. DLP focuses on preventing unauthorized transmission of sensitive data by monitoring and blocking outbound channels such as email or file uploads. In the exhibit, the order system stores a surrogate tied to a separate vault, so that is tokenization. The analytics file substitutes random IDs and truncates date of birth to year only, so that is obfuscation. The email gateway scans for SSNs and card numbers and blocks transmission, so that is DLP.

  • Calling the payment control encryption is incorrect because the exhibit describes a surrogate value with separate mapping, not ciphertext decrypted with a key.
  • Treating the analytics file as tokenization or hashing is incorrect because the data is modified for limited analytical use, not replaced through a reversible token vault or transformed into fixed digests.
  • Labeling the email gateway as multifactor authentication or anti-malware misses the stated purpose: detecting and preventing outbound disclosure of sensitive data.
  • Calling the payment process DLP is incorrect because DLP monitors and restricts data movement, while the exhibit describes storage replacement of the original sensitive value.

A surrogate mapped through a separate vault is tokenization, altering visible data for analysis is obfuscation, and scanning/blocking outbound sensitive data is DLP.
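
The three techniques can be contrasted in a few lines of code. The token vault, masking rules, and detection pattern below are simplified illustrations, not any specific product's implementation.

```python
# Illustrative contrast: tokenization (surrogate + separate vault),
# data obfuscation (altered details for analysis), and DLP (scan and
# block outbound sensitive data). All details are simplified examples.
import re
import secrets

token_vault = {}   # surrogate -> original; held in a separate secured store in practice

def tokenize(card_number):
    """Replace the card number with a surrogate; only the vault maps back."""
    token = "tok_" + secrets.token_hex(8)
    token_vault[token] = card_number
    return token

def obfuscate(record):
    """Alter identifying details for analytical use: random ID, birth year only."""
    return {"id": "cust_" + secrets.token_hex(4),
            "birth_year": record["birth_date"][:4]}

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def dlp_allows(outbound_text, exception_approved=False):
    """Block outbound messages matching SSN-like patterns unless an exception is approved."""
    return not SSN_PATTERN.search(outbound_text) or exception_approved
```

The distinguishing feature of tokenization here is the reversible mapping kept apart from the surrogate; obfuscation produces data that is still useful but no longer fully identifying; DLP never transforms stored data at all, it polices the outbound channel.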


Question 39

Topic: Security, Confidentiality and Privacy

A payroll service company discovered that an attacker downloaded a file containing employees’ names, Social Security numbers, and bank account details. The incident response team disabled the payroll portal for 2 days while forensic specialists preserved logs and restored clean backups. Management also engaged outside legal counsel to evaluate notification requirements.

Which consequence is most likely from this breach?

  • A. The breach’s impact should be limited to technology replacement costs because stolen data does not affect ongoing operations.
  • B. If no payroll funds were diverted, the breach should not create significant financial exposure beyond routine IT labor.
  • C. Restoring clean backups eliminates most external reporting concerns because the underlying records were recovered.
  • D. The company will likely incur direct response costs, experience temporary operational disruption, face possible reporting or notification obligations, and risk reputational damage.

Best answer: D

What this tests: Security, Confidentiality and Privacy

Explanation: The best conclusion is that this breach has both financial and operational implications beyond IT repair. The facts show sensitive data exposure, portal downtime, outside specialists, and legal review, all of which point to response costs, disruption, possible reporting, and reputational effects.

When a data breach exposes sensitive personal information, the impact usually extends beyond replacing hardware or recovering files. Direct financial effects often include forensic investigation, legal counsel, notification support, credit monitoring, public relations, and remediation activities. Operational effects can include downtime, delayed processing, and diverted staff attention while systems are investigated and restored. Reporting or notification obligations may arise because personal data was accessed, and those obligations are not removed simply because backups exist or systems are recovered. Reputational harm is also a common consequence, especially for a payroll service provider that is trusted with confidential employee information. In this scenario, the combination of data exfiltration, two days of portal outage, outside experts, and legal evaluation makes the broad business impact the most appropriate conclusion.

  • Limiting the impact to technology replacement ignores the breach response, legal, customer, and business interruption consequences.
  • Restoring backups helps recovery, but it does not erase the fact that sensitive data was accessed or remove possible notification duties.
  • Focusing only on stolen funds is too narrow; privacy breaches can create major costs even without direct cash loss.
  • The correct choice recognizes both direct costs and broader business effects from the same incident.

A breach involving sensitive personal data commonly creates remediation costs, service interruption, potential reporting obligations, and reputational harm all at once.


Question 40

Topic: Considerations for System and Organization Controls Engagements

A payroll processing company is preparing a SOC 1 report. Its production environment and backups are hosted by a third-party cloud provider, and management elects the carve-out method for that subservice organization. The payroll company’s control objectives depend in part on the cloud provider’s physical security and backup infrastructure controls.

Which conclusion is most appropriate?

  • A. The cloud provider’s relevant controls may be identified as complementary subservice organization controls that are necessary to achieve the payroll company’s control objectives, but they are not included in the service auditor’s testing under the carve-out method.
  • B. Because the cloud provider supports key control objectives, the payroll company must use the inclusive method rather than the carve-out method.
  • C. Under the carve-out method, the service auditor should test the cloud provider’s controls and include them in the opinion as if they were the payroll company’s controls.
  • D. The cloud provider’s controls must be treated as complementary user entity controls because the payroll company does not operate them.

Best answer: A

What this tests: Considerations for System and Organization Controls Engagements

Explanation: The correct conclusion is that, under the carve-out method, controls performed by the subservice organization can be identified as complementary subservice organization controls when those controls are necessary to achieve the service organization’s control objectives. They are relevant to users’ understanding of the system, but they are not included in the service auditor’s testing scope under carve-out.

Complementary subservice organization controls are controls at a subservice organization that the service organization expects to be in place because they are needed to meet the stated control objectives or criteria. When the carve-out method is used, the subservice organization is excluded from the scope of the service auditor’s opinion and testing. Even so, management may identify those subservice organization controls in the system description so user entities understand important assumptions about outsourced activities. This differs from the inclusive method, where the subservice organization’s controls are included in the system description and in the service auditor’s testing. It also differs from complementary user entity controls, which are controls the user entity, not the subservice organization, is expected to implement.

  • Treating the cloud provider’s controls as complementary user entity controls is wrong because those controls are performed by the subservice organization, not by customer user entities.
  • Saying the inclusive method is required is wrong because a service organization may choose the carve-out method even when subservice organization controls are significant.
  • Saying the service auditor must test the cloud provider’s controls under carve-out is wrong because testing those controls is characteristic of the inclusive method, not carve-out.

Under the carve-out method, relevant subservice organization controls can be presented as complementary subservice organization controls without being included in the service auditor’s scope.


Question 41

Topic: Information Systems and Data Management

A CPA is evaluating change control over a retailer’s CI/CD process and notes the following:

  • Developers must open a change ticket, and automated unit tests run before code is merged.
  • Any developer in the DevOps group can directly edit production configuration parameters for “hotfixes.”
  • The same developer who makes the production change can close the ticket without separate approval.

Which control should management implement to best reduce the risk of unauthorized or insufficiently reviewed production changes?

  • A. Restrict production configuration access to a limited release role, require separate approval before deployment, and require post-implementation review for emergency changes.
  • B. Retain CI/CD deployment logs for one year and have operations review them weekly.
  • C. Require developers to document acceptance criteria in the change ticket after the production change is completed.
  • D. Expand automated testing to include more unit and regression scripts before each merge.

Best answer: A

What this tests: Information Systems and Data Management

Explanation: The main problem is that developers can change production settings directly and approve their own work. The best response is to restrict production access, require independent authorization, and review emergency changes after implementation so the process preserves separation of duties.

Effective change control policies should require authorized changes, documented acceptance criteria before implementation, appropriate testing or review, logging and monitoring, and separation of duties over production access. In this scenario, automated tests already exist, but they do not address the more serious weakness: developers can bypass normal controls by editing production configuration directly and then closing their own tickets. That creates risk of unauthorized, unreviewed, or improperly implemented changes. Restricting production access to a limited release role and requiring separate approval before deployment directly strengthens access restrictions and authorization. For true emergencies, a break-glass style process can allow expedited changes, but it should still require logging and a post-implementation review.

  • Retaining logs and reviewing them weekly is useful detective monitoring, but it does not prevent developers from making and approving their own production changes.
  • Expanding automated testing improves test coverage, but testing alone does not solve the authorization and separation-of-duties failure.
  • Documenting acceptance criteria after implementation is too late and still leaves the direct production access and self-approval weakness unresolved.

This control directly addresses the core weakness by enforcing access restrictions, separation of duties, and authorization over production changes.
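The separation-of-duties point can be made concrete with a minimal sketch. The function and role names below are hypothetical, not part of the scenario; the rule it encodes is simply that the person who deployed a change may not close the change ticket.

```python
# Hypothetical sketch: reject change-ticket closure when the closer is the
# same person who deployed the change, preserving separation of duties.

def can_close_ticket(deployed_by: str, closed_by: str, approvers: set[str]) -> bool:
    """Allow closure only by an authorized approver who did not deploy."""
    return closed_by != deployed_by and closed_by in approvers

# The developer who made the production change cannot close their own ticket.
assert can_close_ticket("dev_a", "dev_a", {"dev_a", "release_mgr"}) is False
# An independent approver can.
assert can_close_ticket("dev_a", "release_mgr", {"dev_a", "release_mgr"}) is True
```

In practice this check would live in the ticketing workflow itself, so the control is enforced rather than merely documented.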


Question 42

Topic: Security, Confidentiality and Privacy

An entity’s incident response plan states:

  • A security event is an observable occurrence that may indicate attempted or actual activity and is logged for review.
  • A cybersecurity incident is a security event that results in, or is likely to result in, unauthorized access, data loss, system disruption, or another compromise requiring formal escalation and reporting assessment.

During daily monitoring, the SOC identifies 180 failed VPN login attempts from one external IP address against one employee account over 10 minutes. MFA blocked access, the account auto-locked after five attempts, and logs show no successful login, data access, privilege change, or service disruption.

What is the most appropriate conclusion?

  • A. Treat it as a nonissue; close it with no follow-up because the lockout and MFA controls worked.
  • B. Treat it as a security event; log, preserve, and monitor it, but do not trigger incident-level response yet.
  • C. Treat it as a cybersecurity incident; activate breach notification because repeated attacks create automatic reporting duties.
  • D. Treat it as a cybersecurity incident; escalate immediately because any blocked unauthorized attempt is an incident.

Best answer: B

What this tests: Security, Confidentiality and Privacy

Explanation: The blocked VPN attempts are a security event because they show suspicious activity without evidence of unauthorized access, disruption, or data compromise. Since the controls worked and no harm is indicated, the facts support logging, preserving, and monitoring the event rather than launching full incident response or breach reporting.

A security event is an observable occurrence, such as failed logins, alerts, or policy violations, that may warrant review. A cybersecurity incident is a subset of events that actually compromises, or is likely to compromise, confidentiality, integrity, or availability, or otherwise triggers formal response obligations. In this scenario, the attempted access was blocked by MFA and account lockout, and there is no evidence of successful entry, privilege change, data access, or service disruption. That means the activity should still be documented and monitored, but it does not yet meet the stated threshold for incident-level escalation or reporting assessment. The key distinction is that suspicious activity alone is not automatically an incident; the classification depends on whether compromise or likely compromise is present.

  • “Activate breach notification because repeated attacks create automatic reporting duties” is incorrect because reporting depends on actual or likely compromise, not merely a high volume of attempts.
  • “Escalate immediately because any blocked unauthorized attempt is an incident” is too broad; many attempted attacks remain events when preventive controls stop them and no compromise is evident.
  • “Close it with no follow-up because the lockout and MFA controls worked” is incorrect because blocked attacks should still be logged, preserved, and reviewed for pattern and trend analysis.

The activity shows an attempted attack, but the facts given show no actual or likely compromise, so event handling is appropriate without incident-level escalation.
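The plan's event-versus-incident threshold can be expressed as a simple decision rule. This sketch is illustrative only; the indicator names are assumptions, and a real plan would weigh more factors.

```python
# Hypothetical sketch of the plan's threshold: an observed occurrence becomes
# an incident only when actual or likely compromise is indicated; otherwise
# it stays a logged security event.

def classify(successful_login: bool, data_accessed: bool,
             privilege_changed: bool, service_disrupted: bool) -> str:
    compromised = any([successful_login, data_accessed,
                       privilege_changed, service_disrupted])
    return "cybersecurity incident" if compromised else "security event"

# 180 blocked VPN attempts with no successful login, data access,
# privilege change, or disruption:
print(classify(False, False, False, False))  # security event
```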


Question 43

Topic: Information Systems and Data Management

A manufacturer uses a permissioned blockchain to pay certain suppliers. Once a payment is signed and confirmed on the blockchain, the ERP automatically posts the transaction to cash and accounts payable in the general ledger. The AP supervisor can both prepare the payment and use the wallet’s single private key to sign and release it, and blockchain payments cannot be reversed. Which control would best address the financial reporting risk in this process?

  • A. Require a multi-signature wallet so separate employees approve and sign each supplier payment before broadcast.
  • B. Require daily off-site backups of validator node data and wallet transaction logs.
  • C. Require additional block confirmations before each payment updates the general ledger.
  • D. Require monthly hashing of the accounts payable aging report to the blockchain.

Best answer: A

What this tests: Information Systems and Data Management

Explanation: The key risk is that one person can both initiate and authorize an irreversible blockchain payment that automatically affects the general ledger. A multi-signature wallet adds transaction-level segregation of duties before the payment is committed on-chain.

In a blockchain payment process, control over the private key is effectively control over transaction authorization. Here, the same AP supervisor can prepare and sign the payment, and the confirmed transaction automatically posts to cash and accounts payable. Because blockchain transactions are generally irreversible, an unauthorized or erroneous payment can create an immediate financial reporting misstatement. A multi-signature wallet is the best control because it requires independent approvals before the transaction is broadcast, embedding segregation of duties into the blockchain workflow itself. This directly addresses the authorization risk at the point where the payment becomes final. Controls such as more confirmations, backups, or report hashing may improve finality, recovery, or integrity evidence, but they do not prevent a single person from releasing an improper payment.

  • Additional block confirmations help confirm settlement finality, but they do not stop an unauthorized signer from releasing the payment.
  • Daily off-site backups improve recovery and record retention, but they do not prevent an improper blockchain disbursement.
  • Hashing the accounts payable aging report supports later integrity evidence for a report, not authorization of the underlying payment transaction.

A multi-signature approval control directly reduces the risk of unauthorized irreversible blockchain payments that would automatically misstate cash and accounts payable.
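A minimal sketch of the multi-signature gate, assuming a 2-of-2 policy with made-up signer names, shows how the approval requirement is enforced before broadcast rather than after posting.

```python
# Hypothetical 2-of-2 multi-signature gate: a payment may be broadcast only
# when two distinct authorized signers have approved it.

REQUIRED_SIGNERS = 2
AUTHORIZED = {"ap_supervisor", "treasury_manager"}  # assumed roles

def may_broadcast(signatures: set[str]) -> bool:
    valid = signatures & AUTHORIZED
    return len(valid) >= REQUIRED_SIGNERS

assert may_broadcast({"ap_supervisor"}) is False  # one signer cannot release
assert may_broadcast({"ap_supervisor", "treasury_manager"}) is True
```

Because the check runs before the transaction is signed and broadcast, it blocks the improper payment at the point where it would otherwise become irreversible.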


Question 44

Topic: Information Systems and Data Management

An ISC associate is extracting a population to test whether all customer shipments over $25,000 made in June 2026 had a carrier tracking number recorded.

Facts:

  • orders contains one row per customer order.
  • ship_date is the date goods left the warehouse.
  • invoice_date is the billing date and can differ from ship_date.

SELECT order_id, customer_id, invoice_date, order_amount, tracking_no
FROM orders
WHERE invoice_date BETWEEN '2026-06-01' AND '2026-06-30'
  AND order_amount > 25000;

Which conclusion is correct about the relevance of the retrieved data set to the stated objective?

  • A. The query is not relevant because the amount condition should be >= 25000 instead of > 25000.
  • B. The query is relevant because it retrieves order_amount and tracking_no, which are the main fields needed for the test.
  • C. The query is not relevant because including customer_id makes the extracted data set unreliable for assurance testing.
  • D. The query is not relevant because it filters on invoice_date rather than ship_date, so the result set may not represent June shipments.

Best answer: D

What this tests: Information Systems and Data Management

Explanation: The data set is not relevant because the query filters on billing dates instead of shipment dates. Since the objective is specifically about June shipments, using invoice_date can include the wrong records and omit the right ones.

To assess SQL query relevance, compare the business or assurance objective to the table fields and filter logic used in the query. Here, the objective is to test shipments made in June 2026 that exceeded $25,000 and whether they had tracking numbers recorded. The decisive population-defining field is ship_date, because that identifies when goods actually left the warehouse. The query instead filters on invoice_date, and the facts state that invoice dates can differ from shipment dates. As a result, the extracted records may include orders invoiced in June but shipped in another month, and may miss June shipments invoiced earlier or later. That makes the retrieved data set irrelevant to the stated testing objective.

  • Retrieving order_amount and tracking_no is not enough if the query pulls the wrong population in the first place.
  • Including customer_id does not make a result set unreliable; extra columns can be unnecessary, but they do not defeat relevance.
  • Using > 25000 matches the phrase “over $25,000”; using >= 25000 would incorrectly include exactly $25,000 orders.

The objective is defined by shipment timing, and invoice dates can differ from shipment dates.
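The population error can be demonstrated with two made-up rows run through SQLite. The data values below are illustrative only; the point is that the invoice_date filter pulls order 1 (shipped in May) and misses order 2 (shipped in June with no tracking number), while a ship_date filter defines the correct population.

```python
import sqlite3

# Illustrative rows: one order invoiced in June but shipped in May, one
# shipped in June but invoiced in July.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE orders (
    order_id INTEGER, ship_date TEXT, invoice_date TEXT,
    order_amount REAL, tracking_no TEXT)""")
con.executemany("INSERT INTO orders VALUES (?, ?, ?, ?, ?)", [
    (1, "2026-05-28", "2026-06-02", 30000, "TRK1"),
    (2, "2026-06-15", "2026-07-01", 40000, None),
])

# The query as written: filters on billing date.
wrong = con.execute("""SELECT order_id FROM orders
    WHERE invoice_date BETWEEN '2026-06-01' AND '2026-06-30'
      AND order_amount > 25000""").fetchall()

# The relevant population: filters on shipment date.
right = con.execute("""SELECT order_id FROM orders
    WHERE ship_date BETWEEN '2026-06-01' AND '2026-06-30'
      AND order_amount > 25000""").fetchall()

print(wrong)  # [(1,)] -- not a June shipment
print(right)  # [(2,)] -- the actual June shipment, missing a tracking number
```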


Question 45

Topic: Security, Confidentiality and Privacy

An entity uses the following control activities for its VPN gateways, firewalls, and employee laptops:

  • Authenticated scans run every week.
  • Results are ranked by severity and internet exposure.
  • IT opens remediation tickets with target dates for each weakness.
  • Devices are rescanned after patches or configuration changes to confirm the weakness is closed.

Which is the best interpretation of this process?

  • A. It is incident response because it is designed mainly to contain and recover from active security events.
  • B. It is vulnerability management because it is a recurring process to identify, prioritize, remediate, and verify weaknesses.
  • C. It is access administration because it focuses primarily on granting and reviewing remote-access permissions.
  • D. It is change management because it is intended mainly to approve system modifications before implementation.

Best answer: B

What this tests: Security, Confidentiality and Privacy

Explanation: The described activities are the core elements of vulnerability management: regular identification of weaknesses, prioritization based on risk, remediation, and follow-up validation. Its purpose is to reduce exposure before weaknesses are exploited.

Vulnerability management is an ongoing process, not a one-time task. It typically includes regularly scanning systems or devices for weaknesses, evaluating and prioritizing the findings based on factors such as severity and exposure, assigning remediation actions, and then retesting to confirm the issues were actually resolved. In this scenario, weekly authenticated scans identify weaknesses, severity ranking prioritizes them, remediation tickets drive corrective action, and rescanning verifies closure. That combination fits vulnerability management for network, device, endpoint, and remote-access environments. It is different from incident response, which addresses actual or suspected security events, and different from access administration or change management, which have narrower purposes.

  • Incident response is triggered by actual or suspected attacks or security events, not by a recurring cycle of finding and fixing weaknesses.
  • Access administration concerns provisioning, changing, and reviewing user access rights, which is not the main activity described here.
  • Change management governs how changes are requested, approved, tested, and deployed, but the scenario centers on weakness discovery and remediation prioritization.

The activities shown match the ongoing cycle of finding weaknesses, ranking them for action, fixing them, and confirming remediation.
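The prioritization step can be sketched in a few lines. The finding records and ranking key below are assumptions for illustration; real programs typically combine severity scores with exposure, asset criticality, and exploit intelligence.

```python
# Hypothetical sketch: rank scan findings by severity and internet exposure
# so remediation tickets target the riskiest weaknesses first.

findings = [
    {"id": "F1", "severity": 7.5, "internet_facing": False},
    {"id": "F2", "severity": 9.8, "internet_facing": True},
    {"id": "F3", "severity": 9.8, "internet_facing": False},
]

ranked = sorted(findings,
                key=lambda f: (f["severity"], f["internet_facing"]),
                reverse=True)
print([f["id"] for f in ranked])  # ['F2', 'F3', 'F1']
```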


Question 46

Topic: Information Systems and Data Management

An entity uses the following accounts payable architecture:

  • Procurement module: system of record for vendor invoices and vendor credit memos
  • AP subledger: stores detailed AP balances and produces the AP aging report
  • Nightly interface: posts summarized AP activity from the subledger to the general ledger
  • BI dashboard: displays AP aging from subledger data

At month-end, the controller notes:

  • AP aging report = $3,240,000
  • AP subledger total = $3,240,000
  • GL accounts payable control account = $3,010,000
  • The $230,000 difference equals all vendor credit memos entered during the month
  • The interface log shows transaction code VCM is not mapped to any GL posting rule

What is the best correction?

  • A. Post a manual journal entry each month for vendor credit memos and leave the current interface unchanged.
  • B. Update the interface mapping so vendor credit memos post from the AP subledger to the GL, and then reconcile and correct the affected GL balance.
  • C. Re-enter the vendor credit memos in the procurement module and regenerate the AP aging report.
  • D. Change the BI dashboard to source AP balances from the GL control account instead of the AP subledger.

Best answer: B

What this tests: Information Systems and Data Management

Explanation: The problem is not in the system of record, the subledger, or the reporting layer. Those amounts agree and already include the credit memos. The missing balance in the GL is caused by an interface mapping failure, so the best correction is to fix that posting rule and remediate the affected GL amounts.

In an accounting information system, the system of record captures the original transaction, the subledger maintains detailed accounting balances, the interface moves data between layers, the general ledger holds summarized account balances, and the reporting layer presents information from its assigned source. Here, vendor credit memos were entered, the AP aging agrees to the AP subledger, and the BI dashboard reads from that same subledger data. That means the source transactions, detailed accounting records, and reporting layer are functioning as intended. The mismatch appears only in the GL control account, and the interface log identifies why: transaction code VCM was not mapped to a GL posting rule. The best remediation is to correct the interface so those subledger transactions post to the GL and then reconcile any periods already affected.

  • Changing the BI dashboard to use the GL would only make reports match an incomplete balance; it would not fix the missing GL posting.
  • Using recurring manual journals is a workaround, not the best correction, because the root cause is a defective interface rule.
  • Re-entering credit memos in the procurement module is unnecessary because the transactions already exist in the system of record and are reflected in the subledger.

The source transactions and subledger are complete, so the defect is the unmapped interface rule that prevents credit memos from reaching the GL.
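A simplified sketch of the interface shows how an unmapped transaction code silently drops activity from the GL, and how adding the mapping rule closes the gap. The rule table and amounts below are illustrative (stated as balance contributions rather than debits and credits), not the entity's actual configuration.

```python
# Hypothetical nightly-interface sketch: subledger activity posts to the GL
# only for transaction codes that have a mapping rule, so an unmapped code
# such as VCM silently falls out of the GL control account.

POSTING_RULES = {"INV": "2000-AP"}  # transaction code VCM has no rule

def post_to_gl(activity):
    """Return the GL amount posted and any activity dropped as unmapped."""
    posted, unmapped = 0.0, []
    for code, amount in activity:
        if code in POSTING_RULES:
            posted += amount
        else:
            unmapped.append((code, amount))
    return posted, unmapped

# Amounts are simplified to balance contributions for illustration.
activity = [("INV", 3_010_000.0), ("VCM", 230_000.0)]
posted, unmapped = post_to_gl(activity)
print(posted, unmapped)   # 3010000.0 [('VCM', 230000.0)] -- the $230,000 gap

POSTING_RULES["VCM"] = "2000-AP"  # the correction: map the credit-memo code
print(post_to_gl(activity)[0])    # 3240000.0 -- GL now agrees to the subledger
```

After the mapping fix, the affected periods would still need a reconciling entry, which is why answer B pairs the interface correction with correcting the GL balance.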


Question 47

Topic: Security, Confidentiality and Privacy

During an ISC engagement, a CPA reviews an incident summary for a company’s public customer portal:

  • The portal was intermittently unavailable for 25 minutes.
  • Network monitoring showed a surge of HTTPS requests from more than 15,000 external IP addresses across multiple countries.
  • Most requests repeatedly targeted the home page and image files.
  • No unusual administrator logins, malware alerts, or database error messages were detected.

Which attack type best fits this incident?

  • A. Malware attack
  • B. Web application attack
  • C. Social engineering attack
  • D. Distributed denial-of-service attack

Best answer: D

What this tests: Security, Confidentiality and Privacy

Explanation: This incident is best classified as a distributed denial-of-service attack because the key fact is the massive volume of traffic coming from many external IP addresses and disrupting availability. The facts do not indicate code exploitation, malware execution, or user deception.

A distributed denial-of-service attack attempts to make a system or service unavailable by flooding it with traffic or requests from many different sources. In this scenario, the portal became unavailable, the request volume spiked sharply, and the traffic came from more than 15,000 external IP addresses across multiple countries. Those facts point to a distributed attack focused on exhausting capacity rather than exploiting application logic or compromising credentials. A web application attack would usually show signs such as malicious input, abnormal database errors, or targeted exploitation of forms or APIs. Malware would involve malicious code running on a device or server, and social engineering would involve manipulating people into revealing information or taking unsafe actions. The decisive clue here is the distributed traffic flood affecting availability.

  • Web application attack is tempting because the traffic targeted a website, but the facts show volume-based disruption rather than exploitation of application functions or inputs.
  • Malware attack does not fit because there are no signs of malicious code execution, infected hosts, or system compromise.
  • Social engineering attack is incorrect because no employee or user was manipulated into disclosing information or performing an unsafe action.
  • Distributed denial-of-service attack fits because many external sources generated enough traffic to impair system availability.

The incident centers on availability disruption caused by overwhelming traffic from many distributed external sources, which is characteristic of a DDoS attack.
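A triage heuristic for the "many sources, none dominant" pattern can be sketched as follows. The function name and thresholds are assumptions for illustration, not a detection standard.

```python
# Hypothetical triage sketch: a request surge spread across many distinct
# source IPs, with no single source dominating, points toward a distributed
# flood rather than a targeted exploit from a few hosts.
from collections import Counter

def looks_distributed(source_ips, min_sources=1000, max_share=0.01):
    """Many distinct sources, none dominant, suggests a distributed flood."""
    counts = Counter(source_ips)
    if len(counts) < min_sources:
        return False
    top_share = max(counts.values()) / len(source_ips)
    return top_share <= max_share

# 15,000 requests from 15,000 distinct addresses: distributed pattern.
print(looks_distributed([f"10.0.{i // 256}.{i % 256}" for i in range(15000)]))
```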


Question 48

Topic: Considerations for System and Organization Controls Engagements

A service organization is issuing a SOC 2 Type 2 report for the period January 1 through December 31, Year 1. The following control was described and tested:

  • Control: The security manager reviews privileged-user access quarterly and documents approval.
  • Testing result: Reviews for Q1, Q2, and Q4 were performed on time. The Q3 review was completed 45 days late.
  • Additional fact: No unauthorized access was identified, and the control design is otherwise appropriate.

How should this matter be characterized in the results of tests of controls?

  • A. A design deficiency because the review was not performed on time in one quarter.
  • B. No exception because the review was eventually completed and no unauthorized access was found.
  • C. A scope limitation because the practitioner could not rely on a late control execution.
  • D. An operating effectiveness exception for the privileged-access review control, described as a Q3 deviation.

Best answer: D

What this tests: Considerations for System and Organization Controls Engagements

Explanation: Because the control is suitably designed but one tested instance was not performed as specified, the issue is an operating effectiveness exception. In a SOC 2 Type 2 report, that deviation should be described in the results of tests for the specific control, even if no unauthorized access was identified.

In a SOC 2 Type 2 report, the results of tests of controls explain whether the described controls operated effectively over the period. Here, the control design is appropriate: quarterly privileged-access reviews with documentation. The identified problem is that one quarter’s review was completed 45 days late, so the control did not operate as described for that instance. That is an operating effectiveness exception, and the results of tests should describe the deviation for Q3. The fact that the review was later completed and no unauthorized access was detected may affect the significance of the exception, but it does not erase the deviation. This is not a design deficiency, and it is not a scope limitation because the practitioner has evidence showing what happened.

  • Calling it a design deficiency is incorrect because the control’s design is adequate; the failure is in execution during Q3.
  • Treating it as no exception is incorrect because later completion and no detected misuse do not change the fact that the control missed its stated timing.
  • Treating it as a scope limitation is incorrect because evidence was available; the evidence showed late performance rather than an inability to test.
  • Reporting it as an operating effectiveness exception is appropriate because the results of tests should disclose deviations in how a control operated during the period.

The control design is appropriate, so the late Q3 review is a deviation in operation that should be reported in the results of tests for that control.


Question 49

Topic: Considerations for System and Organization Controls Engagements

A CPA is finalizing a SOC 2 Type 2 report on the security category for the period January 1-December 31, 20X5. The planned report date is February 7, 20X6.

Timeline:

  • January 18, 20X6: Monitoring detects unauthorized exports from the customer database.
  • January 22, 20X6: Investigation concludes that a database administrator shared a privileged credential with a contractor from December 10-December 29, 20X5, bypassing individual authentication.
  • January 24, 20X6: Shared access is removed and credentials are rotated.
  • The unauthorized exports occurred January 15-January 17, 20X6.

What is the best interpretation for the SOC engagement?

  • A. Perform additional procedures on the credential-sharing condition, evaluate its effect on the Type 2 opinion for the period ended December 31, and date the report no earlier than completion of that evaluation.
  • B. Extend the examination period through January 17 and opine on controls through the export dates.
  • C. Treat the matter only as a January subsequent event and add disclosure, because the unauthorized exports occurred after December 31.
  • D. Keep the December 31 opinion unchanged because the shared credential was remediated before the report date.

Best answer: A

What this tests: Considerations for System and Organization Controls Engagements

Explanation: The key fact is not when the exports occurred, but when the underlying control failure existed. Because the shared privileged credential existed during December, the CPA must perform additional procedures and evaluate whether the SOC 2 Type 2 conclusion for the covered period is affected before dating the report.

In a SOC 2 Type 2 engagement, the practitioner considers relevant events up to the report date. The critical distinction is whether the later-discovered matter arose only after the examination period or instead reveals a condition that existed during the period being reported on. Here, the unauthorized exports happened in January, but the investigation showed that the underlying control problem - shared privileged access bypassing individual authentication - existed during December, within the specified period. That means the CPA cannot treat this as only a post-period disclosure matter. Additional procedures are needed to determine the severity and effect of the control failure on operating effectiveness for the period ended December 31. The report should not be dated earlier than the date that evaluation is completed.

  • Treating this only as a January disclosure is incorrect because the root cause existed during the covered period, not solely after it.
  • Extending the examination period through January is not the normal response; the report still covers the original specified period unless the engagement itself is changed.
  • Remediation before the report date does not erase a control failure that existed during the period or remove the need to reassess the opinion.

The January discovery revealed a control breakdown that existed during the covered period, so the CPA must reassess the period-end opinion before dating the report.


Question 50

Topic: Information Systems and Data Management

Management wants a monthly report showing, by SKU, (1) net revenue and (2) average days from customer order to final delivery. For this report, net revenue equals invoiced sales minus return credits.

  • ERP Order/Invoice: join fields order_id, invoice_id, sku; relevant fields order_date, qty_invoiced, unit_price
  • Warehouse Management System: join fields order_id, sku; relevant fields ship_date, delivery_date, qty_shipped
  • CRM Returns: join fields invoice_id, sku; relevant fields return_qty, credit_amount, return_date
  • General Ledger Summary: join fields accounting_period, product_line; relevant fields total_revenue, returns_reserve

Which conclusion is best supported by the exhibit?

  • A. Integrate General Ledger Summary with Warehouse Management System data, because summarized financial data is sufficient for SKU-level net revenue analysis.
  • B. Integrate ERP Order/Invoice, Warehouse Management System, and CRM Returns data using order or invoice identifiers plus SKU.
  • C. Integrate ERP Order/Invoice data alone, because it contains sales amounts and the order date used in both measures.
  • D. Integrate Warehouse Management System with CRM Returns data, because post-order systems capture delivery activity and customer adjustments.

Best answer: B

What this tests: Information Systems and Data Management

Explanation: The report needs multiple transaction-level sources. ERP provides order date and invoiced sales, the warehouse system provides delivery date, and CRM provides return credits, so combining those three sources is necessary to calculate both SKU-level net revenue and delivery cycle time.

When a report requires both financial and operational measures, the best data set usually combines detailed source records from each step of the process. Here, net revenue by SKU needs invoiced sales from ERP and return credits from CRM. Average days from order to final delivery needs the ERP order date and the warehouse delivery date. The shared identifiers shown in the exhibit (order_id, invoice_id, and SKU) support matching the records at a detailed level. The general ledger summary is too aggregated because it is by accounting period and product line, not by SKU or individual order. Using only one or two of the detailed systems would leave at least one required measure incomplete.

  • ERP alone misses delivery_date and return-credit detail, so it cannot produce both requested metrics.
  • General ledger summary data is too aggregated for SKU-level analysis and cannot replace transaction data for joins to specific orders.
  • Warehouse and CRM data cover delivery and returns, but they do not provide the original invoiced sales amount or the order date needed for the full report.

This is the only combination that provides detailed sales, delivery, and return-credit data needed to calculate both requested SKU-level measures.
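The three-way join can be sketched with one made-up SKU. All values below are illustrative; the structure follows the exhibit's join fields (order_id and sku between ERP and the warehouse system, invoice_id and sku between ERP and CRM).

```python
# Hypothetical sketch of the three-way join needed for both measures.
from datetime import date

erp = {("O1", "SKU-A"): {"invoice_id": "I1", "order_date": date(2026, 6, 1),
                         "qty_invoiced": 10, "unit_price": 100.0}}
wms = {("O1", "SKU-A"): {"delivery_date": date(2026, 6, 8)}}
crm = {("I1", "SKU-A"): {"credit_amount": 150.0}}

results = {}
for (order_id, sku), o in erp.items():
    invoiced = o["qty_invoiced"] * o["unit_price"]           # from ERP
    credits = crm.get((o["invoice_id"], sku), {}).get("credit_amount", 0.0)
    delivery = wms[(order_id, sku)]["delivery_date"]         # from WMS
    results[sku] = {"net_revenue": invoiced - credits,
                    "days_to_delivery": (delivery - o["order_date"]).days}

print(results)  # {'SKU-A': {'net_revenue': 850.0, 'days_to_delivery': 7}}
```

Neither measure can be computed from the general ledger summary, because its grain (accounting period and product line) has no key that joins to an individual order or SKU.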

Questions 51-75

Question 51

Topic: Security, Confidentiality and Privacy

A company allows managers to approve vendor payments through a mobile app on company-issued smartphones. The app keeps users signed in for 30 days, does not require multifactor authentication, and allows payment approval after only the phone’s 4-digit unlock PIN. The phones are used only on trusted networks, no suspicious apps are detected, and no phone has been lost or stolen. This situation is best classified as which mobile cybersecurity risk?

  • A. Insecure mobile network exposure
  • B. Mobile malware infection
  • C. Weak mobile access controls
  • D. Device loss or theft risk

Best answer: C

What this tests: Security, Confidentiality and Privacy

Explanation: The scenario focuses on how access is granted and maintained on the mobile device. Long-lived sessions, no MFA, and reliance on only a simple device PIN indicate weak mobile access controls rather than network, malware, or loss-related risk.

Mobile cybersecurity risks should be classified by the main source of exposure. Here, the weakness is the app’s access design: it allows sensitive approvals after only a basic device PIN, keeps users signed in for an extended period, and does not require MFA. Those facts point to weak mobile access controls because authentication and session management are insufficient for a high-risk function. Insecure network exposure would involve use of untrusted or public networks that could enable interception. Mobile malware infection would involve malicious apps, code, or device compromise. Device loss or theft risk would center on a phone being misplaced or stolen. Although poor access controls can worsen the impact of loss, the direct issue described is still weak mobile access control.

  • Weak mobile access controls is correct because the problem is weak authentication and overly permissive session persistence for a sensitive mobile function.
  • Insecure mobile network exposure is not the best classification because the facts state the phones are used only on trusted networks.
  • Mobile malware infection is unsupported because no suspicious apps or compromise indicators are present.
  • Device loss or theft risk is not the primary classification because no phone has been lost or stolen; the control weakness exists even without that event.

The primary issue is inadequate authentication and session control on the mobile app, which is a weak mobile access control risk.
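The access-control gaps in the scenario can be expressed as a simple policy check. The thresholds below (one-day sessions, six-character PINs) are illustrative assumptions, not requirements from any standard.

```python
# Hypothetical policy check: flag a mobile approval flow with an overlong
# session lifetime, no MFA, or only a short PIN guarding the function.

def access_control_findings(session_days: int, mfa_required: bool,
                            pin_length: int) -> list[str]:
    findings = []
    if session_days > 1:  # assumed limit for payment approvals
        findings.append("session lifetime too long for payment approvals")
    if not mfa_required:
        findings.append("MFA not required")
    if pin_length < 6:    # assumed minimum
        findings.append("PIN too short as the sole approval factor")
    return findings

# The scenario's app: 30-day sessions, no MFA, 4-digit PIN.
print(access_control_findings(30, False, 4))  # three findings
```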


Question 52

Topic: Security, Confidentiality and Privacy

A CPA is reviewing this excerpt from a company’s workforce-app notice:

  • The app collects employees’ biometric time-clock scans, geolocation at clock-in, and bank account information for payroll.
  • The company uses the data only for timekeeping and payroll unless the employee provides separate consent for another use.
  • Employees can review and correct their profile information through the self-service portal.
  • Biometric data is deleted when no longer needed for payroll processing, unless a legal requirement applies.

Which characterization is most directly supported by the excerpt?

  • A. A confidentiality policy addressing restricted disclosure of designated sensitive information
  • B. A security standard addressing authentication, logging, and intrusion prevention
  • C. An availability plan addressing outage recovery and service restoration
  • D. A privacy notice addressing collection, use, consent, access, and retention of personal information

Best answer: D

What this tests: Security, Confidentiality and Privacy

Explanation: The excerpt is best characterized as privacy-focused because it describes how personal information is collected, used, retained, and made available for review and correction. Those features are more specific to privacy than to confidentiality, security, or availability.

Privacy addresses an entity’s practices for personal information, including notice, purpose limitation, consent, access, correction, retention, and disposal. The excerpt discusses all of those themes: it identifies the personal data collected, limits use to payroll unless separate consent is obtained, allows employees to review and correct information, and states when biometric data will be deleted. By contrast, confidentiality is about protecting designated information from unauthorized disclosure, and security is about protecting systems and information more broadly through safeguards such as access controls or monitoring. Availability concerns whether systems and data are accessible for operation and recovery. Because the excerpt centers on personal information handling and individual rights, privacy is the best classification.

  • A confidentiality policy would emphasize preventing unauthorized disclosure of sensitive data, not consent, correction rights, or deletion timing.
  • A security standard would typically describe safeguards such as MFA, access provisioning, logging, or malware protection, which the excerpt does not emphasize.
  • An availability plan would address uptime, backups, failover, and recovery after outages, none of which appears in the excerpt.

The excerpt focuses on personal information practices such as collection, purpose limitation, consent, individual access, correction, and deletion, which are core privacy elements.


Question 53

Topic: Security, Confidentiality and Privacy

GreenFarm Co. uses a third-party cloud platform to monitor refrigerated inventory at 40 stores.

  • IoT devices: Each store has freezer sensors connected to a local IoT gateway. The gateway sends data to the cloud platform using a cached API key tied to a store service account. Installation procedures require changing the gateway’s default admin password.
  • Exception noted: Store 18’s gateway was found still using the default admin password.
  • Mobile access: Store managers review alerts through a company mobile app using SSO and MFA. Logs show no unusual manager logins.
  • Incident logs: At 2:14 a.m., the cloud platform recorded successful service-account API calls from Store 18’s public IP to export six months of temperature data. At 2:20 a.m., the gateway initiated an outbound session to an unknown external host.
  • SOC excerpt: The cloud vendor’s SOC 2 report states customers are responsible for field-device configuration and local credential management.

Based on these facts, which is the best interpretation of the cybersecurity threat?

  • A. The most likely threat is a denial-of-service attack against the company network, because the gateway opened an external session.
  • B. The most likely threat is compromise of the IoT gateway, allowing misuse of its cached cloud API credentials.
  • C. The most likely threat is a cloud-vendor encryption failure, because the exported data shows weak protection in the vendor database.
  • D. The most likely threat is compromise of mobile user accounts, because manager phone MFA was bypassed.

Best answer: B

What this tests: Security, Confidentiality and Privacy

Explanation: The strongest interpretation is IoT gateway compromise that was used to access the cloud platform with the gateway’s stored service credentials. The default password remained in place, the export came from the store’s IP using the service account, and the gateway then contacted an unknown host.

This scenario points to a threat that is common in cloud-connected IoT environments: a weakly secured field device becomes the path into the cloud application. The key facts are the unchanged default admin password on the gateway, the successful API activity using the gateway’s service account, and the outbound session to an unfamiliar host shortly after the data export. Those facts support credential misuse through a compromised device, not a failure of mobile authentication or vendor-side encryption. The SOC excerpt also matters: it says the customer, not the cloud vendor, is responsible for field-device configuration and local credential management. In practice, IoT gateways that store API keys or service credentials can become a high-risk bridge between local networks and third-party cloud services if default credentials are not changed.

  • Mobile account compromise is not supported because manager access uses SSO with MFA and the logs show no unusual manager logins.
  • Cloud-vendor encryption failure is unsupported because the evidence shows successful authenticated API exports, not proof that database encryption failed.
  • A denial-of-service attack does not fit the facts because the main indicators are data export and suspicious outbound communication, not traffic flooding or service outage.

The default gateway password, successful service-account exports from the store IP, and the gateway’s outbound connection strongly indicate IoT-device compromise leading to unauthorized cloud access.


Question 54

Topic: Information Systems and Data Management

A distributor plans to replace its legacy sales and inventory application with a new hosted ERP module.

  • Process scope: The application supports about 65% of company revenue and updates shipment status, inventory, accounts receivable, and daily sales journal entries to the general ledger.
  • Go-live timing: A direct cutover is scheduled for December 29, three days before year-end close and the physical inventory count.
  • Testing status: Unit testing passed. End-to-end testing across order entry, shipping, billing, and GL posting passed for 12 of 20 scenarios; 3 tested scenarios produced duplicate invoices, and 5 scenarios were not tested.
  • Conversion plan: The legacy system will become read-only at go-live. No parallel processing is planned. If the new system fails, orders will be captured manually in spreadsheets until issues are fixed.
  • Access/control status: Automated credit-limit and price-override approvals are configured, but the user-role access review for sales supervisors and billing clerks is scheduled for after go-live.
  • Vendor assurance: The ERP vendor has a SOC 1 Type 2 report covering its hosted infrastructure.

Based on these facts, which is the best interpretation of the proposed conversion approach?

  • A. The approach is acceptable because the vendor’s SOC 1 Type 2 report compensates for incomplete business-process testing and the delayed user-role review.
  • B. The approach creates unacceptable operational, reporting, and control risk because a year-end direct cutover is planned before end-to-end processing, fallback, and user access controls are adequately validated.
  • C. The approach’s main concern is data confidentiality, so the conversion risk would be acceptable if encryption of order data is confirmed.
  • D. The approach is acceptable if finance performs daily reconciliations after go-live, because detective controls can replace pre-go-live testing for this conversion.

Best answer: B

What this tests: Information Systems and Data Management

Explanation: This conversion plan combines a high-impact direct cutover with incomplete end-to-end testing, known duplicate invoicing errors, no practical rollback, and delayed access validation right before year-end. In that environment, the proposal creates unacceptable operational disruption, financial reporting risk, and control risk.

A conversion approach should be evaluated in light of the system’s business significance, timing, testing results, fallback capability, and control readiness. Here, the new system drives revenue, inventory, receivables, and GL postings, so defects can affect operations and financial reporting quickly. A direct cutover shortly before year-end heightens the risk because processing failures or duplicate invoices could distort revenue and inventory during close. The plan also lacks a strong fallback, since the legacy system becomes read-only and manual spreadsheets are not an equivalent recovery method. Delaying the user-role access review until after go-live adds control risk because approval workflows may operate with inappropriate access. The vendor’s SOC 1 Type 2 report may support reliance on certain hosted-service controls, but it does not replace customer-side conversion testing, configuration validation, or user access review.

  • The vendor SOC 1 Type 2 report does not substitute for the company’s own end-to-end testing, conversion readiness, or user access validation.
  • Confidentiality is not the primary issue in these facts; the stronger risks involve transaction processing, financial reporting, and authorization controls.
  • Daily reconciliations are useful detective controls, but they do not make an under-tested year-end direct cutover acceptable or prevent duplicate or unauthorized processing.

This is correct because the conversion affects revenue-significant processing and key controls, yet unresolved processing errors, incomplete testing, no practical rollback, and delayed access review remain immediately before year-end.


Question 55

Topic: Information Systems and Data Management

A CPA is reviewing why an April sales-detail report may be incomplete.

SELECT o.order_id, ol.line_id, ol.product_id, ol.line_amount
FROM Orders o
JOIN OrderLines ol
  ON o.order_id = ol.order_id
WHERE o.order_date >= '2026-04-01'
  AND o.order_date < '2026-05-01';

  • Orders: order_id is the primary key
  • OrderLines: line_id is the primary key
  • OrderLines.order_id: Required field, but no foreign key constraint to Orders.order_id
  • Data profiling: 214 OrderLines rows contain an order_id value that does not exist in Orders
  • Other testing: No duplicate line_id values were found

Which database structure concern is the best interpretation of these facts?

  • A. A missing index on Orders.order_date is likely causing some April rows to be skipped by the query.
  • B. Storing line_amount in OrderLines rather than deriving it from another table is the main structural problem affecting this report.
  • C. Using line_id instead of a composite primary key on OrderLines is the main reason the report is incomplete.
  • D. Unenforced referential integrity between Orders and OrderLines allows orphaned detail rows that the inner join will exclude.

Best answer: D

What this tests: Information Systems and Data Management

Explanation: The key issue is missing referential integrity. Because OrderLines.order_id is not enforced as a foreign key, orphan detail rows can exist, and the report’s inner join will drop those rows from the result set.

In a relational database, referential integrity helps ensure that each child row points to a valid parent row. Here, OrderLines is the child table and Orders is the parent table. Because OrderLines.order_id is not protected by a foreign key, the database accepted 214 detail rows whose order_id does not exist in Orders. The report uses an inner join, so only rows with matches in both tables are returned. As a result, those orphaned detail rows are omitted, which directly affects completeness and can understate reported sales totals. By contrast, an index mainly affects performance, not which matching rows qualify, and the other suggested design concerns are not the best explanation of the facts provided.

  • A missing index can slow query performance, but it does not by itself cause valid matching rows to disappear from the result set.
  • The facts do not show a primary key failure on OrderLines; no duplicate line_id values were found, and the identified issue is unmatched parent-child rows.
  • Storing line_amount in the detail table may raise other design questions, but it does not explain why rows are excluded by this specific join.

Without an enforced foreign key, orphan OrderLines rows can exist and an inner join will omit them from the report.
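The join behavior can be reproduced in a few lines. This is an illustrative sketch using Python’s built-in sqlite3 module; the table and column names follow the question, but the row values are invented.

```python
# Sketch: an unenforced foreign key lets orphan OrderLines rows exist,
# and the report's inner join then silently drops them.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Orders (order_id INTEGER PRIMARY KEY, order_date TEXT);
    -- order_id is required but NOT declared as a FOREIGN KEY, mirroring the facts
    CREATE TABLE OrderLines (
        line_id INTEGER PRIMARY KEY,
        order_id INTEGER NOT NULL,
        product_id TEXT,
        line_amount REAL
    );
    INSERT INTO Orders VALUES (1, '2026-04-10');
    INSERT INTO OrderLines VALUES (10, 1, 'P-1', 100.0);   -- valid parent row exists
    INSERT INTO OrderLines VALUES (11, 999, 'P-2', 250.0); -- orphan: no Orders row 999
""")

# The inner join returns only detail rows with a matching parent.
joined = con.execute("""
    SELECT ol.line_id FROM Orders o
    JOIN OrderLines ol ON o.order_id = ol.order_id
""").fetchall()
print(joined)  # line 11 is excluded from the report

# A LEFT JOIN with an IS NULL filter is one way to profile the orphans.
orphans = con.execute("""
    SELECT ol.line_id FROM OrderLines ol
    LEFT JOIN Orders o ON o.order_id = ol.order_id
    WHERE o.order_id IS NULL
""").fetchall()
print(orphans)  # the orphan detail row the constraint would have prevented
```

Declaring `order_id` as a `FOREIGN KEY REFERENCES Orders(order_id)` (with enforcement enabled) would have rejected the orphan insert at the source.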


Question 56

Topic: Security, Confidentiality and Privacy

A company documented the following VPN security items:

VPN security notes
- Security goal: Payroll data should be accessible only by authorized employees.
- Threat noted: External actors may attempt credential stuffing against VPN accounts.
- Control activity: The SIEM creates an alert when 10 failed VPN logins occur from one IP address within 5 minutes, and a security analyst reviews open alerts each morning.
- Management metric: The SOC manager tracks the monthly average time to close security alerts.

Based on the exhibit, which item is a detective control?

  • A. Restricting payroll data access to authorized employees
  • B. Attempting credential stuffing against VPN accounts
  • C. Reviewing SIEM alerts for repeated failed VPN logins
  • D. Tracking the monthly average time to close security alerts

Best answer: C

What this tests: Security, Confidentiality and Privacy

Explanation: Reviewing SIEM alerts for repeated failed VPN logins is a detective control because it is meant to identify suspicious activity as it occurs. The other items describe a control objective, an attack technique, and a monitoring metric rather than the detective control itself.

A detective control is intended to discover errors, anomalies, or attacks that have occurred or are underway. In the exhibit, the SIEM alert threshold and the analyst’s review of failed-login alerts are used to detect possible unauthorized access attempts, so that activity is a detective control. By contrast, making payroll data accessible only to authorized employees states the control objective, which is the desired outcome. Credential stuffing is the threat or attack technique the company is concerned about, not a control. Tracking average alert-closure time is a monitoring activity used by management to oversee security operations, but it does not itself detect the suspicious login behavior.

  • Restricting payroll data access to authorized employees states the desired result of security, so it is a control objective rather than a detective control.
  • Attempting credential stuffing against VPN accounts describes the attack technique or risk the company is trying to address.
  • Tracking the monthly average time to close security alerts is management monitoring of process performance, not the event-detection control itself.

Reviewing SIEM alerts is a detective control because it is designed to identify potentially unauthorized login activity after it occurs.
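As an illustration of the detection logic (not any particular SIEM product), the exhibit’s rule amounts to a sliding-window count of failed logins per source IP; the IP addresses and timestamps below are invented.

```python
# Sketch of the exhibit's detective rule: alert when one source IP
# produces 10 failed VPN logins within a 5-minute window.
from collections import defaultdict, deque

WINDOW_SECONDS = 5 * 60
THRESHOLD = 10

def detect_alerts(failed_logins):
    """failed_logins: iterable of (epoch_seconds, source_ip), in time order.
    Returns the (timestamp, ip) pairs at which an alert would fire."""
    recent = defaultdict(deque)  # ip -> attempt timestamps inside the window
    alerts = []
    for ts, ip in failed_logins:
        window = recent[ip]
        window.append(ts)
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()  # drop attempts older than 5 minutes
        if len(window) >= THRESHOLD:
            alerts.append((ts, ip))
            window.clear()  # reset so one burst raises one alert
    return alerts

# 10 rapid failures from one IP trip the rule; failures spread out over
# an hour from another IP never accumulate inside the window.
events = [(i * 10, "203.0.113.5") for i in range(10)]
events += [(i * 400, "198.51.100.7") for i in range(10)]
alerts = detect_alerts(sorted(events))
print(alerts)
```

The analyst’s morning review of these alerts, not the metric tracking how fast alerts are closed, is the detective activity.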


Question 57

Topic: Information Systems and Data Management

A CPA is reconciling a documented cash-receipts flowchart to a walkthrough of the actual process.

Documented flowchart:

  • Bank lockbox sends remittance file to ERP.
  • ERP imports the file.
  • ERP automatically posts all receipts to customer accounts.
  • AR supervisor reviews the daily cash-receipts report.
  • Treasury reconciles bank deposits to ERP totals.

Walkthrough results:

  • The ERP auto-posts a receipt only when the customer number and invoice number match an open receivable.
  • If either field does not match, the receipt is routed to a cash application specialist for research and manual resolution.
  • Treasury performs the reconciliation after all auto-posted items and resolved exceptions are reflected in ERP.

Which change is the best correction to the documented flowchart?

  • A. Replace the exception handling with a rule that automatically posts all receipts and lets treasury correct mismatches during bank reconciliation.
  • B. Add quarterly user-access recertification for treasury and cash application staff to strengthen the process documentation.
  • C. Add a decision step after file import for matched versus unmatched receipts, route exceptions to the cash application specialist, and place treasury reconciliation after exception resolution.
  • D. Move the AR supervisor review ahead of the ERP import so posting errors can be prevented before the remittance file arrives.

Best answer: C

What this tests: Information Systems and Data Management

Explanation: The documented flowchart wrongly assumes all receipts post automatically and omits the exception path. The best correction is to show the matched-versus-unmatched decision, the handoff to the cash application specialist, and reconciliation only after posting and exception handling are complete.

When reconciling a process narrative or walkthrough to documented process flows, the key is to identify where the documentation leaves out a branch, handoff, or sequencing detail that affects how the process actually works. Here, the flowchart incorrectly shows one straight-through posting path for all receipts. The walkthrough shows a conditional process: matched receipts post automatically, while unmatched receipts go to a cash application specialist for research before they are posted or otherwise resolved. That exception routing is a material handoff and must appear in the flow. Treasury reconciliation also belongs after the system reflects both auto-posted items and resolved exceptions, because reconciling earlier would not reflect the true processed population.

  • Moving the AR supervisor review before ERP import does not fix the real issue, because the missing problem is the undocumented exception branch and handoff.
  • Posting all receipts automatically and correcting mismatches later creates an unsupported processing assumption and weakens data integrity.
  • Adding quarterly access recertification may be a useful control, but it does not correct the flowchart’s missing step or sequencing error.
  • The correct revision is the only option that addresses both the missing decision point and the timing of the reconciliation step.

This revision fixes the missing decision point, documents the exception handoff, and aligns the reconciliation step with the actual sequence.
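The corrected decision step reduces to a simple routing rule. This sketch uses hypothetical field names and receivable keys, since the question does not specify a data layout.

```python
# Sketch of the decision step the corrected flowchart needs: auto-post a
# receipt only when customer and invoice match an open receivable;
# otherwise route it to the cash application specialist queue.

def route_receipt(receipt, open_receivables):
    """open_receivables: set of (customer_number, invoice_number) pairs."""
    key = (receipt["customer_number"], receipt["invoice_number"])
    if key in open_receivables:
        return "auto-post"                  # straight-through path
    return "cash-application-specialist"    # exception branch the flowchart omitted

open_ar = {("C100", "INV-1"), ("C200", "INV-2")}
matched = route_receipt({"customer_number": "C100", "invoice_number": "INV-1"}, open_ar)
unmatched = route_receipt({"customer_number": "C100", "invoice_number": "INV-9"}, open_ar)
print(matched, unmatched)
```

Treasury’s reconciliation then runs only after both branches have landed in ERP, so it covers the true processed population.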


Question 58

Topic: Information Systems and Data Management

During an ISC engagement, a CPA is assessing whether an emergency production hotfix to a billing application was managed through formal change control. Management claims the hotfix was requested, approved, tied to a specific code version, successfully tested before release, deployed through the build process, and had a documented rollback plan. Which item would best support that conclusion?

  • A. A baseline configuration report showing production servers are currently running application version 3.8.2 with approved configuration settings
  • B. A regression test library report showing all billing test cases passed in the staging environment for build 771
  • C. A change ticket for hotfix 24-117 showing the requester, approver, linked commit hash, passed staging tests, build number, deployment time, and rollback steps to the prior release
  • D. A version-control log showing commit hash 8f2c91, the developer who merged the hotfix branch, and the timestamp of the merge to main

Best answer: C

What this tests: Information Systems and Data Management

Explanation: The best support is the change ticket that links all key change-management elements for the same hotfix. It provides end-to-end evidence of request, approval, version tracking, testing, deployment, and rollback readiness, while the other records support only one part of the conclusion.

For change management, the most persuasive evidence is documentation that traces a specific change through the full lifecycle. A well-maintained change ticket or change record should connect the request and approval to the exact code version, testing evidence, build or deployment details, and rollback procedure. That makes it possible to confirm the change was formally tracked and could be reversed if needed. By contrast, a version-control log mainly shows code history, a test library report shows testing activity, and a baseline configuration report shows the current approved state of production. Each of those may be useful, but none alone demonstrates the full control path for one production change.

  • A version-control log is useful for code history and traceability, but it does not by itself show approval, testing completion, or rollback planning.
  • A regression test library report supports pre-release testing, but it does not prove the change was formally authorized and deployed under change control.
  • A baseline configuration report shows the approved current setup, but it does not document the request, approval, or rollback steps for the specific hotfix.

This is the strongest evidence because one record ties the specific change to authorization, version control, testing, deployment, and rollback documentation.


Question 59

Topic: Information Systems and Data Management

A CPA is helping a midsize distributor begin its business impact analysis for business continuity planning. Management has already listed its major applications. Same-day shipping drives most revenue, the warehouse management system depends on ERP order data and a third-party network connection, and management has not yet defined acceptable downtime for each process. Which action should the CPA recommend next?

  • A. Interview process owners to identify critical processes, document internal and external dependencies, and determine recovery priorities and required downtime targets.
  • B. Perform a disaster recovery test of the warehouse servers to verify that backups can be restored within a reasonable time.
  • C. Renegotiate the third-party network contract first because external vendor uptime is the main driver of business continuity risk.
  • D. Set a single four-hour recovery target for all production systems so continuity planning can proceed consistently.

Best answer: A

What this tests: Information Systems and Data Management

Explanation: The next BIA step is to work with process owners to determine which business processes are most critical, what they depend on, and how quickly they must be restored. Those facts establish recovery priorities and availability requirements for later continuity and disaster recovery planning.

A business impact analysis starts by identifying the business processes that matter most, then assessing the operational and financial effects of disruption. From there, the organization identifies dependencies such as applications, data, personnel, facilities, and third-party services that support each process. Using that information, management can set recovery priorities and define availability needs, such as acceptable downtime and recovery targets. In this scenario, management already has a list of applications, but it has not yet defined acceptable downtime, and the warehouse process depends on both internal ERP data and an external network provider. That means the next step is not testing recovery or setting generic targets; it is completing the BIA by linking critical processes to dependencies and required recovery timing.


  • Performing a disaster recovery test is premature because the BIA should first define what recovery time is required and which processes deserve priority.
  • Setting one uniform recovery target ignores that different processes have different business impacts and availability requirements.
  • Renegotiating the network contract addresses one dependency, but the BIA must first evaluate all critical processes and dependencies before selecting specific responses.

A business impact analysis next identifies process criticality, dependencies, and recovery and availability requirements before detailed recovery testing or solution selection.


Question 60

Topic: Information Systems and Data Management

A CPA is reviewing a change ticket for a cash receipts application update.

Facts:

  • Developers finished coding and unit testing in the development environment.
  • The staging environment closely mirrors production and contains masked data.
  • Operations proposes moving the update to production so business users can perform end-to-end acceptance testing with live transactions.

Which action is most appropriate?

  • A. Approve immediate deployment because completed unit testing in development is sufficient evidence of readiness.
  • B. Require integration and user acceptance testing in staging before promoting the update to production.
  • C. Return the update to development so business users can perform end-to-end acceptance testing there.
  • D. Allow user acceptance testing in production because live data provides the most realistic test results.

Best answer: B

What this tests: Information Systems and Data Management

Explanation: The best response is to complete integration and user acceptance testing in staging before deployment. Development is for coding and unit testing, staging is for production-like predeployment testing, and production should not be used as the primary environment for acceptance testing.

Development, staging, and production serve different purposes in a controlled change process. Development is where programmers build code and perform unit testing on individual components. Staging is a separate environment that closely mirrors production and is used for broader testing, such as integration, system, and user acceptance testing, often with masked or representative data. Production is the live environment for actual business processing, so using it for planned acceptance testing creates unnecessary risk to live transactions and system stability. Because the facts say coding and unit testing are already complete and staging mirrors production, the proper next step is to perform end-to-end and user acceptance testing in staging before release.

  • Allowing acceptance testing in production is inappropriate because production is intended for live operations, not routine predeployment testing.
  • Sending business users to development is weak because development is not the controlled, production-like environment intended for end-to-end validation.
  • Relying only on unit testing is insufficient because unit tests do not confirm that the full process works properly across components and user workflows.

Staging is the production-like environment used for broader predeployment testing, while production should be reserved for live operations rather than acceptance testing.


Question 61

Topic: Security, Confidentiality and Privacy

Maple Co. uses a third-party cloud HR portal that stores employee PII. During a vendor-risk review, HR management says the service provider “handles access security,” so Maple does not perform periodic user access reviews for the portal and does not have a process to notify the provider promptly when HR staff transfer or terminate.

The reviewer concludes that Maple has a gap in user-entity control responsibility and ongoing monitoring over access to the vendor-hosted system.

Which source would best support that conclusion?

  • A. A vendor due diligence questionnaire showing the provider encrypts PII, carries cyber insurance, and performs annual penetration testing
  • B. A current portal user listing showing each user’s assigned role, with no statement about who is responsible for periodic review or removal of access
  • C. A security log extract showing the provider’s MFA blocked several unsuccessful administrator login attempts last month
  • D. An excerpt from the vendor’s SOC 2 report stating that customer management must review user access monthly and notify the vendor within 24 hours of employee terminations or role changes

Best answer: D

What this tests: Security, Confidentiality and Privacy

Explanation: The best support is the SOC 2 report excerpt that identifies complementary user entity controls. It directly shows that Maple, not just the provider, is responsible for periodic access review and timely termination notifications, which supports the conclusion about a monitoring gap.

When evaluating a third-party system, the key issue is not simply whether the vendor has strong controls, but which controls remain the customer’s responsibility. A SOC 2 report excerpt that lists complementary user entity controls is the strongest evidence because it explicitly assigns duties between the service provider and the user entity. If the report says Maple must review user access and notify the provider of personnel changes, then Maple cannot rely solely on the vendor for access control monitoring. That makes the gap a user-entity responsibility issue. By contrast, due diligence materials help assess vendor risk at onboarding, a user listing shows who currently has access, and a log extract shows a specific security event. Those sources may be useful, but they do not directly establish Maple’s ongoing control obligation.

  • The due diligence questionnaire supports vendor assessment, but it does not assign Maple’s ongoing access-review responsibilities.
  • The current user listing may help test access appropriateness, but by itself it does not show who must perform the review or notify the provider of changes.
  • The MFA log extract supports the provider’s operating security controls, not Maple’s complementary monitoring duties.

A SOC 2 excerpt identifying complementary user entity controls directly assigns the access-review and termination-notification duties to Maple.


Question 62

Topic: Security, Confidentiality and Privacy

A health benefits administrator stores members’ Social Security numbers. Analytics users only need a surrogate value, but a small customer service group must be able to retrieve the full SSN through a separate controlled service when identity verification is required. Management is deciding between tokenization and other protection techniques for the production database.

Which statement best captures the decisive distinction relevant to this choice?

  • A. Tokenization usually substitutes a non-sensitive token and stores the mapping separately, while encryption mathematically transforms the SSN into ciphertext recoverable with a key.
  • B. Tokenization and encryption are functionally the same because both depend on the same decryption key to recover the original SSN.
  • C. Tokenization is irreversible like hashing, so it would not support occasional authorized recovery of the full SSN.
  • D. Tokenization is mainly a display technique that hides part of the SSN from users, while encryption changes how the SSN is stored.

Best answer: A

What this tests: Security, Confidentiality and Privacy

Explanation: The best distinction is that tokenization replaces sensitive data with a surrogate token and typically keeps the original-to-token mapping in a separate protected vault or service. Encryption, by contrast, converts the original data into ciphertext that is recovered with cryptographic keys.

This scenario calls for reducing exposure of privacy-regulated data in the production database while still allowing tightly controlled retrieval of the original value for a limited business purpose. Tokenization fits that objective because the application database can store a surrogate token instead of the actual SSN, and only an authorized detokenization service can return the original value. Encryption is also a strong control for protecting data, but its defining feature is cryptographic transformation of the actual data into ciphertext using keys, not substitution with a separate mapped token. Hashing is generally used when recovery of the original value is not needed, and masking mainly limits what users see rather than replacing the underlying stored sensitive value for controlled recovery.

  • Saying tokenization and encryption are the same confuses two different mechanisms: encryption uses algorithms and keys, while tokenization uses a substitute value plus a protected mapping.
  • Describing tokenization as a display-hiding method confuses it with masking, which usually obscures what is shown without being the primary storage design for controlled recovery.
  • Calling tokenization irreversible confuses it with hashing; the scenario requires occasional authorized recovery of the SSN, which hashing does not provide.

Tokenization is distinguished by replacing the SSN with a surrogate token and keeping the recoverable mapping in a separate protected service or vault.
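A minimal sketch of the tokenization side of that distinction, with an in-memory dict standing in for the separately protected vault and a role check standing in for the controlled detokenization service (all names and the sample SSN are invented):

```python
# Sketch: tokenization substitutes a surrogate with no mathematical link
# to the SSN; the recoverable mapping lives in a separate protected vault.
import secrets

class TokenVault:
    """Analytics systems store only the token; the mapping lives here."""
    def __init__(self):
        self._token_to_value = {}

    def tokenize(self, ssn):
        token = "TOK-" + secrets.token_hex(8)  # random surrogate, not ciphertext
        self._token_to_value[token] = ssn
        return token

    def detokenize(self, token, caller_role):
        # Only the controlled customer-service path may recover the real value.
        if caller_role != "customer-service":
            raise PermissionError("detokenization not authorized for this role")
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("123-45-6789")     # production DB stores this surrogate
assert "123-45-6789" not in token         # the token reveals nothing about the SSN
print(vault.detokenize(token, "customer-service"))
```

Encryption, by contrast, would transform the SSN itself into ciphertext recoverable by anyone holding the key; hashing would make recovery impossible, which the customer-service requirement rules out.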


Question 63

Topic: Information Systems and Data Management

A CPA is reviewing the following BPMN-style vendor onboarding process to identify improvements:

Step 1 (Requestor): Submits new vendor request
Step 2 (Procurement): Approves request
Step 3 (AP): Enters vendor name, tax ID, and bank account into the sourcing system
Step 4 (Purchasing): Re-enters vendor name, tax ID, and bank account into the ERP vendor master
Step 5 (Sourcing system): Assigns vendor code S-###
Step 6 (ERP): Assigns vendor code V-###
Step 7 (AP): Matches invoices in ERP using vendor name
Step 8 (Treasury): Uses bank account data from the sourcing system for ACH payments

During the last quarter, the company created duplicate vendor records and sent two ACH payments to outdated bank accounts.

Which change is the best correction to this process model?

  • A. Suspend ACH payments to newly added vendors until an annual vendor master audit confirms the records.
  • B. Retain both data-entry steps but require a monthly reconciliation of vendor names between systems before Treasury releases ACH payments.
  • C. Create one approved vendor master record with validated tax ID and bank account, then interface a single vendor ID and payment data to ERP and Treasury.
  • D. Add a second Treasury entry of bank account data into ERP so payment information exists in both systems independently.

Best answer: C

What this tests: Information Systems and Data Management

Explanation: The best correction is to redesign the process around a single authoritative vendor master record that feeds all downstream systems. That change addresses the root cause in the model: duplicate manual entry, different vendor IDs, and inconsistent payment data across systems.

When a business process model shows the same master data being entered into multiple systems, assigned different identifiers, and later matched by a weak field such as vendor name, the process is prone to duplicate records and data integrity failures. The strongest improvement is to create vendor data once, validate critical fields such as tax ID and bank account at setup, and distribute that approved record through an interface to ERP and payment functions. This supports a single source of truth, reduces rekeying errors, and keeps invoice matching and ACH payment data aligned. Detective steps or broader payment restrictions may help somewhat, but they do not correct the flawed process design shown in the model.

  • A monthly reconciliation of vendor names is detective and too late; it does not eliminate duplicate entry or weak name-based matching.
  • A second Treasury entry creates another inconsistent data source and increases the risk of conflicting payment information.
  • Suspending ACH until an annual audit is broader than necessary and does not redesign the process to prevent the errors shown.

This correction removes duplicate entry points and inconsistent identifiers by establishing one authoritative vendor record for downstream systems.
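
The redesigned flow can be sketched as follows. This is a hypothetical illustration (the function names, field names, and validation rules are invented for the example): vendor data is validated once at setup, assigned a single ID, and interfaced to downstream systems instead of being rekeyed.

```python
import itertools

_ids = itertools.count(1001)  # single ID sequence for the authoritative vendor master

def create_vendor_master(name, tax_id, bank_account):
    """Create the one authoritative record, validating critical fields at setup."""
    assert len(tax_id) == 9 and tax_id.isdigit(), "tax ID must be 9 digits"
    assert bank_account.isdigit(), "bank account must be numeric"
    return {"vendor_id": f"V-{next(_ids)}", "name": name,
            "tax_id": tax_id, "bank_account": bank_account}

def interface_to(system, record):
    # Downstream systems receive the same vendor ID and payment data via interface.
    system[record["vendor_id"]] = dict(record)

erp, treasury = {}, {}
master = create_vendor_master("Acme Supply", "123456789", "987654321")
interface_to(erp, master)
interface_to(treasury, master)

vid = master["vendor_id"]
# One identifier and one bank account everywhere: no rekeying, no name-based matching.
assert erp[vid]["bank_account"] == treasury[vid]["bank_account"] == "987654321"
```

Because every downstream system keys on the same `vendor_id`, invoice matching no longer depends on the weak vendor-name field, and Treasury draws payment data from the same validated record the ERP uses.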


Question 64

Topic: Information Systems and Data Management

A company documents its vendor bank-account change process in the cash disbursement system as follows:

Documented process:
  1. Procurement manager approves the vendor bank-account change request.
  2. AP master-data clerk updates the vendor bank account in the ERP.
  3. System logs the change and sends a daily change report to the controller.

Walkthrough of actual process:
  1. AP master-data clerk updates the vendor bank account in the ERP when the email request is received.
  2. Procurement manager reviews and approves the request later that day.
  3. System logs the change and sends a daily change report to the controller.

Which action is the best correction for this discrepancy between the documented and actual process flow?

  • A. Allow the AP master-data clerk to approve the request as long as the controller reviews the daily report.
  • B. Require recorded procurement approval before the ERP bank-account update can be posted, and update documentation only if a redesigned process is formally approved.
  • C. Revise the documented flowchart to show the current sequence because the same steps are still being performed.
  • D. Increase the controller’s review of the change report from daily to hourly.

Best answer: B

What this tests: Information Systems and Data Management

Explanation: The documented flow requires approval before the vendor bank-account change is made, but the walkthrough showed the ERP update occurring first. The best correction is to restore the preventive approval step before the change posts, not to rely on stronger detective review or rewrite the documentation to match a weaker practice.

Reconciling actual process steps to documented process flows means checking whether key activities occur in the same sequence as designed. Here, approval is intended to authorize the bank-account change before it affects the vendor master file. In the walkthrough, the AP clerk makes the change first and the manager approves later, so the actual process no longer matches the documented control flow. That change in sequence matters because a preventive control has effectively become a detective one. A daily change report may help identify problems, but it does not prevent an unauthorized bank-account update from being posted. The strongest correction is to enforce approval before the ERP update, ideally through workflow or system restrictions, and only revise documentation if management formally approves a redesigned process.

  • Increasing controller review frequency improves detection, but it does not fix the out-of-sequence approval.
  • Letting the AP clerk both update and approve weakens segregation of duties and does not restore the documented control design.
  • Revising the flowchart to match the current practice would document a weaker process instead of correcting the control exception.

The actual process moved approval after the update, so the best remediation is to restore approval as a preventive step before the bank-account change is posted.
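
A workflow-enforced version of the preventive control can be sketched as below. This is a hypothetical illustration (the request IDs, vendor IDs, and function names are invented): the system refuses to post a bank-account change unless a recorded approval already exists, while still logging every posted change for the controller's daily report.

```python
# Preventive control sketch: approval must be recorded before the update posts.
approvals = set()          # change-request IDs approved by the procurement manager
vendor_master = {"V-100": {"bank_account": "111111"}}
change_log = []

def approve(request_id):
    approvals.add(request_id)

def post_bank_change(request_id, vendor_id, new_account):
    if request_id not in approvals:              # blocks the change: preventive, not detective
        raise PermissionError("approval required before the update can post")
    vendor_master[vendor_id]["bank_account"] = new_account
    change_log.append((request_id, vendor_id))   # still logged for the daily change report

# The AP clerk's attempted update is blocked until the manager approves.
try:
    post_bank_change("REQ-1", "V-100", "222222")
except PermissionError:
    pass

approve("REQ-1")
post_bank_change("REQ-1", "V-100", "222222")
assert vendor_master["V-100"]["bank_account"] == "222222"
```

The ordering check is the whole control: moving the `approvals` test after the update would reproduce the walkthrough finding, turning the preventive control back into a detective one.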


Question 65

Topic: Information Systems and Data Management

A CPA is evaluating processing integrity controls at a payroll service organization for a SOC 2 engagement.

  • Each client uploads one payroll change file per pay cycle to a secure portal.
  • An automated daily reconciliation report compares the number of files received in the portal to the number of files loaded into the payroll engine.
  • The payroll operations manager is expected to review and sign the report each day.
  • The stated control objective is to detect missing or incomplete payroll input files before payroll is processed.

Current findings:

  • Signed reconciliation reports exist for 19 of the last 20 business days; one day has no evidence of review.
  • During the walkthrough, the systems analyst states that if a file were truncated during transfer, it would still appear as one file received and one file loaded.

What should the CPA do next?

  • A. Assess whether comparing only file counts can detect incomplete payroll files before treating the missing review evidence as only an operating deviation.
  • B. Shift testing to portal access approvals because the issue is more likely an authorization problem than a processing integrity control issue.
  • C. Reperform the reconciliation for the unsigned day and, if the counts match, conclude the processing integrity objective was achieved.
  • D. Conclude the control is properly designed because it operated on most days and classify the unsigned day as an isolated operating deviation.

Best answer: A

What this tests: Information Systems and Data Management

Explanation: The next step is to determine whether the control, as designed, can actually detect the stated risk. A file-count reconciliation may miss a truncated file, which points to a design deficiency; only after design is adequate should the missing daily review be evaluated as a possible operating deviation.

A design deficiency exists when a control, even if performed exactly as intended, is not capable of preventing or detecting the relevant error. Here, the control objective is to detect missing or incomplete payroll input files, but the walkthrough indicates that a truncated file would still be counted as one file received and one file loaded. That means the control may be incapable of detecting an incomplete file, which is a design issue. An operating deviation is different: it occurs when a properly designed control is not performed or does not operate as intended in a specific instance, such as the one day with no review evidence. Because the facts raise a possible design problem, the CPA should evaluate design first before concluding the unsigned day is merely an isolated execution failure.

  • Concluding the control is properly designed because it operated on most days confuses frequency of performance with capability to meet the control objective.
  • Reperforming one day’s file-count reconciliation may address that day only, but it does not show whether the control can detect incomplete files in general.
  • Shifting to portal access approvals changes the objective from processing integrity to authorization and does not address the identified completeness risk.

A control that cannot detect truncated files has a design deficiency even if it is usually reviewed, so design capability must be evaluated before labeling the missing review as an operating deviation.
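
The design gap can be demonstrated concretely. In this hypothetical sketch (file names and record counts are invented), the documented file-count reconciliation passes even though one file was truncated, while a record-count comparison against client-supplied control totals flags the incomplete file.

```python
# Records actually received in the portal vs. loaded into the payroll engine.
received = {"client_a.csv": 500, "client_b.csv": 310}
loaded   = {"client_a.csv": 500, "client_b.csv": 120}   # client_b truncated during transfer

# The documented control: one file received, one file loaded -> "no exception".
count_reconciliation_passes = len(received) == len(loaded)
assert count_reconciliation_passes          # the truncation goes undetected

# A design capable of meeting the objective: compare per-file record counts
# (or hash totals) against a control total supplied with each upload.
expected_totals = {"client_a.csv": 500, "client_b.csv": 310}
incomplete = [f for f, n in loaded.items() if n != expected_totals[f]]
assert incomplete == ["client_b.csv"]       # the truncated file is now flagged
```

The first assertion is the design deficiency in miniature: the control can operate exactly as written every day and still never detect the stated risk.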


Question 66

Topic: Information Systems and Data Management

A service organization transfers approved subscription invoices nightly from its Contract Management System (CMS) to its Accounts Receivable (AR) system. During SOC 2 testing of the June 30 batch, the following was noted:

Approved invoices in CMS: 1,240
Records received in AR: 1,240
Batch completed before cutoff time: Yes
Invoices with valid customer IDs and approval codes: 1,240
Invoices posted with incorrect amounts because of a field-mapping change: 37

Which processing integrity issue is best indicated by these results?

  • A. Validity of processing was affected.
  • B. Completeness of processing was affected.
  • C. Accuracy of processing was affected.
  • D. Authorization of processing was affected.

Best answer: C

What this tests: Information Systems and Data Management

Explanation: This scenario points to an accuracy issue in system processing. The batch was complete and timely, and the invoices were valid and approved, but the field-mapping change caused 37 posted amounts to be wrong.

Processing integrity considers whether system processing is complete, accurate, timely, authorized, and valid. Here, completeness is supported because all 1,240 approved invoices in the source system were received in the AR system. Timeliness is supported because the batch finished before the cutoff. Authorization and validity are also supported because the invoices had valid customer IDs and approval codes, so the transactions themselves were legitimate and approved for processing. The problem is that a field-mapping change caused some invoice amounts to be posted incorrectly. When the system processes legitimate transactions but produces wrong values, the primary issue is accuracy. In a SOC 2 context, this could stem from a change management or interface control problem, but the affected processing integrity attribute is accuracy.

  • Accuracy of processing fits because the system converted legitimate invoice data into incorrect posted amounts.
  • Completeness of processing is not the best answer because the AR system received all 1,240 approved invoices.
  • Authorization of processing is not the issue because the invoices had valid approval codes before transfer.
  • Validity of processing is not the issue because the transactions were real and acceptable; the error was in the amounts posted.

All approved invoices were received on time and had valid approvals, but 37 were posted with misstated amounts, which is an accuracy failure.
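
The count-versus-amount distinction can be shown with a small sketch. In this hypothetical example (invoice IDs and amounts are invented, and the sample is three invoices rather than 1,240), a completeness check on record counts passes while a control total of posted amounts catches the field-mapping error.

```python
cms_invoices = [("INV-1", 100.00), ("INV-2", 250.00), ("INV-3", 75.00)]
# The field-mapping change posts the wrong amount for INV-2.
ar_postings  = [("INV-1", 100.00), ("INV-2", 25.00),  ("INV-3", 75.00)]

# Completeness: every approved invoice was received and posted.
assert len(cms_invoices) == len(ar_postings)

# Accuracy: compare control totals of the amounts, not just the counts.
cms_total = round(sum(amt for _, amt in cms_invoices), 2)
ar_total  = round(sum(amt for _, amt in ar_postings), 2)
assert cms_total != ar_total    # the hash-total mismatch exposes the accuracy failure
```

Batch controls such as record counts address completeness; only a value-based control total (or field-level comparison) addresses accuracy, which is why the interface error slipped through an otherwise clean reconciliation.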


Question 67

Topic: Considerations for System and Organization Controls Engagements

A cloud-based HR platform provides payroll processing to client companies. Management wants an assurance report it can post publicly for prospective customers and other outsiders. The report should address controls over the platform’s security, availability, and confidentiality, but it does not need to include the service auditor’s detailed tests and results. Which report type best fits this request?

  • A. SOC 3 report
  • B. SOC for Cybersecurity report
  • C. SOC 1 report
  • D. SOC 2 report

Best answer: A

What this tests: Considerations for System and Organization Controls Engagements

Explanation: The best choice is SOC 3 because the company wants a public-facing report on its system's controls over security, availability, and confidentiality. A SOC 2 report covers similar subject matter, but it is more detailed and intended for specified users rather than general public distribution.

This scenario points to a service organization that wants a general-use report about controls over its system using Trust Services Criteria categories. That is the purpose of a SOC 3 report. SOC 3 reports are designed for broad distribution and do not include the detailed description of the service auditor’s tests and results that appear in a SOC 2 report. A SOC 1 report is different because it addresses controls at a service organization that are relevant to user entities’ internal control over financial reporting. A SOC for Cybersecurity report is also different because it reports on an entity’s overall cybersecurity risk management program, not specifically on a service organization’s system used to provide services to customers.

  • SOC 1 report is tempting because it involves a service organization, but SOC 1 is for controls relevant to user entities’ financial reporting, which is not the stated purpose here.
  • SOC 2 report covers security, availability, and confidentiality, but it is a restricted-use report with detailed test procedures and results, unlike the public report requested.
  • SOC for Cybersecurity report is public-facing, but it addresses an entity-wide cybersecurity risk management program rather than a service organization’s system for customer services.

SOC 3 is a general-use report on a service organization’s controls relevant to the Trust Services Criteria and omits the detailed test descriptions and results included in SOC 2.


Question 68

Topic: Security, Confidentiality and Privacy

During a SOC 2 walkthrough, management states these service commitments and system requirements for an analytics replica of its production database:

  • Confidentiality: customer personal information is accessible only to personnel with a documented business need.
  • Privacy: personal information for terminated customers is deleted within 60 days unless retention is legally required.
  • System requirement: analytics users need customer ID, product usage, and region; they do not need full Social Security numbers or bank account numbers.

Walkthrough observations:

  • All 18 data analysts receive the same shared role, which allows browsing the full replica.
  • The replica includes full Social Security numbers and bank account numbers.
  • No documented process purges terminated-customer records from the replica, and backup retention has not been mapped to the 60-day deletion commitment.
  • Annual security awareness training is performed.

Which response is the best correction to address the design deficiency?

  • A. Maintain the current data set and add monthly vulnerability scans and security event monitoring for the analytics server.
  • B. Restrict replica access to approved need-to-know roles, mask or exclude full Social Security numbers and bank account numbers from analytics use, and implement documented retention and disposal procedures for the replica and backups consistent with the 60-day commitment and any legal retention needs.
  • C. Keep the shared analyst role but require semiannual privacy training and signed confidentiality acknowledgments.
  • D. Shut down the analytics replica until the next SOC 2 period so no further exceptions occur.

Best answer: B

What this tests: Security, Confidentiality and Privacy

Explanation: The problem is a design mismatch between stated SOC 2 commitments and the actual control structure. The best correction is to redesign access, data minimization, and retention/disposal controls so the replica supports confidentiality and privacy commitments if the controls operate as intended.

In a SOC 2 engagement, suitability of design asks whether the controls, as designed, would be capable of meeting the entity’s service commitments and system requirements. Here, the design is deficient because broad shared access conflicts with the confidentiality commitment to limit personal information to those with a business need. The design is also deficient because the replica contains sensitive fields that analytics users do not need, which violates data minimization. Finally, the privacy commitment to delete terminated-customer data within 60 days is unsupported because there is no purge process and backup retention has not been aligned to that commitment. The best remediation is the one that fixes all three design gaps: need-to-know access, minimization of sensitive data, and documented retention/disposal procedures.

  • Keeping broad analyst access and adding training or acknowledgments does not fix the missing need-to-know restriction or the lack of deletion controls.
  • Adding vulnerability scans and monitoring addresses technical security risks, but it does not correct excessive authorized access or unsupported privacy commitments.
  • Shutting down the replica is an overreaction; SOC 2 focuses on whether appropriately designed controls can satisfy commitments, not on eliminating a useful process when targeted remediation is available.
  • The correct remediation is the only option that addresses confidentiality and privacy design issues together.

This option directly aligns the analytics environment with the stated need-to-know, data-minimization, and 60-day deletion requirements.
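
The data-minimization portion of the remediation can be sketched as a filter on the replica feed. This is a hypothetical illustration (the field names and the `minimize` function are invented): only the fields the system requirement names reach analytics, and the sensitive identifiers never land in the replica at all.

```python
# Need-to-know fields per the stated system requirement.
ANALYTICS_FIELDS = {"customer_id", "product_usage", "region"}

def minimize(record: dict) -> dict:
    # Exclude full SSNs and bank account numbers; analytics users do not need them.
    return {k: v for k, v in record.items() if k in ANALYTICS_FIELDS}

production_row = {
    "customer_id": "C-1001",
    "product_usage": 42,
    "region": "NE",
    "ssn": "123-45-6789",
    "bank_account": "987654321",
}

replica_row = minimize(production_row)
assert "ssn" not in replica_row and "bank_account" not in replica_row
assert replica_row == {"customer_id": "C-1001", "product_usage": 42, "region": "NE"}
```

Filtering at the feed is a design-level fix: no amount of training or monitoring on the shared analyst role compensates for sensitive fields that should never be in the replica, and the same allowlist logic makes the need-to-know access commitment testable.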


Question 69

Topic: Considerations for System and Organization Controls Engagements

CloudPay, a payroll SaaS provider, asks a CPA to report on controls using the Trust Services Criteria.

ExhibitDetails
Information handledEmployee names, addresses, Social Security numbers, bank account numbers, and benefit elections
Public commitmentsProvide notice about what personal information is collected, use it only for payroll and benefits administration, allow correction requests, and delete records after the retention period
Controls describedNotice acknowledgment logs, workflow for correction requests, retention schedule, and secure deletion procedures

Which Trust Services Criteria subject matter is most directly supported by this exhibit?

  • A. Privacy
  • B. Confidentiality
  • C. Availability
  • D. Security

Best answer: A

What this tests: Considerations for System and Organization Controls Engagements

Explanation: Privacy is the best answer because the exhibit focuses on personal information and the entity’s commitments about notice, permitted use, correction, retention, and deletion. Those are hallmark privacy subject matters under the Trust Services Criteria.

Under the Trust Services Criteria, a practitioner may report on one or more subject matters, including security, availability, processing integrity, confidentiality, and privacy. Privacy is specifically concerned with personal information and whether it is collected, used, retained, disclosed, and disposed of in line with the entity’s commitments and system requirements. In this exhibit, the key facts are the use of employee personal information plus commitments about notice, limited use, correction requests, retention periods, and secure deletion. Those are classic privacy-oriented elements. Confidentiality can also involve sensitive information, but it focuses more broadly on protecting information designated as confidential, not on the full life-cycle obligations for personal information. Availability and security are different subject matters with different emphasis.

  • Privacy is correct because the exhibit emphasizes personal information handling commitments such as notice, use limitation, correction, retention, and deletion.
  • Confidentiality is tempting because payroll data is sensitive, but the exhibit goes beyond restriction of access and addresses privacy life-cycle obligations.
  • Availability is not supported because the exhibit gives no uptime, recovery, or system-operational commitments.
  • Security is too broad here; the facts point more specifically to privacy controls over personal information.

The exhibit centers on personal information and controls over notice, use, correction, retention, and disposal, which are privacy matters.


Question 70

Topic: Security, Confidentiality and Privacy

During a walkthrough of an entity’s cyber risk program, a CPA notes the following:

  • New hires complete training on phishing, password management, and reporting suspicious emails.
  • Employees receive monthly examples of current scams and reminders about handling confidential data.
  • Staff who click simulated phishing emails must complete brief follow-up coaching.

How should this set of activities be characterized?

  • A. A detective monitoring control that identifies malicious activity through alerts and log review
  • B. A corrective incident response control that focuses on containment and recovery after a breach
  • C. A logical access provisioning control that assigns system permissions based on job duties
  • D. An administrative preventive control that provides security awareness training and reinforces expected user behavior

Best answer: D

What this tests: Security, Confidentiality and Privacy

Explanation: The activities focus on communicating security expectations, improving user knowledge, and reinforcing safe actions such as recognizing phishing and protecting confidential data. That makes them security awareness training, which is an administrative preventive control rather than a detective, corrective, or access provisioning control.

Security awareness training is an administrative control used to communicate security information to personnel so they understand risks and model appropriate behaviors. Typical examples include onboarding training, periodic reminders, phishing simulations, and follow-up coaching when employees make mistakes. These activities are aimed at reducing the likelihood of user-driven security failures, especially social engineering and poor data-handling practices. They do not detect intrusions through system monitoring, restore operations after an incident, or assign user access rights. In this scenario, the entity is using recurring communication and education to improve security knowledge and encourage secure conduct, which is the defining purpose of awareness training.

  • The security awareness training choice fits because the entity is educating users and reinforcing secure behavior before incidents occur.
  • The detective monitoring choice is wrong because nothing in the scenario involves log review, alerting, or identifying actual malicious system activity.
  • The corrective incident response choice is wrong because the activities are not about containment, eradication, or recovery after a confirmed event.
  • The logical access provisioning choice is wrong because the scenario does not involve granting, changing, or removing user permissions.

These activities are designed to educate users before incidents occur and shape secure behavior, which is the purpose of security awareness training.


Question 71

Topic: Security, Confidentiality and Privacy

An auditor is evaluating whether a company followed its incident response standards for a confirmed critical cybersecurity incident. Based on the exhibit, which conclusion is best supported?

Each incident response standard is shown with the evidence from incident 24-017:

Standard: Confirm or dismiss a high-severity alert within 20 minutes of receipt.
Evidence: SIEM alert received 6/3 at 08:10; analyst confirmed account compromise at 08:25.

Standard: Notify the incident commander within 30 minutes after analyst confirmation.
Evidence: Incident commander paged at 09:40 on 6/3.

Standard: Contain the affected account or host within 2 hours after analyst confirmation.
Evidence: Privileged account disabled and active tokens revoked at 09:55 on 6/3.

Standard: Send any required customer notice within 24 hours after legal approval.
Evidence: Legal approved the required customer notice at 17:00 on 6/3; notices sent at 09:30 on 6/4.

Standard: Complete root-cause remediation within 7 calendar days after containment.
Evidence: IAM patch deployed and all privileged credentials rotated at 13:00 on 6/9.

Standard: Complete a post-incident review within 10 calendar days after incident closure.
Evidence: Incident closed at 16:00 on 6/10; post-incident review held at 10:00 on 6/21.

  • A. Escalation, containment, and remediation were timely, and only the post-incident review missed its deadline.
  • B. Identification and escalation were timely, but containment and customer communication were not.
  • C. Identification was not timely because analyst confirmation occurred more than 20 minutes after the alert.
  • D. Identification, containment, communication, and remediation were timely, but escalation and post-incident review were not.

Best answer: D

What this tests: Security, Confidentiality and Privacy

Explanation: The timeline satisfies the standards for alert confirmation, containment, customer notice, and remediation. It does not satisfy the 30-minute escalation requirement or the 10-day post-incident review requirement, so the correct conclusion is the one identifying those two late actions.

To evaluate incident response evidence, compare each documented action to the stated response standard. Here, the alert was confirmed 15 minutes after receipt, so identification was timely. The affected account was disabled 90 minutes after confirmation, so containment was timely. Required customer notice was sent 16.5 hours after legal approval, and remediation was completed less than 7 calendar days after containment, so those steps were timely. However, escalation was late because the incident commander was notified 75 minutes after analyst confirmation, exceeding the 30-minute limit. The post-incident review was also late because it occurred after the 10-calendar-day deadline following closure. The best-supported conclusion is therefore the one that separates the timely actions from the two missed deadlines.

  • The statement that identification and escalation were timely fails because escalation occurred 75 minutes after confirmation, beyond the 30-minute limit.
  • The statement that containment and customer communication were not timely is unsupported; both occurred within their stated deadlines.
  • The statement that only the post-incident review was late overlooks that escalation also missed its deadline.
  • The statement that identification was not timely misreads the timeline; confirmation occurred 15 minutes after the alert, within the 20-minute standard.

The evidence meets the deadlines for alert confirmation, containment, customer notice, and remediation, but escalation took 75 minutes and the post-incident review occurred more than 10 calendar days after closure.
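
The timeline arithmetic can be checked mechanically. This sketch uses the times from the exhibit (all in the same year, which `strptime` defaults for us) and treats each "calendar days" standard as an elapsed-time limit, a simplifying assumption for the illustration:

```python
from datetime import datetime, timedelta

t = lambda s: datetime.strptime(s, "%m/%d %H:%M")   # times from incident 24-017

checks = {
    # name: (clock starts, clock stops, stated limit)
    "identification":  (t("6/3 08:10"),  t("6/3 08:25"),  timedelta(minutes=20)),
    "escalation":      (t("6/3 08:25"),  t("6/3 09:40"),  timedelta(minutes=30)),
    "containment":     (t("6/3 08:25"),  t("6/3 09:55"),  timedelta(hours=2)),
    "customer notice": (t("6/3 17:00"),  t("6/4 09:30"),  timedelta(hours=24)),
    "remediation":     (t("6/3 09:55"),  t("6/9 13:00"),  timedelta(days=7)),
    "post-incident":   (t("6/10 16:00"), t("6/21 10:00"), timedelta(days=10)),
}

late = sorted(name for name, (start, end, limit) in checks.items()
              if end - start > limit)
assert late == ["escalation", "post-incident"]
```

Working from explicit start/stop anchors is also how to avoid the trap in the distractors: each interval is measured from the event the standard names (alert receipt, analyst confirmation, legal approval, containment, closure), not from a single incident clock.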


Question 72

Topic: Information Systems and Data Management

A company uses the following technology environment:

  • Its general ledger application runs on virtual servers in a hosting provider’s facility, but the servers, storage, and network segment are reserved for this company only.
  • The company defines the security configuration and access rules for that reserved environment.
  • The company also uses a provider’s internet-accessible SaaS expense platform that is shared with many unrelated customers.
  • User authentication and nightly data transfers connect the reserved environment and the SaaS platform.

Which cloud deployment model best describes the company’s overall environment?

  • A. Private cloud deployment
  • B. Multicloud architecture
  • C. Public cloud deployment
  • D. Hybrid cloud deployment

Best answer: D

What this tests: Information Systems and Data Management

Explanation: Hybrid cloud deployment is correct because the company uses both a private-cloud environment reserved for its exclusive use and a public-cloud SaaS service shared with other customers. The connected authentication and data transfers show the two environments operate together.

Cloud deployment models are distinguished mainly by who has access to the infrastructure and how control is allocated. A private cloud is dedicated to a single organization, even if a third party hosts it, so exclusive-use servers and company-defined security settings point to private cloud. A public cloud serves multiple customers on shared infrastructure, so the shared SaaS expense platform is public cloud. When an organization uses both private and public cloud resources as part of one connected environment, the overall deployment model is hybrid cloud. The integration facts matter here because they show the private and public portions are part of the same operating model rather than unrelated services.

  • Public cloud deployment is too narrow because the general ledger environment is reserved for one company rather than shared among many customers.
  • Private cloud deployment is incomplete because the company also relies on a shared SaaS platform delivered over the internet.
  • Hybrid cloud deployment fits because exclusive-use cloud resources and shared cloud services are both present and connected.
  • Multicloud architecture is tempting, but the decisive distinction here is the combination of private and public cloud, not simply the use of more than one cloud service.

The environment combines a private cloud reserved for one company with a shared public cloud service that is connected for authentication and data exchange.


Question 73

Topic: Information Systems and Data Management

An accounting department uses the following components:

  • Employee laptops used by staff to enter invoices and approve journal entries
  • A database server that stores ERP transaction data
  • Windows Server software that manages system resources for the ERP application
  • Routers and switches that connect the office LAN to the ERP system and other networked resources

How should the routers and switches be characterized in this environment?

  • A. Operating system
  • B. Server
  • C. Network infrastructure
  • D. End-user device

Best answer: C

What this tests: Information Systems and Data Management

Explanation: The routers and switches are network infrastructure because their purpose is to connect devices and manage network traffic across the accounting environment. They are not the software layer, the data-processing host, or the user-facing device.

In an accounting environment, different IT architecture components serve different roles. End-user devices are the laptops or workstations employees use to access applications and enter data. Servers host applications or data and provide shared processing or storage. Operating systems manage hardware and software resources so applications can run. Network infrastructure includes components such as routers and switches that transmit, segment, and direct traffic between devices, servers, and external resources. Because the scenario describes routers and switches connecting the office LAN to the ERP system and other resources, the correct characterization is network infrastructure.

  • Operating system is incorrect because an operating system is system software, such as Windows Server, not the physical or virtual devices that move network traffic.
  • Server is incorrect because a server provides shared compute, application hosting, or storage services rather than routing and switching communications.
  • End-user device is incorrect because laptops used by accounting staff are end-user devices; routers and switches are not directly used to input or approve transactions.

Routers and switches are network infrastructure because they provide connectivity and direct data traffic among systems, users, and applications.


Question 74

Topic: Considerations for System and Organization Controls Engagements

A service organization requests a SOC 2 Type 1 report with these engagement facts:

  • Applicable trust services categories: Security and Confidentiality
  • Report date: December 31, 20X5

Draft management assertion excerpt:

“Management asserts that, throughout the period January 1 through December 31, 20X5, the description fairly presents the system used to collect, use, retain, disclose, and dispose of personal information, and the related controls were suitably designed and operated effectively to provide reasonable assurance that personal information was handled in conformity with the entity’s privacy notice.”

Which interpretation is best?

  • A. The draft is inappropriate because it combines privacy-specific subject matter with Type 2 period and operating-effectiveness language in a SOC 2 Type 1 engagement limited to security and confidentiality.
  • B. The draft is appropriate because Type 1 versus Type 2 changes only the practitioner’s testing and opinion, not management’s assertion.
  • C. The draft is appropriate because confidentiality includes handling personal information under a privacy notice, and a Type 1 report may assert operating effectiveness throughout the year.
  • D. The draft is inappropriate only because SOC 2 reports must cover security alone; confidentiality and privacy would require separate reports.

Best answer: A

What this tests: Considerations for System and Organization Controls Engagements

Explanation: The draft assertion mixes the wrong subject matter and the wrong report type. A SOC 2 Type 1 on security and confidentiality should be as of a specified date and address fair presentation and suitability of design, not operating effectiveness throughout a period or compliance with a privacy notice for personal information.

In SOC 2, management’s assertion depends on both the trust services categories included and whether the report is Type 1 or Type 2. Security and confidentiality focus on protecting the system and information designated as confidential based on service commitments and system requirements. Privacy is a separate category and is tied to personal information being collected, used, retained, disclosed, and disposed of in line with privacy commitments or a privacy notice. Type 1 is point-in-time reporting as of a specified date and addresses the fairness of the description and the suitability of control design. Type 2 adds whether controls operated effectively throughout a period. Here, the excerpt uses privacy language and asserts operating effectiveness throughout the year, so it does not match a SOC 2 Type 1 engagement limited to security and confidentiality.

  • Treating confidentiality as the same as privacy is the main trap; privacy is the category tied to personal information and privacy-notice commitments.
  • Saying SOC 2 must cover security alone is incorrect; security is required, but additional categories such as confidentiality, availability, processing integrity, and privacy may also be included.
  • Saying Type 1 and Type 2 differ only in the practitioner’s work is incorrect; management’s assertion also changes because Type 2 includes operating effectiveness over a period.

The excerpt uses privacy-specific wording and Type 2 operating-effectiveness language, which does not fit a SOC 2 Type 1 assertion for security and confidentiality as of a date.


Question 75

Topic: Information Systems and Data Management

A manufacturer uses the following process:

  • The warehouse system records each shipment.
  • A cloud integration platform receives shipment records, transforms them, and sends them to the ERP.
  • The ERP creates customer invoices only for shipment records successfully processed by the integration platform.
  • If the integration platform is down, shipping continues and records wait in the platform queue.

How should the integration platform and its primary monitoring implication be characterized?

  • A. A disaster-recovery repository requiring monitoring of backup restores to ensure invoice data availability.
  • B. A key interface-processing component requiring monitoring of failed or queued messages to ensure shipment-to-invoice completeness.
  • C. A preventive input-edit control requiring monitoring of rejected field values to ensure valid shipment entry.
  • D. A master-data storage component requiring monitoring of customer table changes to ensure master-file accuracy.

Best answer: B

What this tests: Information Systems and Data Management

Explanation: The integration platform is a key supporting architecture component in the shipment-to-invoice flow. Since invoicing depends on successful interface processing, the most important monitoring is for failed, delayed, or queued messages that could lead to incomplete billing.

When an integration or middleware component sits between an operational system and the ERP, it is a critical architecture dependency for downstream processing. In this scenario, shipments are recorded in the warehouse system, but invoices are created only after the cloud integration platform transforms and sends those records to the ERP. That means an outage or processing failure in the platform can allow operations to continue while financial processing becomes incomplete or delayed. The primary control implication is therefore interface monitoring: reviewing failed transmissions, queue backlogs, and reconciliations between source shipments and generated invoices. This addresses transaction completeness. The scenario does not describe a master-data repository, an input edit check, or a backup environment.

  • Monitoring customer table changes fits a master-data repository, but the platform described transports transaction data rather than maintaining reference data.
  • Monitoring rejected field values fits an input-edit control at data capture, but shipment data has already been recorded before the platform processes it.
  • Monitoring backup restores fits disaster recovery, but the immediate risk here is omitted or delayed invoicing from interface failure, not restoration after data loss.

Because ERP invoices are generated only after the platform successfully passes shipment records, the main monitoring need is for failed or queued interface messages that could cause incomplete invoicing.
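The shipment-to-invoice completeness check described above can be sketched in Python. This is an illustrative sketch only; the field names (`shipment_id`, `status`) and the queue statuses are assumptions, not details from the scenario.

```python
# Illustrative sketch: flag interface gaps between shipped records and ERP
# invoices. Field names and statuses are hypothetical, not from the scenario.

def find_interface_gaps(warehouse_shipments, erp_invoiced_ids, platform_queue):
    """Return shipment IDs at risk of incomplete invoicing."""
    shipped_ids = {s["shipment_id"] for s in warehouse_shipments}
    # Shipments with no matching invoice in the ERP.
    uninvoiced = shipped_ids - set(erp_invoiced_ids)
    # Messages still waiting or failed in the integration platform.
    stuck = {m["shipment_id"] for m in platform_queue
             if m["status"] in ("queued", "failed")}
    return {"uninvoiced": sorted(uninvoiced), "stuck": sorted(stuck & uninvoiced)}

shipments = [{"shipment_id": i} for i in (101, 102, 103, 104)]
invoiced = [101, 103]
queue = [{"shipment_id": 102, "status": "failed"},
         {"shipment_id": 104, "status": "queued"}]
print(find_interface_gaps(shipments, invoiced, queue))
# -> {'uninvoiced': [102, 104], 'stuck': [102, 104]}
```

In practice this kind of comparison is what an interface-monitoring report or a daily shipments-to-invoices reconciliation automates: any shipment with no invoice and a queued or failed message is a completeness exception to investigate.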

Questions 76-82

Question 76

Topic: Information Systems and Data Management

A distributor’s ERP generates a customer invoice only after it receives an electronic shipment confirmation from the warehouse system. During a two-day network outage, warehouse staff shipped goods using manual bills of lading and entered shipment confirmations after service was restored. Management wants to address the main AIS risk created by the outage. Which procedure is most appropriate?

  • A. Reconcile vendor invoices to purchase orders and receiving reports for the outage period.
  • B. Reconcile approved employee timecards to payroll disbursements for the outage period.
  • C. Reconcile the manual bills of lading to the sales invoices and accounts receivable postings created after the outage.
  • D. Reconcile bank lockbox deposits to cash receipts postings for the outage period.

Best answer: C

What this tests: Information Systems and Data Management

Explanation: The key AIS dependency is that shipment confirmation triggers billing and A/R posting. When shipments occur manually during an outage, the main risk is that some shipped orders will not be billed or will be recorded inaccurately, so reconciling shipping documents to invoices and A/R is the best response.

In an integrated accounting information system, downstream sales processing often depends on a system event from another module. Here, the warehouse system’s shipment confirmation is the event that causes the ERP to generate the customer invoice and update accounts receivable. Because shipments were made manually during the outage, the main risk is incomplete or inaccurate capture of those shipments once the system is restored. Manual bills of lading are the source record of what actually shipped, so reconciling them to invoices and A/R postings is the most direct way to identify omitted or duplicated transactions. Procedures over lockbox deposits, purchasing documents, or payroll records address different business processes and would not resolve the specific sales-cycle risk caused by the failed shipment interface.

  • Reconciling bank lockbox deposits to cash receipts postings addresses the cash collections process, but the outage affected shipment-to-billing flow before collection.
  • Reconciling vendor invoices to purchase orders and receiving reports is a purchasing and disbursements control, not a sales-cycle response.
  • Reconciling timecards to payroll disbursements tests payroll accuracy, which is unrelated to the missing shipment confirmation trigger.
  • Reconciling manual bills of lading to invoices and A/R postings is the only procedure tied directly to the interrupted sales and billing process.

Because shipment confirmation triggers billing in the AIS, matching manual shipping documents to later invoices and A/R postings directly tests whether all shipped orders were recorded.
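The reconciliation in the best answer can be expressed as a simple matching routine. This is a hedged sketch under assumed inputs; the document numbers and data shapes are hypothetical, and it flags both omitted and duplicated billings as the explanation describes.

```python
# Illustrative sketch: match manual bills of lading to post-outage invoices,
# flagging omitted and duplicated billings. Document numbers are hypothetical.
from collections import Counter

def reconcile_bols_to_invoices(bol_numbers, invoice_bol_refs):
    """Return (BOLs never invoiced, BOLs invoiced more than once)."""
    invoice_counts = Counter(invoice_bol_refs)
    omitted = [b for b in bol_numbers if invoice_counts[b] == 0]
    duplicated = [b for b, n in invoice_counts.items() if n > 1]
    return omitted, duplicated

bols = ["BOL-1", "BOL-2", "BOL-3"]
invoices = ["BOL-1", "BOL-3", "BOL-3"]             # BOL-2 never billed; BOL-3 billed twice
print(reconcile_bols_to_invoices(bols, invoices))  # -> (['BOL-2'], ['BOL-3'])
```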


Question 77

Topic: Security, Confidentiality and Privacy

An entity is testing a security control over unusual privileged-access activity.

Control description:

  • Each business day at 6:00 a.m., the SIEM generates a report of privileged-login anomalies.
  • By the end of the next business day, a security analyst must review the report and open an incident ticket for each anomaly.

Walkthrough and test results for 20 business days:

  • SIEM report generated on 20 of 20 days.
  • Evidence of analyst review was present on 20 of 20 days.
  • Anomalies appeared on 5 days.
  • Tickets were opened by the end of the next business day on 3 of those 5 days; on the other 2 days, tickets were opened 3 days later.

Which conclusion is best supported by these results?

  • A. The control appears suitably designed, but it did not operate effectively throughout the test period because anomaly tickets were not opened within the required time frame.
  • B. The control operated effectively because all daily reports were reviewed and every anomaly was eventually documented.
  • C. The control is not suitably designed because automated SIEM reporting cannot be used for privileged-access monitoring.
  • D. The results support a conclusion only about report generation completeness, not about response timeliness.

Best answer: A

What this tests: Security, Confidentiality and Privacy

Explanation: The facts support a design-versus-operation conclusion. The control structure appears appropriate because anomalies are identified daily and assigned for follow-up, but the required next-business-day response failed on 2 of 5 anomaly days, so the control did not operate effectively throughout the period tested.

Security control testing often distinguishes whether a control is designed appropriately from whether it operated effectively during the period tested. Here, the walkthrough shows a reasonable design: the SIEM produces a daily anomaly report, the analyst reviews it, and the process requires prompt ticket creation for investigation. The test results show the automated report and analyst review occurred consistently, which supports that the control exists and is being performed. However, timely response is part of the control, not an optional extra step. Because incident tickets were opened late on 2 of the 5 days with anomalies, the control did not operate as prescribed throughout the tested period. The best conclusion is therefore an operating effectiveness problem, not a design failure.

  • Saying the control is not suitably designed overstates the evidence; the walkthrough shows a logical design using automated detection plus required analyst follow-up.
  • Saying the control operated effectively because anomalies were eventually documented ignores the explicit next-business-day requirement.
  • Limiting the conclusion to report generation completeness misses the direct evidence about delayed ticket creation, which tests response performance.

The walkthrough supports the design, but the late ticket creation on 2 of 5 anomaly days shows the control did not operate as prescribed throughout the period.
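The next-business-day test applied above can be made mechanical. The sketch below is illustrative, assumes a Monday-Friday business calendar with no holidays, and uses hypothetical dates rather than the exam's 20-day sample.

```python
# Illustrative sketch: test the next-business-day ticketing requirement from
# the control description. Assumes a Mon-Fri calendar; dates are hypothetical.
from datetime import date, timedelta

def next_business_day(d: date) -> date:
    """Return the first weekday after d (Saturday=5, Sunday=6 are skipped)."""
    d += timedelta(days=1)
    while d.weekday() >= 5:
        d += timedelta(days=1)
    return d

def late_tickets(anomalies):
    """anomalies: list of (report_date, ticket_opened_date) pairs."""
    return [(r, t) for r, t in anomalies if t > next_business_day(r)]

sample = [
    (date(2025, 6, 2), date(2025, 6, 3)),  # Mon report, Tue ticket: on time
    (date(2025, 6, 4), date(2025, 6, 9)),  # Wed report, next-Mon ticket: late
]
print(late_tickets(sample))  # -> [(datetime.date(2025, 6, 4), datetime.date(2025, 6, 9))]
```

Running this kind of comparison across every anomaly day is exactly how the 2-of-5 late exceptions in the scenario would surface as operating-effectiveness deviations.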


Question 78

Topic: Security, Confidentiality and Privacy

A company gives remote contractors company-managed laptops and MFA-protected VPN access. One contractor was hired only to update vendor records in the accounts payable application. After connecting, the contractor can still scan most internal subnets and open shared folders containing payroll, tax, and legal files. Security monitoring shows no malware and no failed logins.

Which remediation best addresses this weakness?

  • A. Replace the broad VPN with zero-trust, application-specific access and restrict permissions to only the vendor-record functions and files required for the assignment.
  • B. Retain current access and shorten VPN session duration while requiring more frequent password changes.
  • C. Retain current access and enforce application whitelisting so only approved software can run on the laptop.
  • D. Retain the broad VPN but require a stricter confidentiality acknowledgment before each remote session.

Best answer: A

What this tests: Security, Confidentiality and Privacy

Explanation: The problem is excessive remote access, not malware or failed authentication. The best correction is to stop granting broad network trust through the VPN and instead allow only the specific application, functions, and data the contractor needs to perform the assigned work.

Least privilege means giving a user only the minimum permissions needed to perform assigned duties. Need-to-know narrows that further by limiting access to only the data required for the task. Zero trust avoids assuming that a user should be broadly trusted just because the user is on the VPN; access should be granted per resource and per role. In this scenario, the contractor needs only vendor-record updates in the accounts payable application, so access to internal subnets and payroll, tax, and legal folders is excessive. The best remediation is to replace broad VPN access with application-specific remote access and to restrict both functions and files to the contractor’s assignment. Application whitelisting is useful for controlling what software can run, but it does not solve overbroad data and network permissions.

  • Requiring a stronger confidentiality acknowledgment is an administrative step, but it does not technically reduce the contractor’s excessive access.
  • Enforcing application whitelisting helps control executable software on the endpoint, not which systems or files an authorized user can reach.
  • Shorter VPN sessions and more frequent password changes may affect session or credential management, but they leave the overbroad permissions unchanged.

This removes implicit trust from network connectivity and applies least privilege and need-to-know to both system access and data access.
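A deny-by-default, per-role resource grant like the one described can be sketched in a few lines. The role and resource names below are hypothetical, not from the scenario; the point is only that authorization is evaluated per resource rather than implied by network connectivity.

```python
# Illustrative sketch: deny-by-default, per-role resource allowlist in place of
# broad network trust. Role and resource names are hypothetical.

ROLE_ALLOWLIST = {
    # The contractor's role grants only the vendor-record function it needs.
    "ap_vendor_contractor": {"ap_app/vendor_records"},
}

def authorize(role: str, resource: str) -> bool:
    """Allow only resources explicitly granted to the role; deny everything else."""
    return resource in ROLE_ALLOWLIST.get(role, set())

print(authorize("ap_vendor_contractor", "ap_app/vendor_records"))  # -> True
print(authorize("ap_vendor_contractor", "shares/payroll"))         # -> False
```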


Question 79

Topic: Considerations for System and Organization Controls Engagements

Nimbus Hosting engaged a CPA firm for a SOC 2 Type 2 report.

  • Period covered: Jan 1-Dec 31, 20X5
  • Report date: Feb 15, 20X6
  • Distribution to customers: Feb 20, 20X6
  • Fact learned on Mar 10, 20X6: multifactor authentication for privileged administrators was disabled from Nov 5-Dec 20, 20X5 during a migration
  • If this fact had been known during the engagement, the service auditor believes the testing results and report could have changed

Which interpretation is most appropriate?

  • A. It is a subsequent event, so the original SOC 2 report is unaffected because the service auditor learned of it after the report date.
  • B. It is immaterial to the original SOC 2 report because the service organization can remediate the problem after discovery.
  • C. It is grounds for automatically withdrawing the SOC 2 report and replacing it with an adverse report without further evaluation.
  • D. It is a subsequently discovered fact, so the service auditor should assess whether the previously issued SOC 2 report needs revision and whether users should be informed.

Best answer: D

What this tests: Considerations for System and Organization Controls Engagements

Explanation: The MFA failure existed during the covered period and before the report date, but the service auditor learned of it only afterward. That makes it a subsequently discovered fact. When such a fact likely would have affected the report, the service auditor must reconsider the issued SOC report and possible user communication.

In a SOC 1 or SOC 2 engagement, a subsequently discovered fact is information that existed at the report date but was not known to the service auditor at that time. If the fact, had it been known, likely would have affected the report, the matter is not ignored just because it was discovered later. The service auditor should discuss the matter with management, evaluate whether the report should be revised, and consider what communication to report users is needed. If management does not take appropriate action, the service auditor may need to take steps to prevent further reliance on the report. This differs from a subsequent event, which involves something occurring after the period or after the report date rather than a preexisting undiscovered condition.

  • Treating the matter as a subsequent event is incorrect because the control failure existed before the report date; only the discovery happened later.
  • Saying remediation makes the matter irrelevant is incorrect because the report covers the past period when MFA was disabled.
  • Assuming automatic withdrawal and automatic adverse replacement goes too far; the service auditor first evaluates the effect and appropriate response.
  • Identifying the matter as a subsequently discovered fact is correct because it may have changed testing results and the issued report if known earlier.

The control failure existed before the report date and could have affected the report, which makes it a subsequently discovered fact requiring reconsideration of the issued report.


Question 80

Topic: Information Systems and Data Management

A CPA is reviewing the design of a company’s billing-to-analytics data flow.

  • Reporting purpose: State-level churn and monthly renewal trend dashboards
  • Source system: Billing application
  • Columns copied nightly to analytics warehouse: customer_id, state, plan_tier, renewal_date, email, date_of_birth, full_bank_account_number
  • Warehouse protections: SSO with MFA; encrypted at rest
  • Access: 18 marketing analysts have read access to the extracted table
  • Retention in warehouse: Indefinite; no purge job configured
  • Source-system business need: Full bank account number is needed only until payment authorization is confirmed

Based on the exhibit, which conclusion is best supported?

  • A. The warehouse needs real-time replication because nightly extraction cannot support the stated dashboards.
  • B. The warehouse needs stronger authentication because analysts can read the table without multifactor authentication.
  • C. The warehouse needs encryption at rest because bank account data is stored unencrypted.
  • D. The warehouse needs data minimization and purge controls because the extract copies unnecessary sensitive fields and keeps them indefinitely.

Best answer: D

What this tests: Information Systems and Data Management

Explanation: The stated dashboards need trend and state-level reporting, but the extract also stores email, date of birth, and full bank account numbers. Because those sensitive fields are not needed for the reporting purpose and the warehouse retains them indefinitely, the best-supported conclusion is a data minimization and lifecycle control need.

When data is extracted into an analytics store, schema design and lifecycle controls should limit copied data to the fields required for the stated business purpose and remove it when no longer needed. Here, the reporting purpose is state-level churn and monthly renewal trends, yet the nightly extract also includes email, date of birth, and full bank account number. The exhibit further states that the warehouse keeps the data indefinitely and that full bank account numbers are only needed in the source system until payment authorization is confirmed. Those facts indicate excessive extraction and excessive retention of sensitive data. SSO with MFA and encryption at rest are already present, so the strongest conclusion is not an authentication or storage-encryption gap, but a need to reduce the schema to necessary fields and apply retention or purge controls.

  • Stronger authentication is not the best conclusion because the exhibit explicitly states SSO with MFA.
  • Encryption at rest is already in place, so storage protection does not address the separate problem of unnecessary sensitive data being copied and retained.
  • Nightly extraction does not by itself make the dashboards unsupported; monthly trend and state-level reporting can reasonably use a nightly load.

The exhibit shows unnecessary sensitive fields in the analytics schema and no retention limit, creating a clear data minimization and lifecycle control gap.
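The two fixes identified above, a minimized extract schema and a retention purge, can be sketched directly. The column names come from the exhibit; the retention period and the `loaded` timestamp field are hypothetical assumptions for illustration.

```python
# Illustrative sketch: restrict the nightly extract to fields the dashboards
# need, and purge rows past a retention limit. Column names are from the
# exhibit; the retention period and "loaded" field are hypothetical.
from datetime import date, timedelta

NEEDED_COLUMNS = {"customer_id", "state", "plan_tier", "renewal_date"}

def minimize(row: dict) -> dict:
    """Drop fields (email, DOB, bank account) the stated dashboards don't need."""
    return {k: v for k, v in row.items() if k in NEEDED_COLUMNS}

def purge(rows, loaded_key, today, retention_days=730):
    """Keep only rows loaded within the retention window."""
    cutoff = today - timedelta(days=retention_days)
    return [r for r in rows if r[loaded_key] >= cutoff]

row = {"customer_id": 7, "state": "OH", "plan_tier": "pro",
       "renewal_date": "2025-09-01", "email": "x@example.com",
       "date_of_birth": "1990-01-01", "full_bank_account_number": "000111222"}
print(minimize(row))
# -> {'customer_id': 7, 'state': 'OH', 'plan_tier': 'pro', 'renewal_date': '2025-09-01'}
```

Applying `minimize` at extract time removes the sensitive fields before they ever reach the warehouse, and a scheduled `purge` job closes the indefinite-retention gap the exhibit describes.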


Question 81

Topic: Considerations for System and Organization Controls Engagements

A CPA is performing annual vendor oversight for a client that uses a cloud benefits portal to store employees’ SSNs and bank account data as of 12/31/20X5. The vendor sends only this SOC 2 Type 2 excerpt:

  • Period covered: 1/1/20X5-9/30/20X5
  • Trust services criteria covered: Security
  • Complementary user entity control: The client performs a quarterly privileged-access review of portal administrators

The CPA needs evidence over confidentiality controls through 12/31/20X5, and the client has not documented the Q4 privileged-access review.

What should the CPA do next?

  • A. Rely on the excerpt because Type 2 testing of security also supports confidentiality through year-end if no breach was reported.
  • B. Treat the quarterly privileged-access review as the vendor’s responsibility because it appears in the SOC report excerpt.
  • C. Obtain a written representation from the client that the vendor’s controls did not change after 9/30/20X5 and conclude the excerpt is sufficient.
  • D. Request the full SOC 2 report and additional evidence that covers confidentiality and 10/1/20X5-12/31/20X5, and verify the client performed the complementary user entity control.

Best answer: D

What this tests: Considerations for System and Organization Controls Engagements

Explanation: The excerpt is not sufficient because it ends at 9/30, addresses only security, and assumes the client performed a complementary user entity control. The CPA should obtain evidence for the missing period and confidentiality objective and confirm the user entity control operated.

A SOC report supports reliance only for the period, criteria, and controls actually covered, and only when relevant complementary user entity controls are in place. Here, the excerpt stops before year-end, omits the confidentiality criterion the CPA needs, and lists a quarterly privileged-access review that the client has not documented. The proper follow-up is to obtain the full report and other evidence targeted to the gaps, such as evidence covering 10/1/20X5-12/31/20X5 and confidentiality-related controls, and to verify the client performed the stated complementary user entity control. Security coverage does not automatically satisfy confidentiality, and a representation or absence of reported breaches does not replace missing scoped evidence.

  • Relying on security testing alone fails because confidentiality is a separate criterion and the excerpt does not extend through year-end.
  • Treating the privileged-access review as the vendor’s responsibility is incorrect because complementary user entity controls must be performed by the user entity.
  • A written representation about unchanged controls is not sufficient substitute evidence for a missing period or missing criterion.

The excerpt does not cover the needed criterion, the gap period, or operation of the complementary user entity control, so additional targeted evidence is required before reliance.


Question 82

Topic: Security, Confidentiality and Privacy

During an ISC review, a CPA is investigating a payroll confidentiality incident. The following facts are known:

  • The payroll application requires a user ID, password, and MFA.
  • A payroll supervisor was terminated effective June 12.
  • HR records show the termination was finalized on June 12.
  • The supervisor successfully logged in on June 14 and viewed payroll reports.
  • The original access request and manager approval for the payroll role are on file.
  • A quarterly user access review for the payroll application was completed on May 31.

What should the CPA do next to evaluate the most relevant control?

  • A. Trace the June 12 termination record to evidence that the supervisor’s account and payroll access were disabled promptly.
  • B. Inspect MFA settings and login logs to confirm the supervisor authenticated with two factors.
  • C. Reperform the May 31 quarterly access review for payroll users to confirm role appropriateness.
  • D. Review the original onboarding ticket used to create the supervisor’s account and assign the payroll role.

Best answer: A

What this tests: Security, Confidentiality and Privacy

Explanation: The key issue is not whether the user could prove identity, but why access still existed after termination. The next step is to test access revocation by tracing the termination event to timely disabling of the account and related privileges.

Authentication verifies who the user is, while authorization determines what the user is allowed to do. Access provisioning creates or grants that access, access review periodically reassesses whether access remains appropriate, and access revocation removes access when it is no longer needed. Here, the terminated supervisor successfully logged in two days after HR finalized the termination, so the immediate control concern is failed revocation. The facts already indicate the application had authentication controls and that original access approval existed. A prior quarterly access review also does not replace the need to revoke access promptly when employment ends. The most relevant next procedure is therefore to trace the termination record to evidence that the user’s account and payroll privileges were disabled on a timely basis.

  • Inspecting MFA addresses authentication, but the incident already shows the user could still sign in; the likely failure is not identity verification.
  • Reviewing the onboarding ticket addresses provisioning and initial approval, which are hire-stage controls rather than post-termination removal.
  • Reperforming the prior quarterly access review addresses periodic access review, but it does not substitute for immediate termination-based revocation.

Because a terminated user retained access after the termination date, the next step is to test the access revocation process tied to termination.
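The tracing procedure in the best answer, from termination record to disable evidence, amounts to a timeliness comparison. The sketch below is illustrative; the one-day revocation threshold and the field shapes are assumptions, not facts from the scenario.

```python
# Illustrative sketch: trace HR termination records to account-disable evidence
# and flag revocations that were missing or late. The one-day threshold and
# data shapes are hypothetical assumptions.
from datetime import date

def late_revocations(terminations, disable_log, max_days=1):
    """terminations: {user: termination_date}; disable_log: {user: disabled_date}.
    Returns (user, termination_date, disabled_date_or_None) for each exception."""
    flags = []
    for user, term_date in terminations.items():
        disabled = disable_log.get(user)
        if disabled is None or (disabled - term_date).days > max_days:
            flags.append((user, term_date, disabled))
    return flags

terms = {"payroll_sup": date(2025, 6, 12)}
log = {}                             # no disable record on file
print(late_revocations(terms, log))  # -> [('payroll_sup', datetime.date(2025, 6, 12), None)]
```

A missing or late entry, like the supervisor in the scenario, is direct evidence that the termination-driven revocation control failed to operate.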

Continue with full practice

Use the CPA ISC Practice Test page for the full practice route: timed mock exams, mixed-topic practice, topic drills, progress tracking, and explanations.

Free review resource

Read the CPA ISC guide on CPAExamsMastery.com for concept review, then return here for Mastery Exam Prep practice.

Revised on Wednesday, May 13, 2026