Try 10 focused Certified Public Accountant Information Systems and Controls (CPA ISC) questions on system flow, data governance, data reliability, processing, and IT control context.
CPA means Certified Public Accountant. ISC means Information Systems and Controls. Use this focused page when your CPA ISC misses are about system flow, source data, interfaces, processing, data reliability, or IT control context. Drill this topic before returning to mixed practice.
| Field | Detail |
|---|---|
| Exam route | CPA ISC |
| Issuer | American Institute of Certified Public Accountants (AICPA) |
| Topic area | Information Systems and Data Management |
| Blueprint weight | 40% |
| Page purpose | Systems-flow practice for source data, processing, interfaces, IT controls, and data reliability |
This topic tests whether you can connect business processes, information systems, data flow, and control objectives. Strong answers identify where data is created, changed, stored, transmitted, reconciled, and relied on.
Draw a simple mental flow: source, input, processing, storage, output, and use. Then ask which risk threatens the financial or operational objective. The best answer usually controls the exact point where the risk enters the system.
Use this page to isolate Information Systems and Data Management for CPA ISC. Work through the 10 questions first, then review the explanations and return to mixed practice in Mastery Exam Prep.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 40% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original Mastery Exam Prep practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Information Systems and Data Management
A distributor’s ERP generates a customer invoice only after it receives an electronic shipment confirmation from the warehouse system. During a two-day network outage, warehouse staff shipped goods using manual bills of lading and entered shipment confirmations after service was restored. Management wants to address the main AIS risk created by the outage. Which procedure is most appropriate?
Best answer: A
What this tests: Information Systems and Data Management
Explanation: The key AIS dependency is that shipment confirmation triggers billing and A/R posting. When shipments occur manually during an outage, the main risk is that some shipped orders will not be billed or will be recorded inaccurately, so reconciling shipping documents to invoices and A/R is the best response.
In an integrated accounting information system, downstream sales processing often depends on a system event from another module. Here, the warehouse system’s shipment confirmation is the event that causes the ERP to generate the customer invoice and update accounts receivable. Because shipments were made manually during the outage, the main risk is incomplete or inaccurate capture of those shipments once the system is restored. Manual bills of lading are the source record of what actually shipped, so reconciling them to invoices and A/R postings is the most direct way to identify omitted or duplicated transactions. Procedures over lockbox deposits, purchasing documents, or payroll records address different business processes and would not resolve the specific sales-cycle risk caused by the failed shipment interface.
Because shipment confirmation triggers billing in the AIS, matching manual shipping documents to later invoices and A/R postings directly tests whether all shipped orders were recorded.
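The reconciliation described above is a straightforward matching exercise. A minimal sketch (record layouts and order numbers are illustrative assumptions, not a real ERP schema):

```python
# Match manual bills of lading from the outage window to invoices generated
# after the shipment interface was restored. Field names are illustrative.

def reconcile_shipments(bills_of_lading, invoices):
    """Return order numbers that shipped but were never billed,
    and order numbers that were billed more than once."""
    shipped = [b["order_no"] for b in bills_of_lading]
    billed = [i["order_no"] for i in invoices]
    unbilled = sorted(set(shipped) - set(billed))
    duplicates = sorted({o for o in billed if billed.count(o) > 1})
    return unbilled, duplicates

bols = [{"order_no": "SO-101"}, {"order_no": "SO-102"}, {"order_no": "SO-103"}]
invs = [{"order_no": "SO-101"}, {"order_no": "SO-102"}, {"order_no": "SO-102"}]

unbilled, dupes = reconcile_shipments(bols, invs)
print(unbilled)  # shipped orders with no invoice (completeness exception)
print(dupes)     # orders invoiced more than once (accuracy exception)
```

Both exception lists map directly to the assertions at risk: the unbilled list tests completeness of billing, and the duplicate list tests accuracy of the catch-up entries.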
Topic: Information Systems and Data Management
During an internal control review, a CPA notes the following about a monthly HR analytics process:
Which action is the most appropriate data management control response?
Best answer: A
What this tests: Information Systems and Data Management
Explanation: The main risk is not just storage security; it is that the process extracts more sensitive data than the dashboard needs and retains it longer than necessary. A purpose-limited extract with automated deletion best aligns the data extract and lifecycle with the business use.
When an analytics process uses only a small subset of fields, exporting an entire production table that includes SSNs and bank account numbers creates unnecessary data exposure. Keeping those files on local laptops after the dashboard refresh adds a lifecycle problem because the sensitive data persists beyond its business purpose. The strongest control response is to redesign the extract so it includes only the fields required for the dashboard and to enforce automatic deletion once the refresh is complete. Encryption helps protect stored files, but by itself it does not reduce over-collection or over-retention. Schema normalization may improve database design, but it does not solve the immediate risk created by extracting unnecessary sensitive data into CSV files.
This addresses both unnecessary data extraction and excessive retention by applying data minimization and lifecycle control to the analytics process.
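The control response can be sketched as code: extract only the fields the dashboard needs, then delete the working file once the refresh completes. Field names, file handling, and the refresh callback are illustrative assumptions, not the scenario's actual process:

```python
# Purpose-limited extract with automated lifecycle deletion.
import csv
import os
import tempfile

# Data minimization: the extract excludes SSNs and bank account numbers.
DASHBOARD_FIELDS = ["employee_id", "department", "hire_date"]

def run_refresh(employee_rows, build_dashboard):
    """Build the dashboard from a minimal extract, then delete the extract."""
    fd, path = tempfile.mkstemp(suffix=".csv")
    try:
        with os.fdopen(fd, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=DASHBOARD_FIELDS)
            writer.writeheader()
            for row in employee_rows:
                # Copy only the needed fields; sensitive columns never leave
                # the production database.
                writer.writerow({k: row[k] for k in DASHBOARD_FIELDS})
        build_dashboard(path)
    finally:
        os.remove(path)  # automated deletion once the refresh is complete
    return not os.path.exists(path)

rows = [{"employee_id": 1, "department": "Sales", "hire_date": "2021-03-01",
         "ssn": "123-45-6789", "bank_acct": "000111"}]
deleted = run_refresh(rows, lambda path: None)
print(deleted)  # True: no sensitive extract persists after the refresh
```

Note that encryption could be layered on top, but, as the explanation states, it would not by itself reduce what is collected or how long it is kept.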
Topic: Information Systems and Data Management
Finch Co. moved its ERP system to a cloud provider. Review the exhibit and select the conclusion best supported by the service arrangement.
| Item | Summary |
|---|---|
| Service model | Infrastructure as a Service (IaaS) |
| Workload | Finch hosts its ERP on cloud-based virtual machines |
| Provider responsibilities | Physical data center security, environmental safeguards, hardware maintenance, hypervisor patching, storage replication across zones |
| Finch responsibilities | Guest operating system patching, ERP application configuration, user provisioning, periodic access reviews, database encryption keys |
| Contract note | Finch must configure backup retention and perform recovery testing within its tenant |
Which conclusion is best supported by the exhibit?
Best answer: D
What this tests: Information Systems and Data Management
Explanation: The exhibit describes a shared responsibility model for IaaS. The cloud provider handles the underlying physical and virtualization layers, but Finch still manages the guest OS, application access, encryption keys, and tenant-configured backup settings.
Cloud service provider responsibilities depend on the deployment model, and IaaS leaves significant control duties with the customer. In this exhibit, the provider is responsible for the physical facilities, hardware, environmental safeguards, and hypervisor layer. Finch, however, is still responsible for controls inside its tenant, including guest operating system patching, ERP configuration, user provisioning, access reviews, encryption keys, and backup retention settings. That means hosting the ERP in the cloud does not transfer all control responsibility to the provider. A CPA evaluating cloud controls should identify which controls are performed by the provider and which remain with the user entity under the shared responsibility model.
In an IaaS arrangement, the provider manages the underlying infrastructure, but the customer still controls many operating system, application, and tenant configuration responsibilities.
Topic: Information Systems and Data Management
A distributor plans to replace its legacy sales and inventory application with a new hosted ERP module.
| Fact | Detail |
|---|---|
| Process scope | The application supports about 65% of company revenue and updates shipment status, inventory, accounts receivable, and daily sales journal entries to the general ledger. |
| Go-live timing | A direct cutover is scheduled for December 29, three days before year-end close and the physical inventory count. |
| Testing status | Unit testing passed. End-to-end testing across order entry, shipping, billing, and GL posting passed for 12 of 20 scenarios; 3 tested scenarios produced duplicate invoices, and the remaining 5 scenarios were not tested. |
| Conversion plan | The legacy system will become read-only at go-live. No parallel processing is planned. If the new system fails, orders will be captured manually in spreadsheets until issues are fixed. |
| Access/control status | Automated credit-limit and price-override approvals are configured, but the user-role access review for sales supervisors and billing clerks is scheduled for after go-live. |
| Vendor assurance | The ERP vendor has a SOC 1 Type 2 report covering its hosted infrastructure. |
Based on these facts, which is the best interpretation of the proposed conversion approach?
Best answer: B
What this tests: Information Systems and Data Management
Explanation: This conversion plan combines a high-impact direct cutover with incomplete end-to-end testing, known duplicate invoicing errors, no practical rollback, and delayed access validation right before year-end. In that environment, the proposal creates unacceptable operational disruption, financial reporting risk, and control risk.
A conversion approach should be evaluated in light of the system’s business significance, timing, testing results, fallback capability, and control readiness. Here, the new system drives revenue, inventory, receivables, and GL postings, so defects can affect operations and financial reporting quickly. A direct cutover shortly before year-end heightens the risk because processing failures or duplicate invoices could distort revenue and inventory during close. The plan also lacks a strong fallback, since the legacy system becomes read-only and manual spreadsheets are not an equivalent recovery method. Delaying the user-role access review until after go-live adds control risk because approval workflows may operate with inappropriate access. The vendor’s SOC 1 Type 2 report may support reliance on certain hosted-service controls, but it does not replace customer-side conversion testing, configuration validation, or user access review.
This is correct because the conversion affects revenue-significant processing and key controls, yet unresolved processing errors, incomplete testing, no practical rollback, and delayed access review remain immediately before year-end.
Topic: Information Systems and Data Management
A company’s payroll application uses two storage arrays that are synchronously mirrored.
System facts:
Which is the best correction to the disaster recovery plan?
Best answer: A
What this tests: Information Systems and Data Management
Explanation: The best correction is to add replication to a separate location. Same-site mirroring helps keep systems available when a device fails, but it does not protect against a building-wide outage that affects shared power, network, or facility access.
Mirroring and replication support different availability and recovery objectives. Mirroring maintains an up-to-date duplicate, often to reduce downtime from component or storage failure and support high availability. Replication copies data to another location so the organization can recover if the primary site is lost. In this scenario, both mirrored arrays are in the same data center and depend on the same building services, so a sitewide disruption could disable both copies at once. The disaster recovery plan is therefore overstated. The proper correction is to describe the current mirroring as an availability measure and add replication to a geographically separate site for disaster recovery.
Mirroring within one site supports availability, while offsite replication is needed to recover when the entire site is lost.
Topic: Information Systems and Data Management
During a walkthrough of a retailer’s application change process, the CPA learns the following:
main branch automatically triggers build, testing, and deployment to production.

To evaluate change-control considerations in this CI/CD environment, what should the CPA do next?
Best answer: D
What this tests: Information Systems and Data Management
Explanation: In a CI/CD environment, the key change-control issue is whether the automated deployment gates are themselves controlled. Because developers can change branch protections and pipeline configuration, the CPA should first inspect access, approval, and segregation controls over those automated settings.
When an organization uses CI/CD, manual approvals may be replaced by automated controls embedded in the source-code repository and deployment pipeline. That means the CPA should focus on whether those automated controls are designed and protected appropriately. Important control considerations include who can modify branch protection rules, who can change pipeline configuration, whether code and pipeline changes require independent review, and whether one individual can both alter the deployment rules and promote code to production. In the scenario, developers can change the very settings that enforce approvals and testing, so there is a risk they could bypass the intended change-management process. Before selecting samples or performing downstream testing, the CPA should understand and evaluate the design of these automated controls.
In CI/CD, automated approval gates become key change controls, so the next step is to evaluate whether access to those gates and their configuration is properly restricted and independently authorized.
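The design evaluation can be expressed as a checklist over the repository and pipeline settings. A sketch, assuming a summary of those settings has already been gathered (the field names are illustrative, not a real repository API response):

```python
# Flag conditions that would let one person bypass the automated
# change-control gates in a CI/CD pipeline.

def gate_design_flags(settings):
    """Return a list of design weaknesses in the automated gates."""
    flags = []
    if not settings.get("branch_protection_enabled"):
        flags.append("main branch is not protected")
    if settings.get("devs_can_edit_branch_protection"):
        flags.append("developers can alter branch protection rules")
    if settings.get("devs_can_edit_pipeline_config"):
        flags.append("developers can alter pipeline configuration")
    if settings.get("required_reviewers", 0) < 1:
        flags.append("no independent code review is required")
    return flags

# Settings mirroring the scenario: gates exist, but developers control them.
scenario = {
    "branch_protection_enabled": True,
    "devs_can_edit_branch_protection": True,
    "devs_can_edit_pipeline_config": True,
    "required_reviewers": 1,
}
for flag in gate_design_flags(scenario):
    print(flag)
```

The point of the checklist is the same as the explanation's: before sampling changes, establish whether the gates themselves can be modified by the people they are meant to constrain.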
Topic: Information Systems and Data Management
A retail company combines point-of-sale transactions, website clickstream logs, supplier PDFs, and IoT sensor data into a single low-cost cloud repository. The data is kept in its original format, and analysts transform only the data needed for each project later. How should this repository be classified?
Best answer: D
What this tests: Information Systems and Data Management
Explanation: The repository is a data lake because it stores varied data types in raw form and applies transformation later as needed. That fits a flexible analytics environment rather than a curated reporting store or a transaction-processing database.
A data lake is designed to hold large volumes of structured, semi-structured, and unstructured data in native format until users decide how to analyze it. That is why keeping clickstream logs, PDFs, sensor data, and transactions together without upfront standardization points to a data lake. A data warehouse, by contrast, usually contains cleaned, integrated, and structured data prepared in advance for reporting and business intelligence. A data mart is a narrower subset of data, typically focused on one department or subject area such as sales or finance. An operational database supports day-to-day transaction processing rather than broad analytical storage.
A data lake stores large volumes of raw structured, semi-structured, and unstructured data in native form for later use and analysis.
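The distinctions among the four repository types can be captured as a small decision rule over their traits. The trait names below are illustrative shorthand for the definitions in the explanation, not official terminology:

```python
# Classify a data repository by the traits the explanation describes.

def classify_repository(raw_native_format, varied_data_types,
                        transaction_processing, department_scope):
    """Map repository traits to the matching storage concept."""
    if transaction_processing:
        return "operational database"   # day-to-day transaction support
    if raw_native_format and varied_data_types:
        return "data lake"              # raw, mixed data; schema on read
    if department_scope:
        return "data mart"              # narrow, subject-area subset
    return "data warehouse"             # curated, integrated, structured

# The scenario: POS transactions, clickstream, PDFs, and IoT data kept raw.
print(classify_repository(raw_native_format=True, varied_data_types=True,
                          transaction_processing=False,
                          department_scope=False))  # data lake
```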
Topic: Information Systems and Data Management
During a walkthrough of the emergency change process for a cloud-based billing application, a CPA notes the following CI/CD controls:
Which statement best distinguishes the remaining design weakness from the controls that are already present?
Best answer: B
What this tests: Information Systems and Data Management
Explanation: The process includes acceptance criteria, approval, independent code review, automated testing, and deployment logging/monitoring. The decisive weakness is that the developer who made the emergency change still has production deployment access, so separation of duties is not preserved through access restriction.
Change control policies use different controls for different purposes. Acceptance criteria define what the change must achieve. Authorization decides whether the change may proceed. Code review and testing assess quality and functionality. Logging and monitoring provide detective evidence of what occurred. Separation of duties and access restrictions serve a different objective: they limit who can move code into production. In this scenario, the process has the earlier controls, but it still allows the developer who wrote the emergency fix to deploy it because on-call developers keep standing production access. That is a design weakness. Even with approval, review, testing, and logging, the process lacks a preventive control that separates development from production promotion. Stronger design would use a separate deployer, restricted deployment role, or tightly controlled service account.
Approval, acceptance criteria, review, testing, and logging are present, but standing production access still lets the coder deploy the change personally.
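The missing preventive control can be sketched as a single gate check at promotion time: block any deployment where the deployer is also the change's author. Record fields are illustrative assumptions:

```python
# Preventive separation-of-duties check at the production gate.

def deployment_allowed(change):
    """Reject a promotion when the deployer also authored the change."""
    return change["deployer"] != change["author"]

# The scenario's weakness: the on-call developer deploys their own fix.
emergency_fix = {"id": "CHG-42", "author": "dev_oncall", "deployer": "dev_oncall"}
# Stronger design: a separate deployer or restricted deployment role.
routine_fix = {"id": "CHG-43", "author": "dev_a", "deployer": "release_mgr"}

print(deployment_allowed(emergency_fix))  # False: coder would deploy own change
print(deployment_allowed(routine_fix))    # True
```

Unlike the logging already in place, this check is preventive: it stops the promotion rather than recording it after the fact.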
Topic: Information Systems and Data Management
A retailer documents the following for its online order environment:
| Item | Details |
|---|---|
| Critical process | Accept and fulfill customer orders |
| Required recovery | Order processing must resume within 8 hours of a disruption |
| IT recovery procedures | ERP application servers and database can be restored at a secondary site within 4 hours; database is replicated hourly |
| Recovery contacts | CIO, infrastructure manager, DBA, network engineer |
| Not documented | Manual order-entry procedures, customer communication steps, alternate workspace for customer service, fallback shipping provider |
Which conclusion is best supported by the exhibit?
Best answer: B
What this tests: Information Systems and Data Management
Explanation: The exhibit shows a disaster recovery capability for restoring the ERP environment and data at a secondary site. However, it does not show broader business continuity elements such as manual workarounds, communications, workspace, and third-party operational alternatives needed to continue critical operations.
Disaster recovery planning focuses on recovering technology resources such as applications, servers, and data after a disruption. Business continuity planning is broader: it addresses how the organization will continue critical business processes while technology, facilities, staff, or vendors are disrupted. Business resiliency is broader still, emphasizing the organization’s ability to withstand and recover from disruptions across people, process, technology, and third-party dependencies. In the exhibit, the retailer has documented IT recovery steps, recovery contacts, and replication, which supports the existence of disaster recovery procedures. But key continuity components are missing, including manual order processing, customer communication, alternate workspace, and shipping fallback arrangements. That means the technology recovery plan exists, but the business continuity plan is not complete.
The exhibit covers restoration of IT systems and data, but it omits broader procedures needed to keep business operations running during a disruption.
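Using the exhibit's figures, the technology side of the plan is actually feasible on its own terms: a 4-hour restore fits the 8-hour recovery requirement, and hourly replication bounds worst-case data loss to roughly one hour. A quick check:

```python
# Recovery-window feasibility check using the exhibit's numbers.

def dr_meets_rto(restore_hours, rto_hours):
    """True when the documented restore time fits the required window."""
    return restore_hours <= rto_hours

print(dr_meets_rto(4, 8))  # True: the DR capability fits the 8-hour window
```

That is exactly why the best-supported conclusion concerns scope, not capability: the IT recovery plan works, but nothing in the exhibit keeps order-taking, customer communication, or shipping operating while the restore runs.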
Topic: Information Systems and Data Management
A company uses the following technology environment:
Which cloud deployment model best describes the company’s overall environment?
Best answer: C
What this tests: Information Systems and Data Management
Explanation: Hybrid cloud deployment is correct because the company uses both a private-cloud environment reserved for its exclusive use and a public-cloud SaaS service shared with other customers. The connected authentication and data transfers show the two environments operate together.
Cloud deployment models are distinguished mainly by who has access to the infrastructure and how control is allocated. A private cloud is dedicated to a single organization, even if a third party hosts it, so exclusive-use servers and company-defined security settings point to private cloud. A public cloud serves multiple customers on shared infrastructure, so the shared SaaS expense platform is public cloud. When an organization uses both private and public cloud resources as part of one connected environment, the overall deployment model is hybrid cloud. The integration facts matter here because they show the private and public portions are part of the same operating model rather than unrelated services.
The environment combines a private cloud reserved for one company with a shared public cloud service that is connected for authentication and data exchange.
Use the CPA ISC Practice Test page for the full practice route, mixed-topic practice, timed mock exams, and explanations.
Read the CPA ISC guide on CPAExamsMastery.com, then return to Mastery Exam Prep for timed practice.