CPA ISC: Information Systems and Data Management

Try 10 focused Certified Public Accountant Information Systems and Controls (CPA ISC) questions on system flow, data governance, data reliability, processing, and IT control context.

CPA stands for Certified Public Accountant, and ISC stands for Information Systems and Controls. Use this focused page when your CPA ISC misses involve system flow, source data, interfaces, processing, data reliability, or IT control context. Drill this topic before returning to mixed practice.

Use the CPA ISC practice route for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

  • Exam route: CPA ISC
  • Issuer: American Institute of Certified Public Accountants (AICPA)
  • Topic area: Information Systems and Data Management
  • Blueprint weight: 40%
  • Page purpose: Systems-flow practice for source data, processing, interfaces, IT controls, and data reliability

What this topic tests

This topic tests whether you can connect business processes, information systems, data flow, and control objectives. Strong answers identify where data is created, changed, stored, transmitted, reconciled, and relied on.

Common traps

  • choosing a control without identifying the process point it protects
  • treating data completeness, accuracy, validity, and authorization as the same objective
  • ignoring upstream interfaces, batch jobs, master data, or access rights that affect downstream reports
  • focusing on technology vocabulary instead of the accounting risk created by unreliable data

How to reason through these questions

Draw a simple mental flow: source, input, processing, storage, output, and use. Then ask which risk threatens the financial or operational objective. The best answer usually controls the exact point where the risk enters the system.
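As a rough illustration, the flow above can be captured as a checklist that pairs each stage with the question to ask about where risk enters. The stage names and questions below are illustrative study aids, not exam terminology:

```python
# Illustrative only: hypothetical pairings for the
# source -> input -> processing -> storage -> output -> use flow.
FLOW_CHECKLIST = [
    ("source",     "Is the originating record complete and authorized?"),
    ("input",      "Is data captured accurately and only once?"),
    ("processing", "Do calculations and interfaces transform data correctly?"),
    ("storage",    "Is stored data protected from unauthorized change?"),
    ("output",     "Are reports complete and delivered to the right users?"),
    ("use",        "Is the data reliable enough for the decision it supports?"),
]

def questions_for(stage):
    """Return the risk question(s) to ask at a given flow stage."""
    return [q for s, q in FLOW_CHECKLIST if s == stage]
```

The best answer in a question usually controls the exact stage where this checklist would flag the risk.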

How to use this topic drill

Use this page to isolate Information Systems and Data Management for CPA ISC. Work through the 10 questions first, then review the explanations and return to mixed practice in Mastery Exam Prep.

  • First attempt: Answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: Read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: Repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: Return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 40% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original Mastery Exam Prep practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Information Systems and Data Management

A distributor’s ERP generates a customer invoice only after it receives an electronic shipment confirmation from the warehouse system. During a two-day network outage, warehouse staff shipped goods using manual bills of lading and entered shipment confirmations after service was restored. Management wants to address the main AIS risk created by the outage. Which procedure is most appropriate?

  • A. Reconcile the manual bills of lading to the sales invoices and accounts receivable postings created after the outage.
  • B. Reconcile vendor invoices to purchase orders and receiving reports for the outage period.
  • C. Reconcile bank lockbox deposits to cash receipts postings for the outage period.
  • D. Reconcile approved employee timecards to payroll disbursements for the outage period.

Best answer: A

What this tests: Information Systems and Data Management

Explanation: The key AIS dependency is that shipment confirmation triggers billing and A/R posting. When shipments occur manually during an outage, the main risk is that some shipped orders will not be billed or will be recorded inaccurately, so reconciling shipping documents to invoices and A/R is the best response.

In an integrated accounting information system, downstream sales processing often depends on a system event from another module. Here, the warehouse system’s shipment confirmation is the event that causes the ERP to generate the customer invoice and update accounts receivable. Because shipments were made manually during the outage, the main risk is incomplete or inaccurate capture of those shipments once the system is restored. Manual bills of lading are the source record of what actually shipped, so reconciling them to invoices and A/R postings is the most direct way to identify omitted or duplicated transactions. Procedures over lockbox deposits, purchasing documents, or payroll records address different business processes and would not resolve the specific sales-cycle risk caused by the failed shipment interface.

  • Reconciling bank lockbox deposits to cash receipts postings addresses the cash collections process, but the outage affected shipment-to-billing flow before collection.
  • Reconciling vendor invoices to purchase orders and receiving reports is a purchasing and disbursements control, not a sales-cycle response.
  • Reconciling timecards to payroll disbursements tests payroll accuracy, which is unrelated to the missing shipment confirmation trigger.
  • Reconciling manual bills of lading to invoices and A/R postings is the only procedure tied directly to the interrupted sales and billing process.

Because shipment confirmation triggers billing in the AIS, matching manual shipping documents to later invoices and A/R postings directly tests whether all shipped orders were recorded.
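The reconciliation in the best answer can be sketched in a few lines. This is a minimal illustration with hypothetical bill-of-lading identifiers; a real procedure would also match quantities, amounts, and posting dates:

```python
from collections import Counter

def reconcile_shipments_to_invoices(bol_numbers, invoiced_bol_numbers):
    """Compare manual bill-of-lading numbers (what actually shipped)
    against the BOL numbers referenced on post-outage invoices."""
    shipped = set(bol_numbers)
    billed = Counter(invoiced_bol_numbers)
    return {
        # shipped but never invoiced: understated revenue and A/R
        "unbilled": sorted(shipped - set(billed)),
        # invoiced more than once: overstated revenue and A/R
        "duplicates": sorted(b for b, n in billed.items() if n > 1),
        # invoiced with no shipping record: needs investigation
        "unmatched": sorted(set(billed) - shipped),
    }

# Example: B2 and B3 shipped but unbilled, B1 billed twice, B4 has no BOL.
result = reconcile_shipments_to_invoices(["B1", "B2", "B3"], ["B1", "B1", "B4"])
# -> {"unbilled": ["B2", "B3"], "duplicates": ["B1"], "unmatched": ["B4"]}
```

Each bucket maps to a distinct assertion: unbilled items threaten completeness, duplicates threaten accuracy, and unmatched invoices threaten validity.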


Question 2

Topic: Information Systems and Data Management

During an internal control review, a CPA notes the following about a monthly HR analytics process:

  • IT exports a full payroll table from the production HR system to a CSV file.
  • The turnover dashboard uses only department, hire date, termination date, and salary band.
  • The CSV also contains employee SSNs and bank account numbers.
  • After the dashboard is refreshed, analysts keep prior-month CSV files on local laptops.

Which action is the most appropriate data management control response?

  • A. Replace the full-table export with a purpose-limited extract or view containing only required dashboard fields and automatically delete the file after refresh.
  • B. Normalize the payroll table further before each monthly export to improve source-system structure.
  • C. Encrypt the existing CSV files on analysts’ laptops and continue exporting the full payroll table.
  • D. Increase retention of the monthly CSV files so analysts can support future ad hoc requests.

Best answer: A

What this tests: Information Systems and Data Management

Explanation: The main risk is not just storage security; it is that the process extracts more sensitive data than the dashboard needs and retains it longer than necessary. A purpose-limited extract with automated deletion best aligns the data extract and lifecycle with the business use.

When an analytics process uses only a small subset of fields, exporting an entire production table that includes SSNs and bank account numbers creates unnecessary data exposure. Keeping those files on local laptops after the dashboard refresh adds a lifecycle problem because the sensitive data persists beyond its business purpose. The strongest control response is to redesign the extract so it includes only the fields required for the dashboard and to enforce automatic deletion once the refresh is complete. Encryption helps protect stored files, but by itself it does not reduce over-collection or over-retention. Schema normalization may improve database design, but it does not solve the immediate risk created by extracting unnecessary sensitive data into CSV files.

  • Encrypting the same full CSV improves storage protection but still leaves unnecessary sensitive fields in the extract and retained on laptops.
  • Increasing retention makes the lifecycle risk worse because the files already outlast their immediate business purpose.
  • Further normalization is a schema design choice, not the direct control needed for an oversized and over-retained data extract.

This addresses both unnecessary data extraction and excessive retention by applying data minimization and lifecycle control to the analytics process.
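A minimal sketch of the purpose-limited extract and lifecycle control, assuming hypothetical field names. The point is that sensitive columns never enter the file, and the file does not outlive the refresh:

```python
import csv
import os

# Hypothetical field names: only what the turnover dashboard needs.
DASHBOARD_FIELDS = ["department", "hire_date", "termination_date", "salary_band"]

def write_purpose_limited_extract(rows, path):
    """Write only the required fields; extra keys in each row
    (e.g., ssn, bank_account) are silently dropped."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=DASHBOARD_FIELDS,
                                extrasaction="ignore")  # drop sensitive columns
        writer.writeheader()
        writer.writerows(rows)

def refresh_dashboard_then_delete(rows, path, refresh):
    """Lifecycle control: the extract is deleted as soon as the
    refresh completes, even if the refresh raises an error."""
    write_purpose_limited_extract(rows, path)
    try:
        refresh(path)
    finally:
        os.remove(path)
```

In practice a database view granted to the analytics service account achieves the same minimization without any file ever leaving the system.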


Question 3

Topic: Information Systems and Data Management

Finch Co. moved its ERP system to a cloud provider. Review the exhibit and select the conclusion best supported by the service arrangement.

  • Service model: Infrastructure as a Service (IaaS)
  • Workload: Finch hosts its ERP on cloud-based virtual machines
  • Provider responsibilities: Physical data center security, environmental safeguards, hardware maintenance, hypervisor patching, storage replication across zones
  • Finch responsibilities: Guest operating system patching, ERP application configuration, user provisioning, periodic access reviews, database encryption keys
  • Contract note: Finch must configure backup retention and perform recovery testing within its tenant

Which conclusion is best supported by the exhibit?

  • A. The provider is responsible for configuring backup retention and recovery testing within Finch’s tenant.
  • B. Finch is responsible for physical data center safeguards because it owns the ERP data.
  • C. The provider is responsible for all patching and access controls because it hosts the ERP environment.
  • D. The provider is responsible for physical infrastructure and hypervisor controls, while Finch remains responsible for guest OS, application access, and tenant-level backup settings.

Best answer: D

What this tests: Information Systems and Data Management

Explanation: The exhibit describes a shared responsibility model for IaaS. The cloud provider handles the underlying physical and virtualization layers, but Finch still manages the guest OS, application access, encryption keys, and tenant-configured backup settings.

Cloud service provider responsibilities depend on the service model, and IaaS leaves significant control duties with the customer. In this exhibit, the provider is responsible for the physical facilities, hardware, environmental safeguards, and hypervisor layer. Finch, however, is still responsible for controls inside its tenant, including guest operating system patching, ERP configuration, user provisioning, access reviews, encryption keys, and backup retention settings. That means hosting the ERP in the cloud does not transfer all control responsibility to the provider. A CPA evaluating cloud controls should identify which controls are performed by the provider and which remain with the user entity under the shared responsibility model.

  • The statement that the provider handles all patching and access controls is too broad; the exhibit limits provider patching to the hypervisor and leaves user access to Finch.
  • The statement that Finch handles physical data center safeguards conflicts with the exhibit, which assigns physical security and environmental protection to the provider.
  • The statement that the provider configures backup retention and recovery testing is contradicted by the contract note assigning those tenant-level tasks to Finch.

In an IaaS arrangement, the provider manages the underlying infrastructure, but the customer still controls many operating system, application, and tenant configuration responsibilities.
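One way to make the shared-responsibility split concrete is to encode the exhibit as data. The control names below are shorthand invented for this sketch, not a provider's official responsibility matrix:

```python
# Hypothetical encoding of the exhibit's IaaS responsibility split.
IAAS_RESPONSIBILITY = {
    "physical_security":         "provider",
    "environmental_safeguards":  "provider",
    "hardware_maintenance":      "provider",
    "hypervisor_patching":       "provider",
    "guest_os_patching":         "customer",
    "erp_configuration":         "customer",
    "user_access_reviews":       "customer",
    "encryption_keys":           "customer",
    "tenant_backup_retention":   "customer",
}

def controls_owned_by(party):
    """List the controls a CPA should expect a given party to perform."""
    return sorted(c for c, who in IAAS_RESPONSIBILITY.items() if who == party)
```

Walking each control through a table like this makes answer choices that assign "all patching" or "all access controls" to one party easy to eliminate.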


Question 4

Topic: Information Systems and Data Management

A distributor plans to replace its legacy sales and inventory application with a new hosted ERP module.

  • Process scope: The application supports about 65% of company revenue and updates shipment status, inventory, accounts receivable, and daily sales journal entries to the general ledger.
  • Go-live timing: A direct cutover is scheduled for December 29, three days before year-end close and the physical inventory count.
  • Testing status: Unit testing passed. End-to-end testing across order entry, shipping, billing, and GL posting was completed for 12 of 20 scenarios; 3 tested scenarios produced duplicate invoices, and 5 scenarios were not tested.
  • Conversion plan: The legacy system will become read-only at go-live. No parallel processing is planned. If the new system fails, orders will be captured manually in spreadsheets until issues are fixed.
  • Access/control status: Automated credit-limit and price-override approvals are configured, but the user-role access review for sales supervisors and billing clerks is scheduled for after go-live.
  • Vendor assurance: The ERP vendor has a SOC 1 Type 2 report covering its hosted infrastructure.

Based on these facts, which is the best interpretation of the proposed conversion approach?

  • A. The approach’s main concern is data confidentiality, so the conversion risk would be acceptable if encryption of order data is confirmed.
  • B. The approach creates unacceptable operational, reporting, and control risk because a year-end direct cutover is planned before end-to-end processing, fallback, and user access controls are adequately validated.
  • C. The approach is acceptable because the vendor’s SOC 1 Type 2 report compensates for incomplete business-process testing and the delayed user-role review.
  • D. The approach is acceptable if finance performs daily reconciliations after go-live, because detective controls can replace pre-go-live testing for this conversion.

Best answer: B

What this tests: Information Systems and Data Management

Explanation: This conversion plan combines a high-impact direct cutover with incomplete end-to-end testing, known duplicate invoicing errors, no practical rollback, and delayed access validation right before year-end. In that environment, the proposal creates unacceptable operational disruption, financial reporting risk, and control risk.

A conversion approach should be evaluated in light of the system’s business significance, timing, testing results, fallback capability, and control readiness. Here, the new system drives revenue, inventory, receivables, and GL postings, so defects can affect operations and financial reporting quickly. A direct cutover shortly before year-end heightens the risk because processing failures or duplicate invoices could distort revenue and inventory during close. The plan also lacks a strong fallback, since the legacy system becomes read-only and manual spreadsheets are not an equivalent recovery method. Delaying the user-role access review until after go-live adds control risk because approval workflows may operate with inappropriate access. The vendor’s SOC 1 Type 2 report may support reliance on certain hosted-service controls, but it does not replace customer-side conversion testing, configuration validation, or user access review.

  • The vendor SOC 1 Type 2 report does not substitute for the company’s own end-to-end testing, conversion readiness, or user access validation.
  • Confidentiality is not the primary issue in these facts; the stronger risks involve transaction processing, financial reporting, and authorization controls.
  • Daily reconciliations are useful detective controls, but they do not make an under-tested year-end direct cutover acceptable or prevent duplicate or unauthorized processing.

This is correct because the conversion affects revenue-significant processing and key controls, yet unresolved processing errors, incomplete testing, no practical rollback, and delayed access review remain immediately before year-end.
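The readiness evaluation in the best answer can be expressed as a simple gating check. The keys and thresholds below are a hypothetical encoding of the stem's facts, not a standard go-live framework:

```python
def go_live_concerns(facts):
    """Return the readiness gaps a reviewer would flag before
    approving a direct cutover. Keys are illustrative."""
    concerns = []
    if facts["scenarios_tested"] < facts["scenarios_total"]:
        concerns.append("end-to-end testing incomplete")
    if facts["known_defects"] > 0:
        concerns.append("unresolved processing defects")
    if not facts["rollback_available"]:
        concerns.append("no practical fallback")
    if not facts["access_review_done"]:
        concerns.append("user access not validated")
    if facts["cutover_near_close"]:
        concerns.append("cutover scheduled at period end")
    return concerns

# The stem's facts trip every gate.
stem_facts = {
    "scenarios_tested": 12, "scenarios_total": 20,
    "known_defects": 3,                # duplicate-invoice scenarios
    "rollback_available": False,       # legacy read-only, spreadsheets only
    "access_review_done": False,       # scheduled after go-live
    "cutover_near_close": True,        # December 29
}
```

Note that a vendor SOC 1 Type 2 report appears nowhere in these gates: it speaks to the provider's hosted-infrastructure controls, not to the customer's conversion readiness.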


Question 5

Topic: Information Systems and Data Management

A company’s payroll application uses two storage arrays that are synchronously mirrored.

System facts:

  • Both arrays are in the same data center.
  • Both arrays rely on the same building power and network.
  • The disaster recovery plan states: “This mirroring arrangement provides recovery if the entire site becomes unavailable.”

Which is the best correction to the disaster recovery plan?

  • A. Add replication to a geographically separate site, because same-site mirroring improves availability for hardware failure but does not provide recovery from a sitewide outage.
  • B. Replace mirroring with daily backups only, because backups provide the same availability benefit as mirrored storage.
  • C. Add a third mirrored array in the same data center, because more mirrored copies will provide site-level disaster recovery.
  • D. Keep the plan as written, because synchronized mirrored storage fully addresses both system availability and disaster recovery.

Best answer: A

What this tests: Information Systems and Data Management

Explanation: The best correction is to add replication to a separate location. Same-site mirroring helps keep systems available when a device fails, but it does not protect against a building-wide outage that affects shared power, network, or facility access.

Mirroring and replication support different availability and recovery objectives. Mirroring maintains an up-to-date duplicate, often to reduce downtime from component or storage failure and support high availability. Replication copies data to another location so the organization can recover if the primary site is lost. In this scenario, both mirrored arrays are in the same data center and depend on the same building services, so a sitewide disruption could disable both copies at once. The disaster recovery plan is therefore overstated. The proper correction is to describe the current mirroring as an availability measure and add replication to a geographically separate site for disaster recovery.

  • Adding another mirrored array in the same building increases redundancy, but it still leaves the company exposed to a sitewide outage.
  • Keeping the plan unchanged confuses availability with disaster recovery; same-site mirroring does not provide site-level recovery.
  • Replacing mirroring with daily backups only may help restore data, but it does not provide the near-immediate availability benefit that mirroring is designed to support.

Mirroring within one site supports availability, while offsite replication is needed to recover when the entire site is lost.
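The availability-versus-recovery distinction comes down to failure domains: copies in the same site share one. A minimal sketch with hypothetical array and site names:

```python
def survives_site_loss(copies, failed_site):
    """A recovery copy helps only if at least one copy lives
    outside the failed site. Copies are (name, site) pairs."""
    return any(site != failed_site for _, site in copies)

# Synchronous mirror, both arrays in one building (the exhibit's setup).
same_site = [("array_a", "dc1"), ("array_b", "dc1")]
# The proposed correction: add a geographically separate replica.
with_replica = same_site + [("replica", "dc2")]

# A sitewide outage at dc1 disables both mirrored arrays...
survives_site_loss(same_site, "dc1")     # -> False
# ...but the offsite replica survives it.
survives_site_loss(with_replica, "dc1")  # -> True
```

Adding a third array to `same_site` would never change the `False` result, which is exactly why more same-site copies do not provide disaster recovery.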


Question 6

Topic: Information Systems and Data Management

During a walkthrough of a retailer’s application change process, the CPA learns the following:

  • Developers submit code through pull requests.
  • A merge to the main branch automatically triggers build, testing, and deployment to production.
  • The CI/CD pipeline configuration file is stored in the same repository as the application code.
  • Three senior developers can modify branch protection rules and pipeline configuration.
  • Management eliminated manual change advisory board approval because “the pipeline checks everything.”

To evaluate change-control considerations in this CI/CD environment, what should the CPA do next?

  • A. Reperform application calculations for transactions processed after recent releases to determine whether the CI/CD process is reliable.
  • B. Select a sample of completed deployments and trace each one to a help-desk ticket before identifying the automated controls that authorize deployment.
  • C. Review physical access logs for the hosting environment because deployment risk primarily depends on server access.
  • D. Inspect repository and pipeline-governance controls to verify that branch protections, deployment rules, and pipeline changes are restricted, approved, and cannot be overridden by the same person promoting code.

Best answer: D

What this tests: Information Systems and Data Management

Explanation: In a CI/CD environment, the key change-control issue is whether the automated deployment gates are themselves controlled. Because developers can change branch protections and pipeline configuration, the CPA should first inspect access, approval, and segregation controls over those automated settings.

When an organization uses CI/CD, manual approvals may be replaced by automated controls embedded in the source-code repository and deployment pipeline. That means the CPA should focus on whether those automated controls are designed and protected appropriately. Important control considerations include who can modify branch protection rules, who can change pipeline configuration, whether code and pipeline changes require independent review, and whether one individual can both alter the deployment rules and promote code to production. In the scenario, developers can change the very settings that enforce approvals and testing, so there is a risk they could bypass the intended change-management process. Before selecting samples or performing downstream testing, the CPA should understand and evaluate the design of these automated controls.

  • Tracing completed deployments to tickets may be useful later, but it is premature before understanding the automated controls that now replace manual approval.
  • Reperforming application calculations addresses processing integrity, not whether CI/CD change controls are properly authorized and segregated.
  • Reviewing physical access logs focuses on a different risk and does not address the core CI/CD issue of unauthorized changes to pipeline rules or approval gates.

In CI/CD, automated approval gates become key change controls, so the next step is to evaluate whether access to those gates and their configuration is properly restricted and independently authorized.
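The governance inspection described above can be thought of as a set of checks over repository and pipeline settings. The field names below are illustrative and do not correspond to any specific platform's API:

```python
def pipeline_governance_gaps(settings):
    """Flag CI/CD configuration risks the CPA should evaluate first.
    Settings keys are hypothetical, not a real platform's schema."""
    gaps = []
    if not settings["branch_protection_enforced"]:
        gaps.append("branch protection can be bypassed")
    # Anyone who can edit pipeline rules AND commit code can
    # bypass the automated approval gates entirely.
    if settings["pipeline_config_editors"] & settings["code_committers"]:
        gaps.append("same people can change pipeline rules and promote code")
    if settings["required_reviews"] < 1:
        gaps.append("no independent review required before merge")
    return gaps
```

Applied to the scenario, the three senior developers who both write code and can edit branch protections would trip the second check, which is the weakness the CPA should investigate before any sample-based testing.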


Question 7

Topic: Information Systems and Data Management

A retail company combines point-of-sale transactions, website clickstream logs, supplier PDFs, and IoT sensor data into a single low-cost cloud repository. The data is kept in its original format, and analysts transform only the data needed for each project later. How should this repository be classified?

  • A. Data mart
  • B. Operational database
  • C. Data warehouse
  • D. Data lake

Best answer: D

What this tests: Information Systems and Data Management

Explanation: The repository is a data lake because it stores varied data types in raw form and applies transformation later as needed. That fits a flexible analytics environment rather than a curated reporting store or a transaction-processing database.

A data lake is designed to hold large volumes of structured, semi-structured, and unstructured data in native format until users decide how to analyze it. That is why keeping clickstream logs, PDFs, sensor data, and transactions together without upfront standardization points to a data lake. A data warehouse, by contrast, usually contains cleaned, integrated, and structured data prepared in advance for reporting and business intelligence. A data mart is a narrower subset of data, typically focused on one department or subject area such as sales or finance. An operational database supports day-to-day transaction processing rather than broad analytical storage.

  • Data warehouse is tempting because it supports analytics, but the stem emphasizes raw data kept in original form rather than curated, structured reporting data.
  • Data mart is incorrect because a data mart is usually a smaller subject-specific subset, not an enterprise repository for many diverse data sources.
  • Operational database is incorrect because the repository is used for storing varied source data for later analysis, not for running daily transactions.

A data lake stores large volumes of raw structured, semi-structured, and unstructured data in native form for later use and analysis.


Question 8

Topic: Information Systems and Data Management

During a walkthrough of the emergency change process for a cloud-based billing application, a CPA notes the following CI/CD controls:

  • Incident manager approval is required before deployment.
  • The ticket includes pass/fail acceptance criteria.
  • A different developer performs the pull-request code review.
  • Automated unit and integration tests must pass.
  • Deployment activity is logged and monitored.
  • Any on-call developer has standing production deployment access, including the developer who wrote the emergency fix.

Which statement best distinguishes the remaining design weakness from the controls that are already present?

  • A. The remaining weakness is code review, because automated tests do not replace a second developer’s review before release.
  • B. The remaining weakness is separation of duties and access restriction, because the developer who coded the fix can also execute the production deployment.
  • C. The remaining weakness is authorization, because management approval is the main control that should determine who can deploy to production.
  • D. The remaining weakness is logging and monitoring, because detective evidence should be relied on instead of restricting production deployment rights.

Best answer: B

What this tests: Information Systems and Data Management

Explanation: The process includes acceptance criteria, approval, independent code review, automated testing, and deployment logging/monitoring. The decisive weakness is that the developer who made the emergency change still has production deployment access, so separation of duties is not preserved through access restriction.

Change control policies use different controls for different purposes. Acceptance criteria define what the change must achieve. Authorization decides whether the change may proceed. Code review and testing assess quality and functionality. Logging and monitoring provide detective evidence of what occurred. Separation of duties and access restrictions serve a different objective: they limit who can move code into production. In this scenario, the process has the earlier controls, but it still allows the developer who wrote the emergency fix to deploy it because on-call developers keep standing production access. That is a design weakness. Even with approval, review, testing, and logging, the process lacks a preventive control that separates development from production promotion. Stronger design would use a separate deployer, restricted deployment role, or tightly controlled service account.

  • Independent pull-request review is already present, so code review is not the missing control in the facts.
  • Predeployment incident manager approval provides authorization, but authorization does not by itself limit who may execute the deployment.
  • Separation of duties and access restriction are the real gap because standing production rights let the developer promote their own code.
  • Logging and monitoring are detective controls; they do not replace preventive restrictions on production deployment access.

Approval, acceptance criteria, review, testing, and logging are present, but standing production access still lets the coder deploy the change personally.
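The missing preventive control can be sketched as a deployment gate that enforces both authorization and separation of duties. The parameter names are illustrative:

```python
def authorize_deployment(change_author, deployer, approved_by_incident_mgr):
    """Preventive gate: approval is necessary but not sufficient;
    the person who wrote the fix may not also execute the deployment."""
    if not approved_by_incident_mgr:
        return (False, "missing incident manager approval")
    if change_author == deployer:
        return (False, "separation of duties: author cannot deploy own change")
    return (True, "deployment authorized")
```

In the scenario, every check before this gate passes, yet `authorize_deployment("dev_a", "dev_a", True)` would still refuse: that refusal is the control the standing production access currently removes.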


Question 9

Topic: Information Systems and Data Management

A retailer documents the following for its online order environment:

  • Critical process: Accept and fulfill customer orders
  • Required recovery: Order processing must resume within 8 hours of a disruption
  • IT recovery procedures: ERP application servers and database can be restored at a secondary site within 4 hours; the database is replicated hourly
  • Recovery contacts: CIO, infrastructure manager, DBA, network engineer
  • Not documented: Manual order-entry procedures, customer communication steps, alternate workspace for customer service, fallback shipping provider

Which conclusion is best supported by the exhibit?

  • A. The organization has a complete business resiliency program because the secondary site can restore the ERP environment within 4 hours.
  • B. The organization has documented disaster recovery procedures, but its business continuity planning is incomplete.
  • C. The organization lacks a disaster recovery plan because replication and server restoration do not address recovery needs.
  • D. The main issue shown is privacy noncompliance rather than an availability or continuity gap.

Best answer: B

What this tests: Information Systems and Data Management

Explanation: The exhibit shows a disaster recovery capability for restoring the ERP environment and data at a secondary site. However, it does not show broader business continuity elements such as manual workarounds, communications, workspace, and third-party operational alternatives needed to continue critical operations.

Disaster recovery planning focuses on recovering technology resources such as applications, servers, and data after a disruption. Business continuity planning is broader: it addresses how the organization will continue critical business processes while technology, facilities, staff, or vendors are disrupted. Business resiliency is broader still, emphasizing the organization’s ability to withstand and recover from disruptions across people, process, technology, and third-party dependencies. In the exhibit, the retailer has documented IT recovery steps, recovery contacts, and replication, which supports the existence of disaster recovery procedures. But key continuity components are missing, including manual order processing, customer communication, alternate workspace, and shipping fallback arrangements. That means the technology recovery plan exists, but the business continuity plan is not complete.

  • Replication and a restoration runbook are part of disaster recovery, so it is incorrect to say no disaster recovery plan exists.
  • Restoring the ERP environment within 4 hours does not by itself prove complete business resiliency, because people, process, facility, and vendor contingencies are still missing.
  • Privacy is not the main issue in the exhibit; the facts point to availability and continuity planning gaps.
  • The best conclusion is the one distinguishing IT recovery from broader continuity of operations.

The exhibit covers restoration of IT systems and data, but it omits broader procedures needed to keep business operations running during a disruption.
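The DR-versus-BCP distinction can be made concrete with a small assessment helper. The required continuity elements below mirror the exhibit's "not documented" list and use illustrative names:

```python
def continuity_assessment(rto_hours, it_restore_hours, bcp_elements):
    """Separate DR (technology restoration meets the RTO) from BCP
    (the business process can continue). Element names are illustrative."""
    required = {"manual_workaround", "customer_communication",
                "alternate_workspace", "vendor_fallback"}
    return {
        "dr_meets_rto": it_restore_hours <= rto_hours,
        "missing_bcp": sorted(required - set(bcp_elements)),
    }

# The exhibit: 4-hour restoration against an 8-hour RTO, but
# none of the continuity elements are documented.
assessment = continuity_assessment(8, 4, [])
# -> dr_meets_rto is True, yet all four BCP elements are missing
```

The result matches answer B exactly: disaster recovery is adequate on its own terms while business continuity planning remains incomplete.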


Question 10

Topic: Information Systems and Data Management

A company uses the following technology environment:

  • Its general ledger application runs on virtual servers in a hosting provider’s facility, but the servers, storage, and network segment are reserved for this company only.
  • The company defines the security configuration and access rules for that reserved environment.
  • The company also uses a provider’s internet-accessible SaaS expense platform that is shared with many unrelated customers.
  • User authentication and nightly data transfers connect the reserved environment and the SaaS platform.

Which cloud deployment model best describes the company’s overall environment?

  • A. Private cloud deployment
  • B. Public cloud deployment
  • C. Hybrid cloud deployment
  • D. Multicloud architecture

Best answer: C

What this tests: Information Systems and Data Management

Explanation: Hybrid cloud deployment is correct because the company uses both a private-cloud environment reserved for its exclusive use and a public-cloud SaaS service shared with other customers. The connected authentication and data transfers show the two environments operate together.

Cloud deployment models are distinguished mainly by who has access to the infrastructure and how control is allocated. A private cloud is dedicated to a single organization, even if a third party hosts it, so exclusive-use servers and company-defined security settings point to private cloud. A public cloud serves multiple customers on shared infrastructure, so the shared SaaS expense platform is public cloud. When an organization uses both private and public cloud resources as part of one connected environment, the overall deployment model is hybrid cloud. The integration facts matter here because they show the private and public portions are part of the same operating model rather than unrelated services.

  • Public cloud deployment is too narrow because the general ledger environment is reserved for one company rather than shared among many customers.
  • Private cloud deployment is incomplete because the company also relies on a shared SaaS platform delivered over the internet.
  • Hybrid cloud deployment fits because exclusive-use cloud resources and shared cloud services are both present and connected.
  • Multicloud architecture is tempting, but the decisive distinction here is the combination of private and public cloud, not simply the use of more than one cloud service.

The environment combines a private cloud reserved for one company with a shared public cloud service that is connected for authentication and data exchange.

Continue with full practice

Use the CPA ISC Practice Test page for the full practice route, mixed-topic practice, timed mock exams, and explanations.


Free review resource

Read the CPA ISC guide on CPAExamsMastery.com, then return to Mastery Exam Prep for timed practice.

Revised on Wednesday, May 13, 2026