Try 10 focused SnowPro Core COF-C02 questions on Operations, Monitoring, and Business Continuity, with explanations, then continue with IT Mastery.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
| Field | Detail |
|---|---|
| Exam route | SnowPro Core COF-C02 |
| Topic area | Operations, Monitoring, and Business Continuity |
| Blueprint weight | 10% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Operations, Monitoring, and Business Continuity for SnowPro Core COF-C02. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 10% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Operations, Monitoring, and Business Continuity
A data provider plans to publish a sales analytics data product to many external customers using Snowflake data sharing. They need a stable contract, clear documentation, and secure exposure of only curated objects. Which of the following actions will meet these requirements? (Select TWO.)
Options:
A. Add detailed comments to the share, database, schemas, tables, and columns describing business meaning, units, refresh cadence, and change policy, and keep this metadata updated with each release.
B. Allow each consumer account to create a zero-copy clone of the internal production database instead of using shares, letting them manage schema changes independently.
C. Automatically update shared views to reflect any ETL-driven column renames or datatype changes as soon as pipelines finish, ensuring that consumers always see the latest schema.
D. Expose the entire production database in a single share so consumers automatically see all tables, including staging and intermediate objects, as pipelines evolve.
E. Create a dedicated database and shared schema that expose only secure, read-only views over internal tables, avoid removing or renaming existing columns, and publish new versioned views when a breaking change is required.
Correct answers: A and E
Explanation: Designing a good Snowflake data product for sharing means treating it as a stable, governed interface. Providers should expose only curated, secure objects; evolve schemas in a controlled, versioned way; and embed clear documentation and change policies in metadata so consumers know what to expect.
Using secure views in a dedicated database or schema lets providers shield internal implementation tables while presenting a consistent set of columns and data types. When a breaking change is needed, publishing a new version of a view or schema and deprecating the old one over time protects consumers from sudden failures.
Comments on shares, databases, schemas, tables, and columns are first-class metadata in Snowflake. When used consistently to describe business meaning, units, refresh cadence, and how changes are managed, they turn the shared objects into a self-documenting product that is easier and safer for consumers to adopt.
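As a minimal provider-side sketch of this pattern: all names below (sales_product, curated, daily_sales_v1, sales_share, prod_db, myorg.consumer_account) are hypothetical, not from the question; the statements are standard Snowflake DDL for secure views, comments, and shares.

```sql
-- Dedicated database and schema for the data product
CREATE DATABASE IF NOT EXISTS sales_product;
CREATE SCHEMA IF NOT EXISTS sales_product.curated;

-- Secure view shields the internal table and carries its own contract
-- documentation as a comment
CREATE SECURE VIEW sales_product.curated.daily_sales_v1
  COMMENT = 'Daily net sales in USD. Refreshed 06:00 UTC. Breaking changes ship as a new _v2 view.'
AS
  SELECT sale_date, region, net_sales_usd
  FROM prod_db.internal.daily_sales;

-- Share exposes only the curated objects, never the base tables
CREATE SHARE sales_share
  COMMENT = 'Sales analytics data product; see object comments for units and change policy.';
GRANT USAGE ON DATABASE sales_product TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_product.curated TO SHARE sales_share;

-- Because the secure view references a different database, the share
-- needs reference usage on that database before the SELECT grant
GRANT REFERENCE_USAGE ON DATABASE prod_db TO SHARE sales_share;
GRANT SELECT ON VIEW sales_product.curated.daily_sales_v1 TO SHARE sales_share;

-- Entitle consumer accounts to the share
ALTER SHARE sales_share ADD ACCOUNTS = myorg.consumer_account;
```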
Topic: Operations, Monitoring, and Business Continuity
Which TWO statements about encryption key management and rotation in the Snowflake AI Data Cloud are TRUE? (Select TWO.)
Options:
A. Snowflake automatically rotates encryption keys on a regular basis without requiring user intervention.
B. Key rotation in Snowflake requires customers to export their keys from an external hardware security module into Snowflake for each rotation event.
C. When an encryption key is rotated in Snowflake, all existing data must be re-encrypted immediately, which can cause long maintenance windows.
D. For best performance, production Snowflake accounts should disable automatic key rotation and perform manual key updates only when necessary.
E. Snowflake uses a hierarchical key model where higher-level keys encrypt lower-level data keys rather than encrypting all data directly.
Correct answers: A and E
Explanation: Snowflake provides continuous data protection by encrypting data at rest using a hierarchical key model and handling key management internally. In this model, higher-level keys protect lower-level data keys, and Snowflake automatically rotates keys on a regular schedule and on key lifecycle events. Rotation is designed to be transparent and to minimize operational impact, so customers benefit from strong security without needing to script or schedule their own key rotation processes or manage provider-specific key infrastructure details.
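Key rotation itself has no customer-facing switch, which is the point of answer A. The only adjacent, optional control worth knowing (an Enterprise Edition account parameter, offered here as a hedged aside rather than something the question requires) is annual rekeying of data protected by retired keys:

```sql
-- Optional, Enterprise Edition and above: re-encrypt data whose keys
-- have been retired, on a roughly annual cycle. This is separate from
-- the automatic key rotation above, which needs no configuration.
ALTER ACCOUNT SET PERIODIC_DATA_REKEYING = TRUE;
```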
Topic: Operations, Monitoring, and Business Continuity
A data platform team is deciding between publishing datasets on the Snowflake Marketplace or setting up a private Data Exchange for a limited set of partner accounts. They want to understand the discovery scope of each option.
Which of the following statements about Snowflake Marketplace and private Data Exchanges is INCORRECT?
Options:
A. Snowflake Marketplace is appropriate when the provider wants broad visibility of data products to many potential consumers across different organizations.
B. Listings in a private Data Exchange are discoverable only by the exchange’s member accounts and are not searchable by the wider Snowflake Marketplace audience.
C. A private Data Exchange allows any Snowflake account to automatically discover and request access to its listings without being onboarded as a member first.
D. A private Data Exchange can be used to present curated listings that are only intended for a restricted group of invited member accounts, such as subsidiaries or select partners.
Best answer: C
Explanation: Snowflake provides two complementary mechanisms for distributing data: Snowflake Marketplace and private Data Exchanges.
Snowflake Marketplace is intended for broad, cross-account discovery of data products. Providers publish listings that can be discovered by many Snowflake accounts, subject to whatever entitlement or approval workflow the provider sets. The key idea is that the Marketplace enables wide visibility and discovery across organizations.
Private Data Exchanges, in contrast, are designed for controlled communities such as a company’s subsidiaries, business units, or select external partners. Only accounts that have been explicitly onboarded as members of the private exchange can see its listings. This keeps discovery and access constrained to that invited group.
In this question, the incorrect statement is the one that describes a private Data Exchange as being automatically discoverable by any Snowflake account without prior onboarding. That behavior corresponds more to the broad discovery model of the Marketplace, not to a private exchange with a restricted membership model.
Topic: Operations, Monitoring, and Business Continuity
An architect is planning disaster recovery for critical Snowflake objects by using a failover group between a primary and secondary account. Which of the following actions is NOT an appropriate high-level step in setting up this failover group?
Options:
A. Identify critical databases, shares, and roles and add them to the failover group so they replicate to the secondary account.
B. Configure a recurring replication schedule from the primary to the secondary account and monitor the failover group replication status.
C. Add mission-critical virtual warehouses to the failover group so that both data and compute resources automatically replicate to the secondary account.
D. Plan and periodically test disaster recovery by promoting the failover group in the secondary account and directing workloads there during a drill.
Best answer: C
Explanation: Failover groups in the Snowflake AI Data Cloud provide account-level disaster recovery for logical objects such as databases, shares, and roles. They work by replicating these objects from a primary account to a secondary account and allowing controlled promotion (failover) in the secondary when needed.
A proper high-level process includes identifying which supported objects are critical, adding them to the failover group, configuring and monitoring replication, and periodically testing failover to validate recovery objectives. Compute resources like virtual warehouses are not part of failover groups and must be provisioned separately in each account or region as part of the broader DR design.
Because virtual warehouses are not replicated by failover groups, any step that assumes warehouses are added to the group and automatically replicated is incorrect and should be avoided when designing a failover strategy.
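A minimal sketch of the primary-account setup, using hypothetical names (dr_group, analytics_db, sales_share, myorg.dr_account); note the object types cover databases, shares, and roles, with no warehouses in the group:

```sql
-- Define the failover group on the primary account with a recurring
-- replication schedule
CREATE FAILOVER GROUP dr_group
  OBJECT_TYPES = DATABASES, SHARES, ROLES
  ALLOWED_DATABASES = analytics_db
  ALLOWED_SHARES = sales_share
  ALLOWED_ACCOUNTS = myorg.dr_account
  REPLICATION_SCHEDULE = '10 MINUTE';

-- Confirm the group and its replication status
SHOW FAILOVER GROUPS;
```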
Topic: Operations, Monitoring, and Business Continuity
A team is configuring cross-region database replication and a failover group for a mission-critical analytics workload that must minimize data loss and recover quickly during a regional outage.
Which of the following approaches to configuring replication and failover is INCORRECT with respect to meeting RPO and RTO goals?
Options:
A. Running replication frequently (for example, every few minutes) to keep the secondary closely aligned with the primary and reduce potential data loss.
B. Understanding that RTO depends on how quickly failover is executed and clients are redirected to the secondary account or region.
C. Configuring replication to run only once per week while assuming almost no data loss and near-instant recovery during an outage.
D. Recognizing that RPO is mainly determined by how often replication runs and how much data can be lost between scheduled replications.
Best answer: C
Explanation: In Snowflake, cross-region or cross-account replication combined with failover groups enables disaster recovery for critical workloads. Two key metrics are recovery point objective (RPO) and recovery time objective (RTO).
RPO describes how much data loss is acceptable. In Snowflake’s replication model, RPO is primarily influenced by how often replication is executed. If replication runs every few minutes, the secondary environment will lag the primary by at most that interval, so potential data loss is small. If replication runs only once per week, a failure just before the next scheduled run can lose nearly a week of changes.
RTO describes how quickly you can restore service. With Snowflake, this is driven by how quickly you can trigger failover to the secondary account or region and redirect clients, tools, and applications to the new primary.
The problematic choice is the one that sets replication to run only once per week yet claims near-zero data loss and near-instant recovery. That schedule creates a large data-loss window and does not support the stated RPO/RTO goals for a mission-critical workload.
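The replication interval is the lever that sets this bound. A one-line sketch, reusing the hypothetical dr_group from the previous question’s example:

```sql
-- Bounding RPO at roughly the replication interval (~10 minutes here)
ALTER FAILOVER GROUP dr_group SET REPLICATION_SCHEDULE = '10 MINUTE';
```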
Topic: Operations, Monitoring, and Business Continuity
During a security review, a compliance officer asks how Snowflake protects data in a production account. They worry that table data and query traffic might be unencrypted because no explicit encryption settings were configured. What is the most appropriate response?
Options:
A. Suggest recreating all critical tables with the SECURE keyword so that table data is stored in encrypted form.
B. Recommend enabling encryption per database using an account-level parameter so that stored data becomes encrypted.
C. Explain that Snowflake automatically encrypts all data at rest and all network traffic in transit by default, so no additional encryption setting is required.
D. Propose moving all data into an external stage and managing encryption manually on the external storage system.
Best answer: C
Explanation: Snowflake AI Data Cloud provides built-in, always-on encryption for data protection. All data stored in Snowflake is encrypted at rest, and all network traffic between clients and Snowflake is encrypted in transit by default. This behavior does not require customer configuration and cannot be disabled.
In a security review, the correct response is to explain this default behavior and, if needed, provide documentation. There is no need to enable a special flag on databases, schemas, or tables to turn encryption on, and no need to move data elsewhere just to achieve encryption.
Other security features in Snowflake—such as secure views or network policies—address different concerns (governance, data leakage, or network access), but they do not change the fundamental fact that encryption is already always applied to stored data and traffic.
Topic: Operations, Monitoring, and Business Continuity
An organization runs its most critical Snowflake AI Data Cloud workloads in a primary region and wants cross-region disaster recovery. They require account-level protection for selected databases and shared objects, centralized failover, and minimal manual steps during an outage. They decide to use a failover group. Which high-level approach BEST meets these requirements?
Options:
A. Create reader accounts in a secondary region, grant them access to all critical data, and instruct users to connect to these reader accounts during a primary-region outage instead of configuring any replication or failover groups.
B. Configure individual database replication for each critical database to a secondary account, manually recreate roles and grants there, and switch applications to the replicated databases one by one if the primary region becomes unavailable.
C. Identify the critical databases and shared objects, create a failover group in the primary account including those objects, configure that failover group to replicate to a secondary account/region on a schedule, and use the failover group to perform controlled failover and failback during an outage.
D. Use secure data sharing from the primary account to a secondary account and create nightly zero-copy clones of all critical databases there, relying on Time Travel for recovery if the primary region fails.
Best answer: C
Explanation: Failover groups in the Snowflake AI Data Cloud provide disaster recovery protection for a set of critical objects (such as databases, shares, and roles) across regions or accounts. They are designed to give you a central unit of replication and failover, so you can protect important workloads with predictable procedures and minimal manual steps.
To set up a failover group conceptually, you first identify which objects must be protected for disaster recovery. Then you define a failover group in the primary account and add those databases, shares, and other supported objects to it. Next, you configure replication of that failover group to a secondary account or region, typically on a scheduled basis. In the event of an outage or planned switchover, you use the failover group itself as the unit of failover and later failback, rather than handling each database or object individually.
This approach gives you centralized control, consistent RPO/RTO for the grouped objects, and greatly reduces the number of manual steps required during a disaster scenario compared with managing each database separately or relying on ad hoc cloning or sharing.
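The secondary-account half of the setup, plus the promotion step used during a drill or real outage, might look like the following sketch (dr_group and the account identifiers are hypothetical, matching the earlier example):

```sql
-- On the secondary account: create a replica of the primary's group
CREATE FAILOVER GROUP dr_group
  AS REPLICA OF myorg.primary_account.dr_group;

-- During an outage or planned drill, promote the secondary to primary
ALTER FAILOVER GROUP dr_group PRIMARY;

-- Failback is the same promotion, run in the original account once it
-- is healthy and has caught up.
```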
Topic: Operations, Monitoring, and Business Continuity
A partner has granted your Snowflake account access to a secure share, which appears as a read-only database named PARTNER_SALES in your account. Your data governance team requires that the provider retain control and maintenance of the data, that analysts have read-only access, and that the shared data be joinable with local data without creating duplicate copies.
As the account administrator, what is the most appropriate way to enable analysts to use the shared data for queries and analytics?
Options:
A. Ask the provider to create a Snowflake reader account for your analysts to query PARTNER_SALES directly there, and periodically export results back into your main account for joins.
B. Use COPY INTO to unload all tables from PARTNER_SALES to external cloud storage, then reload them into local tables for reporting and analytics.
C. Clone the PARTNER_SALES database into a writable local database and grant analysts full privileges on the cloned objects so they can customize the shared data as needed.
D. Grant analysts access to a virtual warehouse and SELECT privileges on PARTNER_SALES, and have them create their own views and derived tables in a separate local database that reference the shared tables.
Best answer: D
Explanation: In Snowflake’s data sharing model, the provider creates a secure share and grants it to a consumer account. In the consumer account, the share appears as a read-only database. Although the consumer cannot modify or drop the provider’s objects, they can query the shared database just like any other database in their account, using their own virtual warehouses.
To build analytics and semantic layers without copying data, the consumer typically grants roles SELECT access on the shared database and then creates local objects (such as views, secure views, and derived tables) in consumer-owned databases that reference the shared tables. This keeps the provider’s data authoritative and centrally maintained while allowing the consumer full flexibility in how they query and combine it with their own data.
This pattern satisfies the governance requirements (provider retains control; consumer has read-only access), supports joins with local data, and avoids data duplication and extra operational burden.
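A consumer-side sketch of answer D. One practical detail the option glosses over: read access to an entire shared database is granted with IMPORTED PRIVILEGES rather than object-level SELECT. The warehouse, role, local database, and table names below are hypothetical; PARTNER_SALES is the shared database from the question:

```sql
-- Give analysts compute plus read access to the shared database
GRANT USAGE ON WAREHOUSE analytics_wh TO ROLE analyst_role;
GRANT IMPORTED PRIVILEGES ON DATABASE partner_sales TO ROLE analyst_role;

-- Derived objects live in a consumer-owned database and simply
-- reference the read-only shared tables
CREATE DATABASE IF NOT EXISTS local_analytics;
CREATE VIEW local_analytics.public.partner_sales_enriched AS
  SELECT o.order_id, o.amount, c.segment
  FROM partner_sales.public.orders AS o
  JOIN local_analytics.public.customer_segments AS c
    ON o.customer_id = c.customer_id;
```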
Topic: Operations, Monitoring, and Business Continuity
Which statement BEST describes a reader account in Snowflake data sharing?
Options:
A. A reader account is a temporary database clone that allows a consumer to preview shared data without having any Snowflake account.
B. A reader account is a special provider account that can create shares but cannot host its own data.
C. A reader account is a Snowflake-managed account that a data provider creates for consumers who do not have their own Snowflake account, allowing them to query shared data.
D. A reader account is a standard Snowflake account owned and billed independently by the data consumer, used to import a provider’s shared data as a full copy.
Best answer: C
Explanation: In Snowflake data sharing, a reader account is a fully functional Snowflake account that is created and managed by a data provider specifically for consumers who do not have their own Snowflake account. The reader account allows those consumers to connect, run queries, and build views or other objects over the shared data while the provider retains administrative control.
In contrast, a standard consumer account is an ordinary Snowflake account owned by the consumer’s organization, which directly receives the share and manages its own compute, users, and security. Provider accounts own datasets and create shares, but do not themselves function as reader accounts.
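Reader accounts are created by the provider with ordinary DDL. A minimal sketch, with placeholder names and credentials:

```sql
-- Provider creates and manages the reader account
CREATE MANAGED ACCOUNT partner_reader
  ADMIN_NAME = reader_admin,
  ADMIN_PASSWORD = 'Choose-A-Strong-Password-1!',
  TYPE = READER;

-- The reader account is then added to a share like any other consumer
-- account (share name and account locator are placeholders):
-- ALTER SHARE sales_share ADD ACCOUNTS = <reader_account_locator>;
```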
Topic: Operations, Monitoring, and Business Continuity
Which TWO statements correctly describe how Snowflake replication and failover settings influence recovery point objective (RPO) and recovery time objective (RTO)? (Select TWO.)
Options:
A. Configuring more frequent replication reduces the amount of data that could be lost if a failover is required.
B. Because Snowflake replication is synchronous, the RPO between primary and secondary regions is always zero.
C. Increasing the size of client virtual warehouses directly improves RPO for replicated databases, even if replication frequency is unchanged.
D. RTO includes the time needed to promote the secondary to primary and redirect client connections to the new endpoint.
E. After replication is enabled, the secondary is automatically writable at all times without any promotion step during failover.
Correct answers: A and D
Explanation: RPO and RTO are the key measures for business continuity in Snowflake when using replication and failover.
RPO (recovery point objective) describes how much data an organization might lose in a disaster, measured as the time gap between the last successful replication and the failure. Because Snowflake database and account replication are asynchronous, there can be a delay between changes on the primary and their appearance on the secondary. Running replication more frequently reduces this gap and therefore reduces potential data loss.
RTO (recovery time objective) describes how quickly service can be restored after an outage. In Snowflake, this includes operational steps: promoting a secondary database or failover group to primary and updating applications, users, or tools to connect to the new endpoint or region. These actions determine how long it takes to resume normal operations using the replicated copy.
Virtual warehouse size mainly affects query performance and concurrency, not replication lag. Similarly, secondaries remain read-only until explicitly promoted, to avoid write conflicts between regions. Understanding these behaviors helps you tune replication schedules and failover procedures to meet business continuity goals.
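Two monitoring commands tie these ideas together: one shows each group’s configuration and state, the other shows refresh history so you can compare actual replication lag against your RPO target. The group name is the same hypothetical dr_group used earlier:

```sql
-- Configuration and current state of all failover groups
SHOW FAILOVER GROUPS;

-- Per-refresh history for one group: start and end times reveal how
-- far the secondary can trail the primary in practice
SELECT *
FROM TABLE(INFORMATION_SCHEMA.REPLICATION_GROUP_REFRESH_HISTORY('dr_group'));
```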
Use the SnowPro Core COF-C02 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the SnowPro Core COF-C02 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.