Prepare for SnowPro Core Certification (COF-C02) with free sample questions, a full-length diagnostic, topic drills, timed practice, and detailed explanations covering Snowflake architecture, security, performance, cost, loading, transformations, and sharing in IT Mastery.
COF-C02 is Snowflake’s SnowPro Core certification for candidates who need strong platform fundamentals across architecture, security, performance, cost, and core data workflows. If you are searching for COF-C02 sample questions, a practice test, mock exam, or exam simulator, this is the main IT Mastery page: start on the web and continue on iOS or Android with the same IT Mastery account.
Start a practice session for SnowPro Core Certification (COF-C02) below, or open the full app in a new tab for the best experience and navigate with swipes/gestures or the mouse wheel, just like on your phone or tablet.
Open Full App in a New Tab

A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same app-family account they use on web and mobile.
Prefer to practice on your phone or tablet? Download the IT Mastery – AWS, Azure, GCP & CompTIA exam prep app for iOS or IT Mastery app on Google Play (Android) and use the same IT Mastery account across web and mobile.
Free diagnostic: Try the 100-question SnowPro Core COF-C02 full-length practice exam before subscribing. Use it as one Snowflake Core baseline, then return to IT Mastery for timed mocks, topic drills, explanations, and the full SnowPro Core question bank.
COF-C02 questions usually reward the option that follows Snowflake defaults and platform-native patterns instead of forcing warehouse, security, or data-movement decisions from other data-platform habits.
| Domain | Weight |
|---|---|
| Snowflake AI Data Cloud Features and Architecture | 24% |
| Account Access and Security | 18% |
| Performance and Cost Optimization Concepts | 16% |
| Data Loading and Unloading | 12% |
| Data Transformations | 18% |
| Data Protection and Data Sharing | 12% |
Use these filters when several Snowflake features sound close:
| Day | Practice focus |
|---|---|
| 7 | Take the free full-length diagnostic and group misses by architecture, security, loading, transformation, or performance. |
| 6 | Drill Snowflake object hierarchy, warehouses, storage, compute, databases, schemas, and account concepts. |
| 5 | Drill RBAC, grants, ownership, masking, network controls, and account-security scenarios. |
| 4 | Drill stages, file formats, COPY, Snowpipe, streams, tasks, and loading/unloading behavior. |
| 3 | Drill performance, warehouse sizing, clustering, pruning, caching, and query-profile interpretation. |
| 2 | Complete a timed mixed set and explain the Snowflake object or compute boundary behind each miss. |
| 1 | Review weak feature distinctions; avoid cramming rarely used syntax late. |
If several unseen mixed attempts score above roughly 75% and you can explain the Snowflake object, role, loading, or warehouse behavior behind each answer, you are likely ready. Further practice should sharpen platform judgment, not memorization of repeated question stems.
Use these child pages when you want focused IT Mastery practice before returning to mixed sets and timed mocks.
Need concept review first? Read the SnowPro Core COF-C02 Cheat Sheet on Tech Exam Lexicon, then return here for timed mocks, topic drills, and full IT Mastery practice.
Topic: Domain 4: Query processing and performance optimization
Which Snowflake command correctly lists all files currently stored in an internal named stage called mystage?
Options:
- SELECT * FROM @mystage;
- SHOW FILES IN STAGE mystage;
- DESCRIBE STAGE mystage;
- LIST @mystage;

Best answer: D
Explanation: The choice that uses LIST @mystage; is correct because:
- LIST is the dedicated Snowflake command for listing files in stages.
- The @ prefix correctly indicates a stage reference.

Topic: Domain 4: Query processing and performance optimization
A team receives small JSON files into external cloud storage every 5 minutes throughout the day. They need new data available in Snowflake within 15 minutes on average, keep daily loading compute at or below 8 credits, and minimize operational management.
They use a virtual warehouse that consumes 0.3 credits for each scheduled COPY INTO run. Snowpipe continuous loading would consume 5 credits per day.
Which approach BEST meets all requirements?
(Note: There are 24 hours in a day. Assume average data latency is half of the load interval.)
Options:
Best answer: A
Explanation: Using Snowpipe continuous loading on the external stage meets the average latency requirement by ingesting data soon after it lands, stays within the 8-credit budget at 5 credits per day, and reduces operational overhead by eliminating the need to orchestrate and maintain frequent scheduled COPY jobs and warehouse runtimes.
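The credit arithmetic behind this answer can be checked directly. A minimal sketch, assuming a 24-hour day and average latency equal to half the load interval (as the note states):

```python
# Scheduled COPY INTO: to meet a 15-minute average latency, runs must occur at
# least every 30 minutes (average latency = interval / 2).
interval_min = 30                          # sparsest schedule that meets latency
runs_per_day = 24 * 60 // interval_min     # 48 runs per day
copy_credits = runs_per_day * 0.3          # ~14.4 credits/day, over the 8-credit cap

# Snowpipe continuous loading: given as 5 credits/day with near-immediate latency.
snowpipe_credits = 5                       # within the 8-credit budget

print(runs_per_day, copy_credits > 8, snowpipe_credits <= 8)
```

Even the sparsest COPY schedule that still meets the latency requirement costs roughly 14.4 credits per day, so Snowpipe is the only approach that satisfies both the latency and the budget constraints.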
Topic: Domain 5: Data sharing, collaboration, and marketplace
A data engineer compares query performance on a structured table and a directory table over unstructured image files.
Exhibit: Query history excerpt
| QUERY_TEXT | DATA_TYPE | BYTES_SCANNED | ROWS_PRODUCED | EXECUTION_TIME_MS |
|---|---|---|---|---|
| SELECT COUNT(*) FROM analytics.orders; | structured | 52,428,800 | 12,500,000 | 380 |
| SELECT file_name, last_modified FROM img_dir WHERE file_extension='jpg'; | unstructured | 1,048,576 | 10 | 2,750 |
Based on this exhibit, what is the most appropriate expectation when querying unstructured data in Snowflake?
Options:
Best answer: D
Explanation: The choice that states that unstructured file listing and filtering can have noticeably higher latency than structured queries, and that they are best suited for occasional metadata-style access, directly matches the exhibit.
The unstructured query has EXECUTION_TIME_MS of 2,750 while scanning 1,048,576 bytes and producing 10 rows. The structured query has EXECUTION_TIME_MS of 380 while scanning 52,428,800 bytes and producing 12,500,000 rows. This shows that lower scanned bytes and smaller result sets for unstructured data do not translate into lower latency. Instead, there is extra overhead, so it is reasonable to expect higher latency and to use such queries sparingly for metadata and file discovery, not as a main low-latency analytics path.
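The per-byte overhead implied by the exhibit can be made explicit; a quick sketch using only the numbers from the table above:

```python
# Effective scan rate (MB/s) for each query in the exhibit
structured = {"bytes": 52_428_800, "ms": 380}      # 50 MB scanned in 380 ms
unstructured = {"bytes": 1_048_576, "ms": 2_750}   # 1 MB scanned in 2,750 ms

def mb_per_s(q):
    return (q["bytes"] / 1_048_576) / (q["ms"] / 1000)

print(round(mb_per_s(structured), 1))    # ~131.6 MB/s
print(round(mb_per_s(unstructured), 2))  # ~0.36 MB/s
```

Despite scanning 50x less data, the unstructured query is orders of magnitude slower per byte, which supports treating directory-table queries as occasional metadata access rather than a low-latency analytics path.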
Topic: Domain 6: Operations, monitoring, and business continuity
A team instantly creates full-size test databases from production without using additional storage, because only metadata is duplicated while the underlying data storage is shared. Which Snowflake concept does this practice BEST represent?
Options:
Best answer: D
Explanation: The option describing zero-copy cloning for fast, cost-efficient environment creation directly matches the stem’s details: the clone is created instantly at full logical size, only metadata is duplicated at creation time, and the underlying data storage is shared until the source or clone changes.
These behaviors align exactly with the described practice, making zero-copy cloning the concept the scenario represents.
Topic: Domain 1: Snowflake architecture and key features
Which statement correctly describes how Snowflake shares provide controlled read-only access to data for other Snowflake accounts?
Options:
Best answer: B
Explanation: The choice stating that a data provider defines a share with specific database objects, and consumers create a read-only database from that share without copying the underlying data is correct because it captures three essential facts: the provider controls exactly which objects are shared, consumer access is read-only, and no data is copied or moved between accounts.
These points reflect how secure data sharing works conceptually in the Snowflake AI Data Cloud.
Topic: Domain 2: Account setup, security, and governance
Which statement BEST describes a Snowflake account in the Snowflake AI Data Cloud?
Options:
Best answer: D
Explanation: The statement that an account is a logically isolated container for databases, schemas, virtual warehouses, users, roles, and related objects correctly captures both scope and purpose. It emphasizes that the account is the main isolation boundary and that it includes data objects, compute, and security configuration used by that environment or tenant.
Topic: Domain 2: Account setup, security, and governance
A security engineer reviews recent Snowflake login activity after enabling single sign-on with an external identity provider.
Exhibit:
| EVENT_TIMESTAMP | USER_NAME | AUTHENTICATION_METHOD | IS_SUCCESS |
|---|---|---|---|
| 2025-12-01 09:02:11 | ALICE | FEDERATED | 1 |
| 2025-12-01 09:05:47 | BOB | FEDERATED | 1 |
| 2025-12-01 10:13:02 | CAROL | FEDERATED | 1 |
Based on the exhibit, which statement BEST describes what federated authentication is providing for this Snowflake account?
Options:
Best answer: B
Explanation: The choice describing that an external identity provider performs authentication, and that Snowflake trusts those logins to enable centralized SSO and external password policies, matches the exhibit’s AUTHENTICATION_METHOD = FEDERATED values. This is exactly what federated authentication provides: centralized control of user sign-in and credentials outside Snowflake while still granting access to Snowflake once the identity is verified.
Topic: Domain 1: Snowflake architecture and key features
Which Snowflake interface is most appropriate for analysts who want to run ad-hoc queries, organize worksheets, and view dashboards and usage insights in a modern web browser UI?
Options:
Best answer: A
Explanation: The choice that specifies Snowsight is correct because Snowsight is Snowflake’s modern, browser-based UI that combines worksheets, dashboards, and account insights, which directly matches the needs of analysts running ad-hoc queries and exploring data visually.
Topic: Domain 6: Operations, monitoring, and business continuity
Which TWO statements about cost and governance considerations for Snowflake continuous data protection features are INCORRECT? (Select TWO.)
Options:
Correct answers: A and C
Explanation: The statements claiming that zero-copy clones immediately duplicate all storage and that Fail-safe can be turned off per table are incorrect.
Zero-copy cloning initially references existing micro-partitions, so storage is not doubled at creation. Storage only grows as changes are made to the source or clone.
Fail-safe is not configurable on a per-table basis and cannot simply be disabled to remove its storage overhead; it is a built-in safety net after Time Travel. These inaccuracies make those two statements the correct choices in a negative-polarity question.
Topic: Domain 5: Data sharing, collaboration, and marketplace
You store unstructured documents in an internal stage and expose them through a directory table. Compliance requires that text extraction and keyword detection run entirely inside Snowflake to minimize data movement. Which approach is the BEST fit?
Options:
Best answer: C
Explanation: The choice that creates a JavaScript UDF on top of text returned by a SQL file function is best because it keeps the entire pipeline inside Snowflake. The documents remain in the internal stage, the file function exposes their contents to SQL, and the UDF encapsulates custom logic for text extraction or keyword detection. This directly satisfies the single deciding factor in the scenario: minimizing data movement by performing all processing within Snowflake.
Topic: Domain 5: Data sharing, collaboration, and marketplace
Which statement correctly describes how scalar functions and table functions differ in Snowflake transformations?
Options:
Best answer: D
Explanation: The option that states a scalar function returns a single value per input row, while a table function returns a set of rows and columns usable like a table in the FROM clause is correct because it focuses exactly on the return shape: scalar = single value, table function = tabular result set. This aligns with how queries in Snowflake integrate scalar functions into expressions and table functions into the FROM/JOIN portions of a SELECT statement.
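The return-shape distinction is not Snowflake-specific; a minimal Python analogy (hypothetical functions, not Snowflake APIs) illustrates the same contract:

```python
# Scalar-style function: one value per input row
def with_tax(amount: float) -> float:
    return amount + amount * 0.25  # 0.25 chosen for exact binary arithmetic

# Table-style function: a set of rows, usable like a table in a FROM clause
def split_to_rows(csv_line: str):
    for seq, value in enumerate(csv_line.split(",")):
        yield {"seq": seq, "value": value}

rows = [{"amount": 100.0}, {"amount": 250.0}]
scalar_out = [with_tax(r["amount"]) for r in rows]  # one output value per row
table_out = list(split_to_rows("a,b,c"))            # three rows from one input
print(scalar_out, len(table_out))
```

In Snowflake SQL the same split shows up in where each function may appear: scalar functions inside expressions, table functions in the FROM clause (typically wrapped in TABLE(...)).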
Topic: Domain 1: Snowflake architecture and key features
A data engineering team currently exports large tables from the Snowflake AI Data Cloud to an external processing cluster to run complex transformation code. They want to redesign the pipeline using the requirements shown below.
| Requirement ID | Detail |
|---|---|
| R1 | Reuse existing general-purpose programming language skills (e.g., Python, Java) instead of only SQL. |
| R2 | Keep data and processing inside Snowflake to minimize data movement. |
| R3 | Build maintainable, testable code-based data pipelines. |
| R4 | Use DataFrame-style operations while still leveraging Snowflake tables. |
Based on the exhibit, which Snowflake feature is MOST appropriate to implement the data processing logic?
Options:
Best answer: D
Explanation: The choice to use Snowpark to build data pipelines in a supported programming language that execute inside Snowflake compute directly satisfies every line of the exhibit: it reuses general-purpose language skills such as Python and Java (R1), runs the logic inside Snowflake so data does not leave the platform (R2), produces code-based pipelines that can be versioned and tested (R3), and exposes DataFrame-style operations over Snowflake tables (R4).
No other option in the list simultaneously meets all four requirements from the table.
Topic: Domain 5: Data sharing, collaboration, and marketplace
A legal team stores signed contract PDFs as unstructured files in an internal stage and wants a Snowflake dashboard showing file name, size, last modified time, and a searchable text summary from each PDF. They insist on staying fully inside Snowflake SQL, avoiding external services and extra pipelines, and want a simple, maintainable design. Which approach best meets these requirements?
Options:
Best answer: B
Explanation: The choice that creates a directory table on the stage and queries it with SQL file functions satisfies all constraints: the directory table exposes file name, size, and last-modified metadata directly in SQL, file functions read the PDF contents without leaving Snowflake, and no external services or extra pipelines are needed, keeping the design simple and maintainable.
Topic: Domain 2: Account setup, security, and governance
Which TWO statements about Snowflake network policies for restricting client IP addresses are correct? Assume a standard Snowflake AI Data Cloud deployment. (Select TWO.)
Options:
Correct answers: A and E
Explanation: The statement that network policies define which client IP addresses are allowed or blocked is correct because their core purpose is to enforce IP-based connection restrictions at the account and/or user level.
The statement that they provide an additional perimeter control against compromised credentials is also correct, as limiting access to trusted IP ranges helps prevent successful logins from unexpected locations even when credentials are known.
Topic: Domain 1: Snowflake architecture and key features
A small analytics team uses a single, always-on Large virtual warehouse for all workloads. Their data volume will double next quarter, but the compute credit budget must stay flat. They want to maintain performance and control costs by leveraging Snowflake’s separation of storage and compute. Which change is MOST appropriate?
Options:
Best answer: D
Explanation: The choice that creates separate, smaller ETL and BI warehouses with auto-suspend and auto-resume correctly applies Snowflake’s separation of storage and compute: each workload receives right-sized, isolated compute, warehouses stop consuming credits when idle, and the doubling data volume grows storage independently without forcing larger always-on compute.
This meets both goals: maintaining performance (via workload isolation) and controlling compute costs (via right-sized, auto-suspending warehouses) while relying on independent storage scaling.
Topic: Domain 1: Snowflake architecture and key features
Which statement best describes how query filters and the way data is ordered in a table affect micro-partition pruning in Snowflake?
Options:
Best answer: A
Explanation: The statement about filtering on well-clustered columns allowing Snowflake to skip entire micro-partitions is correct because it directly describes how Snowflake uses per-column min/max metadata to prune micro-partitions and reduce scanned data when filters align with the stored order/distribution of values.
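Pruning is driven by per-partition min/max metadata. A toy model (fabricated partitions and a simplified overlap test, not Snowflake internals) shows why clustering on the filter column matters:

```python
# Each micro-partition records min/max values per column; a range filter can
# skip any partition whose [min, max] range cannot contain matching rows.
partitions = [
    {"min_date": "2025-01-01", "max_date": "2025-01-31"},
    {"min_date": "2025-02-01", "max_date": "2025-02-28"},
    {"min_date": "2025-03-01", "max_date": "2025-03-31"},
]

def prune(parts, lo, hi):
    """Keep only partitions whose range overlaps the filter range [lo, hi]."""
    return [p for p in parts if not (p["max_date"] < lo or p["min_date"] > hi)]

# WHERE order_date BETWEEN '2025-02-10' AND '2025-02-20'
scanned = prune(partitions, "2025-02-10", "2025-02-20")
print(len(scanned))  # 1 -- two of three partitions are skipped entirely
```

Well-clustered data keeps each partition's range narrow and mostly disjoint, so most partitions fail the overlap test; poorly clustered data produces wide, overlapping ranges, and little can be skipped.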
Topic: Domain 3: Data loading, unloading, and transformation
Which condition most likely causes Snowflake to recompute a query instead of using a previously populated result cache?
Options:
Best answer: A
Explanation: The choice stating that the data in one or more queried tables changed is correct because Snowflake’s result cache is only valid as long as the underlying data remains unchanged. Once DML modifies the tables referenced by the query, the cached result no longer represents the current data, so Snowflake discards it and recomputes the query to return up-to-date results.
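A simplified sketch of this invalidation rule (an illustrative cache keyed on query text plus a table version counter, not Snowflake's actual implementation):

```python
cache = {}
table_version = {"orders": 1}  # bumped whenever DML changes the table

def run_query(sql, tables, compute):
    # The cache key includes the version of every referenced table, so any DML
    # (which bumps a version) makes previously cached results unreachable.
    key = (sql, tuple(table_version[t] for t in tables))
    if key not in cache:
        cache[key] = compute()  # recompute only on a miss
    return cache[key]

run_query("SELECT COUNT(*) FROM orders", ["orders"], lambda: 100)
hit = run_query("SELECT COUNT(*) FROM orders", ["orders"], lambda: 100)    # cache hit
table_version["orders"] += 1  # DML modifies orders
fresh = run_query("SELECT COUNT(*) FROM orders", ["orders"], lambda: 101)  # recomputed
print(hit, fresh)  # 100 101
```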
Topic: Domain 1: Snowflake architecture and key features
In Snowflake, when is a temporary table the most appropriate choice instead of a permanent or transient table?
Options:
Best answer: D
Explanation: The choice that stores intermediate query results needed only within the current session and safe to discard afterward directly matches the design of temporary tables. They exist only for the duration of the session, are not shared across sessions, and are ideal for scratch or working data that does not need long-term persistence or recovery.
Topic: Domain 3: Data loading, unloading, and transformation
A large fact table has about 5 billion rows. Most queries filter on ORDER_DATE, CUSTOMER_ID, or both. The team plans to use clustering to improve micro-partition pruning. Which approach is the LEAST appropriate and should NOT be chosen?
Options:
- Define a composite clustering key on (ORDER_DATE, CUSTOMER_ID) to support frequent filters on both columns.
- Cluster on ORDER_DATE so partitions align with the most common time-based filters.
- Define a clustering key on high-cardinality ORDER_ID plus five additional low-selectivity columns so that “all possible filters are covered.”
- Start with natural load order on ORDER_DATE, and monitor query performance before adding a key.

Best answer: C
Explanation: The choice that defines a clustering key on high-cardinality ORDER_ID plus five additional low-selectivity columns is incorrect because it violates several best practices:
- ORDER_ID is typically unique or near-unique, so micro-partitions will not group many related rows together; pruning benefits are minimal.
- The key ignores the columns queries actually filter on (ORDER_DATE and CUSTOMER_ID), so it fails the primary performance goal.

This is a classic clustering anti-pattern: high complexity and cost with little improvement in pruning.
Topic: Domain 1: Snowflake architecture and key features
A provider account has created an outbound share named SALES_SHARE and wants to give a consumer account controlled access to sales data.
The provider runs:
SHOW GRANTS TO SHARE sales_share;
Result:
| privilege | granted_on | name | grant_option | granted_to | grantee_name |
|---|---|---|---|---|---|
| USAGE | DATABASE | PROD_SALES_DB | false | SHARE | SALES_SHARE |
| SELECT | TABLE | PROD_SALES_DB.SALES_FACT | false | SHARE | SALES_SHARE |
Based on this exhibit, which statement BEST describes what a consumer account can do with this shared data?
Options:
- The consumer can create and manage its own objects in PROD_SALES_DB because USAGE is granted on the database.
- The consumer cannot query SALES_FACT until grant_option is set to true for the SELECT privilege on the table.
- The consumer can query PROD_SALES_DB.SALES_FACT as read-only data but cannot modify the provider’s underlying table.
- The consumer can load new data into SALES_FACT using COPY INTO because SELECT on the table is granted to the share.

Best answer: C
Explanation: The choice describing that the consumer can query PROD_SALES_DB.SALES_FACT as read-only but cannot modify the provider’s table aligns with both the exhibit and Snowflake’s sharing model. The privilege column lists only USAGE on the database and SELECT on the table, which are inherently non-DML privileges, and shares expose those privileges as read-only access in consumer accounts. Since no write privileges are present, the consumer’s interaction with the provider’s data is limited to querying.
Topic: Domain 6: Operations, monitoring, and business continuity
A data engineering team uses the Snowflake AI Data Cloud and wants protection against accidental table drops and incorrect updates. They plan to rely on Snowflake’s data protection features for day-to-day recovery. Which of the following statements about using Snowflake Time Travel for data recovery is INCORRECT?
Options:
- Rely primarily on Fail-safe rather than Time Travel as the day-to-day tool for recovering from accidental deletes and drops.
- Run SELECT queries using AT or BEFORE clauses (for example, AT(TIMESTAMP =>...)) to view historical versions of data without actually restoring or changing the current table.
- Use UNDROP TABLE to restore tables that were accidentally dropped, as long as they are within the configured Time Travel retention period.

Best answer: A
Explanation: The option that recommends primarily relying on Fail-safe instead of Time Travel for routine recovery is incorrect. Fail-safe is not a user-driven, fast recovery mechanism; it is a Snowflake-managed safety net for disaster scenarios and is accessed with Snowflake Support involvement. Normal, self-service recovery from accidental deletes or drops should use Time Travel, making the statement about using Fail-safe as the main tool for routine recovery clearly wrong.
Topic: Domain 5: Data sharing, collaboration, and marketplace
An analytics team repeatedly queries the same set of large image files stored as unstructured data in an internal named stage. Queries are slower than expected and warehouse usage is rising. Which of the following actions/solutions will meet these requirements? (Select TWO.)
Options:
Correct answers: A and B
Explanation: The choice to run queries on a dedicated, always-on virtual warehouse directly leverages Snowflake’s remote file cache. Because the same warehouse continues running between queries, cached contents of frequently read unstructured files can be reused, lowering latency and compute effort.
The choice to create and use a directory table with filters on file path or metadata optimizes access patterns. It ensures that queries only enumerate and access the subset of files required, which reduces unnecessary remote reads and improves both performance and cost efficiency when working with unstructured data.
Topic: Domain 1: Snowflake architecture and key features
An organization is migrating several 10TB transactional tables into Snowflake. The DBA team previously managed table partitions and compression manually and wants to minimize ongoing operational work while still achieving efficient storage and good query performance. Which of the following actions/solutions will meet these requirements? (Select TWO.)
Options:
Load data with COPY and rely on Snowflake to automatically create and maintain micro-partitions and data compression.

Correct answers: A and E
Explanation: The option that loads data with COPY and relies on automatic micro-partitions and compression is correct because it uses Snowflake’s default behavior: data is transparently stored in compressed micro-partitions without any manual tuning. The option that avoids manual reorganization and only considers clustering keys when query patterns demand it is also correct because it acknowledges that Snowflake already manages storage, and that additional tuning should be exception-based, preserving low operational overhead.
Topic: Domain 1: Snowflake architecture and key features
Which statement BEST describes an external stage in the Snowflake AI Data Cloud?
Options:
Best answer: D
Explanation: The choice describing a named object that references files in external cloud storage and is used for loading and unloading data is correct because it captures the key properties of an external stage: it is a Snowflake object, it points to an external location, and it is used as the source or target for COPY and related operations without storing data inside Snowflake-managed stage storage.
Use this map after the sample questions to connect individual items to the Snowflake architecture, loading, security, performance, and cost decisions they test.
```mermaid
flowchart LR
  S1["Analytics platform requirement"] --> S2
  S2["Choose database, schema, warehouse, and role design"] --> S3
  S3["Load and transform data"] --> S4
  S4["Secure, share, and govern access"] --> S5
  S5["Monitor performance, cost, and usage"] --> S6
  S6["Optimize storage, compute, and operations"]
```
| Cue | What to remember |
|---|---|
| Architecture | Separate storage, compute warehouses, cloud services, databases, schemas, and roles. |
| Loading | Know stages, file formats, COPY, Snowpipe, and load history. |
| Security | Use RBAC, masking, network policies, MFA, encryption, and secure sharing. |
| Performance | Review warehouse sizing, clustering, pruning, caching, and query profile evidence. |
| Cost | Control warehouse auto-suspend, scaling, storage, retention, and resource monitors. |