SnowPro Core COF-C02: Architecture and Features

Try 10 focused SnowPro Core COF-C02 questions on Architecture and Features, with explanations, then continue with IT Mastery.

Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Try SnowPro Core COF-C02 on Web | View full SnowPro Core COF-C02 practice page

Topic snapshot

  • Exam route: SnowPro Core COF-C02
  • Topic area: Snowflake Architecture and Key Features
  • Blueprint weight: 20%
  • Page purpose: focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Snowflake Architecture and Key Features for SnowPro Core COF-C02. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.

  • First attempt. Do: answer without checking the explanation first. Record: the fact, rule, calculation, or judgment point that controlled your answer.
  • Review. Do: read the explanation even when you were correct. Record: why the best answer is stronger than the closest distractor.
  • Repair. Do: repeat only missed or uncertain items after a short break. Record: the pattern behind misses, not the answer letter.
  • Transfer. Do: return to mixed practice once the topic feels stable. Record: whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 20% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Snowflake Architecture and Key Features

A company’s BI tool connects to the Snowflake AI Data Cloud using a shared ACCOUNTADMIN user over the public internet with simple username/password authentication. Security wants stronger controls without breaking BI connectivity. Which change is the most appropriate first step?

Options:

  • A. Rotate the shared ACCOUNTADMIN password monthly and email the new password to all BI users so they can update their BI connections.

  • B. Change the BI tool connection to use the Snowsight web UI instead of a direct Snowflake account URL and keep the same credentials.

  • C. Create a dedicated BI service user with a least-privilege custom role, restrict access via a network policy to the BI tool’s IP range, and update the BI connection to use this user and the account URL.

  • D. Keep using the shared ACCOUNTADMIN user but enable MFA and require all BI users to approve MFA prompts when dashboards refresh.

Best answer: C

Explanation: The scenario highlights several weak practices for connecting a BI tool to Snowflake: use of a shared ACCOUNTADMIN user, broad privileges, no network access controls, and basic password authentication. Any fix should strengthen security while preserving reliable BI connectivity.

The best first step is to introduce a dedicated BI service user with a least-privilege custom role and restrict where that user can connect from using a Snowflake network policy that allows only the BI tool’s IP range (for example, the BI gateway or corporate egress). The BI tool should then use the correct Snowflake account URL along with this new user. This addresses role selection and network access, which are core connectivity considerations.
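
A minimal SQL sketch of that first step, assuming hypothetical names throughout (bi_service_role, bi_svc, the analytics.reporting schema, and the 203.0.113.0/24 egress range):

-- All object names and the IP range below are illustrative placeholders.
CREATE ROLE IF NOT EXISTS bi_service_role;
GRANT USAGE ON DATABASE analytics TO ROLE bi_service_role;
GRANT USAGE ON SCHEMA analytics.reporting TO ROLE bi_service_role;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics.reporting TO ROLE bi_service_role;

-- Dedicated service user for the BI tool, replacing the shared ACCOUNTADMIN login.
-- (Authentication, e.g. key-pair, would be configured separately.)
CREATE USER IF NOT EXISTS bi_svc
  DEFAULT_ROLE = bi_service_role
  DEFAULT_WAREHOUSE = bi_wh;
GRANT ROLE bi_service_role TO USER bi_svc;

-- Restrict where this user may connect from (the BI gateway's egress range).
CREATE NETWORK POLICY bi_policy ALLOWED_IP_LIST = ('203.0.113.0/24');
ALTER USER bi_svc SET NETWORK_POLICY = bi_policy;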

Other changes (like MFA on a shared ACCOUNTADMIN user or only rotating passwords) either break automated connectivity or fail to address the most serious security and governance gaps, such as shared credentials and overly broad privileges.


Question 2

Topic: Snowflake Architecture and Key Features

A data team needs analysts to query customer data without directly accessing tables that contain full names, emails, and phone numbers. Analysts should only see a subset of columns, with sensitive fields masked, and should not be able to infer the underlying table structure. Which Snowflake design choice best reflects this security principle?

Options:

  • A. Grant analysts SELECT on the base customer table but require them to manually filter and mask sensitive columns in their queries

  • B. Create a secure view that returns only required columns and masked expressions, and grant analysts access only to the secure view

  • C. Use Time Travel on the customer table so analysts can only query historical snapshots instead of the current data

  • D. Copy the customer data into a separate table with masked values and grant analysts SELECT on that table

Best answer: B

Explanation: The scenario describes a need to protect sensitive customer information while still allowing analysts to run queries. Analysts should only see a subset of columns and masked versions of sensitive fields, and they should not have direct visibility into the underlying tables.

In Snowflake, a secure view is designed for precisely this use case. A secure view acts as a governed, logical interface over underlying tables. It can:

  • Select only the columns that should be visible to a given audience.
  • Apply masking or transformation logic to sensitive fields in the view definition.
  • Hide underlying table structure and view definition details from consumers, adding an extra layer of security and obfuscation.

By granting analysts access only to the secure view (and not the base tables), you enforce least privilege and data masking via the logical data model, which is the key security principle being applied here.
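
A sketch of what such a secure view could look like; the table, view, and role names here (crm.public.customers, analytics.views.customer_masked, analyst_role) are hypothetical, as are the masking expressions:

-- Hypothetical object names; masking expressions are illustrative only.
CREATE SECURE VIEW analytics.views.customer_masked AS
SELECT
  customer_id,
  region,
  -- Expose only masked forms of the sensitive fields.
  REGEXP_REPLACE(email, '^[^@]+', '*****') AS email_masked,
  CONCAT('***-***-', RIGHT(phone, 4)) AS phone_masked
FROM crm.public.customers;

-- Analysts get the view only, never the base table.
GRANT SELECT ON VIEW analytics.views.customer_masked TO ROLE analyst_role;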


Question 3

Topic: Snowflake Architecture and Key Features

A team is planning how various applications will connect to the Snowflake AI Data Cloud. Which of the following is NOT an appropriate way to use JDBC/ODBC drivers or similar supported interfaces for application connectivity?

Options:

  • A. Embedding the Snowflake JDBC driver in a Java-based application server to run SQL queries using a Snowflake connection string

  • B. Configuring a BI tool to connect to Snowflake using the Snowflake ODBC driver and a DSN with SSO authentication

  • C. Configuring an ETL tool to use the Snowflake ODBC driver to extract data from Snowflake tables into a downstream system

  • D. Bypassing Snowflake drivers and directly opening raw TCP connections to Snowflake’s internal storage layer to read micro-partitions

Best answer: D

Explanation: Snowflake provides JDBC, ODBC, and other language-specific drivers so applications and tools can connect using standard, well-understood interfaces. These drivers handle authentication, encryption, SQL submission, and result retrieval over Snowflake’s supported protocols.

Accessing Snowflake by attempting to connect directly to its internal storage or micro-partitions over raw TCP is neither possible nor supported. Snowflake’s architecture intentionally abstracts storage details; all access must go through the Snowflake service layer via approved clients and drivers.

In practice, BI tools, ETL platforms, and custom applications should use the appropriate Snowflake driver (JDBC, ODBC, or a language connector) and standard connection configuration (account URL, credentials or SSO, and optional parameters) to interact with Snowflake securely and reliably.
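
For reference, a Snowflake JDBC connection URL follows this general pattern; the account identifier, warehouse, database, and role below are placeholders:

jdbc:snowflake://myorg-myaccount.snowflakecomputing.com/?warehouse=BI_WH&db=ANALYTICS&role=BI_SERVICE_ROLE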


Question 4

Topic: Snowflake Architecture and Key Features

A small analytics team uses the Snowflake AI Data Cloud and wants a visual, browser-based interface to develop SQL, share worksheets, build simple dashboards, and review query history. They rarely use the command line.

Which TWO approaches should you AVOID recommending as their primary day-to-day interface? (Select TWO.)

Options:

  • A. Use a supported BI tool connected via a Snowflake driver as the main analytics front end, optionally using Snowsight for administrative tasks.

  • B. Use Snowsight for worksheets and dashboards, and optionally build Streamlit in Snowflake apps for simplified views for business users.

  • C. Build custom scripts that call the Snowflake SQL API directly for interactive query development.

  • D. Use Snowsight as the main web interface for worksheets, dashboards, and query monitoring.

  • E. Use the SnowSQL command-line client for all querying and any result inspection.

Correct answers: C and E

Explanation: The scenario explicitly calls for a visual, browser-based interface where analysts can write SQL, share worksheets, create simple dashboards, and review query history. Snowflake provides Snowsight as the modern web UI for exactly these activities. Other graphical tools, such as external BI tools or Streamlit in Snowflake apps, also satisfy the requirement.

Tools like SnowSQL and the SQL API are powerful but are designed primarily for programmatic or command-line access. Expecting analysts to use those as their primary interface for day-to-day, interactive analysis and dashboarding is an anti-pattern: it increases complexity, reduces usability, and does not provide the requested visual experience.


Question 5

Topic: Snowflake Architecture and Key Features

A team is migrating a 5 TB fact table into the Snowflake AI Data Cloud. They want good query performance while minimizing ongoing effort managing how data is physically partitioned or laid out on disk. Which approach best leverages Snowflake’s automatic micro-partitioning?

Options:

  • A. Pre-split the source data into separate tables for each month and route queries to the correct table using application logic to reduce partition scanning.

  • B. Define a detailed multi-column partitioning scheme in the CREATE TABLE statement so Snowflake uses those columns as explicit physical partitions.

  • C. Continuously reorganize and rewrite data files in external cloud storage so that each file aligns with common filter predicates before loading into Snowflake.

  • D. Load the data into a single standard table without specifying any partitions and let Snowflake automatically create and manage micro-partitions as data is loaded.

Best answer: D

Explanation: Snowflake automatically stores table data in micro-partitions, which are immutable storage units created and managed by the service as data is loaded. Users do not define physical partitions, indexes, or file layouts on disk. Instead, Snowflake’s cloud services layer tracks rich metadata about each micro-partition, such as value ranges and statistics, and uses this metadata to prune unneeded micro-partitions at query time.

Because of this design, the simplest and most aligned pattern for most workloads is to load data into a standard Snowflake table and allow the platform to manage micro-partitions automatically. This meets the requirement to minimize ongoing effort managing physical layout while still benefiting from efficient query performance.
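
A minimal sketch of that pattern, using hypothetical table and stage names (sales_fact, @landing_stage); note that no partitioning or clustering clauses appear anywhere:

-- Hypothetical names; no physical layout options are specified.
CREATE TABLE sales_fact (
  sale_id   NUMBER,
  sale_date DATE,
  region    STRING,
  amount    NUMBER(12,2)
);

-- Bulk load; Snowflake creates and manages micro-partitions automatically.
COPY INTO sales_fact
FROM @landing_stage/sales/
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);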

Patterns like manual table sharding, trying to define physical partitions, or rearranging files in external storage all increase operational complexity and do not actually control how micro-partitions are created within Snowflake’s storage layer.


Question 6

Topic: Snowflake Architecture and Key Features

A Snowflake administrator notices that storage charges have increased significantly over the last two months. During this period, the team raised the Time Travel retention for several very large, heavily updated fact tables from 1 day to 14 days for ad hoc historical recovery. The organization now decides that only 1 day of data recovery is required for these tables, and they want to reduce ongoing storage consumption without changing query workloads or moving data out of Snowflake. Which action is the most appropriate to meet these requirements?

Options:

  • A. Disable Fail-safe on the large fact tables so that only Time Travel data is stored for 14 days.

  • B. Reduce the Time Travel retention period on the large fact tables back to 1 day and keep the default Fail-safe behavior, allowing older historical data to age out over time.

  • C. Regularly drop and recreate the large fact tables to clear their history and remove data from storage more quickly.

  • D. Resize the virtual warehouses used to query the large fact tables to a smaller size and rely more on result caching.

Best answer: B

Explanation: Time Travel in Snowflake keeps historical micro-partitions so that users can query or restore data from a previous point in time. The longer the Time Travel retention period, the more historical versions of data must be stored, which increases overall storage consumption, especially for large, frequently updated tables.

Fail-safe provides an additional, non-configurable 7-day recovery window after Time Travel expires. Data in Fail-safe is no longer queryable by users but is still kept by Snowflake for disaster recovery and still consumes storage until the Fail-safe period ends.

In this scenario, the key driver of increased storage is the change from a 1-day to a 14-day Time Travel retention on very large, heavily updated fact tables. Reducing the Time Travel retention back to 1 day aligns with the new 1-day recovery requirement and will reduce the volume of historical data that needs to be stored going forward. Fail-safe remains as a background safety net, and as data ages beyond both Time Travel and Fail-safe, Snowflake automatically releases the associated storage.
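
That change is a single statement per table; the table name below is hypothetical:

-- Retention drops from 14 days back to 1; older history ages out over time.
ALTER TABLE analytics.facts.orders_fact SET DATA_RETENTION_TIME_IN_DAYS = 1;

-- Confirm the effective retention (see the retention_time column).
SHOW TABLES LIKE 'orders_fact' IN SCHEMA analytics.facts;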

Other actions such as adjusting compute, attempting to disable Fail-safe, or dropping and recreating tables do not directly or correctly address how Time Travel and Fail-safe contribute to storage usage, and may introduce unnecessary risk or complexity.


Question 7

Topic: Snowflake Architecture and Key Features

A data engineer is exploring how authentication events relate to Snowflake’s architecture. They run the following query in Snowsight:

SELECT
  EVENT_TIMESTAMP,
  USER_NAME,
  REPORTED_CLIENT_TYPE,
  IS_SUCCESS
FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
ORDER BY EVENT_TIMESTAMP DESC
LIMIT 3;

Result:

EVENT_TIMESTAMP          USER_NAME  REPORTED_CLIENT_TYPE  IS_SUCCESS
2025-12-06 09:12:47.123  ANALYST1   SNOWFLAKE_UI          YES
2025-12-06 09:11:03.844  ETL_APP    JDBC_DRIVER           YES
2025-12-06 09:10:59.510  ADMIN      PYTHON_DRIVER         NO

Based on this exhibit, which Snowflake component is primarily responsible for processing these events and maintaining this metadata?

Options:

  • A. The virtual warehouse layer, because it executes all user logins and stores session metadata in its local cache.

  • B. The cloud services layer, which manages authentication, metadata, optimization, and transaction coordination independently of virtual warehouses.

  • C. The storage layer, which decrypts data files and directly authenticates users before queries run.

  • D. The underlying cloud provider’s object storage service, which generates login events and passes them to Snowflake.

Best answer: B

Explanation: The exhibit shows a query against SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY, returning columns such as USER_NAME, REPORTED_CLIENT_TYPE, and IS_SUCCESS, whose YES and NO values distinguish successful from failed login attempts. These clearly represent authentication events and related metadata about how users and clients connect to Snowflake.

In the Snowflake AI Data Cloud architecture, the cloud services layer is responsible for:

  • Authenticating and authorizing users and roles.
  • Managing metadata, including account usage views like LOGIN_HISTORY.
  • Optimizing queries (parsing, planning, and optimization).
  • Coordinating transactions and concurrency.
  • Orchestrating infrastructure such as virtual warehouse provisioning.

Because LOGIN_HISTORY is an account usage metadata view that records authentication events, the component responsible must be the cloud services layer. Virtual warehouses only execute queries once a session is established. The storage layer and external object storage do not have any concept of Snowflake logins or users.

Thus, the choice that explicitly names the cloud services layer and its responsibilities (authentication, metadata, optimization, transactions, and coordination independent of virtual warehouses) best matches the information in the exhibit and the Snowflake architecture model.


Question 8

Topic: Snowflake Architecture and Key Features

A data team is planning how several tools will connect to the Snowflake AI Data Cloud. Which of the following statements about using Snowflake-provided connectors and drivers is INCORRECT?

Options:

  • A. Use a Snowflake JDBC or ODBC driver when building a custom application that needs to issue SQL directly to Snowflake.

  • B. Prefer generic database drivers over Snowflake-provided connectors in tools that natively support Snowflake, because this avoids any Snowflake-specific behavior and is the recommended pattern.

  • C. Use the Snowflake-provided connector for a managed ETL platform that natively supports Snowflake, to simplify configuration and leverage pushdown capabilities.

  • D. Use the Snowflake connector inside a BI dashboard tool that already includes certified Snowflake support, instead of manually configuring low-level drivers in the tool.

Best answer: B

Explanation: Snowflake distinguishes between drivers (such as JDBC and ODBC) and connectors that are integrated into platforms like ETL tools or BI tools. Drivers are low-level components used by custom applications or frameworks to run SQL against Snowflake. Connectors are higher-level integrations provided by Snowflake or partners that are tailored to specific tools and workflows.

When a third-party platform already offers a native Snowflake connector, using that connector is typically the best choice. It usually provides easier configuration, built-in authentication patterns, and Snowflake-aware optimizations such as efficient data loading or query pushdown. Generic database drivers are mainly for situations where no Snowflake-specific connector exists or you are writing your own integration.

The incorrect statement is the one that recommends preferring generic drivers over native Snowflake connectors in tools that already support Snowflake. That advice goes against common practice and would likely reduce functionality, not improve it.


Question 9

Topic: Snowflake Architecture and Key Features

A data team continuously bulk-loads data into Snowflake tables without managing file layout, compression settings, or table partitioning. Queries still perform well and require no physical tuning. Which core Snowflake principle does this behavior BEST illustrate?

Options:

  • A. Automatic management of micro-partitions and data compression

  • B. Result caching to avoid re-running identical queries

  • C. User-managed file layout and manual partition pruning

  • D. Separation of storage and compute resources

Best answer: A

Explanation: Snowflake automatically stores table data in optimized, compressed micro-partitions and maintains all related metadata. Users do not need to define physical partitions, manage file layout, or tune compression algorithms. This design delivers good performance with very low operational overhead.

In the scenario, the team simply loads data into Snowflake, yet queries perform well without any maintenance effort on partitioning or compression. That directly reflects Snowflake’s automatic micro-partitioning and storage compression, which are core aspects of its storage layer and a key reason why administration is simplified compared with traditional databases.
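
If the team ever wants visibility into how effectively micro-partitions support pruning, Snowflake exposes metadata rather than manual controls; for example (table and column names hypothetical):

-- Returns JSON metadata about micro-partition clustering for the named columns.
SELECT SYSTEM$CLUSTERING_INFORMATION('sales_fact', '(sale_date)');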

Other concepts such as separation of storage and compute or result caching are important, but they do not explain why raw loaded data performs efficiently without physical design work. The defining principle here is Snowflake’s automatic handling of micro-partitions and compression.


Question 10

Topic: Snowflake Architecture and Key Features

Which TWO statements correctly describe Time Travel and Fail-safe behavior for permanent, transient, and temporary tables in Snowflake? (Select TWO.)

Options:

  • A. Transient tables automatically store deleted data in Fail-safe for a short period before permanent removal.

  • B. Temporary tables are preserved across user sessions and rely on Fail-safe for long-term recovery after they are dropped.

  • C. Transient tables support a limited Time Travel retention but do not use Fail-safe, making them suitable for less critical or easily re-created data.

  • D. Permanent tables support both Time Travel and Fail-safe, giving the longest overall recovery window for critical data.

  • E. Choosing a temporary table instead of a permanent table does not change any Time Travel or Fail-safe behavior for that data.

  • F. Permanent tables cannot use Time Travel; only transient and temporary tables support querying historical table data.

Correct answers: C and D

Explanation: Snowflake supports three main table types—permanent, transient, and temporary—that differ primarily in how long historical data can be recovered and whether Fail-safe is available.

Permanent tables are meant for business-critical data. They support Time Travel for querying and restoring recent historical versions, and after the Time Travel period ends, the data moves into Fail-safe for an additional 7-day recovery window managed by Snowflake. This combination provides the longest overall retention and recovery capability.

Transient tables are designed for non-critical or easily reproducible data, such as intermediate ETL results. They support a Time Travel retention of at most 1 day and do not use Fail-safe at all. Once the Time Travel period expires, the historical data is permanently removed and cannot be recovered via Fail-safe, which reduces storage cost but increases risk.

Temporary tables are session-scoped working tables. They support at most 1 day of Time Travel, have no Fail-safe, and are automatically dropped when the session ends. Like transient tables, they are not appropriate for long-term data protection.

Understanding these behaviors helps you choose the correct table type in the Snowflake AI Data Cloud based on how important recovery and retention are for each dataset.
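
The three table types differ by a single keyword at creation time; a minimal sketch with hypothetical names:

-- Permanent (default): Time Travel plus the 7-day Fail-safe window.
CREATE TABLE critical_orders (order_id NUMBER, amount NUMBER(12,2));

-- Transient: at most 1 day of Time Travel, no Fail-safe.
CREATE TRANSIENT TABLE etl_staging (order_id NUMBER, amount NUMBER(12,2));

-- Temporary: session-scoped, at most 1 day of Time Travel, no Fail-safe.
CREATE TEMPORARY TABLE scratch (order_id NUMBER, amount NUMBER(12,2));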

Continue with full practice

Use the SnowPro Core COF-C02 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Try SnowPro Core COF-C02 on Web | View SnowPro Core COF-C02 Practice Test

Free review resource

Read the SnowPro Core COF-C02 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.

Revised on Thursday, May 14, 2026