Get the Full COF-C02 dumps in VCE and PDF From SurePassExam
https://www.surepassexam.com/COF-C02-exam-dumps.html (695 New Questions)
Snowflake
Exam Questions COF-C02
SnowPro Core Certification Exam (COF-C02)
NEW QUESTION 1
- (Topic 1)
Which of the following Snowflake capabilities are available in all Snowflake editions? (Select TWO)
Answer: BD
Explanation:
In all Snowflake editions, two key capabilities are universally available:
? B. Automatic encryption of all data: Snowflake automatically encrypts all data stored in its platform, ensuring security and compliance with various regulations.
This encryption is transparent to users and does not require any configuration or management.
? D. Object-level access control: Snowflake provides granular access control mechanisms that allow administrators to define permissions at the object level,
including databases, schemas, tables, and views. This ensures that only authorized users can access specific data objects.
These features are part of Snowflake's commitment to security and governance, and they are included in every edition of the Snowflake Data Cloud.
References:
? Snowflake Documentation on Security Features
? SnowPro® Core Certification Exam Study Guide
NEW QUESTION 2
- (Topic 1)
What is the default character set used when loading CSV files into Snowflake?
A. UTF-8
B. UTF-16
C. ISO 8859-1
D. ANSI_X3.A
Answer: A
Explanation:
https://docs.snowflake.com/en/user-guide/intro-summary-loading.html
For delimited files (CSV, TSV, etc.), the default character set is UTF-8. To use any other characters sets, you must explicitly specify the encoding to use for
loading. For the list of supported character sets, see Supported Character Sets for Delimited Files (in this topic).
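As an illustration (the file format, stage, and table names below are hypothetical), the default UTF-8 encoding can be overridden in a file format when source files use another character set:
CREATE OR REPLACE FILE FORMAT my_latin1_csv
  TYPE = 'CSV'
  ENCODING = 'ISO-8859-1';  -- override the UTF-8 default
COPY INTO mytable
  FROM @mystage
  FILE_FORMAT = (FORMAT_NAME = 'my_latin1_csv');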
NEW QUESTION 3
- (Topic 1)
What is a limitation of a Materialized View?
Answer: D
Explanation:
Materialized Views in Snowflake are designed to store the result of a query and can be refreshed to maintain up-to-date data. However, they have certain
limitations, one of which is that they cannot be defined using a JOIN clause. This means that a Materialized View can only be created based on a single source
table and cannot combine data from multiple tables using JOIN operations.
References:
? Snowflake Documentation on Materialized Views
? SnowPro® Core Certification Study Guide
NEW QUESTION 4
- (Topic 1)
What can be used to view warehouse usage over time? (Select Two).
Answer: BD
Explanation:
To view warehouse usage over time, the Query History view and the WAREHOUSE_METERING_HISTORY view can be utilized. The Query History view allows
users to monitor the performance of their queries and the load on their warehouses over a specified period1. The WAREHOUSE_METERING_HISTORY view
provides detailed information about the workload on a warehouse within a specified date range, including average running and queued loads2. References:
[COF-C02] SnowPro Core Certification Exam Study Guide
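A minimal sketch of querying the WAREHOUSE_METERING_HISTORY view (the 30-day window is an arbitrary choice for illustration):
SELECT warehouse_name, SUM(credits_used) AS total_credits
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name;  -- credits consumed per warehouse over the period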
NEW QUESTION 5
- (Topic 1)
Which of the following Snowflake features provide continuous data protection automatically? (Select TWO).
A. Internal stages
B. Incremental backups
C. Time Travel
D. Zero-copy clones
E. Fail-safe
Answer: CE
Explanation:
Snowflake's Continuous Data Protection (CDP) encompasses a set of features that help protect data stored in Snowflake against human error, malicious acts,
and software failure. Time Travel allows users to access historical data (i.e., data that has been changed or deleted) for a defined period, enabling querying and
restoring of data. Fail-safe is an additional layer of data protection that provides a recovery option in the event of significant data loss or corruption, which can only
be performed by Snowflake. References:
? Continuous Data Protection | Snowflake Documentation1
? Data Storage Considerations | Snowflake Documentation2
? Snowflake SnowPro Core Certification Study Guide3
? Snowflake Data Cloud Glossary
https://docs.snowflake.com/en/user-guide/data-availability.html
NEW QUESTION 6
- (Topic 1)
What features does Snowflake Time Travel enable?
A. Querying data-related objects that were created within the past 365 days
B. Restoring data-related objects that have been deleted within the past 90 days
C. Conducting point-in-time analysis for BI reporting
D. Analyzing data usage/manipulation over all periods of time
Answer: BC
Explanation:
Snowflake Time Travel is a powerful feature that allows users to access historical data within a defined period. It enables two key capabilities:
? B. Restoring data-related objects that have been deleted within the past 90 days:
Time Travel can be used to restore tables, schemas, and databases that have been accidentally or intentionally deleted within the Time Travel retention period.
? C. Conducting point-in-time analysis for BI reporting: It allows users to query
historical data as it appeared at a specific point in time within the Time Travel retention period, which is crucial for business intelligence and reporting purposes.
While Time Travel does allow querying of past data, it is limited to the retention period set for the Snowflake account, which is typically 1 day for standard accounts
and can be extended up to 90 days for enterprise accounts. It does not enable querying or restoring objects created or deleted beyond the retention period, nor
does it provide analysis over all periods of time.
References:
? Snowflake Documentation on Time Travel
? SnowPro® Core Certification Study Guide
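A short sketch of both capabilities, assuming a table named mytable and that the requested history falls within the retention period:
-- Point-in-time query: the table as it was 30 minutes ago
SELECT * FROM mytable AT(OFFSET => -60*30);
-- Restore a dropped object within the retention period
UNDROP TABLE mytable;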
NEW QUESTION 7
- (Topic 1)
True or False: Reader Accounts are able to extract data from shared data objects for use outside of Snowflake.
A. True
B. False
Answer: B
Explanation:
Reader accounts in Snowflake are designed to allow users to read data shared with them but do not have the capability to extract data for use outside of
Snowflake. They are intended for consuming shared data within the Snowflake environment only.
NEW QUESTION 8
- (Topic 1)
Which Snowflake feature is used for both querying and restoring data?
A. Cluster keys
B. Time Travel
C. Fail-safe
D. Cloning
Answer: B
Explanation:
Snowflake's Time Travel feature is used for both querying historical data in tables and restoring and cloning historical data in databases, schemas, and tables3.
It allows users to access historical data within a defined period (1 day by default, up to 90 days for Snowflake Enterprise Edition) and is a key feature for data
recovery and management. References: [COF-C02] SnowPro Core Certification Exam Study Guide
NEW QUESTION 9
- (Topic 1)
Which account usage views are used to evaluate the details of dynamic data masking? (Select TWO)
A. ROLES
B. POLICY_REFERENCES
C. QUERY_HISTORY
D. RESOURCE_MONITORS
E. ACCESS_HISTORY
Answer: BE
Explanation:
To evaluate the details of dynamic data masking,
the POLICY_REFERENCES and ACCESS_HISTORY views in the account_usage schema are used. The POLICY_REFERENCES view provides information
about the objects to which a masking policy is applied, and the ACCESS_HISTORY view contains details about access to the masked data, which can be used to
audit and verify the application of
dynamic data masking policies.
References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Snowflake Documentation on Dynamic Data Masking1
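For illustration, a minimal sketch of inspecting masking policy assignments via the account_usage schema:
SELECT policy_name, ref_entity_name, ref_column_name
FROM snowflake.account_usage.policy_references
WHERE policy_kind = 'MASKING_POLICY';  -- which columns each masking policy protects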
NEW QUESTION 10
- (Topic 1)
Which of the following are valid methods for authenticating users for access into Snowflake? (Select THREE)
A. SCIM
B. Federated authentication
C. TLS 1.2
D. Key-pair authentication
E. OAuth
F. OCSP authentication
Answer: BDE
Explanation:
Snowflake supports several methods for authenticating users, including federated authentication, key-pair authentication, and OAuth. Federated authentication
allows users to authenticate using their organization's identity provider. Key-pair authentication uses a public-private key pair for secure login, and OAuth is an
open standard for access delegation commonly used for token-based authentication. References: Authentication policies | Snowflake Documentation,
Authenticating to the server | Snowflake Documentation, External API authentication and secrets | Snowflake Documentation.
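As a sketch of key-pair authentication setup (the key value shown is a truncated placeholder), an administrator assigns a user's public key with:
ALTER USER user1 SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqh...';
The client then authenticates by signing with the matching private key, for example via the --private-key-path option in SnowSQL.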
NEW QUESTION 10
- (Topic 1)
When is the result set cache no longer available? (Select TWO)
Answer: CE
Explanation:
The result set cache in Snowflake is invalidated and no longer available when the underlying data of the query results has changed, ensuring that queries return
the most current data. Additionally, the cache expires after 24 hours to maintain the efficiency and accuracy of data retrieval1.
NEW QUESTION 14
- (Topic 1)
A user needs to create a materialized view in the schema MYDB.MYSCHEMA. Which statements will provide this access?
A. GRANT ROLE MYROLE TO USER USER1;CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO ROLE MYROLE;
B. GRANT ROLE MYROLE TO USER USER1;CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO USER USER1;
C. GRANT ROLE MYROLE TO USER USER1;CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO USER1;
D. GRANT ROLE MYROLE TO USER USER1;CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO MYROLE;
Answer: D
Explanation:
In Snowflake, to create a materialized view, the user must have the necessary privileges on the schema where the view will be created. These privileges are
granted through roles, not directly to individual users. Therefore, the correct process is to grant the role to the user and then grant the privilege to create the
materialized view to the role itself.
The statement GRANT ROLE MYROLE TO USER USER1; grants the specified role to the user, allowing them to assume that role and exercise its privileges. The
subsequent statement grants the CREATE MATERIALIZED VIEW privilege on the schema MYDB.MYSCHEMA to the role MYROLE (the fully spelled-out form is
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO ROLE MYROLE;). Any user who has been granted MYROLE can then
create materialized views in MYDB.MYSCHEMA.
References:
? Snowflake Documentation on Roles
? Snowflake Documentation on Materialized Views
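For reference, the fully valid syntax for this grant sequence looks like the following (MYROLE, USER1, and the schema name come from the question itself):
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO ROLE MYROLE;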
NEW QUESTION 18
- (Topic 1)
What happens when a cloned table is replicated to a secondary database? (Select TWO)
Answer: CE
Explanation:
When a cloned table is replicated to a secondary database in Snowflake, the following occurs:
? C. The physical data is replicated: The actual data of the cloned table is physically
replicated to the secondary database. This ensures that the secondary database has its own copy of the data, which can be used for read-only purposes or failover
scenarios1.
? E. Metadata pointers to cloned tables are replicated: Along with the physical data,
the metadata pointers that refer to the cloned tables are also replicated. This metadata includes information about the structure of the table and any associated
properties2.
It's important to note that while the physical data and metadata are replicated, the secondary database is typically read-only and cannot be used for write
operations. Additionally, while there may be additional storage costs associated with the secondary account, this is not a direct result of the replication process but
rather a consequence of storing additional data.
References:
? SnowPro Core Exam Prep — Answers to Snowflake's LEVEL UP: Backup and Recovery
? Snowflake SnowPro Core Certification Exam Questions Set 10
NEW QUESTION 19
- (Topic 1)
What are value types that a VARIANT column can store? (Select TWO)
A. STRUCT
B. OBJECT
C. BINARY
D. ARRAY
E. CLOB
Answer: BD
Explanation:
A VARIANT column in Snowflake can store semi-structured data types. This includes:
? B. OBJECT: An object is a collection of key-value pairs in JSON, and a VARIANT column can store this type of data structure.
? D. ARRAY: An array is an ordered list of zero or more values, which can be of any variant-supported data type, including objects or other arrays.
The VARIANT data type is specifically designed to handle semi-structured data like JSON, Avro, ORC, Parquet, or XML, allowing for the storage of nested and
complex data structures.
References:
? Snowflake Documentation on Semi-Structured Data Types
? SnowPro® Core Certification Study Guide
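A minimal sketch showing OBJECT and ARRAY values landing in a VARIANT column (the table and JSON content are illustrative):
CREATE OR REPLACE TABLE raw_json (v VARIANT);
INSERT INTO raw_json
  SELECT PARSE_JSON('{"name": "sensor-1", "readings": [1, 2, 3]}');
SELECT v:name::STRING AS name, v:readings[0] AS first_reading FROM raw_json;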
NEW QUESTION 21
- (Topic 1)
In which scenarios would a user have to pay Cloud Services costs? (Select TWO).
Answer: AE
Explanation:
In Snowflake, Cloud Services costs are incurred when the Cloud Services usage exceeds 10% of the compute usage (measured in credits). Therefore, scenarios
A and E would result in Cloud Services charges because the Cloud Services usage is more than 10% of the compute credits used.
References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Snowflake's official documentation on billing and usage1
NEW QUESTION 23
- (Topic 1)
True or False: When you create a custom role, it is a best practice to immediately grant that role to ACCOUNTADMIN.
A. True
B. False
Answer: B
Explanation:
The ACCOUNTADMIN role is the most powerful role in Snowflake and should be limited to a select number of users within an organization. It is responsible for
account-level configurations and should not be used for day-to-day object creation or management. Granting a custom role to ACCOUNTADMIN could
inadvertently give broad access to users with this role, which is not a recommended security practice.
Reference: https://docs.snowflake.com/en/user-guide/security-access-control-considerations.html
NEW QUESTION 25
- (Topic 1)
Which services does the Snowflake Cloud Services layer manage? (Select TWO).
A. Compute resources
B. Query execution
C. Authentication
D. Data storage
E. Metadata
Answer: CE
Explanation:
The Snowflake Cloud Services layer manages a variety of services that are crucial for the operation of the Snowflake platform. Among these services,
Authentication and Metadata management are key components. Authentication is essential for controlling access to the Snowflake environment, ensuring that only
authorized users can perform actions within the platform. Metadata management involves handling all the metadata related to objects within Snowflake, such as
tables, views, and databases, which is vital for the organization and retrieval of data.
References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Snowflake Documentation12 https://docs.snowflake.com/en/user-guide/intro-key-concepts.html
NEW QUESTION 29
- (Topic 1)
What is the recommended file sizing for data loading using Snowpipe?
Answer: C
Explanation:
For data loading using Snowpipe, the recommended file size is a compressed file greater than 10 MB and up to 100 MB. This size range is optimal for
Snowpipe's continuous, micro-batch loading process, allowing for efficient and timely data ingestion without overwhelming the system with files that are too large
or too small. References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Snowflake Documentation on Snowpipe1
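A hedged sketch of a pipe definition (the stage, table, and notification setup are assumed to exist; AUTO_INGEST = TRUE requires an external stage with cloud event notifications):
CREATE PIPE mypipe
  AUTO_INGEST = TRUE
  AS COPY INTO mytable
     FROM @mystage
     FILE_FORMAT = (TYPE = 'JSON');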
NEW QUESTION 30
- (Topic 1)
Which of the following describes how multiple Snowflake accounts in a single organization relate to various cloud providers?
A. Each Snowflake account can be hosted in a different cloud vendor and region.
B. Each Snowflake account must be hosted in a different cloud vendor and region
C. All Snowflake accounts must be hosted in the same cloud vendor and region
D. Each Snowflake account can be hosted in a different cloud vendor, but must be in the same region.
Answer: A
Explanation:
Snowflake's architecture allows for flexibility in account hosting across different cloud vendors and regions. This means that within a single organization,
different Snowflake accounts can be set up in various cloud environments, such as AWS, Azure, or
GCP, and in different geographical regions. This allows organizations to leverage the global infrastructure of multiple cloud providers and optimize their data
storage and computing needs based on regional requirements, data sovereignty laws, and other considerations.
https://docs.snowflake.com/en/user-guide/intro-regions.html
NEW QUESTION 31
- (Topic 1)
Which cache type is used to cache data output from SQL queries?
A. Metadata cache
B. Result cache
C. Remote cache
D. Local file cache
Answer: B
Explanation:
The Result cache is used in Snowflake to cache the data output from SQL queries. This feature is designed to improve performance by storing the results of
queries for a period of time. When the same or similar query is executed again, Snowflake can retrieve the result from this cache instead of re-computing the
result, which saves time and computational resources.
References:
? Snowflake Documentation on Query Results Cache
? SnowPro® Core Certification Study Guide
NEW QUESTION 34
- (Topic 1)
Which Snowflake objects track DML changes made to tables, like inserts, updates, and deletes?
A. Pipes
B. Streams
C. Tasks
D. Procedures
Answer: B
Explanation:
In Snowflake, Streams are the objects that track Data Manipulation Language (DML) changes made to tables, such as inserts, updates, and deletes. Streams
record these changes along with metadata about each change, enabling actions to be taken using the changed data. This process is known as change data
capture (CDC)2.
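A minimal sketch of change tracking with a stream (the names are illustrative):
CREATE STREAM mystream ON TABLE mytable;
-- After inserts/updates/deletes on mytable, the stream exposes the changed rows
-- plus METADATA$ACTION, METADATA$ISUPDATE, and METADATA$ROW_ID columns:
SELECT * FROM mystream;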
NEW QUESTION 35
- (Topic 1)
What are ways to create and manage data shares in Snowflake? (Select TWO)
Answer: AC
Explanation:
Data shares in Snowflake can be created and managed through the Snowflake web interface, which provides a user-friendly graphical interface for various
operations. Additionally, SQL commands can be used to perform these tasks programmatically, offering flexibility and automation capabilities123.
NEW QUESTION 37
- (Topic 1)
Which of the following Snowflake objects can be shared using a secure share? (Select TWO).
A. Materialized views
B. Sequences
C. Procedures
D. Tables
E. Secure User Defined Functions (UDFs)
Answer: DE
Explanation:
Secure sharing in Snowflake allows users to share specific objects with other Snowflake accounts without physically copying the data, thus not consuming
additional storage. Tables and Secure User Defined Functions (UDFs) are among the objects that can be shared using this feature. Materialized views,
sequences, and procedures are not shareable objects in Snowflake.
References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Snowflake Documentation on Secure Data Sharing1
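A hedged sketch of creating and populating a share on the provider side (all object and account names are hypothetical):
CREATE SHARE myshare;
GRANT USAGE ON DATABASE mydb TO SHARE myshare;
GRANT USAGE ON SCHEMA mydb.public TO SHARE myshare;
GRANT SELECT ON TABLE mydb.public.mytable TO SHARE myshare;
ALTER SHARE myshare ADD ACCOUNTS = consumer_org.consumer_account;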
NEW QUESTION 38
- (Topic 1)
How often are encryption keys automatically rotated by Snowflake?
A. 30 Days
B. 60 Days
C. 90 Days
D. 365 Days
Answer: A
Explanation:
Snowflake automatically rotates encryption keys when they are more than 30 days old. Active keys are retired, and new keys are created. This process is part of
Snowflake's comprehensive security measures to ensure data protection and is managed entirely by the Snowflake service without requiring user intervention.
References:
? Understanding Encryption Key Management in Snowflake
NEW QUESTION 40
- (Topic 1)
True or False: It is possible for a user to run a query against the query result cache without requiring an active Warehouse.
A. True
B. False
Answer: A
Explanation:
Snowflake's architecture allows for the use of a query result cache that stores the results of queries for a period of time. If the same query is run again and the
underlying data has not changed, Snowflake can retrieve the result from this cache without needing to re-run the query on an active warehouse, thus saving on
compute resources.
NEW QUESTION 43
- (Topic 1)
A user unloaded a Snowflake table called mytable to an internal stage called mystage. Which command can be used to view the list of files that have been
uploaded to the stage?
A. list @mytable;
B. list @%mytable;
C. list @%mystage;
D. list @mystage;
Answer: D
Explanation:
The command list @mystage; is used to view the list of files that have been uploaded to an internal stage in Snowflake. The list command displays the metadata
for all files in the specified stage, which in this case is mystage. This command is particularly useful for verifying that files have been successfully unloaded from a
Snowflake table to the stage and for managing the files within the stage.
References:
? Snowflake Documentation on Stages
? SnowPro® Core Certification Study Guide
NEW QUESTION 46
- (Topic 1)
Query compilation occurs in which architecture layer of the Snowflake Cloud Data Platform?
A. Compute layer
B. Storage layer
C. Cloud infrastructure layer
D. Cloud services layer
Answer: D
Explanation:
Query compilation in Snowflake occurs in the Cloud Services layer. This layer is responsible for coordinating and managing all aspects of the Snowflake service,
including authentication, infrastructure management, metadata management, query parsing and optimization, and security. By handling these tasks, the Cloud
Services layer enables the Compute layer to focus on executing queries, while the Storage layer is dedicated to persistently storing data.
References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Snowflake Documentation on Snowflake Architecture1
NEW QUESTION 50
- (Topic 1)
A user is loading JSON documents composed of a huge array containing multiple records into Snowflake. The user enables the STRIP_OUTER_ARRAY file format option.
What does the STRIP_OUTER_ARRAY file format option do?
Answer: B
Explanation:
The STRIP_OUTER_ARRAY file format option in Snowflake is used when loading JSON documents that are composed of a large array containing multiple
records. When this option is enabled, it removes the outer array structure, which allows each record within the array to be loaded as a separate row in the table.
This is particularly useful for efficiently loading JSON data that is structured as an array of records1.
References:
? Snowflake Documentation on JSON File Format
? [COF-C02] SnowPro Core Certification Exam Study Guide
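For illustration, a sketch of enabling the option during a load (the stage and file names are hypothetical):
COPY INTO mytable
  FROM @mystage/records.json
  FILE_FORMAT = (TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE);
-- each element of the outer array becomes its own row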
NEW QUESTION 55
- (Topic 1)
What is the MOST performant file format for loading data in Snowflake?
A. CSV (Unzipped)
B. Parquet
C. CSV (Gzipped)
D. ORC
Answer: B
Explanation:
Parquet is a columnar storage file format that is optimized for performance in Snowflake. It is designed to be efficient for both storage and query performance,
particularly for complex queries on large datasets. Parquet files support efficient compression and encoding schemes, which can lead to significant savings in
storage and speed in query processing, making it the most performant file format for loading data into Snowflake.
References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Snowflake Documentation on Data Loading1
NEW QUESTION 58
- (Topic 1)
True or False: A 4X-Large Warehouse may, at times, take longer to provision than an X-Small Warehouse.
A. True
B. False
Answer: A
Explanation:
Provisioning time can vary based on the size of the warehouse. A 4X-Large Warehouse typically has more resources and may take longer to provision compared
to an X-Small Warehouse, which has fewer resources and can generally be provisioned more quickly. References: Understanding and viewing Fail-safe | Snowflake
Documentation
NEW QUESTION 62
- (Topic 1)
How would you determine the size of the virtual warehouse used for a task?
Answer: D
Explanation:
The size of the virtual warehouse for a task can be configured to handle concurrency automatically using a Multi-cluster warehouse (MCW). This is because tasks
are designed to run their body on a schedule, and MCW allows for scaling compute resources to match the task's execution needs without manual intervention.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
NEW QUESTION 67
- (Topic 1)
Which data type can be used to store geospatial data in Snowflake?
A. Variant
B. Object
C. Geometry
D. Geography
Answer: D
Explanation:
Snowflake supports two geospatial data types: GEOGRAPHY and GEOMETRY. The GEOGRAPHY data type is used to store geospatial data that models the
Earth as a perfect sphere, which is suitable for global geospatial data. This data type follows the WGS 84 standard and is used for storing points, lines, and
polygons on the Earth's surface. The GEOMETRY data type, on the other hand, represents features in a planar (Euclidean, Cartesian) coordinate system and is
typically used for local spatial reference systems. Since the question specifically asks about geospatial data, which commonly refers to Earth-related spatial data,
the correct answer is GEOGRAPHY. References: [COF-C02] SnowPro Core Certification Exam Study Guide
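A minimal sketch of the GEOGRAPHY type in use (the table, coordinates, and ST_DISTANCE call are illustrative):
CREATE OR REPLACE TABLE locations (id INT, geo GEOGRAPHY);
INSERT INTO locations SELECT 1, TO_GEOGRAPHY('POINT(-122.35 37.55)');
SELECT ST_DISTANCE(geo, TO_GEOGRAPHY('POINT(-122.41 37.77)')) AS meters
FROM locations;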
NEW QUESTION 70
- (Topic 1)
What is the purpose of an External Function?
Answer: A
Explanation:
The purpose of an External Function in Snowflake is to call code that executes outside of the Snowflake environment. This allows Snowflake to interact with
external services and leverage functionalities that are not natively available within Snowflake, such as calling APIs or running custom code hosted on cloud
services3. https://docs.snowflake.com/en/sql-reference/external-functions.html
NEW QUESTION 74
- (Topic 1)
When reviewing the load for a warehouse using the load monitoring chart, the chart indicates that a high volume of queries is always queuing in the warehouse.
According to recommended best practice, what should be done to reduce the queue volume? (Select TWO).
Answer: AB
Explanation:
To address a high volume of queries queuing in a warehouse, Snowflake recommends two best practices:
? A. Use multi-clustered warehousing to scale out warehouse capacity: This approach allows for the distribution of queries across multiple clusters within a
warehouse, effectively managing the load and reducing the queue volume.
? B. Scale up the warehouse size to allow Queries to execute faster: Increasing the size of the warehouse provides more compute resources, which can reduce
the time it takes for queries to execute and thus decrease the number of queries waiting in the queue.
These strategies help to optimize the performance of the warehouse by ensuring that resources are scaled appropriately to meet demand.
References:
? Snowflake Documentation on Multi-Cluster Warehousing
? SnowPro Core Certification best practices
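A hedged sketch of both recommendations (the warehouse name and cluster counts are illustrative; multi-cluster warehouses require Enterprise Edition or higher):
-- Scale out: allow up to 4 same-size clusters
ALTER WAREHOUSE mywh SET MIN_CLUSTER_COUNT = 1 MAX_CLUSTER_COUNT = 4;
-- Scale up: resize the warehouse
ALTER WAREHOUSE mywh SET WAREHOUSE_SIZE = 'LARGE';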
NEW QUESTION 78
- (Topic 1)
In which use cases does Snowflake apply egress charges?
Answer: C
Explanation:
Snowflake applies egress charges in the case of database replication when data is transferred out of a Snowflake region to another region or cloud provider. This
is because the data transfer incurs costs associated with moving data across different networks. Egress charges are not applied for data sharing within the same
region, query result retrieval, or loading data into Snowflake, as these actions do not involve data transfer across regions.
References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Snowflake Documentation on Data Replication and Egress Charges1
NEW QUESTION 83
- (Topic 1)
Which command is used to unload data from a Snowflake table into a file in a stage?
A. COPY INTO
B. GET
C. WRITE
D. EXTRACT INTO
Answer: A
Explanation:
The COPY INTO command is used in Snowflake to unload data from a table into a file in a stage. This command allows for the export of data from Snowflake
tables into flat files, which can then be used for further analysis, processing, or storage in external systems.
References:
? Snowflake Documentation on Unloading Data
? Snowflake SnowPro Core: Copy Into Command to Unload Rows to Files in Named Stage
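For illustration, unloading a table to a named stage and verifying the files (the names are hypothetical):
COPY INTO @mystage/unload/
  FROM mytable
  FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP');
LIST @mystage/unload/;  -- confirm the unloaded files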
NEW QUESTION 85
- (Topic 1)
When reviewing a query profile, what is a symptom that a query is too large to fit into the memory?
A. A single join node uses more than 50% of the query time
B. Partitions scanned is equal to partitions total
C. An AggregateOperator node is present
D. The query is spilling to remote storage
Answer: D
Explanation:
When a query in Snowflake is too large to fit into the available memory, it will start spilling to remote storage. This is an indication that the memory allocated for
the query is insufficient for its execution, and as a result, Snowflake uses remote disk storage to handle the overflow. This spill to remote storage can lead to
slower query performance due to the additional I/O operations required.
References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Snowflake Documentation on Query Profile1
? Snowpro Core Certification Exam Flashcards2
NEW QUESTION 87
- (Topic 1)
What happens when a virtual warehouse is resized?
A. When increasing the size of an active warehouse, the compute resources for all running and queued queries on the warehouse are affected
B. When reducing the size of a warehouse the compute resources are removed only when they are no longer being used to execute any current statements.
C. The warehouse will be suspended while the new compute resource is provisioned and will resume automatically once provisioning is complete.
D. Users who are trying to use the warehouse will receive an error message until the resizing is complete
Answer: A
Explanation:
When a virtual warehouse in Snowflake is resized, specifically when it is increased in size, the additional compute resources become immediately available to all
running and queued queries. This means that the performance of these queries can improve due to the increased resources. Conversely, when the size of a
warehouse is reduced, the compute resources are not removed until they are no longer being used by any current operations1.
References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Snowflake Documentation on Virtual Warehouses2
NEW QUESTION 92
- (Topic 1)
What SQL command would be used to view all roles that were granted to user1?
Answer: A
Explanation:
The correct command to view all roles granted to a specific user in Snowflake is SHOW GRANTS TO USER <user_name>;. This command lists all access control
privileges that have been explicitly granted to the specified user.
References: SHOW GRANTS | Snowflake Documentation
NEW QUESTION 93
- (Topic 1)
What transformations are supported in a CREATE PIPE ... AS COPY ... FROM (....) statement? (Select TWO.)
Answer: AD
Explanation:
In a CREATE PIPE ... AS COPY ... FROM (....) statement, the supported transformations include filtering data using an optional WHERE clause and omitting
columns. The WHERE clause allows for the specification of conditions to filter the data that is being loaded, ensuring only relevant data is inserted into the table.
Omitting columns enables the exclusion of certain columns from the data load, which can be useful when the incoming data contains more columns than are
needed for the target table.
References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Simple Transformations During a Load1
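A hedged sketch of column omission/reordering inside a pipe's COPY statement (only the first and third columns of the staged files are loaded; all names are hypothetical):
CREATE PIPE mypipe AS
  COPY INTO mytable (id, amount)
  FROM (SELECT t.$1, t.$3 FROM @mystage t)
  FILE_FORMAT = (TYPE = 'CSV');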
NEW QUESTION 98
- (Topic 1)
Which of the following are benefits of micro-partitioning? (Select TWO)
Answer: BC
Explanation:
Micro-partitions in Snowflake are immutable objects, which means once they are written, they cannot be modified. This immutability supports the use of Time
Travel, allowing users to access historical data within a defined period. Additionally, micro-partitions can significantly reduce the amount of I/O from object storage
to virtual warehouses. This is because Snowflake's query optimizer can skip over micro-partitions that do not contain relevant data for a query, thus reducing the
amount of data that needs to be scanned and transferred.
References: [COF-C02] SnowPro Core Certification Exam Study Guide https://docs.snowflake.com/en/user-guide/tables-clustering-micropartitions.html
Answer: C
Explanation:
Snowpipe is used for continuous, automated data loading into Snowflake. It uses a COPY INTO <table> statement within a pipe object to load data from files as
soon as they are available in a stage. Snowpipe does not execute UDFs, stored procedures, or insert statements. References: Snowpipe | Snowflake
Documentation
Answer: A
Explanation:
The LIST command in Snowflake is used to validate and display the list of files in a specified stage. When a user has unloaded data to a stage, running the LIST
command against that stage (for example, list @mystage) will show all the files that have been uploaded to it, allowing the user to verify the data that was unloaded.
References:
? Snowflake Documentation on Stages
? SnowPro® Core Certification Study Guide
Answer: CE
Explanation:
Snowflake manages various compute resources and features, including Snowpipe and the ability to scale up a warehouse. Snowpipe is Snowflake's continuous
data ingestion service that allows users to load data as soon as it becomes available. Scaling up a warehouse refers to increasing the compute resources
allocated to a virtual warehouse to handle larger workloads or improve performance.
References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Snowflake Documentation on Snowpipe and Virtual Warehouses1
A. Bytes scanned
B. Bytes sent over the network
C. Number of partitions scanned
D. Percentage scanned from cache
E. External bytes scanned
Answer: AC
Explanation:
In the query profiler view, the components that represent areas that can be used to help optimize query performance include "Bytes scanned" and "Number
of partitions scanned". "Bytes scanned" indicates the total amount of data the query had to read and is a direct indicator of the query's efficiency. Reducing
the bytes scanned can lead to lower data transfer costs and faster query execution. "Number of partitions scanned" reflects how well the data is clustered;
fewer partitions scanned typically means better performance because the system can skip irrelevant data more effectively.
References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Snowflake Documentation on Query Profiling1
A. Data is hashed by the cluster key to facilitate fast searches for common data values
B. Larger micro-partitions are created for common data values to reduce the number of partitions that must be scanned
C. Smaller micro-partitions are created for common data values to allow for more parallelism
D. Data may be colocated by the cluster key within the micro-partitions to improve pruning performance
Answer: D
Explanation:
When a CLUSTER BY clause is added to a Snowflake table, it specifies one or more columns to organize the data within the table's micro-partitions. This
clustering aims to colocate data with similar values in the same or adjacent micro-partitions. By doing so, it enhances the efficiency of query pruning, where the
Snowflake query optimizer can skip over irrelevant micro-partitions that do not contain the data relevant to the query, thereby improving performance.
References:
? Snowflake Documentation on Clustering Keys & Clustered Tables1.
? Community discussions on how source data's ordering affects a table with a cluster key
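For illustration, defining a clustering key and checking its effect (the table and columns are hypothetical):
ALTER TABLE sales CLUSTER BY (sale_date, region);
-- Inspect how well the data is clustered on those columns:
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(sale_date, region)');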
A. ORC
B. XML
C. Avro
D. Parquet
E. JSON
Answer: DE
Explanation:
Snowflake supports unloading data in several semi-structured file formats, including Parquet and JSON. These formats allow for efficient storage and querying of
semi-structured data, which can be loaded directly into Snowflake tables without requiring a predefined schema12.
https://docs.snowflake.com/en/user-guide/data-unload-prepare.html
A. CSV
B. JSON
C. Parquet
D. XML
Answer: A
Explanation:
The default file format for the COPY command in Snowflake, when not specified, is CSV (Comma-Separated Values). This format is widely used for data
exchange because it is simple, easy to read, and supported by many data analysis tools.
A. Standard Edition
B. Enterprise Edition
C. Business Critical Edition
D. Virtual Private Snowflake Edition
Answer: B
Explanation:
Materialized views in Snowflake are a feature that allows for the pre-computation and storage of query results for faster query performance. This feature is
available starting from the Enterprise Edition of Snowflake. It is not available in the Standard Edition, and while it is also available in higher editions like Business
Critical and Virtual Private Snowflake, the Enterprise Edition is the minimum requirement. References:
? Snowflake Documentation on CREATE MATERIALIZED VIEW1.
? Snowflake Documentation on Working with Materialized Views
https://docs.snowflake.com/en/sql-reference/sql/create-materialized-view.html
A. Clustering keys update the micro-partitions in place with a full sort, and impact the DML operations.
B. Clustering keys sort the designated columns over time, without blocking DML operations
C. Clustering keys create a distributed, parallel data structure of pointers to a table's rows and columns
D. Clustering keys establish a hashed key on each node of a virtual warehouse to optimize joins at run-time
Answer: B
Explanation:
Clustering keys in Snowflake work by sorting the designated columns over time. This process is done in the background and does not block data manipulation
language (DML) operations, allowing for normal database operations to continue without interruption. The purpose of clustering keys is to organize the data within
micro-partitions to optimize query performance1.
References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Snowflake Documentation on Clustering1
A. CREATE SHARE
B. ALTER WAREHOUSE
C. DROP ROLE
D. SHOW SCHEMAS
E. DESCRIBE TABLE
Answer: A
Explanation:
In Snowflake, a reader account is a type of account that is intended for consuming shared data rather than performing any data management or DDL operations.
The CREATE SHARE command is used to share data from your account with another account, which is not a capability provided to reader accounts. Reader
accounts are typically restricted from creating shares, as their primary purpose is to read shared data rather than to share it themselves.
References:
? Snowflake Documentation on Reader Accounts
? SnowPro® Core Certification Study Guide
A. Zero-copy cloning creates a mirror copy of a database that updates with the original
B. Software updates are automatically applied on a quarterly basis
C. Snowflake eliminates resource contention with its virtual warehouse implementation
D. Multi-cluster warehouses allow users to run a query that spans across multiple clusters
E. Snowflake automatically sorts DATE columns during ingest for fast retrieval by date
Answer: C
Explanation:
One of the key features of Snowflake's architecture is its unique approach to eliminating resource contention through the use of virtual warehouses. This is
achieved by separating storage and compute resources, allowing multiple virtual warehouses to operate independently on the same data without affecting each
other. This means that different workloads, such as loading data, running queries, or performing complex analytics, can be processed simultaneously without any
performance degradation due to resource contention.
References:
? Snowflake Documentation on Virtual Warehouses
? SnowPro® Core Certification Study Guide
A. When dropping an external stage, the files are not removed and only the stage is dropped
B. When dropping an external stage, both the stage and the files within the stage are removed
C. When dropping an internal stage, the files are deleted with the stage and the files are recoverable
D. When dropping an internal stage, the files are deleted with the stage and the files are not recoverable
E. When dropping an internal stage, only selected files are deleted with the stage and are not recoverable
Answer: AD
Explanation:
When an external stage is dropped in Snowflake, the reference to the external storage location is removed, but the actual files within the external storage (like
Amazon S3, Google Cloud Storage, or Microsoft Azure) are not deleted. This means that the data remains intact in the external storage location, and only the
stage object in Snowflake is removed.
On the other hand, when an internal stage is dropped, any files that were uploaded to the stage are deleted along with the stage itself. These files are not
recoverable once the internal stage is dropped, as they are permanently removed from Snowflake's storage. References:
? [COF-C02] SnowPro Core Certification Exam Study Guide
? Snowflake Documentation on Stages
Answer: A
Explanation:
When a pipe is recreated using the CREATE OR REPLACE
PIPE command, the load history of the pipe is reset. This means that Snowpipe will consider all files in the stage as new and will attempt to load them, even if they
were loaded previously by the old pipe2.
A. 1 day
B. 7 days
C. 12 hours
D. 0 days
Answer: D
Explanation:
Transient tables in Snowflake have a minimum Fail-safe retention time period of 0 days. This means that once the Time Travel retention period ends, there is no
additional Fail-safe period for transient tables
A. There is no need to have a Snowflake account in the target region, a share will be created for each user.
B. The listing is replicated into all selected regions automatically, the data is not.
C. The user must have the ORGADMIN role available in at least one account to link accounts for replication.
D. Shares attached to listings in remote regions can be viewed from any account in an organization.
E. For a standard listing the user can wait until the first customer requests the data before replicating it to the target region.
Answer: BC
Explanation:
When publishing a Snowflake Data Marketplace listing into a remote region, it's important to note that while the listing is replicated into all selected regions
automatically, the data itself is not. Therefore, the data must be replicated separately. Additionally, the user must have the ORGADMIN role in at least one account
to manage the replication of accounts1.
Answer: AD
Explanation:
The Snowflake Enterprise edition includes database replication and failover for business continuity and disaster recovery, as well as extended time travel
capabilities for longer data retention periods1.
Answer: A
Explanation:
Organizing files into logical paths can significantly improve the efficiency of data loading from an external stage. This practice helps in managing and locating files
easily, which can be particularly beneficial when dealing with large datasets or complex directory structures1.
A. VIEWS_HISTORY
B. OBJECT_HISTORY
C. ACCESS_HISTORY
D. LOGIN_HISTORY
Answer: C
Explanation:
The ACCESS_HISTORY view in the SNOWFLAKE.ACCOUNT_USAGE schema contains information about the access history of Snowflake objects, such as
tables and views, within the last 365 days1.
Users are responsible for data storage costs until what occurs?
Answer: B
Explanation:
Users are responsible for data storage costs in Snowflake until the data expires from the Fail-safe period. Fail-safe is the final stage in the data lifecycle, following
Time Travel, and provides additional protection against accidental data loss. Once data exits the Fail-safe state, users are no longer billed for its storage
A. INSERT
B. PUT
C. GET
D. COPY
Answer: D
Explanation:
The COPY command is used in Snowflake to load data from files located in an external stage into a table. This command allows for efficient and parallelized data
loading from various file formats1.
References = [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
Answer: CD
Explanation:
Data encryption and Time Travel are part of Snowflake's Continuous Data Protection (CDP) feature set that do not require additional configuration. Data
encryption is automatically applied to all files stored on internal stages, and Time Travel allows for querying and restoring data without any extra setup.
Answer: C
Explanation:
The query results cache can be used as long as the data in the table has not changed since the last time the query was run. If the underlying data has changed,
Snowflake will not use the cached results and will re-execute the query1.
A. They can be created as secure and hide the underlying metadata from the user.
B. They can only access tables from a single database.
C. They can contain only a single SQL statement.
D. They can be created to run with a caller's rights or an owner's rights.
Answer: D
Explanation:
Snowflake stored procedures can be created to execute with the privileges of the role that owns the procedure (owner's rights) or with the privileges of the role
that calls the procedure (caller's rights). This allows for flexibility in managing security and access control within Snowflake1.
The Snowflake Cloud Data Platform is described as having which of the following architectures?
A. Shared-disk
B. Shared-nothing
C. Multi-cluster shared data
D. Serverless query engine
Answer: C
Explanation:
Snowflake's architecture is described as a multi-cluster, shared data architecture. This design combines the simplicity of a shared-disk architecture with the
performance and scale-out benefits of a shared-nothing architecture, using a central repository accessible from all compute nodes2.
References = [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
A. The cloned views and the stored procedures will reference the cloned tables in the cloned database.
B. An error will occur, as views with qualified references cannot be cloned.
C. An error will occur, as stored objects cannot be cloned.
D. The stored procedures and views will refer to tables in the source database.
Answer: A
Explanation:
When cloning a database containing stored procedures and regular views with fully qualified table references, the cloned views and stored procedures will
reference the cloned tables in the cloned database (A). This ensures that the cloned database is a self-contained copy of the original, with all references pointing to
objects within the same cloned database. References: SnowPro Core Certification cloning database stored procedures views
A. Reader
B. Consumer
C. Vendor
D. Standard
E. Personalized
Answer: CE
Explanation:
In the Snowflake Data Marketplace, the types of data listings available include "Vendor", which refers to the providers of data, and "Personalized", which
indicates customized data offerings tailored to specific consumer needs45.
Answer: C
Explanation:
The Snowflake Search Optimization Service is designed to support improved performance for selective point lookup queries. These are queries that retrieve
specific records from a database, often based on a unique identifier or a small set of criteria3.
Answer: BD
Explanation:
The ResultSet cache is leveraged to quickly return results for repeated queries. Actions that prevent leveraging this cache include stopping the virtual warehouse
that the query is running against (B) and executing the RESULTS_SCAN() table function (D). Stopping the warehouse clears the local disk cache, including the
ResultSet cache1. The RESULTS_SCAN() function is used to retrieve the result of a previously executed query, which bypasses the need for the ResultSet cache.
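A minimal sketch of the second action (the documented name of the table function is RESULT_SCAN; the orders table is hypothetical):
SELECT * FROM orders;
-- Reuse the previous statement's result set instead of the ResultSet cache:
SELECT COUNT(*) FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));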
Answer: A
Explanation:
To ensure that additional multi-clusters are resumed with no delay, a virtual warehouse should be configured to a size larger than generally required. This
configuration allows for immediate availability of additional resources when needed, without waiting for new clusters to start up
Answer: CD
Explanation:
To delete staged files from a Snowflake stage, you can specify
the PURGE option in the COPY INTO <table> command, which will automatically delete the files after they have been successfully loaded. Additionally, you can
use
the REMOVE command after the load completes to manually delete the files from the stage12.
References = DROP STAGE, REMOVE
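A short sketch of both approaches (the stage, table, and pattern are illustrative):
-- Delete files automatically after a successful load:
COPY INTO mytable FROM @mystage PURGE = TRUE;
-- Or delete them manually after the load completes:
REMOVE @mystage PATTERN = '.*[.]csv[.]gz';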
A. Auto-resume applies only to the last warehouse that was started in a multi-cluster warehouse.
B. The ability to auto-suspend a warehouse is only available in the Enterprise edition or above.
C. SnowSQL supports both a configuration file and a command line option for specifying a default warehouse.
D. A user cannot specify a default warehouse when using the ODBC driver.
E. The default virtual warehouse size can be changed at any time.
Answer: CE
Explanation:
Snowflake virtual warehouses support a configuration file and command line options in SnowSQL to specify a default warehouse, which is characteristic C.
Additionally, the size of a virtual warehouse can be changed at any time, which is characteristic E. These features provide flexibility and ease of use in managing
compute resources2. References = [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
A. Protegrity
B. Tableau
C. DBeaver
D. SAP
Answer: A
Explanation:
Protegrity is listed as a data tokenization integration partner for Snowflake. This partnership allows Snowflake users to utilize Protegrity's tokenization solutions
within the Snowflake environment3.
References = [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
A. 30 days
B. 7 days
C. 48 hours
D. 24 hours
Answer: D
Explanation:
For a temporary table, the maximum total Continuous Data Protection (CDP) charges incurred are for the duration of the session in which the table was created,
which does not exceed 24 hours2.
References = [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation2
Answer: A
Explanation:
When loading data into Snowflake, it is recommended to organize the data into single files with 100-250 MB of compressed data per file. This size range is optimal
for parallel processing and can help in achieving better performance during data loading operations. References: [COF-C02] SnowPro Core Certification Exam
Study Guide
Answer: B
Explanation:
Operations that do not require compute resources are typically those that can leverage previously cached results. However, if no queries have been executed
previously, all the given operations would require compute to execute. It's important to note that certain operations like DDL statements and queries that hit the
result cache do not consume compute credits2.
A. Compute
B. Data storage
C. Cloud services
D. Cloud provider
Answer: C
Explanation:
In Snowflake's architecture, the Cloud Services layer is responsible for generating the query execution plan. This layer handles all the coordination, optimization,
and management tasks, including query parsing, optimization, and compilation into an execution plan that can be processed by the Compute layer.
Answer: AD
Explanation:
Best practices for using the ACCOUNTADMIN role include ensuring that all users with this role use Multi-factor Authentication (MFA) for added security.
Additionally, it is recommended to assign the ACCOUNTADMIN role to at least two users to avoid delays in case of password recovery issues, but to as few users
as possible to maintain strict control over account-level operations4.
A. Identifying queries that will likely run very slowly before executing them
B. Locating queries that consume a high amount of credits
C. Identifying logical issues with the queries
D. Identifying inefficient micro-partition pruning
E. Data spilling to a local or remote disk
Answer: DE
Explanation:
The Query Profile in Snowflake is used to identify performance issues with queries. Common issues that can be found using the Query Profile include identifying
inefficient micro-partition pruning (D) and data spilling to a local or remote disk (E). Micro-partition pruning is related to the efficiency of query execution, and data
spilling occurs when the memory is insufficient, causing the query to write data to disk, which can slow down the query performance1.
A. 5 MB
B. 8 GB
C. 16 MB
D. 32 MB
Answer: C
Explanation:
When unloading data from Snowflake with the COPY INTO <location> command, the default maximum file size (the MAX_FILE_SIZE copy option) is 16 MB
(16,777,216 bytes). A different size can be specified by setting MAX_FILE_SIZE explicitly in the command2.
A. 4
B. 8
C. 16
D. 32
Answer: B
Explanation:
In Snowflake, each size increase in virtual warehouses doubles the number of servers. Therefore, if a size Small virtual warehouse is made up of two servers, a
Large warehouse, which is two sizes larger, would be made up of eight servers (2 servers for Small, 4 for Medium, and 8 for Large)2.
Size specifies the amount of compute resources available per cluster in a warehouse. Snowflake supports the following warehouse sizes:
https://docs.snowflake.com/en/user-guide/warehouses-overview.html
Answer: C
Explanation:
When the retention period for an object ends in Snowflake, Time Travel on the historical data is dropped (C). This means that the ability to access historical data
via Time Travel is no longer available once the retention period has expired2.
Answer: D
Explanation:
The minimum edition of Snowflake required to use a SCIM security integration is the Enterprise Edition. SCIM integrations are used for automated management of
user identities and groups, and this feature is available starting from the Enterprise Edition of Snowflake. References: [COF-C02] SnowPro Core Certification Exam
Study Guide
Answer: C
Explanation:
Worksheets in Snowsight can be shared directly with other Snowflake users within the same account. This feature allows for collaboration and sharing of SQL
queries or Python code, as well as other data manipulation tasks1.
Answer: AE
Explanation:
A row access policy can be applied to a table or a view within the policy DDL when defining the policy. Additionally, an existing row access policy can be applied
to a table or a view using the ALTER <object> ADD ROW ACCESS POLICY <policy> command
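A hedged sketch of defining a row access policy and attaching it to a table (the policy logic, table, and column are hypothetical):
CREATE ROW ACCESS POLICY region_policy AS (region STRING)
  RETURNS BOOLEAN ->
  CURRENT_ROLE() = 'ADMIN' OR region = 'US';
ALTER TABLE sales ADD ROW ACCESS POLICY region_policy ON (region);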
Answer: B
Explanation:
The LOGIN_HISTORY view in the ACCOUNT_USAGE schema provides information about login attempts, including both successful and failed logins. This view
can be used to review the failed login attempts of a specific user for the past 30 days. References: [COF-C02] SnowPro Core Certification Exam Study Guide
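For illustration, a sketch of reviewing one user's failed logins over the past 30 days (the user name is hypothetical):
SELECT event_timestamp, user_name, error_message
FROM snowflake.account_usage.login_history
WHERE user_name = 'USER1'
  AND is_success = 'NO'
  AND event_timestamp >= DATEADD(day, -30, CURRENT_TIMESTAMP());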
A. Authentication
B. Resource management
C. Virtual warehouse caching
D. Query parsing and optimization
E. Query execution
F. Physical storage of micro-partitions
Answer: ABD
Explanation:
The responsibilities of Snowflake's Cloud Services layer include authentication (A), which ensures secure access to the platform; resource management (B),
which involves allocating and managing compute resources; and query parsing and optimization (D), which improves the efficiency and performance of SQL query
execution3.
Answer: A
Explanation:
In an auto-scaling multi-cluster virtual warehouse with the SCALING_POLICY set to ECONOMY, another cluster is started when the system has enough load for 2
minutes (A). This policy is designed to optimize the balance between performance and cost, starting additional clusters only when the sustained load justifies it2.
A. Materialized view
B. Sequence
C. Secure view
D. Transient table
E. Clustered table
Answer: AD
Explanation:
In Snowflake, both materialized views and transient tables will incur storage charges because they store data. They will also incur compute charges when queries
are run against them, as compute resources are used to process the queries. References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Answer: A
Explanation:
To go back to the first version of a transient table created three days prior,
one can use Time Travel if the DATA_RETENTION_TIME_IN_DAYS was set to at least 3 days. This allows the user to access historical data within the specified
retention period. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Answer: C
Explanation:
The graphical representation of warehouse utilization indicates periods of significant queuing, suggesting that the current single cluster cannot efficiently handle all
incoming queries. Configuring the warehouse to a multi-cluster warehouse will distribute the load among multiple clusters, reducing queuing times and improving
overall performance1.
References = Snowflake Documentation on Multi-cluster Warehouses1
Answer: D
Explanation:
In Snowflake's access control model, the default owner of an object is the role that was used to create the object. This role has the OWNERSHIP privilege on
the object and can grant access to other roles1.
A. Standard
B. Enterprise
C. Business Critical
D. Virtual Private Snowflake
Answer: B
Explanation:
The Enterprise edition of Snowflake allows for a dedicated metadata store, providing additional features designed for large-scale enterprises
Reference: https://docs.snowflake.com/en/user-guide/intro-editions.html
A. 1 second
B. 60 seconds
C. 5 minutes
D. 60 minutes
Answer: B
Explanation:
When a running virtual warehouse in Snowflake is suspended and then restarted, the minimum amount of time it will incur charges for is 60 seconds2.
Answer: D
Explanation:
The SQL command Select * from table(validate(t1, job_id => '_last')); is used to return errors from the last executed COPY command into table t1 in the current
session. It checks the results of the most recent data load operation and provides details on any errors that occurred during that process1.
A. Optimizes the virtual warehouse size and multi-cluster setting to economy mode
B. Allows a user to import the files in a sequential order
C. Increases the latency staging and accuracy when loading the data
D. Allows optimization of parallel operations
Answer: D
Explanation:
Snowflake recommends file sizes between 100-250 MB compressed when loading data to optimize parallel processing. Smaller, compressed files can be loaded
in parallel, which maximizes the efficiency of the virtual warehouses and speeds up the data loading process
Answer: AD
Explanation:
When leveraging third-party data from the Snowflake Data Marketplace, the data is live, ready-to-query, and can be personalized. Additionally, the data is
available without the need for copying or moving it to an individual Snowflake account, allowing for seamless integration with existing data
A. Standard
B. Enterprise
C. Business Critical
D. Virtual Private Snowflake (VPC)
Answer: B
Explanation:
The minimum Snowflake edition required to use Dynamic Data Masking is the Enterprise edition. This feature is not available in the Standard edition2.
- (Topic 2)
What do the terms scale up and scale out refer to in Snowflake? (Choose two.)
A. Scaling out adds clusters of the same size to a virtual warehouse to handle more concurrent queries.
B. Scaling out adds clusters of varying sizes to a virtual warehouse.
C. Scaling out adds additional database servers to an existing running cluster to handle more concurrent queries.
D. Snowflake recommends using both scaling up and scaling out to handle more concurrent queries.
E. Scaling up resizes a virtual warehouse so it can handle more complex workloads.
F. Scaling up adds additional database servers to an existing running cluster to handle larger workloads.
Answer: AE
Explanation:
Scaling out in Snowflake involves adding clusters of the same size to a virtual warehouse, which allows for handling more concurrent queries without affecting the
performance of individual queries. Scaling up refers to resizing a virtual warehouse to increase its compute resources, enabling it to handle more complex
workloads and larger queries more efficiently.
Answer: AC
Explanation:
The SQL commands that consume a stream and advance the stream offset are DML statements that use the stream as a source, such as UPDATE and INSERT operations. Specifically, "UPDATE ... FROM <stream>" and "INSERT INTO ... SELECT ... FROM <stream>" will consume the stream and move the offset forward, reflecting the changes made to the data.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
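A minimal sketch, assuming a stream my_stream on a source table (all names hypothetical):
-- Any DML that reads from the stream consumes it and advances the offset
INSERT INTO target_table
SELECT id, name FROM my_stream;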
Answer: A
Explanation:
When unloading data, Snowflake assigns each file a unique name to ensure there is no overlap or confusion between files. This is part of the bulk unloading
process where data is exported from Snowflake tables into flat files3.
Answer: D
Explanation:
When deciding on a clustering key for a table, Snowflake recommends using the columns that are most actively used in the select filters. This is because
clustering by these columns can improve the performance of queries that filter on these values, leading to more efficient scans and better overall query
performance2. References: [COF-C02] SnowPro Core Certification Exam Study Guide
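For example, a clustering key on the most frequently filtered columns could be defined as follows (names hypothetical):
ALTER TABLE sales CLUSTER BY (sale_date, region);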
A. SnowCD
B. Snowpark
C. Snowsight
D. SnowSQL
Answer: A
Explanation:
SnowCD (Snowflake Connectivity Diagnostic Tool) is used to diagnose and troubleshoot network connections to Snowflake. It runs a series of connection checks
to evaluate the network connection to Snowflake
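A hedged sketch of the typical workflow (file name hypothetical):
-- In Snowflake, generate the allowlist of hosts and ports to check
SELECT SYSTEM$ALLOWLIST();
-- Save the JSON output locally, then run from a terminal:
-- snowcd allowlist.json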
A. Schema stage
B. Named stage
C. User stage
D. Stream stage
E. Table stage
F. Database stage
Answer: BCE
Explanation:
Snowflake supports three types of internal stages: Named, User, and Table stages. These stages are used for staging data files to be loaded into Snowflake
tables. Schema, Stream, and Database stages are not supported as internal stages in Snowflake. References: Snowflake Documentation1.
A. Directory
B. File
C. Pre-signed
D. Scoped
Answer: C
Explanation:
The pre-signed URL type allows users or applications to download or access files directly from a Snowflake stage without authentication. This URL type is open
and can be used without needing to authenticate into Snowflake or pass an authorization token.
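A minimal sketch using Snowflake's GET_PRESIGNED_URL function (stage and file path hypothetical):
-- Generate a pre-signed URL valid for 3600 seconds
SELECT GET_PRESIGNED_URL(@my_stage, 'reports/q1.pdf', 3600);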
Answer: BC
A. 8MB
B. 16MB
C. 32MB
D. 128MB
Answer: B
Explanation:
The maximum size limit for a record of a VARIANT data type in Snowflake is 16MB. This allows for storing semi-structured data types like JSON, Avro, ORC,
Parquet, or XML within a single VARIANT column. References: Based on general database knowledge as of 2021.
A. The database ACCOUNTADMIN must define the clustering methodology for each Snowflake table.
B. Clustering is the way data is grouped together and stored within Snowflake micro- partitions.
C. The clustering key must be included in the COPY command when loading data into Snowflake.
D. Clustering can be disabled within a Snowflake account.
Answer: B
Explanation:
Clustering in Snowflake refers to the organization of data within micro-partitions, which are contiguous units of storage within Snowflake tables. Clustering keys can be defined to co-locate similar rows in the same micro-partitions, improving scan efficiency and query performance12.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
A. Account
B. Database
C. Schema
D. Table
E. Virtual warehouse
Answer: BC
Explanation:
In Snowflake, a namespace is comprised of a database and a schema. The combination of a database and schema uniquely identifies database objects within an
account
A. READ
B. OWNERSHIP
C. USAGE
D. WRITE
Answer: A
Explanation:
The minimum privilege required on an internal stage for any role to access unstructured data files using a file URL in the GET REST API is READ; for external stages, the equivalent privilege is USAGE. This allows the role to retrieve or download data files from the stage.
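A sketch of the corresponding grant (stage and role names hypothetical):
GRANT READ ON STAGE my_int_stage TO ROLE reporting_role;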
A. Create a view.
B. Cluster a table.
C. Enable the search optimization service.
D. Enable Time Travel.
E. Index a table.
Answer: BC
Explanation:
To optimize query performance in Snowflake, users can cluster a table, which organizes the data in a way that minimizes the amount of data scanned during queries. Additionally, enabling the search optimization service can improve the performance of selective point lookup queries on large tables34.
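A minimal sketch of enabling search optimization on a table (name hypothetical):
ALTER TABLE big_table ADD SEARCH OPTIMIZATION;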
A. COPY INTO
B. GET
C. PUT
D. TRANSFER
Answer: B
Explanation:
The command used to unload files from an internal stage to a local file system in Snowflake is the GET command. This command allows users to download data files that have been staged, making them available on the local file system for further use23.
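For example (stage, file, and local path hypothetical):
GET @my_int_stage/results/data_0_0_0.csv.gz file:///tmp/downloads/;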
Answer: C
Explanation:
In Snowflake, to change the columns referenced in a view, the view must be recreated with the required changes. The ALTER VIEW command does not allow
changing the definition of a view; it can only be used to rename a view, convert it to or from a secure view, or add, overwrite, or remove a comment for a view.
Therefore, the correct approach is to drop the existing view and create a new one with the desired column references.
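A sketch of recreating a view with different columns (names hypothetical):
CREATE OR REPLACE VIEW customer_v AS
SELECT customer_id, region   -- new column list
FROM customers;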
Answer: A
Explanation:
To change the existing file format type from CSV to JSON, the recommended way is to use the ALTER FILE FORMAT command with the SET TYPE=JSON
clause. This alters the file format specification to use JSON instead of CSV. References: Based on general Snowflake knowledge as of 2021.
A. GeoJSON
B. Array
C. XML
D. Object
E. BLOB
Answer: AC
Explanation:
Snowflake supports storing unstructured data and provides native support for semi-structured file formats such as JSON, Avro, Parquet, ORC, and XML1.
GeoJSON, being a type of JSON, and XML are among the formats that can be stored in Snowflake. References: [COF-C02] SnowPro Core Certification Exam
Study Guide
Answer: A
Explanation:
When a database is cloned in Snowflake, it does not retain any privileges that were granted on the source object. The clone will need to have privileges
reassigned as necessary for users to access it. References: [COF-C02] SnowPro Core Certification Exam Study Guide
A. JSON
B. BINARY
C. VARCHAR
D. VARIANT
Answer: D
Explanation:
The VARIANT data type in Snowflake can store multiple types of data structures, as it is designed to hold semi-structured data. It can contain any other data type,
including OBJECT and ARRAY, which allows it to represent various data structures
A. COPY INTO
B. CREATE PIPE
C. INSERT INTO
D. TABLE STREAM
Answer: B
Explanation:
The Snowflake feature that allows for small volumes of data to be continuously loaded into Snowflake and incrementally made available for analysis is Snowpipe. Snowpipe is designed for near-real-time data loading, enabling data to be loaded as soon as it's available in the storage layer3.
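A minimal Snowpipe sketch (pipe, table, and stage names hypothetical):
CREATE PIPE my_pipe AUTO_INGEST = TRUE AS
  COPY INTO raw_events FROM @events_stage
  FILE_FORMAT = (TYPE = JSON);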
A. Duo Security
B. OAuth
C. Okta
D. Single Sign-On (SSO)
Answer: A
Explanation:
Snowflake provides Multi-Factor Authentication (MFA) support as an integrated feature, powered by the Duo Security service. This service is managed completely
by Snowflake, and users do not need to sign up separately with Duo1
A. Account and user authentication is only available with the Snowflake Business Critical edition.
B. Support for HIPAA and GDPR compliance is available for all Snowflake editions.
C. Periodic rekeying of encrypted data is available with the Snowflake Enterprise edition and higher
D. Private communication to internal stages is allowed in the Snowflake Enterprise edition and higher.
Answer: C
Explanation:
One of the security features of Snowflake includes the periodic rekeying of encrypted data, which is available with the Snowflake Enterprise edition and higher2.
This ensures that the encryption keys are rotated regularly to maintain a high level of security. References: [COF-C02] SnowPro Core Certification Exam Study
Guide
A. Security
B. Data storage
C. Data visualization
D. Query computation
E. Metadata management
Answer: AE
Explanation:
The Cloud Services layer in Snowflake is responsible for various services, including security (like authentication and authorization) and metadata management
(like query parsing and optimization). References: Based on general cloud architecture knowledge as of 2021.
A. IMPORT SHARE
B. OWNERSHIP
C. REFERENCES
D. USAGE
Answer: B
Explanation:
A secure view's definition in Snowflake is exposed only to users with the OWNERSHIP privilege on the view. This ensures that only the role that owns the view (or roles that inherit its ownership) can see how the view is defined.
Answer: B
Explanation:
The Snowflake Query Profile provides a graphic representation of the main components of the query processing. This visual aid helps users understand the
execution details and performance characteristics of their queries4.
Answer: BE
Explanation:
A materialized view is beneficial when the query consumes many compute resources every time it runs (B), and when the results of the query do not change often
and are used frequently (E). This is because materialized views store pre-computed data, which can speed up query performance for workloads that are run
frequently or are complex
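A minimal sketch of such a materialized view (names hypothetical; note that materialized views require Enterprise edition or higher):
CREATE MATERIALIZED VIEW mv_sales_by_region AS
SELECT region, SUM(amount) AS total_amount
FROM sales
GROUP BY region;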
A. GO driver
B. Node.js driver
C. ODBC driver
D. Python connector
E. Spark connector
Answer: CD
Explanation:
Multi-Factor Authentication (MFA) token caching is typically supported for clients that maintain a persistent connection or session with Snowflake, such as the
ODBC driver and Python connector, to reduce the need for repeated MFA challenges. References: Based on general security practices in cloud services as of
2021.
Answer: D
Explanation:
To allow secure views the ability to reference data in multiple databases, the REFERENCE_USAGE privilege must be granted on each database that contains
objects referenced by the secure view2. This privilege is necessary before granting the SELECT privilege on a secure view to a share.
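A sketch of the required grants (database, view, and share names hypothetical):
GRANT REFERENCE_USAGE ON DATABASE other_db TO SHARE my_share;
GRANT SELECT ON VIEW shared_db.public.secure_v TO SHARE my_share;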
A. Schema Stage
B. User Stage
C. Database Stage
D. Table Stage
E. External Named Stage
F. Internal Named Stage
Answer: BDF
Explanation:
The Snowflake PUT command is used to upload files from a local file system to Snowflake stages, specifically the user stage, table stage, and internal named
stage. These stages are where the data files are temporarily stored before being loaded into Snowflake tables
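For example (file paths and object names hypothetical):
PUT file:///tmp/data.csv @%my_table;      -- table stage
PUT file:///tmp/data.csv @~/staged/;      -- user stage
PUT file:///tmp/data.csv @my_named_stage; -- internal named stage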
Answer: AE
Explanation:
To monitor data storage for individual tables, the commands and objects that can be used are "SHOW STORAGE BY TABLE;" and the Information Schema view "TABLE_STORAGE_METRICS". These tools provide detailed information about the storage utilization for tables. References: Snowflake Documentation
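A sketch of querying the Information Schema view (database and schema names hypothetical):
SELECT table_name, active_bytes, time_travel_bytes, failsafe_bytes
FROM mydb.information_schema.table_storage_metrics
WHERE table_schema = 'PUBLIC';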
A. Swift
B. JavaScript
C. Python
D. SQL
Answer: C
Explanation:
The Snowpark API allows developers to create User-Defined Functions (UDFs) in various languages, including Python, which is known for its ease of use and
wide adoption in data-related tasks. References: Based on general programming and cloud data service knowledge as of 2021.
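As a hedged illustration, the SQL DDL equivalent of registering a simple Python UDF (names and runtime version hypothetical):
CREATE OR REPLACE FUNCTION add_one(x INT)
RETURNS INT
LANGUAGE PYTHON
RUNTIME_VERSION = '3.8'
HANDLER = 'add_one'
AS
$$
def add_one(x):
    return x + 1
$$;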
A. Schemas
B. Roles
C. Secure Views
D. Stored Procedures
E. Tables
F. Secure User-Defined Functions (UDFs)
Answer: ACF
Explanation:
In Snowflake, you can share several types of objects with other Snowflake accounts. These include schemas, secure views, and secure user-defined functions
(UDFs). Sharing these objects allows for collaboration and data access across different Snowflake accounts while maintaining security and governance controls4.
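A minimal sketch of building such a share (object and account names hypothetical):
CREATE SHARE my_share;
GRANT USAGE ON DATABASE mydb TO SHARE my_share;
GRANT USAGE ON SCHEMA mydb.public TO SHARE my_share;
GRANT SELECT ON VIEW mydb.public.secure_v TO SHARE my_share;
ALTER SHARE my_share ADD ACCOUNTS = myorg.consumer_account;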
A. 8-16 MB
B. 16-24 MB
C. 10-99 MB
D. 100-250 MB
Answer: D
Explanation:
For continuous data loads using Snowpipe, the recommended compressed file size range is between 100-250 MB. This size range is suggested to optimize the
number of parallel operations for a load and to avoid size limitations, ensuring efficient and cost-effective data loading
Answer: A
Explanation:
Snowsight automatically detects if a target account is in a different region and enables cross-cloud auto-fulfillment when using a paid listing on the Snowflake
Marketplace. This feature allows Snowflake to manage the replication of data products to consumer regions as needed, without manual intervention1.
A. Enable the data sharing feature in the account and validate the view.
B. Use the CURRENT_ROLE and CURRENT_USER functions to validate secure views.
C. Use the CURRENT_ function to authorize users from a specific account to access rows in a base table.
D. Set the SIMULATED DATA SHARING CONSUMER session parameter to the name of the consumer account for which access is being simulated.
Answer: D
Explanation:
To validate that a data consumer will see the expected data, a data provider can set the SIMULATED_DATA_SHARING_CONSUMER session parameter to the name of the consumer account. Queries against secure views in that session then return only the rows that the specified consumer account would see, allowing the provider to verify access before sharing. References: Snowflake Documentation on secure views and data sharing
A. Table definition
B. Stage definition
C. Session level
D. COPY INTO TABLE statement
Answer: D
Explanation:
When file format options are specified in multiple locations, the load operation applies the options in the following order of precedence: first, the COPY INTO
TABLE statement; second, the stage definition; and third, the table definition1
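For example, options supplied in the COPY statement override those on the stage or table (names hypothetical):
COPY INTO my_table FROM @my_stage
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);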
A. Append-only
B. External
C. Insert-only
D. Standard
Answer: C
Explanation:
The stream type that can be used for tracking the records in external tables is insert-only. Insert-only streams are supported exclusively on external tables and track only rows that are added; there is no "external" stream type in Snowflake.
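A sketch of creating such a stream (names hypothetical):
CREATE STREAM ext_stream ON EXTERNAL TABLE my_ext_table INSERT_ONLY = TRUE;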
A. Data storage
B. Dynamic data masking
C. Partition scanning
D. User authentication
E. Infrastructure management
Answer: DE
Explanation:
The Cloud Services layer in Snowflake includes activities such as user authentication and infrastructure management. This layer coordinates activities across
Snowflake, including security enforcement, query compilation and optimization, and more
A. USERADMIN
B. PUBLIC
C. ORGADMIN
D. SYSADMIN
Answer: A
Explanation:
The first user assigned to a new Snowflake account, typically with the ACCOUNTADMIN role, should create at least one additional user with the USERADMIN
administrative privilege. This role is responsible for creating and managing users and roles within the Snowflake account. References: Access control
considerations | Snowflake Documentation
Answer: BD
Explanation:
Snowflake recommends creating objects with a role that has the necessary privileges and is not overly permissive. SYSADMIN is typically used for managing system-level objects and operations. Creating objects with a custom role and granting this role to SYSADMIN allows for more granular control and adherence to the principle of least privilege. References: Based on best practices for database object ownership and role management.
A. src:salesperson.name
B. src:sa1esPerso
C. name
D. src:salesperson.Name
E. SRC:salesperson.name
F. SRC:salesperson.Name
Answer: AC
Explanation:
To access a JSON object in Snowflake, dot notation is used: the path to the attribute is specified after the column containing the JSON data. The column name (src or SRC) is case-insensitive, but JSON attribute names are case-sensitive, so "name" must be referenced exactly as it appears in the document. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Answer: B
Explanation:
To show all privileges granted to a specific schema, the command SHOW GRANTS ON SCHEMA <schema_name> should be used3. In this case, it would be
SHOW GRANTS ON SCHEMA ANALYTICS_DW.MARKETING. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Answer: A
Explanation:
A schema in Snowflake is a logical grouping of database objects, such as tables and views, that belongs to a single database. Each schema is part of a
namespace in Snowflake, which is inferred from the current database and schema in use for the session5
Answer: BC
Explanation:
Snowflake's column-level security features include Dynamic Data Masking and External Tokenization. Dynamic Data Masking uses masking policies to selectively mask data at query time, while External Tokenization allows for the tokenization of data before loading it into Snowflake and detokenizing it at query runtime5.
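A minimal Dynamic Data Masking sketch (policy, role, table, and column names hypothetical):
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('ANALYST_ROLE') THEN val ELSE '*****' END;
ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;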
A. Pre-signed URL
B. Scoped URL
C. Signed URL
D. File URL
Answer: A
Explanation:
Pre-signed URLs in Snowflake allow users to access unstructured data without the need for authentication into Snowflake or passing an authorization token.
These URLs are open and can be directly accessed or downloaded by any user or application, making them ideal for business intelligence applications or reporting
tools that need to display unstructured file contents
A. USAGE
B. OPERATE
C. MONITOR
D. OWNERSHIP
Answer: B
Explanation:
In Snowflake, the OPERATE privilege is required for a role to suspend or resume a task. This privilege allows the role to perform operational tasks such as
starting and stopping tasks, which includes suspending and resuming them6
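For example (task and role names hypothetical):
GRANT OPERATE ON TASK my_task TO ROLE task_operator;
ALTER TASK my_task SUSPEND;   -- or RESUME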
A. External tables
B. Materialized views
C. Tables and views that are not protected by row access policies
D. Casts on table columns (except for fixed-point numbers cast to strings)
Answer: C
Explanation:
Snowflake's search optimization service supports tables and views that are not protected by row access policies. It is designed to improve the performance of certain types of queries on tables, including selective point lookup queries and queries on fields in VARIANT, OBJECT, and ARRAY (semi-structured) columns1.
A. Indefinitely
B. Until the result_cache expires
C. Until the retention_time is met
D. Until the expiration time is exceeded
Answer: D
Explanation:
A data consumer who has a pre-signed URL can access data files using Snowflake until the expiration time is exceeded. The expiration time is set when the pre-signed URL is generated and determines how long the URL remains valid3.
Answer: CD
Explanation:
Snowpark is designed to bring data programmability to Snowflake, enabling developers to write code in familiar languages like Scala, Java, and Python. It allows for the execution of this code directly within Snowflake's virtual warehouses, eliminating the need for a separate cluster. Additionally, Snowpark's compatibility with Spark allows users to leverage their existing Spark code with minimal changes1.
A. Account
B. Database
C. Organization
D. Schema
E. Virtual warehouse
Answer: AE
Explanation:
Resource monitors in Snowflake can be configured at the account and virtual warehouse levels. They are used to track credit usage and control costs associated
with running virtual warehouses. When certain thresholds are reached, resource monitors can trigger actions such as sending alerts or suspending warehouses to
prevent excessive credit consumption. References: [COF-C02] SnowPro Core Certification Exam Study Guide
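A sketch of both configurations (names and quota hypothetical):
CREATE RESOURCE MONITOR monthly_quota WITH
  CREDIT_QUOTA = 100
  TRIGGERS ON 90 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;
ALTER WAREHOUSE my_wh SET RESOURCE_MONITOR = monthly_quota;  -- warehouse level
ALTER ACCOUNT SET RESOURCE_MONITOR = monthly_quota;          -- account level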
A. Query execution
B. Data loading
C. Time Travel data
D. Security
E. Authentication and access control
Answer: DE
Explanation:
The cloud services layer of Snowflake architecture handles various aspects including security functions, authentication of user sessions, and access control,
ensuring that only authorized users can access the data and services23.
Answer: A
Explanation:
The Snowflake Cloud Services layer coordinates activities within the Snowflake account. It is responsible for tasks such as authentication, infrastructure
management, metadata management, query parsing and optimization, and access control. References: Based on general cloud database architecture knowledge.
A. ACCOUNTADMIN
B. ORGADMIN
C. SECURITYADMIN
D. SYSADMIN
Answer: A
Explanation:
To use Partner Connect, the ACCOUNTADMIN role is required. Partner Connect allows account administrators to easily create trial accounts with selected
Snowflake business partners and integrate these accounts with Snowflake
Answer: C
Explanation:
Snowflake's architecture is divided into three layers: database storage, query processing, and cloud services. The metadata, which includes information about the structure of the data, the SQL operations performed, and the service-level policies, is stored in the cloud services layer. This layer acts as the brain of the Snowflake environment, managing metadata, query optimization, and transaction coordination.
A. 1
B. 2
C. 3
D. 4
Answer: A
Explanation:
Snowflake allows for only one resource monitor to be assigned at the account level. This monitor oversees the credit usage of all the warehouses in the account.
References: Snowflake Documentation
A. File
B. Pre-signed
C. Scoped
D. Virtual-hosted style
Answer: C
Explanation:
The Snowflake URL type used by directory tables is the scoped URL. This type of URL provides access to files in a stage with metadata, such as the Snowflake
file URL, for each file
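A sketch of listing a directory table and generating scoped URLs (stage name hypothetical; the stage must have a directory table enabled):
SELECT relative_path, BUILD_SCOPED_FILE_URL(@my_stage, relative_path)
FROM DIRECTORY(@my_stage);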
A. Contacts
B. Sharing settings
C. Copy History
D. Query History
E. Automatic Clustering History
Answer: DE
Explanation:
The Activity area of Snowsight includes the Query History page, which allows users to monitor and view details about queries executed in their account, including
performance data1. It also includes the Automatic Clustering History, which provides insights into the automatic clustering operations performed on tables2.
A. If an account is enrolled with a Data Exchange, it will lose its access to the Snowflake Marketplace.
B. A Data Exchange allows groups of accounts to share data privately among the accounts.
C. A Data Exchange allows accounts to share data with third, non-Snowflake parties.
D. Data Exchange functionality is available by default in accounts using the Enterprise edition or higher.
E. The sharing of data in a Data Exchange is bidirectional.
F. An account can be a provider for some datasets and a consumer for others.
Answer: BE
Explanation:
A Snowflake Data Exchange allows groups of accounts to share data privately among the accounts (B), and it supports bidirectional sharing, meaning an account
can be both a provider and a consumer of data (E). This facilitates secure and governed data collaboration within a selected group3.
A. ROW_NUMBER
B. TABLE
C. TABULAR
D. VALUES
Answer: B
Explanation:
In Snowflake, a tabular User-Defined Function (UDF) is defined with a return clause that includes the keyword TABLE. This indicates that the UDF will return a set of rows, which can be used in the FROM clause of a query. References: Based on general Snowflake knowledge as of 2021.
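A minimal SQL UDTF sketch (function, table, and column names hypothetical):
CREATE OR REPLACE FUNCTION orders_for(cust_id NUMBER)
RETURNS TABLE (order_id NUMBER, amount NUMBER)
AS
$$
  SELECT order_id, amount FROM orders WHERE customer_id = cust_id
$$;
SELECT * FROM TABLE(orders_for(42));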
- (Topic 3)
What type of columns does Snowflake recommend to be used as clustering keys? (Select TWO).
A. A VARIANT column
B. A column with very low cardinality
C. A column with very high cardinality
D. A column that is most actively used in selective filters
E. A column that is most actively used in join predicates
Answer: CD
Explanation:
Snowflake recommends using columns with very high cardinality and those that are most actively used in selective filters as clustering keys. High cardinality columns have a wide range of unique values, which helps in evenly distributing the data across micro-partitions. Columns used in selective filters help in pruning the number of micro-partitions to scan, thus improving query performance. References: Based on general database optimization principles.
Answer: A
Explanation:
When a view is defined on a permanent table, and a temporary table with the same name is created in the same schema, the query from the view will return the
data from the permanent table. Temporary tables are session-specific and do not affect the data returned by views defined on permanent tables2.
A. history
B. config
C. snowsql.cnf
D. snowsql.pubkey
Answer: B
Explanation:
The SnowSQL file that can store connection information is named "config". It is used to store user credentials and connection details for easy access to Snowflake instances. References: Based on general database knowledge as of 2021.
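A hedged sketch of a config entry (all values hypothetical):
[connections.my_example_connection]
accountname = myorganization-myaccount
username = jsmith
dbname = mydb
warehousename = my_wh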
Answer: CD
Explanation:
To grant a role the privilege to select data from all current and future tables in a schema, two separate commands are needed. The first command grants the
SELECT privilege on all existing tables within the schema, and the second command grants the SELECT privilege on all tables that will be created in the future
within the same schema.
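For example (schema and role names hypothetical):
GRANT SELECT ON ALL TABLES IN SCHEMA mydb.myschema TO ROLE analyst;
GRANT SELECT ON FUTURE TABLES IN SCHEMA mydb.myschema TO ROLE analyst;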
A. When after 2-3 consecutive checks the system determines that the load on the most-loaded cluster could be redistributed.
B. When after 5-6 consecutive checks the system determines that the load on the most-loaded cluster could be redistributed.
C. When after 5-6 consecutive checks the system determines that the load on the least-loaded cluster could be redistributed.
D. When after 2-3 consecutive checks the system determines that the load on the least-loaded cluster could be redistributed.
Answer: D
Explanation:
In a standard multi-cluster warehouse with auto-scaling, a cluster will shut down when, after 2-3 consecutive checks, the system determines that the load on the
least-loaded cluster could be redistributed to other clusters. This ensures efficient resource utilization and cost management. References: [COF-C02] SnowPro
Core Certification Exam Study Guide
Answer: C
Explanation:
Once a database has been created in a consumer account from a share, the shared objects become accessible to users in that account. The shared objects are not transferred or copied; they remain in the provider's account and are accessible to the consumer account.