A company's security audit requires generating a report listing all Snowflake logins (e.g., date and user) within the last 90 days. Which of the following statements will return the required information?
SELECT LAST_SUCCESS_LOGIN, LOGIN_NAME
FROM ACCOUNT_USAGE.USERS;
SELECT EVENT_TIMESTAMP, USER_NAME
FROM table(information_schema.login_history_by_user());
SELECT EVENT_TIMESTAMP, USER_NAME
FROM ACCOUNT_USAGE.ACCESS_HISTORY;
SELECT EVENT_TIMESTAMP, USER_NAME
FROM ACCOUNT_USAGE.LOGIN_HISTORY;
To generate a report listing all Snowflake logins within the last 90 days, the ACCOUNT_USAGE.LOGIN_HISTORY view should be used. This view records login attempts, both successful and unsuccessful, and retains data for 365 days, which comfortably covers a 90-day audit window. By contrast, the INFORMATION_SCHEMA.LOGIN_HISTORY_BY_USER table function only returns the last 7 days of login activity, and ACCESS_HISTORY tracks object access, not logins.
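For reference, a minimal sketch of the full report query with the 90-day filter applied (the answer options omit the date predicate):
SELECT EVENT_TIMESTAMP, USER_NAME
FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
WHERE EVENT_TIMESTAMP >= DATEADD(day, -90, CURRENT_TIMESTAMP());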
Which stage type can be altered and dropped?
Database stage
External stage
Table stage
User stage
External stages can be altered and dropped in Snowflake. An external stage is a named object that points to an external location, such as an S3 bucket, where data files are stored; its definition can be modified with ALTER STAGE or removed entirely with DROP STAGE when it is no longer needed. This is in contrast to table stages and user stages, which are implicit stages tied to a specific table or user and cannot be altered or dropped independently.
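A minimal sketch of that lifecycle, assuming a hypothetical bucket and stage name (credentials omitted):
-- create an external stage pointing to an S3 location
CREATE STAGE my_ext_stage URL = 's3://mybucket/data/';
-- alter its definition
ALTER STAGE my_ext_stage SET URL = 's3://mybucket/archive/';
-- drop it when no longer needed
DROP STAGE my_ext_stage;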
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Stages
From which Snowflake interface can the PUT command be used to stage local files?
SnowSQL
Snowflake classic web interface (UI)
Snowsight
.NET driver
SnowSQL is the command-line client for Snowflake that allows users to execute SQL queries and perform all DDL and DML operations, including staging files for bulk data loading with the PUT command. The PUT command cannot be executed from the web interface worksheets, so SnowSQL (or a driver/connector) is required for staging local files. SnowSQL is specifically designed for scripting and automating tasks.
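For example, staging a local file from a SnowSQL session (the file path and stage are illustrative):
-- upload a local file to the current user's stage
PUT file:///tmp/data/mydata.csv @~/staged AUTO_COMPRESS=TRUE;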
References:
SnowPro Core Certification Exam Study Guide
Snowflake Documentation on SnowSQL
https://docs.snowflake.com/en/user-guide/snowsql-use.html
What is a machine learning and data science partner within the Snowflake Partner Ecosystem?
Informatica
Power BI
Adobe
DataRobot
DataRobot is recognized as a machine learning and data science partner within the Snowflake Partner Ecosystem. It provides an enterprise AI platform that enables users to build and deploy accurate predictive models quickly. As a partner, DataRobot integrates with Snowflake to enhance data science capabilities.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Machine Learning & Data Science Partners
https://docs.snowflake.com/en/user-guide/ecosystem-analytics.html
What actions will prevent leveraging of the ResultSet cache? (Choose two.)
Removing a column from the query SELECT list
Stopping the virtual warehouse that the query is running against
Clustering of the data used by the query
Executing the RESULT_SCAN() table function
Changing a column that is not in the cached query
The ResultSet cache is leveraged to quickly return results for repeated queries, but only when the new query is syntactically identical to the cached one and the underlying table data has not changed. Removing a column from the query SELECT list (A) changes the query text, so the cached result can no longer be matched. Clustering of the data used by the query (C) rewrites micro-partitions, which counts as a change to the underlying data and invalidates the cached result. Stopping the virtual warehouse does not prevent reuse, because the result cache resides in the cloud services layer and does not require a running warehouse, and RESULT_SCAN() reads a previously cached result rather than preventing its use.
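For illustration, RESULT_SCAN() reads the cached result of a prior query rather than re-executing it:
-- return the result of the most recent query as a table
SELECT * FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));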
Which command can be used to load data into an internal stage?
LOAD
COPY
GET
PUT
The PUT command is used to load data into an internal stage in Snowflake. This command uploads data files from a local file system to a named internal stage, making the data available for subsequent loading into a Snowflake table using the COPY INTO command.
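A minimal load sequence, assuming a hypothetical named internal stage and table:
-- upload the local file to the internal stage
PUT file:///tmp/sales.csv @my_int_stage;
-- then load the staged file into the table
COPY INTO sales FROM @my_int_stage FILE_FORMAT = (TYPE = CSV);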
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Data Loading
How often are encryption keys automatically rotated by Snowflake?
30 Days
60 Days
90 Days
365 Days
Snowflake automatically rotates encryption keys when they are more than 30 days old. Active keys are retired, and new keys are created. This process is part of Snowflake’s comprehensive security measures to ensure data protection and is managed entirely by the Snowflake service without requiring user intervention.
References:
Understanding Encryption Key Management in Snowflake
A user needs to create a materialized view in the schema MYDB.MYSCHEMA.
Which statements will provide this access?
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO ROLE MYROLE;
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO USER USER1;
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO USER1;
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO MYROLE;
In Snowflake, to create a materialized view, the user must have the CREATE MATERIALIZED VIEW privilege on the schema where the view will be created. Privileges are granted to roles, not directly to individual users. Therefore, the correct process is to grant the role to the user and then grant the privilege to the role itself.
The statement GRANT ROLE MYROLE TO USER USER1; grants the specified role to the user, allowing them to assume that role and exercise its privileges. The statement GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO ROLE MYROLE; grants the privilege to create materialized views within the specified schema to the role MYROLE. Any user who has been granted MYROLE can then create materialized views in MYDB.MYSCHEMA.
References:
Snowflake Documentation on Roles
Snowflake Documentation on Materialized Views
What is a limitation of a Materialized View?
A Materialized View cannot support any aggregate functions
A Materialized View can only reference up to two tables
A Materialized View cannot be joined with other tables
A Materialized View cannot be defined with a JOIN
 Materialized Views in Snowflake are designed to store the result of a query and can be refreshed to maintain up-to-date data. However, they have certain limitations, one of which is that they cannot be defined using a JOIN clause. This means that a Materialized View can only be created based on a single source table and cannot combine data from multiple tables using JOIN operations.
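By contrast, a materialized view over a single table is allowed, including supported aggregate functions. A sketch with hypothetical names:
CREATE MATERIALIZED VIEW daily_totals AS
SELECT order_date, SUM(amount) AS total_amount
FROM orders
GROUP BY order_date;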
References:
Snowflake Documentation on Materialized Views
SnowPro® Core Certification Study Guide
What happens when a cloned table is replicated to a secondary database? (Select TWO)
A read-only copy of the cloned tables is stored.
The replication will not be successful.
The physical data is replicated
Additional costs for storage are charged to a secondary account
Metadata pointers to cloned tables are replicated
When a cloned table is replicated to a secondary database in Snowflake, the following occurs:
C. The physical data is replicated: The actual data of the cloned table is physically replicated to the secondary database. This ensures that the secondary database has its own copy of the data, which can be used for read-only purposes or failover scenarios.
E. Metadata pointers to cloned tables are replicated: Along with the physical data, the metadata pointers that refer to the cloned tables are also replicated. This metadata includes information about the structure of the table and any associated properties.
It’s important to note that while the physical data and metadata are replicated, the secondary database is typically read-only and cannot be used for write operations. Additionally, while there may be additional storage costs associated with the secondary account, this is not a direct result of the replication process but rather a consequence of storing additional data.
References:
SnowPro Core Exam Prep — Answers to Snowflake’s LEVEL UP: Backup and Recovery
Snowflake SnowPro Core Certification Exam Questions Set 10
What can be used to view warehouse usage over time? (Select Two).
The LOAD_HISTORY view
The QUERY_HISTORY view
The SHOW WAREHOUSES command
The WAREHOUSE_METERING_HISTORY view
The billing and usage tab in the Snowflake web UI
To view warehouse usage over time, the QUERY_HISTORY view and the WAREHOUSE_METERING_HISTORY view can be utilized. The QUERY_HISTORY view allows users to monitor the performance of their queries and the load on their warehouses over a specified period. The WAREHOUSE_METERING_HISTORY view provides detailed information about the workload on a warehouse within a specified date range, including credits used and average running and queued loads.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
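A sketch of a usage query against the metering view (the 30-day window is illustrative):
SELECT WAREHOUSE_NAME, START_TIME, CREDITS_USED
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE START_TIME >= DATEADD(day, -30, CURRENT_TIMESTAMP())
ORDER BY START_TIME;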
Which of the following are benefits of micro-partitioning? (Select TWO)
Micro-partitions cannot overlap in their range of values
Micro-partitions are immutable objects that support the use of Time Travel.
Micro-partitions can reduce the amount of I/O from object storage to virtual warehouses
Rows are automatically stored in sorted order within micro-partitions
Micro-partitions can be defined on a schema-by-schema basis
 Micro-partitions in Snowflake are immutable objects, which means once they are written, they cannot be modified. This immutability supports the use of Time Travel, allowing users to access historical data within a defined period. Additionally, micro-partitions can significantly reduce the amount of I/O from object storage to virtual warehouses. This is because Snowflake’s query optimizer can skip over micro-partitions that do not contain relevant data for a query, thus reducing the amount of data that needs to be scanned and transferred.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
https://docs.snowflake.com/en/user-guide/tables-clustering-micropartitions.html
What are the default Time Travel and Fail-safe retention periods for transient tables?
Time Travel - 1 day. Fail-safe - 1 day
Time Travel - 0 days. Fail-safe - 1 day
Time Travel - 1 day. Fail-safe - 0 days
Transient tables are retained in neither Fail-safe nor Time Travel
Transient tables in Snowflake have a default Time Travel retention period of 1 day, which allows users to access historical data within the last 24 hours. However, transient tables do not have a Fail-safe period. Fail-safe is an additional layer of data protection that retains data beyond the Time Travel period for recovery purposes in case of extreme data loss. Since transient tables are designed for temporary or intermediate workloads with no requirement for long-term durability, they do not include a Fail-safe period by default.
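For example, a transient table declared with the (default) 1-day Time Travel retention, using hypothetical names:
CREATE TRANSIENT TABLE stg_events (id NUMBER, payload VARIANT)
DATA_RETENTION_TIME_IN_DAYS = 1;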
References:
Snowflake Documentation on Storage Costs for Time Travel and Fail-safe
A user unloaded a Snowflake table called mytable to an internal stage called mystage.
Which command can be used to view the list of files that have been uploaded to the stage?
list @mytable;
list @%mytable;
list @%mystage;
list @mystage;
 The command list @mystage; is used to view the list of files that have been uploaded to an internal stage in Snowflake. The list command displays the metadata for all files in the specified stage, which in this case is mystage. This command is particularly useful for verifying that files have been successfully unloaded from a Snowflake table to the stage and for managing the files within the stage.
References:
Snowflake Documentation on Stages
SnowPro® Core Certification Study Guide
Which of the following are valid methods for authenticating users for access into Snowflake? (Select THREE)
SCIM
Federated authentication
TLS 1.2
Key-pair authentication
OAuth
OCSP authentication
Snowflake supports several methods for authenticating users, including federated authentication, key-pair authentication, and OAuth. Federated authentication allows users to authenticate using their organization's identity provider. Key-pair authentication uses a public-private key pair for secure login, and OAuth is an open standard for access delegation commonly used for token-based authentication. SCIM is a user-provisioning protocol, TLS 1.2 is a transport encryption protocol, and OCSP is a certificate-revocation check; none of these are user authentication methods.
References:
Authentication policies | Snowflake Documentation
Authenticating to the server | Snowflake Documentation
External API authentication and secrets | Snowflake Documentation
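For example, key-pair authentication is enabled by attaching a public key to a user (the user name and key value are placeholders):
-- the key value below is truncated for illustration
ALTER USER user1 SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqh...';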
How would you determine the size of the virtual warehouse used for a task?
Since the root task may be executed concurrently (i.e., multiple instances), it is recommended to leave some margin in the execution window to avoid missing instances of execution
Querying (select) the size of the stream content would help determine the warehouse size. For example, if querying large stream content, use a larger warehouse size
If using the stored procedure to execute multiple SQL statements, it's best to test run the stored procedure separately to size the compute resource first
Since task infrastructure is based on running the task body on schedule, it's recommended to configure the virtual warehouse for automatic concurrency handling using Multi-cluster warehouse (MCW) to match the task schedule
 The size of the virtual warehouse for a task can be configured to handle concurrency automatically using a Multi-cluster warehouse (MCW). This is because tasks are designed to run their body on a schedule, and MCW allows for scaling compute resources to match the task’s execution needs without manual intervention. References: [COF-C02] SnowPro Core Certification Exam Study Guide
A sales table FCT_SALES has 100 million records.
The following Query was executed
SELECT COUNT(1) FROM FCT_SALES;
How did Snowflake fulfill this query?
Query against the result set cache
Query against a virtual warehouse cache
Query against the most-recently created micro-partition
Query against the metadata cache
 Snowflake is designed to optimize query performance by utilizing metadata for certain types of queries. When executing a COUNT query, Snowflake can often fulfill the request by accessing metadata about the table’s row count, rather than scanning the entire table or micro-partitions. This is particularly efficient for large tables like FCT_SALES with a significant number of records. The metadata layer maintains statistics about the table, including the row count, which enables Snowflake to quickly return the result of a COUNT query without the need to perform a full scan.
References:
Snowflake Documentation on Metadata Management
SnowPro® Core Certification Study Guide
Which of the following Snowflake objects can be shared using a secure share? (Select TWO).
Materialized views
Sequences
Procedures
Tables
Secure User Defined Functions (UDFs)
Secure sharing in Snowflake allows users to share specific objects with other Snowflake accounts without physically copying the data, thus not consuming additional storage. Tables and Secure User Defined Functions (UDFs) are among the objects that can be shared using this feature. Materialized views, sequences, and procedures are not shareable objects in Snowflake.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Secure Data Sharing
True or False: It is possible for a user to run a query against the query result cache without requiring an active Warehouse.
True
False
 Snowflake’s architecture allows for the use of a query result cache that stores the results of queries for a period of time. If the same query is run again and the underlying data has not changed, Snowflake can retrieve the result from this cache without needing to re-run the query on an active warehouse, thus saving on compute resources.
During periods of warehouse contention which parameter controls the maximum length of time a warehouse will hold a query for processing?
STATEMENT_TIMEOUT_IN_SECONDS
STATEMENT_QUEUED_TIMEOUT_IN_SECONDS
MAX_CONCURRENCY_LEVEL
QUERY_TIMEOUT_IN_SECONDS
The parameter STATEMENT_QUEUED_TIMEOUT_IN_SECONDS sets the limit on how long a query will wait in the queue for its chance to run on the warehouse. The query is cancelled once it reaches this limit. By default, the value of this parameter is 0, which means queries will wait in the queue indefinitely.
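For example, setting the parameter on a warehouse (the name and value are illustrative):
-- cancel any statement that waits in the queue longer than 5 minutes
ALTER WAREHOUSE my_wh SET STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = 300;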
https://community.snowflake.com/s/article/Warehouse-Concurrency-and-Statement-Timeout-Parameters
Which of the following commands cannot be used within a reader account?
CREATE SHARE
ALTER WAREHOUSE
DROP ROLE
SHOW SCHEMAS
DESCRIBE TABLE
 In Snowflake, a reader account is a type of account that is intended for consuming shared data rather than performing any data management or DDL operations. The CREATE SHARE command is used to share data from your account with another account, which is not a capability provided to reader accounts. Reader accounts are typically restricted from creating shares, as their primary purpose is to read shared data rather than to share it themselves.
References:
Snowflake Documentation on Reader Accounts
SnowPro® Core Certification Study Guide
In which use cases does Snowflake apply egress charges?
Data sharing within a specific region
Query result retrieval
Database replication
Loading data into Snowflake
 Snowflake applies egress charges in the case of database replication when data is transferred out of a Snowflake region to another region or cloud provider. This is because the data transfer incurs costs associated with moving data across different networks. Egress charges are not applied for data sharing within the same region, query result retrieval, or loading data into Snowflake, as these actions do not involve data transfer across regions.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Data Replication and Egress Charges
A user has unloaded data from Snowflake to a stage
Which SQL command should be used to validate which data was unloaded to the stage?
list @file_stage
show @file_stage
view @file_stage
verify @file_stage
The list command in Snowflake is used to validate and display the list of files in a specified stage. When a user has unloaded data to a stage, running the list @file_stage command will show all the files that have been written to that stage, allowing the user to verify the data that was unloaded.
References:
Snowflake Documentation on Stages
SnowPro® Core Certification Study Guide
What are two ways to create and manage Data Shares in Snowflake? (Choose two.)
Via the Snowflake Web Interface (UI)
Via the data_share=true parameter
Via SQL commands
Via Virtual Warehouses
In Snowflake, Data Shares can be created and managed in two primary ways:
Via the Snowflake Web Interface (UI): Users can create and manage shares through the graphical interface provided by Snowflake, which allows for a user-friendly experience.
Via SQL commands: Snowflake also allows the creation and management of shares using SQL commands. This method is more suited for users who prefer scripting or need to automate the process.
Which Snowflake technique can be used to improve the performance of a query?
Clustering
Indexing
Fragmenting
Using INDEX_HINTS
 Clustering is a technique used in Snowflake to improve the performance of queries. It involves organizing the data in a table into micro-partitions based on the values of one or more columns. This organization allows Snowflake to efficiently prune non-relevant micro-partitions during a query, which reduces the amount of data scanned and improves query performance.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Clustering
What is the default File Format used in the COPY command if one is not specified?
CSV
JSON
Parquet
XML
The default file format for the COPY command in Snowflake, when not specified, is CSV (Comma-Separated Values). This format is widely used for data exchange because it is simple, easy to read, and supported by many data analysis tools.
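For example, with hypothetical names:
-- no FILE_FORMAT is specified, so the staged files are parsed as CSV
COPY INTO mytable FROM @mystage/data/;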
A user is loading JSON documents composed of a huge array containing multiple records into Snowflake. The user enables the STRIP_OUTER_ARRAY file format option.
What does the STRIP_OUTER_ARRAY file format option do?
It removes the last element of the outer array.
It removes the outer array structure and loads the records into separate table rows.
It removes the trailing spaces in the last element of the outer array and loads the records into separate table columns
It removes the NULL elements from the JSON object eliminating invalid data and enables the ability to load the records
The STRIP_OUTER_ARRAY file format option in Snowflake is used when loading JSON documents that are composed of a large array containing multiple records. When this option is enabled, it removes the outer array structure, which allows each record within the array to be loaded as a separate row in the table. This is particularly useful for efficiently loading JSON data that is structured as an array of records.
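A minimal sketch with hypothetical names (the target table would hold each record, e.g. in a VARIANT column):
CREATE FILE FORMAT my_json_fmt TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE;
COPY INTO raw_json FROM @mystage/records.json
FILE_FORMAT = (FORMAT_NAME = 'my_json_fmt');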
References:
Snowflake Documentation on JSON File Format
[COF-C02] SnowPro Core Certification Exam Study Guide
A developer is granted ownership of a table that has a masking policy. The developer's role is not able to see the masked data. Will the developer be able to modify the table to read the masked data?
Yes, because a table owner has full control and can unset masking policies.
Yes, because masking policies only apply to cloned tables.
No, because masking policies must always reference specific access roles.
No, because ownership of a table does not include the ability to change masking policies
Even if a developer is granted ownership of a table with a masking policy, they will not be able to modify the table to read the masked data if their role does not have the necessary permissions. Ownership of a table does not automatically confer the ability to alter masking policies, which are designed to protect sensitive data. Masking policies are schema-level objects and require specific privileges to modify.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Masking Policies
Which feature is only available in the Enterprise or higher editions of Snowflake?
Column-level security
SOC 2 type II certification
Multi-factor Authentication (MFA)
Object-level access control
Column-level security is a feature that allows fine-grained control over access to specific columns within a table, implemented through dynamic data masking and external tokenization. This is particularly useful for managing sensitive data and ensuring that only authorized users can view or manipulate certain pieces of information. Column-level security requires the Enterprise edition or higher, whereas SOC 2 Type II certification, Multi-factor Authentication (MFA), and object-level access control are available in all Snowflake editions.
References:
https://docs.snowflake.com/en/user-guide/intro-editions.html
What feature can be used to reorganize a very large table on one or more columns?
Micro-partitions
Clustering keys
Key partitions
Clustered partitions
Clustering keys in Snowflake are used to reorganize large tables based on one or more columns. This feature optimizes the arrangement of data within micro-partitions to improve query performance, especially for large tables where efficient data retrieval is crucial.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
https://docs.snowflake.com/en/user-guide/tables-clustering-keys.html
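For example, defining a clustering key on a hypothetical large table:
ALTER TABLE big_sales CLUSTER BY (sale_date, region);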
A marketing co-worker has requested the ability to change the warehouse size on their medium virtual warehouse called MKTG_WH.
Which of the following statements will accommodate this request?
ALLOW RESIZE ON WAREHOUSE MKTG_WH TO USER MKTG_LEAD;
GRANT MODIFY ON WAREHOUSE MKTG_WH TO ROLE MARKETING;
GRANT MODIFY ON WAREHOUSE MKTG_WH TO USER MKTG_LEAD;
GRANT OPERATE ON WAREHOUSE MKTG_WH TO ROLE MARKETING;
The correct statement to accommodate this request is to grant the MODIFY privilege on the warehouse MKTG_WH to the role MARKETING. This privilege allows the role to change the warehouse size, among other properties. Privileges in Snowflake are granted to roles rather than directly to users, which rules out the USER-targeted options, and OPERATE only permits starting, stopping, suspending, and resuming a warehouse, not resizing it.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Access Control Privileges
True or False: Loading data into Snowflake requires that source data files be no larger than 16MB.
True
False
Snowflake does not require source data files to be no larger than 16MB. In fact, Snowflake recommends that for optimal load performance, data files should be roughly 100-250 MB in size when compressed. However, it is not recommended to load very large files (e.g., 100 GB or larger) due to potential delays and wasted credits if errors occur. Smaller files should be aggregated to minimize processing overhead, and larger files should be split to distribute the load among compute resources in an active warehouse.
References: Preparing your data files | Snowflake Documentation
Which semi-structured file formats are supported when unloading data from a table? (Select TWO).
ORC
XML
Avro
Parquet
JSON
Snowflake supports unloading data to files in the semi-structured formats JSON and Parquet. ORC, XML, and Avro are supported for loading only; they cannot be used as unload formats.
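Sketches of both unload variants, using hypothetical names (for JSON, the rows are first assembled into VARIANT values):
-- unload to Parquet files
COPY INTO @mystage/unload/pq_ FROM mytable
FILE_FORMAT = (TYPE = PARQUET) HEADER = TRUE;
-- unload to JSON files
COPY INTO @mystage/unload/js_ FROM (SELECT OBJECT_CONSTRUCT(*) FROM mytable)
FILE_FORMAT = (TYPE = JSON);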
https://docs.snowflake.com/en/user-guide/data-unload-prepare.html
Which of the following conditions must be met in order to return results from the results cache? (Select TWO).
The user has the appropriate privileges on the objects associated with the query
Micro-partitions have been reclustered since the query was last run
The new query is run using the same virtual warehouse as the previous query
The query includes a User Defined Function (UDF)
The query has been run within 24 hours of the previously-run query
To return results from the results cache in Snowflake, certain conditions must be met:
Privileges: The user must have the appropriate privileges on the objects associated with the query. This ensures that only authorized users can access cached data.
Time Frame: The query must have been run within 24 hours of the previously-run query. Snowflake’s results cache is designed to store the results of queries for a short period, typically 24 hours, to improve performance for repeated queries.
What SQL command would be used to view all roles that were granted to USER1?
show grants to user USER1;
show grants of user USER1;
describe user USER1;
show grants on user USER1;
The correct command to view all roles granted to a specific user in Snowflake is SHOW GRANTS TO USER USER1;. This command lists all the roles that have been granted to the named user.
What is the purpose of an External Function?
To call code that executes outside of Snowflake
To run a function in another Snowflake database
To share data in Snowflake with external parties
To ingest data from on-premises data sources
The purpose of an External Function in Snowflake is to call code that executes outside of the Snowflake environment. This allows Snowflake to interact with external services and leverage functionalities that are not natively available within Snowflake, such as calling APIs or running custom code hosted on cloud services.
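A minimal sketch of the DDL, assuming an API integration object already exists (all names and the URL are placeholders):
CREATE EXTERNAL FUNCTION score_text(input VARCHAR)
RETURNS VARIANT
API_INTEGRATION = my_api_integration
AS 'https://example.com/api/score';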
https://docs.snowflake.com/en/sql-reference/external-functions.html
Which command is used to unload data from a Snowflake table into a file in a stage?
COPY INTO
GET
WRITE
EXTRACT INTO
 The COPY INTO command is used in Snowflake to unload data from a table into a file in a stage. This command allows for the export of data from Snowflake tables into flat files, which can then be used for further analysis, processing, or storage in external systems.
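For example, unloading a table to a stage (names are illustrative):
COPY INTO @mystage/export/ FROM mytable
FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP);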
References:
Snowflake Documentation on Unloading Data
Snowflake SnowPro Core: Copy Into Command to Unload Rows to Files in Named Stage
What tasks can be completed using the copy command? (Select TWO)
Columns can be aggregated
Columns can be joined with an existing table
Columns can be reordered
Columns can be omitted
Data can be loaded without the need to spin up a virtual warehouse
The COPY command in Snowflake allows for the reordering of columns as they are loaded into a table, and it also permits the omission of columns from the source file during the load process. This provides flexibility in handling the schema of the data being ingested. References: [COF-C02] SnowPro Core Certification Exam Study Guide
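Reordering and omitting columns is done by selecting file columns by position during the load. A sketch with hypothetical names (the target table is assumed to have two columns):
-- load only the third and first columns of the staged files, in that order
COPY INTO target_table
FROM (SELECT t.$3, t.$1 FROM @mystage t)
FILE_FORMAT = (TYPE = CSV);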
Which cache type is used to cache data output from SQL queries?
Metadata cache
Result cache
Remote cache
Local file cache
 The Result cache is used in Snowflake to cache the data output from SQL queries. This feature is designed to improve performance by storing the results of queries for a period of time. When the same or similar query is executed again, Snowflake can retrieve the result from this cache instead of re-computing the result, which saves time and computational resources.
References:
Snowflake Documentation on Query Results Cache
SnowPro® Core Certification Study Guide
Which property or parameter can be used to temporarily disable Multi-Factor Authentication (MFA) for a Snowflake user?
DISABLE_MFA
EXT_AUTHN_DUO
MINS_TO_BYPASS_MFA
ALLOW_CLIENT_MFA_CACHING
The MINS_TO_BYPASS_MFA property can temporarily disable Multi-Factor Authentication (MFA) for a user, allowing them to log in without providing an MFA code for a specific duration (in minutes).
Example:
ALTER USER my_user SET MINS_TO_BYPASS_MFA = 10;
References:
Snowflake Documentation: Multi-Factor Authentication
What can a Snowflake user do to reduce queuing on a multi-cluster virtual warehouse?
Increase the warehouse size.
Use an economy scaling policy.
Increase the maximum number of clusters.
Convert the warehouse to a Snowpark-optimized warehouse.
In a multi-cluster virtual warehouse, Snowflake allows you to scale-out by adding more clusters to handle concurrent queries.
Increasing the MAX_CLUSTER_COUNT ensures more clusters are dynamically provisioned during high-concurrency periods, reducing queuing.
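For example (the warehouse name and value are illustrative):
ALTER WAREHOUSE my_wh SET MAX_CLUSTER_COUNT = 5;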
Why Other Options Are Incorrect:
A. Increase warehouse size: Increases compute power for individual queries but does not address concurrency issues.
B. Use an economy scaling policy: The Economy policy conserves credits by letting queries queue longer before additional clusters are started, so it does not reduce queuing.
D. Convert to Snowpark-optimized warehouse: Snowpark warehouses are designed for ML and Python-based workloads, not for query concurrency.
References:
Multi-Cluster Warehouses
Which service or tool is a Command Line Interface (CLI) client used for connecting to Snowflake to execute SQL queries?
Snowsight
SnowCD
Snowpark
SnowSQL
SnowSQL is the Command Line Interface (CLI) client provided by Snowflake for executing SQL queries and performing various tasks. It allows users to connect to their Snowflake accounts and interact with the Snowflake data warehouse.
Installation: SnowSQL can be downloaded and installed on various operating systems.
Configuration: Users need to configure SnowSQL with their Snowflake account credentials.
Usage: Once configured, users can run SQL queries, manage data, and perform administrative tasks through the CLI.
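For example, connecting from a terminal (the account identifier and user name are placeholders):
snowsql -a myorg-myaccount -u jdoe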
References:
Snowflake Documentation: SnowSQL
Snowflake Documentation: Installing SnowSQL
Which security feature is available in all Snowflake editions?
Data masking policies
Object-level access control
Object tagging
Customer-managed encryption keys
Object-level access control is available in all Snowflake editions.
This feature allows administrators to grant and revoke permissions on specific objects (e.g., tables, schemas, databases) to control access at a granular level.
Other features like data masking policies and customer-managed encryption keys are available only in higher Snowflake editions.
References:
Snowflake Documentation: Object-Level Access Control
Snowflake Editions Comparison
Which query types will have significant performance improvement when run using the search optimization service? (Select TWO)
Range searches
Equality searches
Substring searches
Queries with IN predicates
Queries with aggregation
The search optimization service in Snowflake significantly improves the performance of range searches and equality searches. Range searches involve looking for values within a specific range (e.g., BETWEEN, <, >). Equality searches involve looking for values that match a specific value (e.g., =).
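For example, enabling the service on a table (names are illustrative; the ON clause narrows it to equality lookups on one column):
ALTER TABLE sales ADD SEARCH OPTIMIZATION ON EQUALITY(customer_id);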
References:
Snowflake Documentation: Search Optimization Service
While unloading data into a stage, how can the user ensure that the output will be a single file?
Use the COPY option FILES=SINGLE.
Use the COPY option SINGLE=TRUE.
Use the GET option SINGLE=TRUE.
Use the GET option FILES=SINGLE.
To ensure that the output will be a single file when unloading data into a stage, you should use the COPY option SINGLE=TRUE. This option specifies that the result of the COPY INTO command should be written to a single file, rather than multiple files.
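For example, with hypothetical names:
COPY INTO @mystage/result.csv.gz FROM mytable
FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP)
SINGLE = TRUE;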
References:
Snowflake Documentation: COPY INTO
Which function should be used to find the query ID of the second query executed in a current session?
Select LAST_QUERY_ID(-2)
Select LAST_QUERY_ID(2)
Select LAST_QUERY_ID(1)
Select LAST_QUERY_ID(2)
The LAST_QUERY_ID function accepts an optional position argument. Positive values count from the beginning of the session, so SELECT LAST_QUERY_ID(2) returns the query ID of the second query executed in the current session (LAST_QUERY_ID(1) returns the first). Negative values count backward from the most recent query: LAST_QUERY_ID(-1), the default, returns the last query executed, and LAST_QUERY_ID(-2) returns the second most recent.
References:
Snowflake Documentation: LAST_QUERY_ID
Which Snowflake objects use storage? (Select TWO).
Regular table
Regular view
Cached query result
Materialized view
External table
Snowflake objects that use storage include:
Regular table (A): Stores actual data in micro-partitions.
Materialized view (D): Stores precomputed results for queries, which consume storage space.
Other Notes:
Regular view (B): Does not store data, as it is a logical object.
Cached query result (C): Does not consume long-term storage; it is temporary.
External table (E): Data is stored in external storage, not Snowflake.
References:
Snowflake Documentation: Tables
Snowflake Documentation: Materialized Views
What is a characteristic of a tag associated with a masking policy?
A tag can be dropped after a masking policy is assigned
A tag can have only one masking policy for each data type.
A tag can have multiple masking policies for each data type.
A tag can have multiple masking policies with varying data types
In Snowflake, a tag can be associated with only one masking policy for each data type. This means that for a given data type, you can define a single masking policy to be applied when a tag is used. Tags and masking policies are part of Snowflake's data classification and governance features, allowing for data masking based on the context defined by the tags.
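For example, assuming the tag and policy already exist (names are placeholders):
ALTER TAG pii_tag SET MASKING POLICY mask_string_vals;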
References:
Snowflake Documentation: Tag-Based Masking Policies
What action should be taken if a Snowflake user wants to share a newly created object in a database with consumers?
Use the automatic sharing feature for seamless access.
Drop the object and then re-add it to the database to trigger sharing.
Recreate the object with a different name in the database before sharing.
Use the grant privilege ... TO share command to grant the necessary privileges.
When a Snowflake user wants to share a newly created object in a database with consumers, the correct action to take is to use the GRANT privilege ... TO SHARE command to grant the necessary privileges for the object to be shared. This approach allows the object owner or a user with the appropriate privileges to share database objects such as tables, secure views, and streams with other Snowflake accounts by granting access to a named share.
The GRANT statement specifies which privileges are granted on the object to the share. The object remains in its original location; sharing does not duplicate or move the object. Instead, it allows the specified share to access the object according to the granted privileges.
For example, to share a table, the command would be:
GRANT SELECT ON TABLE new_table TO SHARE consumer_share;
This command grants the SELECT privilege on a table named new_table to a share named consumer_share, enabling the consumers of the share to query the table.
Automatic sharing, dropping and re-adding the object, or recreating the object with a different name are not required or recommended practices for sharing objects in Snowflake. The use of the GRANT statement to a share is the direct and intended method for this purpose.
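A fuller sketch of the sharing sequence (all names are illustrative; USAGE on the database and schema is also required so consumers can reach the table):
CREATE SHARE consumer_share;
GRANT USAGE ON DATABASE mydb TO SHARE consumer_share;
GRANT USAGE ON SCHEMA mydb.public TO SHARE consumer_share;
GRANT SELECT ON TABLE mydb.public.new_table TO SHARE consumer_share;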
The customer table in the T1 database is accidentally dropped.
Which privileges are required to restore this table? (Select TWO).
select privilege on the customer table
ownership privilege on the customer table
All privileges on the customer table
All privileges on the T1 database
create table privilege on the T1 database
Restoring a dropped table (with UNDROP TABLE) relies on Time Travel and requires specific privileges:
B. Ownership privilege on the customer table: Only a role with OWNERSHIP of the dropped table can restore it.
E. CREATE TABLE privilege on the T1 database: Restoring effectively recreates the table, so the role must also be able to create tables in the containing schema.
Why Other Options Are Incorrect:
A. Select privilege: Only allows querying data, not restoration.
C. All privileges on the customer table: The table has been dropped, so privileges on it cannot be exercised; OWNERSHIP specifically is required.
D. All privileges on the T1 database: Database-level privileges do not include ownership of the dropped table.
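The restore itself is a single statement, provided the table is still within its Time Travel retention period:
UNDROP TABLE customer;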
References:
Undrop Table
Snowflake Privileges Documentation
What objects can be cloned within Snowflake? (Select TWO).
Schemas
Users
External tables
Internal named stages
External named stages
In Snowflake, cloning is available for certain types of objects, allowing quick duplication without copying data:
Schemas: These can be cloned, enabling users to replicate entire schema structures, including tables and views, for development or testing.
External named stages: These can be cloned, since they store only a definition (URL, file format, and other properties) pointing to an external location; no data files are copied.
Users cannot be cloned, and neither can external tables or internal named stages, which physically hold staged files within Snowflake.
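For example (names are illustrative):
CREATE SCHEMA dev_schema CLONE prod_schema;
CREATE STAGE dev_ext_stage CLONE prod_ext_stage;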
How are micro-partitions enabled on Snowflake tables?
Micro-partitioning requires a cluster key on a table.
Micro-partitioning is automatically performed on a table.
Micro-partitioning requires the use of the search optimization service.
Micro-partitioning is defined by the user when a table is created.
Snowflake uses micro-partitions automatically to store table data. A micro-partition is a contiguous unit of storage containing a subset of data from a table, enabling efficient data retrieval and performance optimization.
Users do not need to manually define or enable micro-partitioning, as this process is automatically handled by Snowflake during data ingestion.
Micro-partitions store metadata (e.g., column statistics, range of values) to optimize query performance.
Why Other Options Are Incorrect:
A. Cluster key: Cluster keys help improve query performance for large datasets but are unrelated to enabling micro-partitions.
C. Search optimization service: This is a separate feature for optimizing point lookups, not for enabling micro-partitions.
D. User-defined micro-partitions: Snowflake does not allow users to define micro-partitions manually; it is fully automated.
References:
Snowflake Micro-Partitioning Documentation
Which activities are managed by Snowflake's Cloud Services layer? (Select TWO).
Authorisation
Access delegation
Data pruning
Data compression
Query parsing and optimization
Snowflake's Cloud Services layer is responsible for managing various aspects of the platform that are not directly related to computing or storage. Specifically, it handles authorisation, ensuring that users have appropriate access rights to perform actions or access data. Additionally, it takes care of query parsing and optimization, interpreting SQL queries and optimizing their execution plans for better performance. This layer abstracts much of the platform's complexity, allowing users to focus on their data and queries without managing the underlying infrastructure.
References:
Snowflake Architecture Documentation
Which parameters can be used together to ensure that a virtual warehouse never has a backlog of queued SQL statements? (Select TWO).
STATEMENT_QUEUED_TIMEOUT_IN_SECONDS
STATEMENT_TIMEOUT_IN_SECONDS
DATA_RETENTION_TIME_IN_DAYS
MAX_CONCURRENCY_LEVEL
MAX_DATA_EXTENSION_TIME_IN_DAYS
To prevent backlogs of queued SQL statements:
STATEMENT_QUEUED_TIMEOUT_IN_SECONDS (A): Sets a timeout for how long a query can wait in the queue. Queries that exceed this time are terminated, preventing long queues.
MAX_CONCURRENCY_LEVEL (D): Controls the maximum number of concurrent queries the virtual warehouse can execute. Increasing this value reduces queuing.
Why Other Options Are Incorrect:
B. STATEMENT_TIMEOUT_IN_SECONDS: Defines the total execution time for queries, not queuing behavior.
C. DATA_RETENTION_TIME_IN_DAYS: Related to table retention policies, not query concurrency.
E. MAX_DATA_EXTENSION_TIME_IN_DAYS: Related to time travel, not query queuing.
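For example, setting both parameters on a warehouse (values are illustrative):
ALTER WAREHOUSE my_wh SET
STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = 60
MAX_CONCURRENCY_LEVEL = 12;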
References:
Managing Query Concurrency
What does a table with a clustering depth of 1 mean in Snowflake?
The table has only 1 micro-partition.
The table has 1 overlapping micro-partition.
The table has no overlapping micro-partitions.
The table has no micro-partitions.
In Snowflake, a table's clustering depth indicates the degree of micro-partition overlap based on the clustering keys defined for the table. A clustering depth of 1 implies that the table has no overlapping micro-partitions. This is an optimal scenario, indicating that the table's data is well-clustered according to the specified clustering keys. Well-clustered data can lead to more efficient query performance, as it reduces the amount of data scanned during query execution and improves the effectiveness of data pruning.
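The depth can be checked directly (the table name is illustrative):
SELECT SYSTEM$CLUSTERING_DEPTH('mytable');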
References:
Snowflake Documentation on Clustering: Understanding Clustering Depth
Based on a review of a Query Profile, which scenarios will benefit the MOST from the use of a data clustering key? (Select TWO.)
A column that appears most frequently in ORDER BY operations
A column that appears most frequently in WHERE operations
A column that appears most frequently in GROUP BY operations
A column that appears most frequently in aggregate operations
A column that appears most frequently in join operations
Data Clustering Keys help Snowflake optimize performance by physically organizing data in micro-partitions based on specified columns.
Columns frequently used in ORDER BY or WHERE clauses benefit the most because clustering improves pruning, reducing the number of micro-partitions scanned.
Why Other Options Are Incorrect:
C. GROUP BY: Aggregate operations benefit less from clustering.
D. Aggregate operations: Clustering has limited effect here unless combined with filtering in WHERE.
E. JOIN operations: Join performance depends mainly on the join strategy and data distribution; clustering gives less direct benefit than it does for filter and sort predicates.
References:
Clustering Keys Documentation
How can the Query Profile be used to troubleshoot a problematic query?
It will indicate if a virtual warehouse memory is too small to run the query
It will indicate if a user lacks the privileges needed to run the query.
It will indicate if a virtual warehouse is in auto-scale mode
It will indicate if the user has enough Snowflake credits to run the query
The Query Profile in Snowflake provides detailed insights into the execution of a query. It helps in troubleshooting performance issues by showing the steps of the query execution and the resources consumed. One of the key aspects it reveals is whether the virtual warehouse memory was sufficient for the query.
Access Query Profile: Navigate to the Query History page and select the query you want to analyze.
Examine Query Execution Steps: The Query Profile displays the different stages of the query execution, including the time taken and resources used at each step.
Identify Memory Issues: Look for indicators of memory issues, such as spilling to disk or memory errors, which suggest that the virtual warehouse memory might be too small.
References:
Snowflake Documentation: Query Profile
Snowflake Documentation: Optimizing Queries
How does the search optimization service improve query performance?
By clustering the tables
By creating a persistent data structure
By using caching
By optimizing the use of micro-partitions
The Search Optimization Service in Snowflake enhances query performance by creating a persistent data structure that enables faster access to specific data, particularly for queries with selective filters on columns not often used in clustering. This persistent structure accelerates data retrieval without depending on clustering or caching, thereby improving response times for targeted queries.
Snowflake's micro-partitioning automatically manages table structure, but search optimization allows further enhancement for certain high-frequency, specific access patterns.
Which function unloads data from a relational table to JSON?
TRUNC
TRUNC(ID_NUMBER, 5)
ID_NUMBER*100
TO_CHAR
To unload data from a relational table to JSON format, you can use the TO_CHAR function. This function converts a number to a character string, which can then be serialized into JSON format. While there isn't a direct function specifically named for unloading to JSON, converting the necessary fields to a string representation is a common step in preparing data for JSON serialization.
References:
Snowflake Documentation: TO_CHAR Function
How should a Snowflake user configure a virtual warehouse to be in Maximized mode?
Set the WAREHOUSE_SIZE to 6XL.
Set the STATEMENT_TIMEOUT_IN_SECONDS to 0.
Set the MAX_CONCURRENCY_LEVEL to a value of 12 or larger.
Set the same value for both MIN_CLUSTER_COUNT and MAX_CLUSTER_COUNT.
In Snowflake, configuring a virtual warehouse to be in a "Maximized" mode implies maximizing the resources allocated to the warehouse for its duration. This is done to ensure that the warehouse has a consistent amount of compute resources available, enhancing performance for workloads that require a high level of parallel processing or for handling high query volumes.
To configure a virtual warehouse in maximized mode, you should set the same value for both MIN_CLUSTER_COUNT and MAX_CLUSTER_COUNT. This configuration ensures that the warehouse operates with a fixed number of clusters, thereby providing a stable and maximized level of compute resources.
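For example, fixing a warehouse at three clusters (the name and count are illustrative):
ALTER WAREHOUSE my_wh SET MIN_CLUSTER_COUNT = 3 MAX_CLUSTER_COUNT = 3;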
Reference to Snowflake documentation on warehouse sizing and scaling:
Warehouse Sizing and Scaling
Understanding Warehouses
A Snowflake user wants to design a series of transformations that need to be executed in a specific order, on a given schedule.
What Snowflake objects should be used?
Pipes
Tasks
Streams
Sequences
Tasks in Snowflake are used to create workflows or schedules for executing SQL statements, including transformations, in a specific order.
By defining dependencies between tasks, users can ensure they execute in a defined sequence.
Example:
-- root task: runs on a schedule (serverless, since no warehouse is specified)
CREATE TASK task_1
SCHEDULE = '5 MINUTE'
AS
INSERT INTO transformed_table SELECT * FROM raw_table;
-- child task: runs after task_1 completes
CREATE TASK task_2
AFTER task_1
AS
DELETE FROM raw_table WHERE processed = TRUE;
Tasks can also run on a fixed schedule or be triggered by preceding tasks in a chain.
Why Other Options Are Incorrect:
A. Pipes: Automates data loading but does not handle transformation workflows.
C. Streams: Tracks table changes but does not schedule or sequence transformations.
D. Sequences: Generate unique numbers, unrelated to task scheduling or transformations.
References:
Snowflake Tasks Documentation
Which table function will identify data that was loaded using COPY INTO