Which stage type can be altered and dropped?
Database stage
External stage
Table stage
User stage
 External stages can be altered and dropped in Snowflake. An external stage points to an external location, such as an S3 bucket, where data files are stored. Users can modify the stage’s definition or drop it entirely if it’s no longer needed. This is in contrast to table stages, which are tied to specific tables and cannot be altered or dropped independently.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Stages
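As a hedged illustration, an external stage can be modified or removed with standard DDL (the stage name and bucket path here are hypothetical):
ALTER STAGE my_ext_stage SET URL = 's3://my-bucket/new-path/';
DROP STAGE my_ext_stage;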
A company's security audit requires generating a report listing all Snowflake logins (e.g., date and user) within the last 90 days. Which of the following statements will return the required information?
SELECT LAST_SUCCESS_LOGIN, LOGIN_NAME
FROM ACCOUNT_USAGE.USERS;
SELECT EVENT_TIMESTAMP, USER_NAME
FROM table(information_schema.login_history_by_user())
SELECT EVENT_TIMESTAMP, USER_NAME
FROM ACCOUNT_USAGE.ACCESS_HISTORY;
SELECT EVENT_TIMESTAMP, USER_NAME
FROM ACCOUNT_USAGE.LOGIN_HISTORY;
To generate a report listing all Snowflake logins within the last 90 days, the ACCOUNT_USAGE.LOGIN_HISTORY view should be used. This view provides information about login attempts, including successful and unsuccessful logins, and is suitable for security audits. It retains data for 365 days, comfortably covering the 90-day audit window, whereas the INFORMATION_SCHEMA login history table functions retain only 7 days of history.
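For illustration, the complete statement might add a 90-day filter (the view and columns are as documented; the interval reflects the audit window in the question):
SELECT EVENT_TIMESTAMP, USER_NAME
FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
WHERE EVENT_TIMESTAMP >= DATEADD(day, -90, CURRENT_TIMESTAMP());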
Which command should be used to unload all the rows from a table into one or more files in a named stage?
COPY INTO
GET
INSERT INTO
PUT
To unload data from a table into one or more files in a named stage, the COPY INTO <location> command should be used. This command exports the result of a query, such as selecting all rows from a table, into files stored in the specified stage. The COPY INTO command is versatile, supporting various file formats and compression options for efficient data unloading.
References:
Snowflake Documentation: COPY INTO Location
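A minimal sketch of such an unload, assuming a hypothetical table mytable and named stage my_unload_stage:
COPY INTO @my_unload_stage/export/
FROM mytable
FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP)
OVERWRITE = TRUE;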
What does the worksheet and database explorer feature in Snowsight allow users to do?
Add or remove users from a worksheet.
Move a worksheet to a folder or a dashboard.
Combine multiple worksheets into a single worksheet.
Tag frequently accessed worksheets for ease of access.
The worksheet and database explorer in Snowsight allows users to organize their work, including moving a worksheet into a folder or onto a dashboard. Grouping related worksheets this way makes frequently used analyses easy to locate within Snowsight, Snowflake's web-based query and visualization interface. Snowsight does not offer worksheet tagging; organization is done through folders and dashboards.
References:
Snowflake Documentation: Snowsight (UI for Snowflake)
What can a Snowflake user do in the Activity section in Snowsight?
Create dashboards.
Write and run SQL queries.
Explore databases and objects.
Explore executed query performance.
In the Activity section in Snowsight, Snowflake users can explore the performance of executed queries. This includes monitoring queries, viewing details about queries, including performance data, and exploring each step of an executed query in the query profile.
The VALIDATE table function has which parameter as an input argument for a Snowflake user?
Last_QUERY_ID
CURRENT_STATEMENT
UUID_STRING
JOB_ID
The VALIDATE table function takes a job ID as its input argument. It is invoked as VALIDATE(<table_name>, JOB_ID => '<query_id>'), where the job ID identifies a previous execution of a COPY INTO <table> statement (the special value '_last' refers to the most recent load in the current session). The function returns the rows that were rejected during that load, making JOB_ID the correct input argument.
References:
Snowflake Documentation: VALIDATE
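A short sketch, assuming a hypothetical table mytable that was just loaded with COPY INTO:
-- Return the rows rejected by the most recent COPY INTO executed in this session
SELECT * FROM TABLE(VALIDATE(mytable, JOB_ID => '_last'));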
What is used to denote a pre-computed data set derived from a SELECT query specification and stored for later use?
View
Secure view
Materialized view
External table
A materialized view in Snowflake denotes a pre-computed data set derived from a SELECT query specification and stored for later use. Unlike standard views, which dynamically compute the data each time the view is accessed, materialized views store the result of the query at the time it is executed, thereby speeding up access to the data, especially for expensive aggregations on large datasets.
References:
Snowflake Documentation: Materialized Views
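For illustration, a materialized view that pre-computes a daily aggregate (table and column names are hypothetical):
CREATE MATERIALIZED VIEW daily_sales AS
SELECT sale_date, SUM(amount) AS total_amount
FROM sales
GROUP BY sale_date;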
Which function will provide the proxy information needed to protect Snowsight?
SYSTEMADMIN_TAG
SYSTEM$GET_PRIVATELINK
SYSTEM$ALLOWLIST
SYSTEM$AUTHORIZE_PRIVATELINK
The SYSTEM$GET_PRIVATELINK function in Snowflake provides proxy information necessary for configuring PrivateLink connections, which can protect Snowsight as well as other Snowflake services. PrivateLink enhances security by allowing Snowflake to be accessed via a private connection within a cloud provider’s network, reducing exposure to the public internet.
References:
Snowflake Documentation: PrivateLink Setup
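As a related, documented example, the SYSTEM$GET_PRIVATELINK_CONFIG function returns the account's private connectivity endpoints, including the Snowsight URL:
SELECT SYSTEM$GET_PRIVATELINK_CONFIG();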
What Snowflake database object is derived from a query specification, stored for later use, and can speed up expensive aggregation on large data sets?
Temporary table
External table
Secure view
Materialized view
A materialized view in Snowflake is a database object derived from a query specification, stored for later use, and can significantly speed up expensive aggregations on large data sets. Materialized views store the result of their underlying query, reducing the need to recompute the result each time the view is accessed. This makes them ideal for improving the performance of read-heavy, aggregate-intensive queries.
References:
Snowflake Documentation: Using Materialized Views
What is it called when a customer managed key is combined with a Snowflake managed key to create a composite key for encryption?
Hierarchical key model
Client-side encryption
Tri-secret secure encryption
Key pair authentication
Tri-secret secure encryption is a security model employed by Snowflake that involves combining a customer-managed key with a Snowflake-managed key to create a composite key for encrypting data. This model enhances data security by requiring both the customer-managed key and the Snowflake-managed key to decrypt data, thus ensuring that neither party can access the data independently. It represents a balanced approach to key management, leveraging both customer control and Snowflake's managed services for robust data encryption.
References:
Snowflake Documentation: Encryption and Key Management
There are two Snowflake accounts in the same cloud provider region: one is production and the other is non-production. How can data be easily transferred from the production account to the non-production account?
Clone the data from the production account to the non-production account.
Create a data share from the production account to the non-production account.
Create a subscription in the production account and have it publish to the non-production account.
Create a reader account using the production account and link the reader account to the non-production account.
To easily transfer data from a production account to a non-production account in Snowflake within the same cloud provider region, creating a data share is the most efficient approach. Data sharing allows for live, read-only access to selected data objects from the production account to the non-production account without the need to duplicate or move the actual data. This method facilitates seamless access to the data for development, testing, or analytics purposes in the non-production environment.
References:
Snowflake Documentation: Data Sharing
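A minimal sketch of the flow, assuming hypothetical object and account names:
-- In the production account:
CREATE SHARE prod_share;
GRANT USAGE ON DATABASE prod_db TO SHARE prod_share;
GRANT USAGE ON SCHEMA prod_db.public TO SHARE prod_share;
GRANT SELECT ON TABLE prod_db.public.orders TO SHARE prod_share;
ALTER SHARE prod_share ADD ACCOUNTS = myorg.nonprod_account;

-- In the non-production account:
CREATE DATABASE prod_data FROM SHARE myorg.prod_account.prod_share;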
What are valid sub-clauses to the OVER clause for a window function? (Select TWO).
GROUP BY
LIMIT
ORDER BY
PARTITION BY
UNION ALL
Valid sub-clauses to the OVER clause for a window function in SQL are:
C. ORDER BY: This clause specifies the order in which the rows in a partition are processed by the window function. It is essential for functions that depend on the row order, such as ranking functions.
D. PARTITION BY: This clause divides the result set into partitions to which the window function is applied. Each partition is processed independently of other partitions, making it crucial for functions that compute values across sets of rows that share common characteristics.
These clauses are fundamental to defining the scope and order of data over which the window function operates, enabling complex analytical computations within SQL queries.
References:
Snowflake Documentation: Window Functions
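For illustration, both sub-clauses together in a ranking query (table and column names are hypothetical):
SELECT customer_id, order_id, amount,
       RANK() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS amount_rank
FROM orders;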
Which data types optimally store semi-structured data? (Select TWO).
ARRAY
CHARACTER
STRING
VARCHAR
VARIANT
In Snowflake, semi-structured data is optimally stored using specific data types that are designed to handle the flexibility and complexity of such data. The VARIANT data type can store structured and semi-structured data types, including JSON, Avro, ORC, Parquet, or XML, in a single column. The ARRAY data type, on the other hand, is suitable for storing ordered sequences of elements, which can be particularly useful for semi-structured data types like JSON arrays. These data types provide the necessary flexibility to store and query semi-structured data efficiently in Snowflake.
References:
Snowflake Documentation: Semi-structured Data Types
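A small sketch of both types in use (table, keys, and values are hypothetical):
CREATE TABLE events (payload VARIANT, tags ARRAY);
INSERT INTO events
  SELECT PARSE_JSON('{"user": "alice", "score": 42}'),
         ARRAY_CONSTRUCT('web', 'mobile');
SELECT payload:user::STRING AS user_name, tags[0]::STRING AS first_tag FROM events;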
How does a Snowflake stored procedure compare to a User-Defined Function (UDF)?
A single executable statement can call only two stored procedures. In contrast, a single SQL statement can call multiple UDFs.
A single executable statement can call only one stored procedure. In contrast, a single SQL statement can call multiple UDFs.
A single executable statement can call multiple stored procedures. In contrast, multiple SQL statements can call the same UDFs.
Multiple executable statements can call more than one stored procedure. In contrast, a single SQL statement can call multiple UDFs.
In Snowflake, stored procedures and User-Defined Functions (UDFs) have different invocation patterns within SQL:
Option B is correct: a single executable statement can call only one stored procedure, reflecting the procedural and potentially transactional nature of stored procedures. In contrast, a single SQL statement can call multiple UDFs, because UDFs behave like functions in traditional programming: they return a value and can be embedded anywhere an expression is allowed.
References:
Snowflake documentation comparing the operational differences between stored procedures and UDFs.
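To illustrate the contrast (procedure and UDF names are hypothetical):
-- A stored procedure must be invoked on its own, with CALL:
CALL cleanup_old_rows('orders');

-- UDFs can be combined freely within one SQL statement:
SELECT normalize_name(first_name), normalize_name(last_name) FROM customers;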
User1, who has the SYSADMIN role, executed a query on Snowsight. User2, who is in the same Snowflake account, wants to view the result set of the query executed by User1 using the Snowsight query history.
What will happen if User2 tries to access the query history?
If User2 has the sysadmin role they will be able to see the results.
If User2 has the securityadmin role they will be able to see the results.
If User2 has the ACCOUNTADMIN role they will be able to see the results.
User2 will be unable to view the result set of the query executed by User1.
In Snowsight, query results are only available to the user who executed the query. Other users, regardless of role, can see the query text and metadata in the query history, but not the result set itself. Therefore User2 will be unable to view the result set of the query executed by User1, even with the ACCOUNTADMIN role.
References:
Snowflake Documentation: Understanding Snowflake Roles
If a virtual warehouse runs for 61 seconds, shut down, and then restart and runs for 30 seconds, for how many seconds is it billed?
60
91
120
121
Snowflake bills virtual warehouse usage per second, with a 60-second minimum each time the warehouse starts or restarts. The first run of 61 seconds exceeds the minimum and is billed as 61 seconds. The second run of 30 seconds falls below the minimum and is billed as 60 seconds. The total is therefore 61 + 60 = 121 seconds.
References:
Snowflake Documentation: Virtual Warehouses Billing
Which Snowflake object does not consume any storage costs?
Secure view
Materialized view
Temporary table
Transient table
A secure view does not consume storage costs. A view, secure or otherwise, stores only its query definition; no data is materialized, so no storage is billed. By contrast, temporary and transient tables incur storage charges for the data they hold during their lifetime, and materialized views store their pre-computed results.
References:
Snowflake Documentation: Overview of Views
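For illustration, only the definition below is stored; querying the view scans the base table (names hypothetical):
CREATE SECURE VIEW us_orders AS
SELECT * FROM orders WHERE region = 'US';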
A Snowflake user wants to temporarily bypass a network policy by configuring the user object property MINS_TO_BYPASS_NETWORK_POLICY.
What should they do?
Use the SECURITYADMIN role.
Use the SYSADMIN role.
Use the USERADMIN role.
Contact Snowflake Support.
The user object property MINS_TO_BYPASS_NETWORK_POLICY cannot be set by any customer role; per the Snowflake documentation, only Snowflake Support can set a value for this property. To temporarily bypass an active network policy, the account must therefore contact Snowflake Support.
References:
Snowflake Documentation: User Management
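The property is visible (though not settable by customers) among a user's properties, e.g. for a hypothetical user jsmith:
DESCRIBE USER jsmith;  -- output includes MINS_TO_BYPASS_NETWORK_POLICY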
Which Snowflake mechanism is used to limit the number of micro-partitions scanned by a query?
Caching
Cluster depth
Query pruning
Retrieval optimization
Query pruning in Snowflake is the mechanism used to limit the number of micro-partitions scanned by a query. By analyzing the filters and conditions applied in a query, Snowflake can skip over micro-partitions that do not contain relevant data, thereby reducing the amount of data processed and improving query performance. This technique is particularly effective for large datasets and is a key component of Snowflake's performance optimization features.
References:
Snowflake Documentation: Query Performance Optimization
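For example, with a table clustered or naturally ordered by date, a selective filter lets Snowflake skip micro-partitions whose min/max metadata excludes the value (table and column names are hypothetical):
SELECT COUNT(*) FROM sales WHERE sale_date = '2024-06-01';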
When reviewing a query profile, what is a symptom that a query is too large to fit into the memory?
A single join node uses more than 50% of the query time
Partitions scanned is equal to partitions total
An AggregateOperator node is present
The query is spilling to remote storage
 When a query in Snowflake is too large to fit into the available memory, it will start spilling to remote storage. This is an indication that the memory allocated for the query is insufficient for its execution, and as a result, Snowflake uses remote disk storage to handle the overflow. This spill to remote storage can lead to slower query performance due to the additional I/O operations required.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Query Profile
SnowPro Core Certification Exam Flashcards
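Spilling can also be detected outside the query profile; the ACCOUNT_USAGE.QUERY_HISTORY view exposes spill columns:
SELECT query_id, bytes_spilled_to_local_storage, bytes_spilled_to_remote_storage
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE bytes_spilled_to_remote_storage > 0
ORDER BY start_time DESC;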
Which of the following objects can be shared through secure data sharing?
Masking policy
Stored procedure
Task
External table
Secure data sharing in Snowflake allows users to share various objects between Snowflake accounts without physically copying the data, thus not consuming additional storage. Among the options provided, external tables can be shared through secure data sharing. External tables are used to query data directly from files in a stage without loading the data into Snowflake tables, making them suitable for sharing across different Snowflake accounts.
References:
Snowflake Documentation on Secure Data Sharing
SnowPro™ Core Certification Companion: Hands-on Preparation and Practice
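A sketch of adding an external table to a share (object names are hypothetical):
GRANT USAGE ON DATABASE ext_db TO SHARE my_share;
GRANT USAGE ON SCHEMA ext_db.public TO SHARE my_share;
GRANT SELECT ON EXTERNAL TABLE ext_db.public.ext_orders TO SHARE my_share;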
Which resource monitor setting will cancel all active queries in a virtual warehouse when the threshold is met?
NOTIFY
NOTIFY_USERS
SUSPEND
SUSPEND_IMMEDIATE
The SUSPEND_IMMEDIATE action suspends the assigned warehouses as soon as the threshold is reached and cancels any queries currently running on them. The SUSPEND action also suspends the warehouses, but it allows queries that are already running to finish first, while NOTIFY only sends an alert.
References:
Snowflake Documentation: Working with Resource Monitors
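A minimal sketch of such a monitor (monitor and warehouse names are hypothetical):
CREATE RESOURCE MONITOR rm_monthly WITH CREDIT_QUOTA = 100
  TRIGGERS ON 100 PERCENT DO SUSPEND_IMMEDIATE;
ALTER WAREHOUSE my_wh SET RESOURCE_MONITOR = rm_monthly;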
Which steps will help optimize query performance? (Select TWO).
Using the query acceleration service
Clustering a table
Indexing a column
Increasing the size of the micro-partitions
Decreasing the size of the virtual warehouse
Clustering a table and using the query acceleration service both help optimize query performance. Clustering co-locates related data within micro-partitions, improving pruning, while the query acceleration service offloads portions of eligible scan-heavy queries to serverless compute. Snowflake has no column indexes, micro-partition size is not user-configurable, and decreasing the warehouse size reduces the available compute.
References:
Snowflake Documentation: Query Acceleration Service; Clustering Keys & Clustered Tables
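For illustration (table, column, and warehouse names are hypothetical):
ALTER TABLE sales CLUSTER BY (sale_date);
ALTER WAREHOUSE my_wh SET ENABLE_QUERY_ACCELERATION = TRUE
  QUERY_ACCELERATION_MAX_SCALE_FACTOR = 8;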
Which security models are used in Snowflake to manage access control? (Select TWO).
Discretionary Access Control (DAC)
Identity Access Management (IAM)
Mandatory Access Control (MAC)
Role-Based Access Control (RBAC)
Security Assertion Markup Language (SAML)
Snowflake uses both Discretionary Access Control (DAC) and Role-Based Access Control (RBAC) to manage access control. DAC allows object owners to grant access privileges to other users. RBAC assigns permissions to roles, and roles are then granted to users, making it easier to manage permissions based on user roles within the organization.
References:
Snowflake Documentation: Access Control in Snowflake
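A short sketch of both models in practice (role, schema, and user names are hypothetical):
-- RBAC: privileges are granted to roles, and roles to users
CREATE ROLE analyst;
GRANT SELECT ON ALL TABLES IN SCHEMA sales_db.public TO ROLE analyst;
GRANT ROLE analyst TO USER jsmith;
-- DAC: the owning role of an object can grant access on it to other roles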
Which MINIMUM set of privileges is required to temporarily bypass an active network policy by configuring the user object property MINS_TO_BYPASS_NETWORK_POLICY?
Only while in the ACCOUNTADMIN role
Only while in the SECURITYADMIN role
Only the role with the ownership privilege on the network policy
Only Snowflake Support can set the value for this object property
No customer role can set MINS_TO_BYPASS_NETWORK_POLICY; per the Snowflake documentation, only Snowflake Support can set a value for this user object property. Accounts needing a temporary bypass of an active network policy must open a support case.
References:
Snowflake Documentation: Network Policy Management
When floating-point number columns are unloaded to CSV or JSON files, Snowflake truncates the values to approximately what?
(12,2)
(10,4)
(14,8)
(15,9)
When unloading floating-point number columns to CSV or JSON files, Snowflake truncates the values to approximately 15 significant digits with 9 digits following the decimal point, which can be represented as (15,9). This ensures a balance between accuracy and efficiency in representing floating-point numbers in text-based formats, which is essential for data interchange and processing applications that consume these files.
References:
Snowflake Documentation: Data Unloading Considerations
What will happen if a Snowflake user increases the size of a suspended virtual warehouse?
The provisioning of new compute resources for the warehouse will begin immediately.
The warehouse will remain suspended but new resources will be added to the query acceleration service.
The provisioning of additional compute resources will be in effect when the warehouse is next resumed.
The warehouse will resume immediately and start to share the compute load with other running virtual warehouses.
When a Snowflake user increases the size of a suspended virtual warehouse, the changes to compute resources are queued but do not take immediate effect. The provisioning of additional compute resources occurs only when the warehouse is resumed. This ensures that resources are allocated efficiently, aligning with Snowflake's commitment to cost-effective and on-demand scalability.
References:
Snowflake Documentation: Virtual Warehouses
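For illustration, resizing a suspended warehouse and resuming it (the warehouse name is hypothetical):
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';  -- queued while suspended
ALTER WAREHOUSE my_wh RESUME;  -- the LARGE compute resources are provisioned here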
How does Snowflake describe its unique architecture?
A single-cluster shared data architecture using a central data repository and massively parallel processing (MPP)
A multi-cluster shared nothing architecture using a siloed data repository and massively parallel processing (MPP)
A single-cluster shared nothing architecture using a sliced data repository and symmetric multiprocessing (SMP)
A multi-cluster shared nothing architecture using a siloed data repository and symmetric multiprocessing (SMP)
Snowflake's unique architecture is described as a multi-cluster, shared data architecture that leverages massively parallel processing (MPP). This architecture separates compute and storage resources, enabling Snowflake to scale them independently. It does not use a single cluster or rely solely on symmetric multiprocessing (SMP); rather, it uses a combination of shared-nothing architecture for compute clusters (virtual warehouses) and a centralized storage layer for data, optimizing for both performance and scalability.
References:
Snowflake Documentation: Snowflake Architecture Overview
Which activities are included in the Cloud Services layer? (Select TWO).
Data storage
Dynamic data masking
Partition scanning
User authentication
Infrastructure management
The Cloud Services layer in Snowflake is responsible for a wide range of services that facilitate the management and use of Snowflake, including:
D. User authentication: This service handles identity and access management, ensuring that only authorized users can access Snowflake resources.
E. Infrastructure management: This service manages the allocation and scaling of resources to meet user demands, including the management of virtual warehouses, storage, and the orchestration of query execution.
These services are part of Snowflake's fully managed, cloud-based architecture, which abstracts and automates many of the complexities associated with data warehousing.
References:
Snowflake Documentation: Overview of Snowflake Cloud Services
What is the purpose of an External Function?
To call code that executes outside of Snowflake
To run a function in another Snowflake database
To share data in Snowflake with external parties
To ingest data from on-premises data sources
The purpose of an External Function in Snowflake is to call code that executes outside of the Snowflake environment. This allows Snowflake to interact with external services and leverage functionality that is not natively available within Snowflake, such as calling APIs or running custom code hosted on cloud services.
References:
Snowflake Documentation: External Functions (https://docs.snowflake.com/en/sql-reference/external-functions.html)
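A minimal sketch, assuming a hypothetical API integration my_api_integration and endpoint URL:
CREATE OR REPLACE EXTERNAL FUNCTION sentiment_score(review VARCHAR)
  RETURNS FLOAT
  API_INTEGRATION = my_api_integration
  AS 'https://example.com/api/score';

SELECT sentiment_score(review_text) FROM reviews;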
What is the default value in the Snowflake Web Interface (Ul) for auto suspending a Virtual Warehouse?
1 minute
5 minutes
10 minutes
15 minutes
The default value for auto-suspending a Virtual Warehouse in the Snowflake Web Interface (UI) is 10 minutes. This setting helps manage compute costs by automatically suspending warehouses that are not in use, ensuring that compute resources are efficiently allocated and not wasted on idle warehouses.
References:
Snowflake Documentation: Virtual Warehouses
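The setting can also be changed in SQL; note the value is expressed in seconds (the warehouse name is hypothetical):
ALTER WAREHOUSE my_wh SET AUTO_SUSPEND = 600;  -- 600 seconds = 10 minutes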
How long is a query visible in the Query History page in the Snowflake Web Interface (Ul)?
60 minutes
24 hours
14 days
30 days
In the Snowflake Web Interface (UI), the Query History page displays the history of queries executed in Snowflake for up to 14 days. This allows users to review and analyze their query performance, troubleshoot issues, and understand their query patterns over a two-week period. The Query History page is a critical tool for monitoring and optimizing the use of Snowflake.
References:
Snowflake Documentation: Using the Web Interface
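For history beyond the UI's 14-day window, the ACCOUNT_USAGE.QUERY_HISTORY view retains 365 days of query metadata:
SELECT query_text, user_name, start_time
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD(day, -90, CURRENT_TIMESTAMP());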
What happens to the underlying table data when a CLUSTER BY clause is added to a Snowflake table?
Data is hashed by the cluster key to facilitate fast searches for common data values
Larger micro-partitions are created for common data values to reduce the number of partitions that must be scanned
Smaller micro-partitions are created for common data values to allow for more parallelism
Data may be colocated by the cluster key within the micro-partitions to improve pruning performance
When a CLUSTER BY clause is added to a Snowflake table, it specifies one or more columns to organize the data within the table’s micro-partitions. This clustering aims to colocate data with similar values in the same or adjacent micro-partitions. By doing so, it enhances the efficiency of query pruning, where the Snowflake query optimizer can skip over irrelevant micro-partitions that do not contain the data relevant to the query, thereby improving performance.
References:
Snowflake Documentation on Clustering Keys & Clustered Tables
Community discussions on how source data's ordering affects a table with a cluster key.
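For illustration, defining a clustering key and checking its effect (table and column names are hypothetical):
ALTER TABLE sales CLUSTER BY (sale_date);
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(sale_date)');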
Which data type can be used to store geospatial data in Snowflake?
Variant
Object
Geometry
Geography
Snowflake supports two geospatial data types: GEOGRAPHY and GEOMETRY. The GEOGRAPHY data type stores geospatial data that models the Earth as a perfect sphere, following the WGS 84 standard, and is used for storing points, lines, and polygons on the Earth's surface. The GEOMETRY data type represents features in a planar (Euclidean, Cartesian) coordinate system and is typically used with local spatial reference systems. Since the question asks about geospatial data, which commonly refers to Earth-related spatial data, the correct answer is GEOGRAPHY.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
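A small sketch of the GEOGRAPHY type in use (table name and coordinates are hypothetical):
CREATE TABLE places (name STRING, location GEOGRAPHY);
INSERT INTO places SELECT 'HQ', TO_GEOGRAPHY('POINT(-122.35 37.55)');
SELECT name, ST_DISTANCE(location, TO_GEOGRAPHY('POINT(-122.40 37.60)')) AS meters FROM places;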
A user unloaded a Snowflake table called mytable to an internal stage called mystage.
Which command can be used to view the list of files that have been uploaded to the stage?
list @mytable;
list @%mytable;
list @%mystage;
list @mystage;
 The command list @mystage; is used to view the list of files that have been uploaded to an internal stage in Snowflake. The list command displays the metadata for all files in the specified stage, which in this case is mystage. This command is particularly useful for verifying that files have been successfully unloaded from a Snowflake table to the stage and for managing the files within the stage.
References:
Snowflake Documentation on Stages
SnowPro® Core Certification Study Guide
Which of the following can be executed/called with Snowpipe?
A User Defined Function (UDF)
A stored procedure
A single COPY INTO statement
A single INSERT INTO statement
Snowpipe is used for continuous, automated data loading into Snowflake. A pipe object wraps a single COPY INTO <table> statement, which Snowpipe executes to load data from staged files as they arrive.
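A minimal sketch of a pipe definition (pipe, table, and stage names are hypothetical):
CREATE PIPE mypipe AUTO_INGEST = TRUE AS
  COPY INTO mytable FROM @mystage FILE_FORMAT = (TYPE = 'JSON');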