What are potential impacts of storing non-native values like dates and timestamps in a variant column in Snowflake?
Faster query performance and increased storage consumption
Slower query performance and increased storage consumption
Faster query performance and decreased storage consumption
Slower query performance and decreased storage consumption
Storing non-native values, such as dates and timestamps, in a VARIANT column in Snowflake can lead to slower query performance and increased storage consumption. VARIANT is a semi-structured data type that allows storing JSON, AVRO, ORC, Parquet, or XML data in a single column. When non-native data types are stored as VARIANT, Snowflake must perform implicit conversion to process these values, which can slow down query execution. Additionally, because the VARIANT data type is designed to accommodate a wide variety of data formats, it often requires more storage space compared to storing data in native, strongly-typed columns that are optimized for specific data types.
The performance impact arises from the need to parse and interpret the semi-structured data on the fly during query execution, as opposed to directly accessing and operating on optimally stored data in its native format. Furthermore, the increased storage consumption is a result of the overhead associated with storing data in a format that is less space-efficient than the native formats optimized for specific types of data.
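To illustrate the trade-off, the following sketch (table and column names are hypothetical) contrasts a native TIMESTAMP column with the same value stored in a VARIANT, which must be extracted and cast at query time:

```sql
-- Hypothetical tables: one native column, one VARIANT column.
CREATE TABLE events_native  (event_ts TIMESTAMP_NTZ);
CREATE TABLE events_variant (payload VARIANT);

INSERT INTO events_variant
  SELECT PARSE_JSON('{"event_ts": "2023-01-15 10:30:00"}');

-- The VARIANT value must be extracted and cast on every query:
SELECT payload:event_ts::TIMESTAMP_NTZ AS event_ts FROM events_variant;
```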
Which SQL command can be used to verify the privileges that are granted to a role?
SHOW GRANTS ON ROLE
SHOW ROLES
SHOW GRANTS TO ROLE
SHOW GRANTS FOR ROLE
To verify the privileges that have been granted to a specific role in Snowflake, the correct SQL command is SHOW GRANTS TO ROLE <Role Name>. This command lists all the privileges granted to the specified role, including access to schemas, tables, and other database objects. This is a useful command for administrators and users with sufficient privileges to audit and manage role permissions within the Snowflake environment.
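As a sketch (ANALYST is a hypothetical role name), the command and a closely related variant look like this:

```sql
SHOW GRANTS TO ROLE ANALYST;  -- privileges granted to the role
SHOW GRANTS OF ROLE ANALYST;  -- users and roles to which the role is granted
```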
What happens when a virtual warehouse is resized?
When increasing the size of an active warehouse the compute resource for all running and queued queries on the warehouse are affected
When reducing the size of a warehouse the compute resources are removed only when they are no longer being used to execute any current statements.
The warehouse will be suspended while the new compute resource is provisioned and will resume automatically once provisioning is complete.
Users who are trying to use the warehouse will receive an error message until the resizing is complete
When a virtual warehouse in Snowflake is resized, specifically when it is increased in size, the additional compute resources become immediately available to all running and queued queries. This means that the performance of these queries can improve due to the increased resources. Conversely, when the size of a warehouse is reduced, the compute resources are not removed until they are no longer being used by any current operations.
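A resize is issued with ALTER WAREHOUSE; the warehouse name below is hypothetical:

```sql
-- Running and queued queries pick up the added resources immediately.
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';
```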
Which of the following compute resources or features are managed by Snowflake? (Select TWO).
Execute a COPY command
Updating data
Snowpipe
AUTOMATIC_CLUSTERING
Scaling up a warehouse
Snowflake manages serverless compute resources and features on behalf of the user, including Snowpipe and Automatic Clustering. Snowpipe is Snowflake's continuous data ingestion service that loads data as soon as it becomes available, using Snowflake-managed compute. Automatic Clustering is the serverless service that monitors and maintains the clustering of tables that have a defined clustering key. By contrast, executing a COPY command, updating data, and scaling up a warehouse all run on user-managed virtual warehouses.
Which Snowflake data type is used to store JSON key value pairs?
TEXT
BINARY
STRING
VARIANT
The VARIANT data type in Snowflake is used to store JSON key-value pairs along with other semi-structured data formats like AVRO, BSON, and XML. The VARIANT data type allows for flexible and dynamic data structures within a single column, accommodating complex and nested data. This data type is crucial for handling semi-structured data in Snowflake, enabling users to perform SQL operations on JSON objects and arrays directly.
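A minimal sketch (table and key names are hypothetical) of loading and querying JSON in a VARIANT column:

```sql
CREATE TABLE json_demo (v VARIANT);
INSERT INTO json_demo SELECT PARSE_JSON('{"name": "Ada", "tags": ["x", "y"]}');

-- Path notation extracts keys; a cast converts to a native type.
SELECT v:name::STRING AS name, v:tags[0]::STRING AS first_tag FROM json_demo;
```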
What will happen if a Snowflake user increases the size of a suspended virtual warehouse?
The provisioning of new compute resources for the warehouse will begin immediately.
The warehouse will remain suspended but new resources will be added to the query acceleration service.
The provisioning of additional compute resources will be in effect when the warehouse is next resumed.
The warehouse will resume immediately and start to share the compute load with other running virtual warehouses.
When a Snowflake user increases the size of a suspended virtual warehouse, the changes to compute resources are queued but do not take immediate effect. The provisioning of additional compute resources occurs only when the warehouse is resumed. This ensures that resources are allocated efficiently, aligning with Snowflake's commitment to cost-effective and on-demand scalability.
What is the default character set used when loading CSV files into Snowflake?
UTF-8
UTF-16
ISO 8859-1
ANSI_X3.4
https://docs.snowflake.com/en/user-guide/intro-summary-loading.html
For delimited files (CSV, TSV, etc.), the default character set is UTF-8. To use any other characters sets, you must explicitly specify the encoding to use for loading. For the list of supported character sets, see Supported Character Sets for Delimited Files (in this topic).
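To load a non-UTF-8 file, the encoding is specified on the file format; the object names below are hypothetical:

```sql
CREATE FILE FORMAT latin1_csv
  TYPE = CSV
  ENCODING = 'ISO-8859-1';  -- overrides the UTF-8 default

COPY INTO my_table FROM @my_stage
  FILE_FORMAT = (FORMAT_NAME = latin1_csv);
```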
True or False: Fail-safe can be disabled within a Snowflake account.
True
False
Fail-safe cannot be disabled within a Snowflake account. It is a fixed 7-day period for permanent tables that begins after the Time Travel retention period ends, during which data may be recoverable only by Snowflake. Transient and temporary tables simply have no Fail-safe period; there is no account- or object-level setting to turn it off.
A company's security audit requires generating a report listing all Snowflake logins (e.g.. date and user) within the last 90 days. Which of the following statements will return the required information?
SELECT LAST_SUCCESS_LOGIN, LOGIN_NAME
FROM ACCOUNT_USAGE.USERS;
SELECT EVENT_TIMESTAMP, USER_NAME
FROM table(information_schema.login_history_by_user())
SELECT EVENT_TIMESTAMP, USER_NAME
FROM ACCOUNT_USAGE.ACCESS_HISTORY;
SELECT EVENT_TIMESTAMP, USER_NAME
FROM ACCOUNT_USAGE.LOGIN_HISTORY;
To generate a report listing all Snowflake logins within the last 90 days, the ACCOUNT_USAGE.LOGIN_HISTORY view should be used. This view provides information about login attempts, including successful and unsuccessful logins, and is suitable for security audits.
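A sketch of the audit query, filtered to the last 90 days:

```sql
SELECT event_timestamp, user_name
FROM snowflake.account_usage.login_history
WHERE event_timestamp >= DATEADD(day, -90, CURRENT_TIMESTAMP())
ORDER BY event_timestamp DESC;
```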
Which of the following Snowflake features provide continuous data protection automatically? (Select TWO).
Internal stages
Incremental backups
Time Travel
Zero-copy clones
Fail-safe
Snowflake’s Continuous Data Protection (CDP) encompasses a set of features that help protect data stored in Snowflake against human error, malicious acts, and software failure. Time Travel allows users to access historical data (i.e., data that has been changed or deleted) for a defined period, enabling querying and restoring of data. Fail-safe is an additional layer of data protection that provides a recovery option in the event of significant data loss or corruption, which can only be performed by Snowflake.
References:
https://docs.snowflake.com/en/user-guide/data-availability.html
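As a sketch (object names are hypothetical), Time Travel can be used directly in queries and restores:

```sql
-- Query the table as it was one hour ago.
SELECT * FROM orders AT (OFFSET => -3600);

-- Restore an accidentally dropped table within the retention period.
UNDROP TABLE orders;
```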
User-level network policies can be created by which of the following roles? (Select TWO).
ROLEADMIN
ACCOUNTADMIN
SYSADMIN
SECURITYADMIN
USERADMIN
User-level network policies in Snowflake can be created by roles with the necessary privileges to manage security and account settings. The ACCOUNTADMIN role has the highest level of privileges across the account, including the ability to manage network policies. The SECURITYADMIN role is specifically responsible for managing security objects within Snowflake, which includes the creation and management of network policies.
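A sketch of creating and assigning a user-level network policy (policy name, CIDR range, and user name are hypothetical):

```sql
-- Requires a role with the appropriate privileges, e.g. SECURITYADMIN.
CREATE NETWORK POLICY corp_policy ALLOWED_IP_LIST = ('192.168.1.0/24');
ALTER USER jsmith SET NETWORK_POLICY = corp_policy;
```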
Which of the following indicates that it may be appropriate to use a clustering key for a table? (Select TWO).
The table contains a column that has very low cardinality
DML statements that are being issued against the table are blocked
The table has a small number of micro-partitions
Queries on the table are running slower than expected
The clustering depth for the table is large
A clustering key in Snowflake is used to co-locate similar data within the same micro-partitions to improve query performance, especially for large tables where data is not naturally ordered or has become fragmented due to extensive DML operations. The appropriate use of a clustering key can lead to improved scan efficiency and better column compression, resulting in faster query execution times.
The indicators that it may be appropriate to use a clustering key for a table include queries on the table running slower than expected and a large clustering depth, both of which suggest that related data is spread across many micro-partitions and that partition pruning is ineffective.
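A sketch (table and column names are hypothetical) of inspecting clustering health and defining a clustering key:

```sql
-- Check clustering depth; a large average depth suggests poor pruning.
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(order_date)');

ALTER TABLE sales CLUSTER BY (order_date);
```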
Which stage type can be altered and dropped?
Database stage
External stage
Table stage
User stage
External stages can be altered and dropped in Snowflake. An external stage points to an external location, such as an S3 bucket, where data files are stored. Users can modify the stage's definition or drop it entirely if it's no longer needed. This is in contrast to table stages, which are tied to specific tables and cannot be altered or dropped independently.
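As a sketch (stage name and bucket are hypothetical), an external stage supports the full lifecycle:

```sql
CREATE STAGE my_ext_stage URL = 's3://my-bucket/data/';
ALTER STAGE my_ext_stage SET URL = 's3://my-bucket/archive/';
DROP STAGE my_ext_stage;
```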
Which COPY INTO command option outputs the data into one file?
SINGLE=TRUE
MAX_FILE_NUMBER=1
FILE_NUMBER=1
MULTIPLE=FALSE
The COPY INTO <location> command outputs data into a single file when the copy option SINGLE = TRUE is specified. This instructs Snowflake to write the unloaded data to one file rather than splitting it across multiple files; MAX_FILE_NUMBER, FILE_NUMBER, and MULTIPLE are not valid copy options.
Which activities are included in the Cloud Services layer? (Select TWO).
Data storage
Dynamic data masking
Partition scanning
User authentication
Infrastructure management
The Cloud Services layer in Snowflake is responsible for a wide range of services that facilitate the management and use of Snowflake, including user authentication and infrastructure management, along with metadata management, query parsing and optimization, and access control. Data storage and partition scanning, by contrast, belong to the storage and query processing layers respectively.
These services are part of Snowflake's fully managed, cloud-based architecture, which abstracts and automates many of the complexities associated with data warehousing.
A user wants to add additional privileges to the system-defined roles for their virtual warehouse. How does Snowflake recommend they accomplish this?
Grant the additional privileges to a custom role.
Grant the additional privileges to the ACCOUNTADMIN role.
Grant the additional privileges to the SYSADMIN role.
Grant the additional privileges to the ORGADMIN role.
Snowflake recommends enhancing the granularity and management of privileges by creating and utilizing custom roles. When additional privileges are needed beyond those provided by the system-defined roles for a virtual warehouse or any other resource, these privileges should be granted to a custom role. This approach allows for more precise control over access rights and the ability to tailor permissions to the specific needs of different user groups or applications within the organization, while also maintaining the integrity and security model of system-defined roles.
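A sketch of the recommended pattern (role and warehouse names are hypothetical):

```sql
CREATE ROLE reporting_role;
GRANT USAGE, OPERATE ON WAREHOUSE my_wh TO ROLE reporting_role;

-- Attach the custom role to the hierarchy rather than altering SYSADMIN itself.
GRANT ROLE reporting_role TO ROLE SYSADMIN;
```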
While clustering a table, columns with which data types can be used as clustering keys? (Select TWO).
BINARY
GEOGRAPHY
GEOMETRY
OBJECT
VARIANT
A clustering key can be defined when a table is created by appending a CLUSTER BY clause, where each clustering key consists of one or more table columns/expressions, which can be of any data type except GEOGRAPHY, VARIANT, OBJECT, or ARRAY. https://docs.snowflake.com/en/user-guide/tables-clustering-keys
Which function will provide the proxy information needed to protect Snowsight?
SYSTEM$ADMIN_TAG
SYSTEM$GET_PRIVATELINK
SYSTEM$ALLOWLIST
SYSTEM$AUTHORIZE
The SYSTEM$GET_PRIVATELINK function in Snowflake provides proxy information necessary for configuring PrivateLink connections, which can protect Snowsight as well as other Snowflake services. PrivateLink enhances security by allowing Snowflake to be accessed via a private connection within a cloud provider’s network, reducing exposure to the public internet.
Which Snowflake object does not consume any storage costs?
Secure view
Materialized view
Temporary table
Transient table
A secure view does not consume storage costs. A view, secure or otherwise, is only a stored query definition and holds no data of its own. Materialized views persist their precomputed results, while temporary and transient tables hold data in storage; temporary tables contribute to storage charges for the duration of the session in which they exist.
What Snowflake database object is derived from a query specification, stored for later use, and can speed up expensive aggregation on large data sets?
Temporary table
External table
Secure view
Materialized view
A materialized view in Snowflake is a database object derived from a query specification, stored for later use, and can significantly speed up expensive aggregations on large data sets. Materialized views store the result of their underlying query, reducing the need to recompute the result each time the view is accessed. This makes them ideal for improving the performance of read-heavy, aggregate-intensive queries.
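A minimal sketch (table and column names are hypothetical) of a materialized view over an aggregate:

```sql
CREATE MATERIALIZED VIEW daily_totals AS
  SELECT order_date, SUM(amount) AS total_amount
  FROM orders
  GROUP BY order_date;
```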
Which Snowflake layer is associated with virtual warehouses?
Cloud services
Query processing
Elastic memory
Database storage
The layer of Snowflake's architecture associated with virtual warehouses is the Query Processing layer. Virtual warehouses in Snowflake are dedicated compute clusters that execute SQL queries against the stored data. This layer is responsible for the entire query execution process, including parsing, optimization, and the actual computation. It operates independently of the storage layer, enabling Snowflake to scale compute and storage resources separately for efficiency and cost-effectiveness.
What is a directory table in Snowflake?
A separate database object that is used to store file-level metadata
An object layered on a stage that is used to store file-level metadata
A database object with grantable privileges for unstructured data tasks
A Snowflake table specifically designed for storing unstructured files
A directory table in Snowflake is an object layered on a stage that is used to store file-level metadata. It is not a separate database object but is conceptually similar to an external table because it stores metadata about the data files in the stage.
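A sketch (stage name is hypothetical) of enabling and querying a directory table:

```sql
CREATE STAGE docs_stage DIRECTORY = (ENABLE = TRUE);

-- File-level metadata is exposed via the DIRECTORY table function.
SELECT relative_path, size, last_modified FROM DIRECTORY(@docs_stage);
```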
What is the default File Format used in the COPY command if one is not specified?
CSV
JSON
Parquet
XML
The default file format for the COPY command in Snowflake, when not specified, is CSV (Comma-Separated Values). This format is widely used for data exchange because it is simple, easy to read, and supported by many data analysis tools.
What Snowflake features allow virtual warehouses to handle high concurrency workloads? (Select TWO)
The ability to scale up warehouses
The use of warehouse auto scaling
The ability to resize warehouses
Use of multi-clustered warehouses
The use of warehouse indexing
Snowflake's architecture is designed to handle high concurrency workloads through several features, two of which are particularly effective: warehouse auto-scaling, which lets a multi-cluster warehouse automatically start and stop clusters as query demand changes, and multi-cluster warehouses themselves, which run additional clusters in parallel so that concurrent queries are not queued behind one another.
These features ensure that Snowflake can manage varying levels of demand without manual intervention, providing a seamless experience even during peak usage.
Which data type can be used to store geospatial data in Snowflake?
Variant
Object
Geometry
Geography
Snowflake supports two geospatial data types: GEOGRAPHY and GEOMETRY. The GEOGRAPHY data type is used to store geospatial data that models the Earth as a perfect sphere, which is suitable for global geospatial data. This data type follows the WGS 84 standard and is used for storing points, lines, and polygons on the Earth's surface. The GEOMETRY data type, on the other hand, represents features in a planar (Euclidean, Cartesian) coordinate system and is typically used for local spatial reference systems. Since the question specifically asks about geospatial data, which commonly refers to Earth-related spatial data, the correct answer is GEOGRAPHY.
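As a sketch, GEOGRAPHY values can be constructed from WKT and passed to geospatial functions:

```sql
-- ST_DISTANCE on GEOGRAPHY values returns the distance in meters.
SELECT ST_DISTANCE(
         TO_GEOGRAPHY('POINT(-122.35 37.55)'),
         TO_GEOGRAPHY('POINT(-122.25 37.45)')
       ) AS distance_m;
```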
A tag object has been assigned to a table (TABLE_A) in a schema within a Snowflake database.
Which CREATE object statement will automatically assign the TABLE_A tag to a target object?
CREATE TABLE
CREATE VIEW
CREATE TABLE
CREATE MATERIALIZED VIEW
When a tag has been assigned to TABLE_A, cloning automatically carries the tag to the target object: CREATE TABLE <table_name> CLONE TABLE_A replicates the tag associations of TABLE_A to the new table, whereas views and materialized views defined over TABLE_A do not inherit its tags.
Which Snowflake object does not consume any storage costs?
Secure view
Materialized view
Temporary table
Transient table
A secure view does not consume any storage costs because a view stores only its query definition, not data. Materialized views store precomputed results, and temporary and transient tables hold data for as long as they exist, so all of those objects contribute to storage charges.
What objects in Snowflake are supported by Dynamic Data Masking? (Select TWO).
Views
Materialized views
Tables
External tables
Future grants
Dynamic Data Masking in Snowflake supports tables and views. These objects can have masking policies applied to their columns to dynamically mask data at query time.
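A sketch (policy, table, and role names are hypothetical) of applying a masking policy to a table column:

```sql
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('HR_ADMIN') THEN val ELSE '***MASKED***' END;

ALTER TABLE employees MODIFY COLUMN email SET MASKING POLICY email_mask;
```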
Which Snowflake command can be used to unload the result of a query to a single file?
Use COPY INTO
Use COPY INTO
Use COPY INTO
Use COPY INTO
The Snowflake command to unload the result of a query to a single file is COPY INTO <location> with the copy option SINGLE = TRUE, selecting from the query whose result is to be unloaded.
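A sketch (stage and table names are hypothetical) of unloading a query result to one file:

```sql
COPY INTO @my_stage/result.csv.gz
FROM (SELECT * FROM my_table)
FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP)
SINGLE = TRUE;  -- write one output file instead of several
```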
What tasks can an account administrator perform in the Data Exchange? (Select TWO).
Add and remove members.
Delete data categories.
Approve and deny listing approval requests.
Transfer listing ownership.
Transfer ownership of a provider profile.
An account administrator in the Data Exchange can perform tasks such as adding and removing members and approving or denying listing approval requests. These tasks are part of managing the Data Exchange and ensuring that only authorized listings and members are part of it.
Which Snowflake table objects can be shared with other accounts? (Select TWO).
Temporary tables
Permanent tables
Transient tables
External tables
User-Defined Table Functions (UDTFs)
In Snowflake, permanent tables and external tables can be shared with other accounts using Secure Data Sharing. Temporary tables, transient tables, and UDTFs are not shareable objects.
Which solution improves the performance of point lookup queries that return a small number of rows from large tables using highly selective filters?
Automatic clustering
Materialized views
Query acceleration service
Search optimization service
The search optimization service improves the performance of point lookup queries on large tables by using selective filters to quickly return a small number of rows. It creates an optimized data structure that helps in pruning the micro-partitions that do not contain the queried values.
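A sketch (table and column names are hypothetical) of enabling search optimization:

```sql
ALTER TABLE events ADD SEARCH OPTIMIZATION;

-- Or target only equality lookups on a specific column:
ALTER TABLE events ADD SEARCH OPTIMIZATION ON EQUALITY(user_id);
```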
What factors impact storage costs in Snowflake? (Select TWO).
The account type
The storage file format
The cloud region used by the account
The type of data being stored
The cloud platform being used
The factors that impact storage costs in Snowflake include the account type (Capacity or On Demand) and the cloud region used by the account. These factors determine the rate at which storage is billed, with different regions potentially having different rates.
Which Snowflake feature allows administrators to identify unused data that may be archived or deleted?
Access history
Data classification
Dynamic Data Masking
Object tagging
The Access History feature in Snowflake allows administrators to track data access patterns and identify unused data. This information can be used to make decisions about archiving or deleting data to optimize storage and reduce costs.
A permanent table and temporary table have the same name, TBL1, in a schema.
What will happen if a user executes select * from TBL1 ;?
The temporary table will take precedence over the permanent table.
The permanent table will take precedence over the temporary table.
An error will say there cannot be two tables with the same name in a schema.
The table that was created most recently will take precedence over the older table.
In Snowflake, if a temporary table and a permanent table have the same name within the same schema, the temporary table takes precedence over the permanent table within the session where the temporary table was created.
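A sketch demonstrating the precedence within a session:

```sql
CREATE TABLE tbl1 (c INT);            -- permanent table
CREATE TEMPORARY TABLE tbl1 (c INT);  -- same name, same schema

-- Within this session, the temporary table is the one resolved.
SELECT * FROM tbl1;
```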
Which command is used to unload data from a Snowflake database table into one or more files in a Snowflake stage?
CREATE STAGE
COPY INTO