A company’s daily Snowflake workload consists of a huge number of concurrent queries triggered between 9 PM and 11 PM. Individually, these are small statements that complete within a short period of time.
What configuration can the company’s Architect implement to enhance the performance of this workload? (Choose two.)
Enable a multi-clustered virtual warehouse in maximized mode during the workload duration.
Set the MAX_CONCURRENCY_LEVEL to a higher value than its default value of 8 at the virtual warehouse level.
Increase the size of the virtual warehouse to size X-Large.
Reduce the amount of data that is being processed through this workload.
Set the connection timeout to a higher value than its default.
Enabling a multi-cluster virtual warehouse in maximized mode and raising MAX_CONCURRENCY_LEVEL above its default of 8 both increase the number of small, fast queries the warehouse can run concurrently, which is what this workload needs. Increasing the warehouse size mainly benefits large, complex queries rather than high concurrency.
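As a minimal sketch of those two settings (the warehouse name, size, and cluster counts are illustrative, not from the question):

-- Maximized mode: MIN_CLUSTER_COUNT equals MAX_CLUSTER_COUNT, so all
-- clusters run for the whole workload window instead of scaling on demand.
CREATE OR REPLACE WAREHOUSE evening_wh
  WAREHOUSE_SIZE = 'SMALL'
  MIN_CLUSTER_COUNT = 4
  MAX_CLUSTER_COUNT = 4;

-- Raise the concurrency target above its default of 8 so each cluster
-- admits more of the small, short-running statements at once.
ALTER WAREHOUSE evening_wh SET MAX_CONCURRENCY_LEVEL = 16;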
Which Snowflake architecture recommendation needs multiple Snowflake accounts for implementation?
Enable a disaster recovery strategy across multiple cloud providers.
Create external stages pointing to cloud providers and regions other than the region hosting the Snowflake account.
Enable zero-copy cloning among the development, test, and production environments.
Enable separation of the development, test, and production environments.
The Snowflake architecture recommendation that necessitates multiple Snowflake accounts for implementation is the separation of the development, test, and production environments. This approach, known as Account per Tenant (APT), isolates tenants into separate Snowflake accounts, ensuring dedicated resources and security isolation [1][2].
References:
• Snowflake’s white paper “Design Patterns for Building Multi-Tenant Applications on Snowflake,” which discusses the APT model and its requirement for a separate Snowflake account per tenant [1].
• Snowflake documentation on Secure Data Sharing, which mentions the possibility of sharing data across multiple accounts [3].
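For illustration, separate accounts for each environment can be provisioned by the ORGADMIN role; the account name, administrator details, and edition below are placeholders, not part of the question:

USE ROLE ORGADMIN;

-- One account per environment (dev shown here; repeat for test and prod).
CREATE ACCOUNT dev_env
  ADMIN_NAME = admin_user          -- placeholder administrator
  ADMIN_PASSWORD = 'ChangeMe123!'  -- placeholder; rotate immediately
  EMAIL = 'admin@example.com'      -- placeholder contact
  EDITION = ENTERPRISE;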
What are purposes for creating a storage integration? (Choose three.)
Control access to Snowflake data using a master encryption key that is maintained in the cloud provider’s key management service.
Store a generated identity and access management (IAM) entity for an external cloud provider regardless of the cloud provider that hosts the Snowflake account.
Support multiple external stages using one single Snowflake object.
Avoid supplying credentials when creating a stage or when loading or unloading data.
Create private VPC endpoints that allow direct, secure connectivity between VPCs without traversing the public internet.
Manage credentials from multiple cloud providers in one single Snowflake object.
The purposes of creating a storage integration in Snowflake include:
B. Store a generated identity and access management (IAM) entity for an external cloud provider. This helps in managing authentication and authorization with external cloud storage without embedding credentials in Snowflake. It supports cloud providers such as AWS, Azure, and GCP regardless of which provider hosts the Snowflake account, so identity management is streamlined across platforms.
C. Support multiple external stages using one single Snowflake object. A storage integration holds an access configuration that can be reused across multiple external stages, simplifying the management of external data integrations.
D. Avoid supplying credentials when creating a stage or when loading or unloading data. With a storage integration, Snowflake can interact with external storage without continuously managing or exposing sensitive credentials, enhancing security and ease of operations.
References: Snowflake documentation on storage integrations, found within the SnowPro Advanced: Architect course materials.
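A minimal sketch of these points, assuming an AWS-hosted bucket (the integration name, role ARN, and bucket paths are placeholders):

-- One integration holds the IAM entity and the allowed locations.
CREATE STORAGE INTEGRATION s3_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake_access'
  STORAGE_ALLOWED_LOCATIONS = ('s3://my-bucket/raw/', 's3://my-bucket/archive/');

-- Multiple stages can reuse the same integration, and no credentials
-- are supplied when creating the stage or loading data through it.
CREATE STAGE raw_stage
  URL = 's3://my-bucket/raw/'
  STORAGE_INTEGRATION = s3_int;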
An Architect is troubleshooting a query with poor performance using the QUERY_HISTORY function. The Architect observes that the COMPILATION_TIME is greater than the EXECUTION_TIME.
What is the reason for this?
The query is processing a very large dataset.
The query has overly complex logic.
The query is queued for execution.
The query is reading from remote storage.
COMPILATION_TIME measures how long the optimizer spends parsing and planning the statement, so a compilation time that exceeds the execution time typically points to overly complex query logic; large data volumes, queuing, and remote storage reads would inflate EXECUTION_TIME instead.
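To observe this directly, the two times (both reported in milliseconds) can be compared; this sketch uses the INFORMATION_SCHEMA.QUERY_HISTORY table function with illustrative filtering:

-- Recent queries where planning took longer than running.
SELECT query_id,
       compilation_time,
       execution_time
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
WHERE compilation_time > execution_time
ORDER BY start_time DESC;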
The following objects can be cloned in Snowflake:
Permanent table
Transient table
Temporary table
External tables
Internal stages
References: Cloning Considerations; CREATE TABLE … CLONE; CREATE EXTERNAL TABLE … CLONE; Temporary Tables; Internal Stages.
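As a brief illustration of zero-copy cloning for the table types above (the table names are hypothetical):

-- Clone a permanent table; no data is copied until either table changes.
CREATE TABLE orders_clone CLONE orders;

-- A clone can also be created as a transient table.
CREATE TRANSIENT TABLE staging_clone CLONE staging;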
How can the Snowflake context functions be used to help determine whether a user is authorized to see data that has column-level security enforced? (Select TWO).
Set masking policy conditions using current_role targeting the role in use for the current session.
Set masking policy conditions using is_role_in_session targeting the role in use for the current account.
Set masking policy conditions using invoker_role targeting the executing role in a SQL statement.
Determine if there are ownership privileges on the masking policy that would allow the use of any function.
Assign the accountadmin role to the user who is executing the object.
Snowflake context functions are functions that return information about the current session, user, role, warehouse, database, schema, or object. They can be used to help determine whether a user is authorized to see data that has column-level security enforced by setting masking policy conditions based on them. Two context functions are relevant here: CURRENT_ROLE, which returns the role in use for the current session, and INVOKER_ROLE, which returns the role executing the current SQL statement (for example, the owning role of a view that selects from the protected column).
The other options are not valid ways to use the Snowflake context functions for column-level security: IS_ROLE_IN_SESSION tests whether a given role is in the session’s role hierarchy rather than targeting “the role in use for the current account,” and neither ownership privileges on the masking policy nor assigning the ACCOUNTADMIN role constitutes a context-function condition.
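A minimal sketch of both conditions (the policy, role, table, and column names are hypothetical):

-- Condition on the session's active role.
CREATE MASKING POLICY ssn_current_role AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('HR_ADMIN') THEN val
       ELSE '***MASKED***'
  END;

-- Condition on the role executing the statement (e.g., a view owner's role).
CREATE MASKING POLICY ssn_invoker_role AS (val STRING) RETURNS STRING ->
  CASE WHEN INVOKER_ROLE() IN ('REPORTING_ROLE') THEN val
       ELSE '***MASKED***'
  END;

ALTER TABLE employees MODIFY COLUMN ssn SET MASKING POLICY ssn_current_role;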
A company is storing large numbers of small JSON files (ranging from 1 KB to 4 KB) that are received from IoT devices and sent to a cloud provider. In any given hour, 100,000 files are added to the cloud provider.
What is the MOST cost-effective way to bring this data into a Snowflake table?
An external table
A pipe
A stream
A copy command at regular intervals
References: Pipes; Loading Data Using Snowpipe; External Tables; Streams; COPY INTO.
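A pipe (Snowpipe) loads each file shortly after it lands, with serverless per-file billing and no continuously running warehouse, which suits a steady stream of very small files. A minimal sketch, assuming an external stage and target table that are named here only for illustration:

-- Auto-ingest pipe that copies new JSON files from the stage as they arrive.
CREATE PIPE iot_pipe
  AUTO_INGEST = TRUE
  AS
  COPY INTO iot_events
  FROM @iot_stage
  FILE_FORMAT = (TYPE = 'JSON');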