Snowflake.ADA-C01.VCEplus.2024-02-13.45q.vcex


File Info

Exam: SnowPro Advanced-Administrator Certification
Number: ADA-C01
File Name: Snowflake.ADA-C01.VCEplus.2024-02-13.45q.vcex
Size: 61 KB
Posted: Feb 13, 2024

How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.

Demo Questions

Question 1

A Snowflake Administrator wants to create a virtual warehouse that supports several dashboards, issuing various queries on the same database.
For this warehouse, why should the Administrator consider setting AUTO_SUSPEND to 0 or NULL?


  1. To save costs on warehouse shutdowns and startups for different queries
  2. To save costs by running the warehouse as little as possible
  3. To keep the data cache warm to support good performance of similar queries
  4. To keep the query result cache warm for good performance on repeated queries
Correct answer: C
Explanation:
According to the Snowflake documentation, the AUTO_SUSPEND property specifies the number of seconds of inactivity after which a warehouse is automatically suspended. If the property is set to 0 or NULL, the warehouse never suspends automatically. For a warehouse that supports several dashboards issuing various queries on the same database, setting AUTO_SUSPEND to 0 or NULL helps keep the data cache warm, which means that the data used by the queries is already loaded into the warehouse memory and does not need to be fetched from the storage layer. This can improve the performance of similar queries that access the same data. The other options are incorrect because:
  • A. Setting AUTO_SUSPEND to 0 or NULL does not save costs on warehouse shutdowns and startups; rather, it increases costs by keeping the warehouse running continuously.
  • B. Setting AUTO_SUSPEND to 0 or NULL does not run the warehouse as little as possible; rather, it runs the warehouse as much as possible.
  • D. Setting AUTO_SUSPEND to 0 or NULL does not affect the query result cache, which is a separate cache that stores the results of previous queries for a period of time. The query result cache does not depend on the warehouse state, but on the query criteria.
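
For illustration, a warehouse with auto-suspend disabled could be defined as follows; this is a minimal sketch, and the warehouse name and size are hypothetical:

USE ROLE SYSADMIN;
CREATE WAREHOUSE DASHBOARD_WH
  WAREHOUSE_SIZE = 'MEDIUM'  -- illustrative size
  AUTO_SUSPEND = 0           -- 0 (or NULL) disables auto-suspend, keeping the data cache warm
  AUTO_RESUME = TRUE;
-- Trade-off: credits accrue continuously until the warehouse is suspended manually
-- with ALTER WAREHOUSE DASHBOARD_WH SUSPEND;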



Question 2

What SCIM integration types are supported in Snowflake? (Select THREE).


  1. Amazon Web Services (AWS)
  2. Google Cloud Platform (GCP)
  3. Okta
  4. Custom
  5. Azure Active Directory (Azure AD)
  6. Duo Security Provisioning Connector
Correct answer: CDE
Explanation:
According to the Snowflake documentation, Snowflake supports SCIM 2.0 to integrate Snowflake with Okta and Microsoft Azure AD, which both function as identity providers. Snowflake also supports identity providers that are neither Okta nor Azure AD (i.e. Custom). Therefore, the SCIM integration types supported in Snowflake are Okta, Custom, and Azure AD. The other options are incorrect because:
  • A. Amazon Web Services (AWS) is not a SCIM identity provider.
  • B. Google Cloud Platform (GCP) is not a SCIM identity provider.
  • F. Duo Security Provisioning Connector is not a SCIM identity provider.
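
For reference, SCIM integrations are created with the CREATE SECURITY INTEGRATION command. The sketch below shows the Azure AD case with hypothetical integration and role names; SCIM_CLIENT would be 'OKTA' or 'GENERIC' for the Okta and Custom cases:

USE ROLE ACCOUNTADMIN;
CREATE ROLE AAD_PROVISIONER;
GRANT CREATE USER, CREATE ROLE ON ACCOUNT TO ROLE AAD_PROVISIONER;  -- lets the IdP provision users and roles
CREATE SECURITY INTEGRATION AAD_PROVISIONING
  TYPE = SCIM
  SCIM_CLIENT = 'AZURE'
  RUN_AS_ROLE = 'AAD_PROVISIONER';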



Question 3

A team of developers created a new schema for a new project. The developers are assigned the role DEV_TEAM which was set up using the following statements:
USE ROLE SECURITYADMIN;
CREATE ROLE DEV_TEAM;
GRANT USAGE, CREATE SCHEMA ON DATABASE DEV_DB01 TO ROLE DEV_TEAM;
GRANT USAGE ON WAREHOUSE DEV_WH TO ROLE DEV_TEAM;
Each team member's access is set up using the following statements:
USE ROLE SECURITYADMIN;
CREATE ROLE JDOE_PROFILE;
CREATE USER JDOE LOGIN_NAME = 'JDOE' DEFAULT_ROLE = 'JDOE_PROFILE';
GRANT ROLE JDOE_PROFILE TO USER JDOE;
GRANT ROLE DEV_TEAM TO ROLE JDOE_PROFILE;
New tables created by any of the developers are not accessible by the team as a whole.
How can an Administrator address this problem?


  1. Assign ownership privilege to DEV_TEAM on the newly-created schema.
  2. Assign usage privilege on the virtual warehouse DEV_WH to the role JDOE_PROFILE.
  3. Set up future grants on the newly-created schemas.
  4. Set up the new schema as a managed-access schema.
Correct answer: C
Explanation:
According to the Snowflake documentation, future grants are a way to automatically grant privileges on future objects of a specific type that are created in a database or schema. By setting up future grants on the newly-created schemas, the Administrator can ensure that any tables created by the developers in those schemas will be accessible by the DEV_TEAM role, without having to grant privileges on each table individually. The other options are incorrect because:
  • A. Assigning ownership privilege to DEV_TEAM on the newly-created schema does not grant privileges on the tables in the schema, only on the schema itself.
  • B. Assigning usage privilege on the virtual warehouse DEV_WH to the role JDOE_PROFILE does not affect access to the tables in the schemas, only the ability to use the warehouse.
  • D. Setting up the new schema as a managed-access schema does not grant privileges on the tables in the schema, but rather requires explicit grants for each table.
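
A minimal sketch of such future grants, assuming the new schema is named DEV_DB01.PROJECT_SCHEMA (the schema name is hypothetical):

USE ROLE SECURITYADMIN;
GRANT USAGE ON SCHEMA DEV_DB01.PROJECT_SCHEMA TO ROLE DEV_TEAM;
GRANT SELECT, INSERT, UPDATE, DELETE
  ON FUTURE TABLES IN SCHEMA DEV_DB01.PROJECT_SCHEMA
  TO ROLE DEV_TEAM;
-- Any table created in the schema from now on is automatically accessible to DEV_TEAM.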



Question 4

When a role is dropped, which role inherits ownership of objects owned by the dropped role?


  1. The SYSADMIN role
  2. The role above the dropped role in the RBAC hierarchy
  3. The role executing the command
  4. The SECURITYADMIN role
Correct answer: B
Explanation:
According to the Snowflake documentation [1], when a role is dropped, ownership of all objects owned by the dropped role is transferred to the role that is directly above the dropped role in the role hierarchy. This is to ensure that there is always a single owner for each object in the system.
[1]: DROP ROLE | Snowflake Documentation
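
A minimal sketch of this behavior, using hypothetical role names in which REPORTING is granted to ANALYTICS (making ANALYTICS its direct parent in the hierarchy):

USE ROLE SECURITYADMIN;
GRANT ROLE REPORTING TO ROLE ANALYTICS;  -- ANALYTICS is now directly above REPORTING
DROP ROLE REPORTING;                     -- objects owned by REPORTING now belong to ANALYTICS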



Question 5

What are the requirements when creating a new account within an organization in Snowflake? (Select TWO).


  1. The account requires at least one ORGADMIN role within one of the organization's accounts.
  2. The account name is immutable and cannot be changed.
  3. The account name must be specified when the account is created.
  4. The account name must be unique among all Snowflake customers.
  5. The account name must be unique within the organization.
Correct answer: CE
Explanation:
According to the CREATE ACCOUNT documentation, the account name must be specified when the account is created, and it must be unique within an organization, regardless of which Snowflake Region the account is in. The other options are incorrect because:
  • The account does not require at least one ORGADMIN role within one of the organization's accounts. The account can be created by an organization administrator (i.e. a user with the ORGADMIN role) through the web interface or using SQL, but the new account does not inherit the ORGADMIN role from the existing account. The new account will have its own set of users, roles, databases, and warehouses.
  • The account name is not immutable and can be changed. The account name can be modified by contacting Snowflake Support and requesting a name change. However, changing the account name may affect some features that depend on the account name, such as SSO or SCIM.
  • The account name does not need to be unique among all Snowflake customers. The account name only needs to be unique within the organization, as the account URL also includes the region and cloud platform information. For example, two accounts with the same name can exist in different regions or cloud platforms, such as myaccount.us-east-1.snowflakecomputing.com and myaccount.eu-west-1.aws.snowflakecomputing.com.
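
For illustration, an organization administrator could create a new account as follows; this is a minimal sketch, and all names and values are hypothetical (ADMIN_PASSWORD or ADMIN_RSA_PUBLIC_KEY, EMAIL, and EDITION are also required):

USE ROLE ORGADMIN;
CREATE ACCOUNT MY_NEW_ACCOUNT            -- account name must be unique within the organization
  ADMIN_NAME = FIRST_ADMIN
  ADMIN_PASSWORD = 'ChangeMe_Str0ng!'    -- placeholder; use a strong secret
  EMAIL = 'admin@example.com'
  EDITION = ENTERPRISE;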



Question 6

A Snowflake customer is experiencing higher costs than anticipated while migrating their data warehouse workloads from on-premises to Snowflake. The migration workloads have been deployed on a single warehouse and are characterized by a large number of small INSERTs rather than bulk loading of large extracts. That warehouse has been configured as a single-cluster 2XL because many parallel INSERTs are scheduled during nightly loads.
How can the Administrator reduce the costs, while minimizing the overall load times, for migrating data warehouse history?


  1. There should be another 2XL warehouse deployed to handle a portion of the load queries.
  2. The 2XL warehouse should be changed to 4XL to increase the number of threads available for parallel load queries.
  3. The warehouse should be kept as a SMALL or XSMALL and configured as a multi-cluster warehouse to handle the parallel load queries.
  4. The INSERTS should be converted to several tables to avoid contention on large tables that slows down query processing.
Correct answer: C
Explanation:
According to the Snowflake Warehouse Cost Optimization blog post, one of the strategies to reduce the cost of running a warehouse is to use a multi-cluster warehouse with auto-scaling enabled. This allows the warehouse to automatically adjust the number of clusters based on the concurrency demand and the queue size. A multi-cluster warehouse can also be configured with a minimum and maximum number of clusters, as well as a scaling policy to control the scaling behavior. This way, the warehouse can handle the parallel load queries efficiently without wasting resources or credits. The blog post also suggests using a smaller warehouse size, such as SMALL or XSMALL, for loading data, as it can perform better than a larger warehouse size for small INSERTs. Therefore, the best option to reduce the costs while minimizing the overall load times for migrating data warehouse history is to keep the warehouse as a SMALL or XSMALL and configure it as a multi-cluster warehouse to handle the parallel load queries. The other options are incorrect because:
  • A. Deploying another 2XL warehouse to handle a portion of the load queries will not reduce the costs, but increase them. It will also introduce complexity and potential inconsistency in managing the data loading process across multiple warehouses.
  • B. Changing the 2XL warehouse to 4XL will not reduce the costs, but increase them. It will also provide more compute resources than needed for small INSERTs, which are not CPU-intensive but I/O-intensive.
  • D. Converting the INSERTs to several tables will not reduce the costs, but increase them. It will also create unnecessary data duplication and fragmentation, which will affect the query performance and data quality.
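
A minimal sketch of such a warehouse; the name, cluster counts, and suspend interval are illustrative (multi-cluster warehouses require Enterprise edition or higher):

USE ROLE SYSADMIN;
CREATE WAREHOUSE LOAD_WH
  WAREHOUSE_SIZE = 'XSMALL'      -- a small size suits many small INSERTs
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 10         -- scales out for the parallel nightly loads
  SCALING_POLICY = 'STANDARD'    -- favors starting clusters over queuing queries
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;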



Question 7

What roles or security privileges will allow a consumer account to request and get data from the Data Exchange? (Select TWO).


  1. SYSADMIN
  2. SECURITYADMIN
  3. ACCOUNTADMIN
  4. IMPORT SHARE and CREATE DATABASE
  5. IMPORT PRIVILEGES and SHARED DATABASE
Correct answer: CD
Explanation:
According to the Accessing a Data Exchange documentation, a consumer account can request and get data from the Data Exchange using either the ACCOUNTADMIN role or a role with the IMPORT SHARE and CREATE DATABASE privileges. The ACCOUNTADMIN role is the top-level role that has all privileges on all objects in the account, including the ability to request and get data from the Data Exchange. A role with the IMPORT SHARE and CREATE DATABASE privileges can also request and get data from the Data Exchange, as these are the minimum privileges required to create a database from a share. The other options are incorrect because:
  • A. The SYSADMIN role does not have the privilege to request and get data from the Data Exchange, unless it is also granted the IMPORT SHARE and CREATE DATABASE privileges. The SYSADMIN role is a pre-defined role with privileges to create and manage warehouses, databases, and other account objects, but it does not include the privileges reserved for the ACCOUNTADMIN role, such as managing users, roles, and shares.
  • B. The SECURITYADMIN role does not have the privilege to request and get data from the Data Exchange, unless it is also granted the IMPORT SHARE and CREATE DATABASE privileges. The SECURITYADMIN role is a pre-defined role that has the privilege to manage grants and security objects in the account, such as network policies and security integrations, but not data objects, such as databases, schemas, and tables.
  • E. The IMPORT PRIVILEGES and SHARED DATABASE are not valid privileges in Snowflake. The correct privilege names are IMPORT SHARE and CREATE DATABASE, as explained above.
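
A minimal sketch of the non-ACCOUNTADMIN path; the role, database, and share names are hypothetical:

USE ROLE ACCOUNTADMIN;
GRANT IMPORT SHARE ON ACCOUNT TO ROLE DATA_CONSUMER;
GRANT CREATE DATABASE ON ACCOUNT TO ROLE DATA_CONSUMER;
-- A user with DATA_CONSUMER can then create a database from a share:
USE ROLE DATA_CONSUMER;
CREATE DATABASE SHARED_DB FROM SHARE PROVIDER_ACCOUNT.SHARE_NAME;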



Question 8

An Administrator wants to delegate the administration of a company's data exchange to users who do not have access to the ACCOUNTADMIN role.
How can this requirement be met?


  1. Grant imported privileges on data exchange EXCHANGE_NAME to ROLE_NAME;
  2. Grant modify on data exchange EXCHANGE_NAME to ROLE_NAME;
  3. Grant ownership on data exchange EXCHANGE_NAME to ROLE_NAME;
  4. Grant usage on data exchange EXCHANGE_NAME to ROLE_NAME;
Correct answer: B
Explanation:
According to the GRANT MODIFY documentation, the MODIFY privilege on a data exchange allows a role to perform administrative tasks on the data exchange, such as inviting members, approving profiles, and reviewing listings. This privilege can be granted by the ACCOUNTADMIN role or a role that already has the MODIFY privilege on the data exchange. Therefore, to delegate the administration of a company's data exchange to users who do not have access to the ACCOUNTADMIN role, the best option is to grant the MODIFY privilege on the data exchange to a role that the users can assume. The other options are incorrect because:
  • A. IMPORTED PRIVILEGES is granted on a database created from a share so that a role can query the shared data; it cannot be granted on a data exchange. It relates to the consumption of shared data, not to the administration of a data exchange.
  • C. OWNERSHIP cannot be granted like an ordinary privilege; it can only be transferred from the current owner to another role using the GRANT OWNERSHIP command, and ownership transfer is not the mechanism for delegating administration of a data exchange. Therefore, this option is not feasible.
  • D. The USAGE privilege on a data exchange allows a role to access the data exchange and view the available data listings. This privilege does not allow a role to perform administrative tasks on the data exchange, such as inviting members, approving profiles, and reviewing listings. Therefore, this option is not sufficient for delegating the administration of a data exchange.
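
A minimal sketch of the delegation; the exchange and role names are hypothetical:

USE ROLE ACCOUNTADMIN;
GRANT MODIFY ON DATA EXCHANGE MY_EXCHANGE TO ROLE EXCHANGE_ADMIN;
-- EXCHANGE_ADMIN can now invite members, approve profiles, and review listings.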



Question 9

A company has implemented Snowflake replication between two Snowflake accounts, both of which are running on a Snowflake Enterprise edition. The replication is for the database APP_DB containing only one schema, APP_SCHEMA. The company's Time Travel retention policy is currently set for 30 days for both accounts. An Administrator has been asked to extend the Time Travel retention policy to 60 days on the secondary database only.
How can this requirement be met?


  1. Set the data retention policy on the secondary database to 60 days.
  2. Set the data retention policy on the schemas in the secondary database to 60 days.
  3. Set the data retention policy on the primary database to 30 days and the schemas to 60 days.
  4. Set the data retention policy on the primary database to 60 days.
Correct answer: A
Explanation:
According to the Replication considerations documentation, the Time Travel retention period for a secondary database can be different from the primary database. The retention period can be set at the database, schema, or table level using the DATA_RETENTION_TIME_IN_DAYS parameter. Therefore, to extend the Time Travel retention policy to 60 days on the secondary database only, the best option is to set the data retention policy on the secondary database to 60 days using the ALTER DATABASE command. The other options are incorrect because:
  • B. Setting the data retention policy on the schemas in the secondary database to 60 days will not affect the database-level retention period, which will remain at 30 days. The most specific setting overrides the more general ones, so the schema-level setting will apply to the tables in the schema, but not to the database itself.
  • C. Setting the data retention policy on the primary database to 30 days and the schemas to 60 days will not affect the secondary database, which will have its own retention period. The replication process does not copy the retention period settings from the primary to the secondary database, so they can be configured independently.
  • D. Setting the data retention policy on the primary database to 60 days will not affect the secondary database, which will have its own retention period. The replication process does not copy the retention period settings from the primary to the secondary database, so they can be configured independently.
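
A minimal sketch of the command the explanation describes, run in the account that holds the secondary database (the database name is taken from the question):

USE ROLE ACCOUNTADMIN;
ALTER DATABASE APP_DB SET DATA_RETENTION_TIME_IN_DAYS = 60;
-- The primary database keeps its own 30-day setting; the two are configured independently.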



Question 10

When does auto-suspend occur for a multi-cluster virtual warehouse?


  1. When there has been no activity on any cluster for the specified period of time.
  2. After a specified period of time when an additional cluster has started on the maximum number of clusters specified for a warehouse.
  3. When the minimum number of clusters is running and there is no activity for the specified period of time.
  4. Auto-suspend does not apply for multi-cluster warehouses.
Correct answer: C
Explanation:
According to the Multi-cluster Warehouses documentation, auto-suspend is a feature that allows a warehouse to automatically suspend itself after a specified period of inactivity. For a multi-cluster warehouse, auto-suspend applies to the entire warehouse, not to individual clusters. Therefore, auto-suspend occurs when the minimum number of clusters is running and there is no activity for the specified period of time. The other options are incorrect because:
  • A. Auto-suspend does not occur when there has been no activity on any cluster for the specified period of time. This would imply that each cluster has its own auto-suspend timer, which is not the case. The warehouse has a single auto-suspend timer that is reset by any activity on any cluster.
  • B. Auto-suspend does not occur after a specified period of time when an additional cluster has started on the maximum number of clusters specified for a warehouse. This would imply that the auto-suspend timer is affected by the number of clusters running, which is not the case. The auto-suspend timer is only affected by the activity on the warehouse, regardless of the number of clusters running.
  • D. Auto-suspend does apply for multi-cluster warehouses, as explained above. It is a feature that can be enabled or disabled for any warehouse, regardless of the number of clusters.
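
For illustration, the suspend interval is set for the warehouse as a whole; the warehouse name and interval below are hypothetical:

USE ROLE SYSADMIN;
ALTER WAREHOUSE ETL_WH SET AUTO_SUSPEND = 300;  -- suspend the whole warehouse after 300 seconds of inactivity
-- Individual clusters are shut down by auto-scaling (per SCALING_POLICY), not by AUTO_SUSPEND.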