
Question Set 15

The document consists of a series of questions and answers related to Snowflake's features, focusing on data storage, time travel, and data types. It includes explanations for correct and incorrect answers, emphasizing the capabilities of Snowflake in handling JSON data, retention periods, and the use of streams. Key topics covered include the VARIANT data type, task scheduling, and the undrop command for restoring objects.

Uploaded by

suryakingz21

Question 1: Correct

Storage fees are incurred for maintaining historical data during the Time Travel period but not
during the Fail-safe period, as Fail-safe is managed by Snowflake

TRUE


FALSE
(Correct)

Explanation
Storage costs are incurred for both Time Travel and Fail-safe. Refer link -
https://docs.snowflake.com/en/user-guide/data-cdp-storage-costs.html
Question 2: Incorrect
You are a developer at a company running a Snowflake data warehouse. You are
required to load JSON-based data into a table. What is the best way to do this in Snowflake?

Write a program in Java or Python to convert the JSON into tabular format and then load in
Snowflake table
(Incorrect)


Ask the source team to send data into CSV instead of JSON


Create a table with a VARIANT column and load the JSON data into that table
(Correct)

Explanation
Snowflake provides the capability to store semi-structured data in a VARIANT column.
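This approach can be sketched as follows; the table, stage, and field names are hypothetical:

```sql
-- Create a table with a VARIANT column to hold raw JSON
create or replace table raw_events (v variant);

-- Load JSON files from a (hypothetical) named stage
copy into raw_events
  from @my_stage/events/
  file_format = (type = 'JSON');

-- Query nested fields with path notation, casting with ::
select v:user.name::string  as user_name,
       v:event_type::string as event_type
from raw_events;
```

No pre-conversion to a tabular format is needed; Snowflake parses and columnarizes the JSON internally.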
Question 3: Incorrect
What data structure types can be ingested into a VARIANT column in a Snowflake table?
(Check all that apply)

JSON
(Correct)


ORC
(Correct)


HADOOP


AVRO
(Correct)


XML
(Correct)


PARQUET
(Correct)

Explanation
Refer to the Snowflake documentation section on semi-structured data -
https://docs.snowflake.com/en/user-guide/semistructured-concepts.html
Question 4: Incorrect
What are the additional metadata columns available in Stream to track changes?

METADATA$ACTION
(Correct)


METADATA$ISDELETE
(Incorrect)


METADATA$ISUPDATE
(Correct)


METADATA$ISINSERT
(Incorrect)


METADATA$ROW_ID
(Correct)

Explanation
Refer link - https://docs.snowflake.com/en/user-guide/streams.html
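The three metadata columns can be seen in a minimal sketch, using hypothetical table and stream names:

```sql
create or replace table orders (id int, amount number);
create or replace stream orders_stream on table orders;

insert into orders values (1, 100);

-- The stream returns the changed rows plus three metadata columns
select id, amount,
       metadata$action,    -- INSERT or DELETE
       metadata$isupdate,  -- TRUE when the row pair represents an update
       metadata$row_id     -- unique, immutable identifier for the row
from orders_stream;
```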
Question 5: Correct
In Snowflake Enterprise Edition, what is the maximum retention period for a permanent table?

90 days
(Correct)


7 days


365 days


1 day
Explanation
For Snowflake Enterprise Edition (and higher): For transient databases, schemas, and tables, the
retention period can be set to 0 (or unset back to the default of 1 day). The same is also true for
temporary tables. For permanent databases, schemas, and tables, the retention period can be set
to any value from 0 up to 90 days
Question 6: Correct
After dropping an object, creating an object with the same name does not restore the
object. Instead, it creates a new version of the object. The original, dropped version is still
available and can be restored.

TRUE
(Correct)


FALSE
Explanation
After dropping an object, creating an object with the same name does not restore the object.
Instead, it creates a new version of the object. The original, dropped version is still available and
can be restored. Restoring a dropped object restores the object in place (i.e. it does not create a
new object).
Question 7: Correct
What are the different types of streams available?

Insert-Only
(Correct)


Standard
(Correct)


Native


Append-Only
(Correct)


Update-Only
Explanation
Refer link - https://docs.snowflake.com/en/user-guide/streams.html
Question 8: Incorrect
Which of the following is a common case for Cloning in Snowflake? (Select all that apply)

Data Encryption Protection


Data Life Cycle Management
(Correct)


"Point in Time" Snapshot
(Correct)


Agile Release and Development
(Correct)

Question 9: Incorrect
The FLATTEN command will parse nested objects into separate rows. One version of the
FLATTEN command uses a join and the other uses an object keyword. Select the two
words that represent the options used with the FLATTEN command

SIDEWAYS


SCHEMATA


TABLE
(Correct)


LATERAL
(Correct)

Explanation
Refer to the Snowflake documentation section on semi-structured data -
https://docs.snowflake.com/en/user-guide/semistructured-concepts.html
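The two forms can be sketched as follows, assuming a hypothetical table with a VARIANT column v:

```sql
-- LATERAL FLATTEN: correlate each row with its exploded array elements
select t.v:id::int as id, f.value::string as tag
from my_json_table t,
     lateral flatten(input => t.v:tags) f;

-- TABLE(FLATTEN(...)): flatten an expression or literal directly
select value
from table(flatten(input => parse_json('[1, 2, 3]')));
```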
Question 10: Correct
What is the standard Time Travel retention period that is enabled for all Snowflake
accounts?

7 days


48 hours


14 days


24 hours
(Correct)

Explanation
The standard retention period is 1 day (24 hours) and is automatically enabled for all Snowflake
accounts: For Snowflake Standard Edition, the retention period can be set to 0 (or unset back to
the default of 1 day) at the account and object level (i.e. databases, schemas, and tables). For
Snowflake Enterprise Edition (and higher): For transient databases, schemas, and tables, the
retention period can be set to 0 (or unset back to the default of 1 day). The same is also true for
temporary tables. For permanent databases, schemas, and tables, the retention period can be set
to any value from 0 up to 90 days.
Question 11: Correct
Which type of table stream tracks all DML changes to the source table, including inserts,
updates, and deletes, as well as truncates?

Append-Only


Standard
(Correct)


Insert-Only


Multifunction


Native
Explanation
A standard (i.e. delta) table stream tracks all DML changes to the source table, including inserts,
updates, and deletes (including table truncates). This stream type performs a join on inserted and
deleted rows in the change set to provide the row level delta. As a net effect, for example, a row
that is inserted and then deleted between two transactional points of time in a table is removed in
the delta (i.e. is not returned when the stream is queried).
Question 12: Skipped
Which of the following are true about the variant data type in Snowflake? Select all that
apply.

Requires a custom mapping for each record type


Can be queried using JSON path notation
(Correct)


Optimized storage based on repeated elements
(Correct)


Stored in a separate file format from structured data
Explanation
Refer to the Snowflake documentation section on semi-structured data -
https://docs.snowflake.com/en/user-guide/semistructured-concepts.html
Question 13: Correct
What size limit does the VARIANT data type impose on individual rows?

64 MB Compressed


16 MB Compressed
(Correct)


16 MB Uncompressed


64 MB Uncompressed
Question 14: Correct
Which file format is an open-source data serialization and RPC framework originally
developed for use with Apache Hadoop?

PARQUET


AVRO
(Correct)


ORC


XML


JSON
Explanation
Avro is an open-source data serialization and RPC framework originally developed for use with
Apache Hadoop. It utilizes schemas defined in JSON to produce serialized data in a compact
binary format. The serialized data can be sent to any destination (i.e. application or program) and
can be easily deserialized at the destination because the schema is included in the data
Question 15: Incorrect
JSON arrays are written inside which type of brackets?

Square Brackets
(Correct)


Round Brackets

Curly Braces
(Incorrect)

Explanation
JSON arrays are written inside square brackets. An array can contain multiple objects, separated
by commas. For example:

{"employees": [
  {"firstName":"John", "lastName":"Doe"},
  {"firstName":"Anna", "lastName":"Smith"},
  {"firstName":"Peter", "lastName":"Jones"}
]}
Question 16: Correct
If you have a table with a 10-day retention period and you decrease the period to 1 day,
what happens to the data that has already been in Time Travel for 4 days?

The data would be moved to fail safe
(Correct)


The data would move to fail safe after 1 day


The data would be removed and will not even be available in fail safe


The data would continue to be in time travel, and new time travel retention of 1 day will be applied to
only active data which is not already in time travel window
Explanation
Reduces the amount of time data is retained in Time Travel: For active data modified after the
retention period is reduced, the new shorter period applies. For data that is currently in Time
Travel: If the data is still within the new shorter period, it remains in Time Travel. If the data is
outside the new period, it moves into Fail-safe. For example, if you have a table with a 10-day
retention period and you decrease the period to 1-day, data from days 2 to 10 will be moved into
Fail-safe, leaving only the data from day 1 accessible through Time Travel. However, the
process of moving the data from Time Travel into Fail-safe is performed by a background
process, so the change is not immediately visible. Snowflake guarantees that the data will be
moved, but does not specify when the process will complete; until the background process
completes, the data is still accessible through Time Travel
Question 17: Correct
Using Time Travel, you can perform which of the following actions within a defined period
of time (i.e. within the time travel period of the object)

Create clones of entire tables, schemas, and databases at or before specific points in the past


Restore tables, schemas, and databases that have been dropped


Query data in the past that has since been updated or deleted

All options are correct
(Correct)

Explanation
Using Time Travel, you can perform the following actions within a defined period of time:
Query data in the past that has since been updated or deleted. Create clones of entire tables,
schemas, and databases at or before specific points in the past. Restore tables, schemas, and
databases that have been dropped. Once the defined period of time has elapsed, the data is moved
into Snowflake Fail-safe and these actions can no longer be performed.
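Each of the three actions can be sketched as follows (object names and timestamp are hypothetical):

```sql
-- Query data as it existed 5 minutes ago
select * from my_table at(offset => -60*5);

-- Clone a table as of a point in the past
create table my_table_snapshot clone my_table
  at(timestamp => '2024-01-01 00:00:00'::timestamp_tz);

-- Restore a dropped table
undrop table my_table;
```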
Question 18: Correct
Changing the time travel retention period for your account or individual objects changes
the value for all lower-level objects that do not have a retention period explicitly set

FALSE


TRUE
(Correct)

Explanation
Changing the retention period for your account or individual objects changes the value for all
lower-level objects that do not have a retention period explicitly set. For example: If you change
the retention period at the account level, all databases, schemas, and tables that do not have an
explicit retention period automatically inherit the new retention period. If you change the
retention period at the schema level, all tables in the schema that do not have an explicit retention
period inherit the new retention period. Keep this in mind when changing the retention period for
your account or any objects in your account because the change might have Time Travel
consequences that you did not anticipate or intend. In particular, we do not recommend changing
the retention period to 0 at the account level.
Question 19: Correct
JSON data is a hierarchical collection of name/value pairs grouped into objects and arrays

TRUE
(Correct)


FALSE
Explanation
JSON data is a hierarchical collection of name/value pairs grouped into objects and arrays:
Colons : separate names and values in name/value pairs. Curly braces {} denote objects. Square
brackets [] denote arrays. Commas , separate entities in objects and arrays. Refer Link -
https://docs.snowflake.com/en/user-guide/semistructured-intro.html
Question 20: Correct
What is the limit on the number of tasks that can be included in a tree of tasks?

10000

1000
(Correct)


100


Unlimited
Explanation
A simple tree of tasks is limited to a maximum of 1000 tasks total (including the root task) in a
resumed state. An individual task in the tree is limited to a single predecessor task; however, a
task can have a maximum of 100 child tasks (i.e. other tasks that identify the task as a
predecessor).
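A small tree of tasks can be sketched as follows (warehouse and object names hypothetical):

```sql
-- Root task: runs on a schedule
create or replace task root_task
  warehouse = my_wh
  schedule = '60 MINUTE'
as
  insert into staging select * from landing;

-- Child task: declares its single predecessor with AFTER
create or replace task child_task
  warehouse = my_wh
  after root_task
as
  insert into final select * from staging;

-- Tasks are created suspended; resume children before the root
alter task child_task resume;
alter task root_task resume;
```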
Question 21: Incorrect
Which semi-structured data types are available in Snowflake?

ARRAY
(Correct)


VARIANT
(Correct)


SEMI STRUCTURE


STRING


OBJECT
(Correct)

Explanation
Refer to the Snowflake documentation section on semi-structured data -
https://docs.snowflake.com/en/user-guide/semistructured-concepts.html
Question 22: Incorrect
The Time Travel retention period can only be set at the individual object level, and cannot be
set at the account level

FALSE
(Correct)


TRUE
(Incorrect)

Explanation
The retention period can be applied at the account level and at the object level (i.e. databases,
schemas, and tables).
Question 23: Incorrect
If a task is still running when the next scheduled execution time occurs, what happens?

The task that is running fails and stops


The task that is running fails and retriggers


The next execution of the task starts as soon as the current execution finishes
(Incorrect)


That scheduled time is skipped
(Correct)

Explanation
Snowflake ensures only one instance of a task with a schedule (i.e. a standalone task or the root
task in a tree of tasks) is executed at a given time. If a task is still running when the next
scheduled execution time occurs, then that scheduled time is skipped.
Question 24: Incorrect
Time Travel in Snowflake is useful for which of the following tasks?

Analyzing data usage/manipulation over specified periods of time
(Correct)


Restoring data-related objects (tables, schemas, and databases) that might have been accidentally or
intentionally deleted
(Correct)


Restoring data in event of a system failure or other catastrophic event, e.g. a hardware failure or
security breach.


Duplicating and backing up data from key points in the past
(Correct)

Explanation
Snowflake Time Travel enables accessing historical data (i.e. data that has been changed or
deleted) at any point within a defined period. It serves as a powerful tool for performing the
following tasks: Restoring data-related objects (tables, schemas, and databases) that might have
been accidentally or intentionally deleted. Duplicating and backing up data from key points in
the past. Analyzing data usage/manipulation over specified periods of time. Refer Link -
https://docs.snowflake.com/en/user-guide/data-time-travel.html
Question 25: Correct
The UNDROP command is applicable to which objects?

TABLES
(Correct)


SCHEMA
(Correct)


ROLES


USERS


DATABASES
(Correct)

Explanation
Refer Link - https://docs.snowflake.com/en/user-guide/data-time-travel.html
Question 26: Incorrect
Dropped tables, schemas, and databases can be listed using the SHOW commands with
which keyword specified?

HISTORY
(Correct)


PREVIOUS VERSIONS


OLD VERSIONS


DROPPED_RECORDS
(Incorrect)

Explanation
Dropped tables, schemas, and databases can be listed using the following commands with the
HISTORY keyword specified: SHOW TABLES SHOW SCHEMAS SHOW DATABASES The
output includes all dropped objects and an additional DROPPED_ON column, which displays
the date and time when the object was dropped. If an object has been dropped more than once,
each version of the object is included as a separate row in the output.
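The three commands can be sketched as follows (database and schema names hypothetical):

```sql
-- List current and dropped tables; dropped ones show a DROPPED_ON value
show tables history in schema my_db.my_schema;

show schemas history in database my_db;

show databases history;
```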
Question 27: Correct
select * from my_table before(statement => '8e5d0ca9-005e-44e6-b858-a8f5b37c5726');
This query selects historical data from "my_table" up to, but not including, any changes
made by the specified statement.

TRUE
(Correct)


FALSE
Explanation
Refer link - https://docs.snowflake.com/en/user-guide/data-time-travel.html
Question 28: Incorrect
To restore a table with the UNDROP command, which conditions must be met?

The user must have CREATE privileges on the schema in which the table is being restored
(Correct)


The table can be restored in any schema, irrespective of the schema from which it was dropped.
(Incorrect)


The table can be restored only in the schema from which it was dropped
(Correct)


The user executing the UNDROP command must have the OWNERSHIP privilege on the table
(Correct)

Explanation
Similar to dropping an object, a user must have OWNERSHIP privileges for an object to restore
it. In addition, the user must have CREATE privileges on the object type for the database or
schema where the dropped object will be restored. Restoring tables and schemas is only
supported in the current schema or current database, even if a fully-qualified object name is
specified Refer Link - https://docs.snowflake.com/en/user-guide/data-time-travel.html#data-
retention-period
Question 29: Correct
Which parameter is used to set the time travel retention period?

DATA_RETENTION_TIME_IN_DAYS
(Correct)


DATA_RETENTION_TIME_IN_HOURS


DATA_RETENTION_TIME


DATA_RETENTION_TIME_IN_WEEKS
Explanation
To specify the data retention period for Time Travel: The
DATA_RETENTION_TIME_IN_DAYS object parameter can be used by users with the
ACCOUNTADMIN role to set the default retention period for your account. The same parameter
can be used to explicitly override the default when creating a database, schema, and individual
table. The data retention period for a database, schema, or table can be changed at any time.
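A sketch of setting the parameter at each level (object names hypothetical; retention beyond 1 day assumes Enterprise Edition):

```sql
-- Account-level default (requires the ACCOUNTADMIN role)
alter account set data_retention_time_in_days = 30;

-- Explicit override at table creation
create table important_data (id int)
  data_retention_time_in_days = 90;

-- Change the retention period later
alter table important_data set data_retention_time_in_days = 30;
```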
Question 30: Correct
A task object defines a recurring schedule for executing a SQL statement, including
statements that call stored procedures

TRUE
(Correct)


FALSE
Explanation
A task object defines a recurring schedule for executing a SQL statement, including statements
that call stored procedures. Tasks can be chained together for successive execution to support
more complex periodic processing.
Question 31: Correct
What is the full form of JSON?

JavaScript Object Notation
(Correct)


Java Semi Object Notation


Java Object Notation


Javascript Notation
Question 32: Skipped
Non-native values such as dates and timestamps are stored as strings when loaded into a
VARIANT column

TRUE
(Correct)


FALSE
Explanation
Non-native values such as dates and timestamps are stored as strings when loaded into a
VARIANT column, so operations on these values could be slower and also consume more space
than when stored in a relational column with the corresponding data type.
Question 33: Correct
Within a transaction, multiple SQL statements will see the same set of records in a stream

FALSE


TRUE
(Correct)

Explanation
Streams support repeatable read isolation. In repeatable read mode, multiple SQL statements
within a transaction see the same set of records in a stream. This differs from the read committed
mode supported for tables, in which statements see any changes made by previous statements
executed within the same transaction, even though those changes are not yet committed.
Question 34: Incorrect
Restoring tables and schemas is only supported in the current schema or current database,
even if a fully-qualified object name is specified.

TRUE
(Correct)


FALSE
(Incorrect)

Question 35: Correct


If the TIMESTAMP, OFFSET, or STATEMENT specified in the AT | BEFORE clause
falls outside the data retention period for the table, the query fails and returns an error.

FALSE


TRUE
(Correct)

Explanation
Refer link - https://docs.snowflake.com/en/user-guide/data-time-travel.html
Question 36: Incorrect
MAX_DATA_EXTENSION_TIME_IN_DAYS - This parameter is used for what purpose?

Maximum number of days for which Snowflake can extend the data retention period for time travel
of an object
(Incorrect)


Maximum number of days for which Snowflake can extend the data retention period for fail safe of
an object


Maximum number of days for which Snowflake can extend the data retention period for tables to
prevent streams on the tables from becoming stale
(Correct)

Explanation
Maximum number of days for which Snowflake can extend the data retention period for tables to
prevent streams on the tables from becoming stale. By default, if the
DATA_RETENTION_TIME_IN_DAYS setting for a source table is less than 14 days, and a
stream has not been consumed, Snowflake temporarily extends this period to the stream’s offset,
up to a maximum of 14 days, regardless of the Snowflake Edition for your account. The
MAX_DATA_EXTENSION_TIME_IN_DAYS parameter enables you to limit this automatic
extension period to control storage costs for data retention or for compliance reasons.
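A sketch of capping the extension on a hypothetical source table:

```sql
-- Limit the automatic retention extension for streams on this table
alter table my_source_table set max_data_extension_time_in_days = 7;
```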
Question 37: Correct
Multiple streams can be created on a single table

FALSE


TRUE
(Correct)

Question 38: Incorrect


If the role that a running task is executing under is dropped while the task is running, the
task completes processing under the dropped role.

TRUE
(Correct)


FALSE
(Incorrect)

Question 39: Correct


The data retention period for a database, schema, or table cannot be changed after they are
created.

FALSE
(Correct)


TRUE
Explanation
The data retention period for a database, schema, or table can be changed at any time.
Question 40: Correct
If you have a table with a 10-day retention period and increase the period to 20 days, then
what happens to the data that is currently in time travel for past 2 days.

The data would be moved to Fail-safe as per the new retention period, i.e. it will move to Fail-safe
after 20 days of Time Travel
(Correct)


The data will be moved to Fail-safe immediately


The data would be moved to Fail-safe as per the previous retention period only, i.e. it will move to
Fail-safe after 10 days of Time Travel


The table will lose all Time Travel data, and the new retention period will be applied going
forward
Explanation
Refer Link - https://docs.snowflake.com/en/user-guide/data-time-travel.html
Question 41: Correct
Which two table types don’t have Fail-safe?

PERMANENT


TRANSIENT
(Correct)


TEMPORARY
(Correct)

Explanation
To help manage the storage costs associated with Time Travel and Fail-safe, Snowflake provides
two table types, temporary and transient, which do not incur the same fees as standard (i.e.
permanent) tables: Transient tables can have a Time Travel retention period of either 0 or 1 day.
Temporary tables can also have a Time Travel retention period of 0 or 1 day; however, this
retention period ends as soon as the table is dropped or the session in which the table was created
ends. Transient and temporary tables have no Fail-safe period. As a result, the maximum
additional fees incurred for Time Travel and Fail-safe by these types of tables is limited to 1 day
Question 42: Correct
An individual task in the tree can have multiple predecessor tasks

TRUE


FALSE
(Correct)

Explanation
A simple tree of tasks is limited to a maximum of 1000 tasks total (including the root task) in a
resumed state. An individual task in the tree is limited to a single predecessor task; however, a
task can have a maximum of 100 child tasks (i.e. other tasks that identify the task as a
predecessor).
Question 43: Correct
For which table type does the retention period end as soon as the table is dropped or the
session in which the table was created ends?

MATERIALIZED


TEMPORARY
(Correct)


PERMANENT


TRANSIENT
Explanation
To help manage the storage costs associated with Time Travel and Fail-safe, Snowflake provides
two table types, temporary and transient, which do not incur the same fees as standard (i.e.
permanent) tables: Transient tables can have a Time Travel retention period of either 0 or 1 day.
Temporary tables can also have a Time Travel retention period of 0 or 1 day; however, this
retention period ends as soon as the table is dropped or the session in which the table was created
ends. Transient and temporary tables have no Fail-safe period. As a result, the maximum
additional fees incurred for Time Travel and Fail-safe by these types of tables is limited to 1 day
Question 44: Incorrect
What is the default time limit on a single run of a task?

30 min


There is no limit as such
(Incorrect)


10 min


60 min
(Correct)

Explanation
There is a 60-minute default limit on a single run of a task. This limitation was implemented as a
safeguard against non-terminating tasks. Query the TASK_HISTORY table function. If the task
was canceled or exceeded the window scheduled for the task, the cause is often an undersized
warehouse. Review the warehouse size and consider increasing it to fit within the schedule
window or the one-hour limit. Alternatively, consider increasing the timeout limit for the task by
executing ALTER TASK … SET USER_TASK_TIMEOUT_MS = <num>.
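A sketch of raising the timeout on a hypothetical task:

```sql
-- Raise the per-run timeout from the 60-minute default to 2 hours
-- (the value is in milliseconds: 2 * 60 * 60 * 1000)
alter task my_task set user_task_timeout_ms = 7200000;
```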
Question 45: Correct
Which of the following tables cannot be cloned as permanent table?

Temporary Tables
(Correct)


Permanent Tables


Transient Tables
(Correct)

Explanation
Refer Link for details - https://docs.snowflake.com/en/user-guide/tables-temp-transient.html

Refer the section Comparison of Table Types in the above link.

Question 46: Incorrect


What features are provided by Snowflake to enable continuous data pipelines?

All options are correct
(Correct)


Continuous Data Loading
(Incorrect)


Change Data Tracking


Recurring Tasks
Explanation
Refer document - https://docs.snowflake.com/en/user-guide/data-pipelines-intro.html
Question 47: Incorrect
When the owner role of a given task (i.e. the role with the OWNERSHIP privilege on the
task) is deleted, what happens?

The task is “re-possessed” by the role that dropped the owner role
(Correct)


The roles granted to any task cannot be dropped unless all associated tasks are dropped


The tasks is suspended immediately, and the task will not have any owner till explicitly assigned


The tasks is also dropped
(Incorrect)

Explanation
When the owner role of a given task (i.e. the role with the OWNERSHIP privilege on the task) is
deleted, the task is “re-possessed” by the role that dropped the owner role. This ensures that
ownership moves to a role that is closer to the root of the role hierarchy. When a task is re-
possessed, it is automatically paused, i.e., all executions currently in flight complete processing,
but new executions will not be scheduled until the task is resumed explicitly by the new owner.
The rationale for this is to prevent a user with access to a particular role from leaving behind
tasks that suddenly execute with higher permissions when the role is removed. If the role that a
running task is executing under is dropped while the task is running, the task completes
processing under the dropped role.
Question 48: Correct
Which file format option can be enabled in a JSON or Avro file format to remove the outer array
structure and load the records into separate table rows?

READ_INTERNAL_ARRAYS


STRIP_OUTER_ARRAY
(Correct)


REMOVE_OUTER_ARRAY


ELIMINATE_OUTER_ARRAY
Explanation
In general, JSON and Avro data sets are a simple concatenation of multiple documents. The
JSON or Avro output from some software is composed of a single huge array containing
multiple records. There is no need to separate the documents with line breaks or commas, though
both are supported. Instead, we recommend enabling the STRIP_OUTER_ARRAY file format
option for the COPY INTO <table> command to remove the outer array structure and load the
records into separate table rows: copy into <table> from @~/<file>.json file_format = (type =
'JSON' strip_outer_array = true);
Question 49: Correct
After the retention period for an object has passed and the object has been purged, it is no
longer displayed in the SHOW <object_type> HISTORY output.

FALSE


TRUE
(Correct)

Question 50: Incorrect


Which type of table stream is supported only on external tables?

Update-Only


Native


Standard
(Incorrect)


Insert-Only
(Correct)


Append-Only
Explanation
Insert-only Supported on external tables only. An insert-only stream tracks row inserts only; they
do not record delete operations that remove rows from an inserted set (i.e. no-ops). For example,
in-between any two offsets, if File1 is removed from the cloud storage location referenced by the
external table, and File2 is added, the stream returns records for the rows in File2 only. Unlike
when tracking CDC data for standard tables, Snowflake cannot access the historical records for
files in cloud storage. Overwritten files are essentially handled as new files: The old version of
the file is removed from cloud storage, but the insert-only stream does not record the delete
operation. The new version of the file is added to cloud storage, and the insert-only stream
records the rows as inserts. The stream does not record the diff of the old and new file versions
Question 51: Correct
Which of the following Snowflake objects is used for scheduled execution of a SQL
statement?

Stored Procedure


Tasks
(Correct)


Snowpipe


Streams
Explanation
User-defined tasks allow scheduled execution of SQL statements. Tasks run according to a
specified execution configuration, using any combination of a set interval and/or a flexible
schedule using a subset of familiar cron utility syntax. Tasks currently run in user-specified
warehouses but will eventually run in a serverless environment using Snowflake-supplied
compute resources.
Question 52: Correct
If a row is added and then updated in the current offset, then the
METADATA$ISUPDATE row records a TRUE value.

TRUE


FALSE
(Correct)

Explanation
Note that streams record the differences between two offsets. If a row is added and then updated
in the current offset, the delta change is a new row. The METADATA$ISUPDATE row records
a FALSE value
Question 53: Correct
When a task is created, it is in a resumed state by default.

TRUE


FALSE
(Correct)

Explanation
Create a task using CREATE TASK. The task is suspended by default.
Question 54: Incorrect
When working with unfamiliar semi-structured data, you might not know the key names in
an object. You can use the FLATTEN function with which argument to return the list of
distinct key names in all nested elements in an object?

DISTINCT
(Incorrect)


NESTING


GROUPING


RECURSIVE
(Correct)

Explanation
Refer link - https://docs.snowflake.com/en/user-guide/semistructured-considerations.html
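A sketch of the RECURSIVE argument, assuming a hypothetical table with a VARIANT column v:

```sql
-- List distinct key names across all nesting levels of a VARIANT column
select distinct f.key
from my_json_table t,
     lateral flatten(input => t.v, recursive => true) f
where f.key is not null;  -- array elements have no key
```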
Question 55: Correct
A table named "MY_TABLE" has a retention period of 7 days. The table is dropped, and a
new table with the same name "MY_TABLE" has been created. Now if the UNDROP
MY_TABLE command is executed what will happen?

The UNDROP command will overwrite the already existing table with the previously dropped table


The UNDROP command will fail
(Correct)


The UNDROP command will restore the dropped version with a new name
Explanation
If an object with the same name already exists, UNDROP fails. You must rename the existing
object, which then enables you to restore the previous version of the object.
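The workaround can be sketched as follows (the new name is hypothetical):

```sql
-- A table named MY_TABLE already exists, so UNDROP would fail.
-- Rename the current table out of the way first:
alter table my_table rename to my_table_v2;

-- Now the previously dropped version can be restored
undrop table my_table;
```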
Question 56: Incorrect
Snowflake supports which of the following file formats for data loading? Select all that
apply.

CSV
(Correct)


PDF


Parquet
(Correct)


ORC
(Correct)

Explanation
Refer to the Snowflake documentation section on semi-structured data -
https://docs.snowflake.com/en/user-guide/semistructured-concepts.html
Question 57: Incorrect
If multiple DML statements change a row, the stream records all the actions taken on that
row.

FALSE
(Correct)


TRUE
(Incorrect)
Explanation
The stream maintains only the delta of the changes; if multiple DML statements change a row,
the stream records only the latest action taken on that row
Question 58: Correct
A stream allows querying and consuming a set of changes to a table, at the row level,
between two transactional points of time

TRUE
(Correct)


FALSE
Explanation
A stream object records the delta of change data capture (CDC) information for a table (such as a
staging table), including inserts and other data manipulation language (DML) changes. A stream
allows querying and consuming a set of changes to a table, at the row level, between two
transactional points of time.
Question 59: Skipped
What is the full form of ORC?

Optimized Read Columnar


Object Read Class


Optimized Row Columnar
(Correct)


Object Row Columnar
Explanation
Used to store Hive data, the ORC (Optimized Row Columnar) file format was designed for
efficient compression and improved performance for reading, writing, and processing data over
earlier Hive file formats. For more information about ORC, see https://orc.apache.org/.
Question 60: Incorrect
Which of the following currently uses a user-specified warehouse, but will eventually run in
a serverless environment using Snowflake-supplied compute resources?

Stored Procedure


Snowpipe
(Incorrect)


Tasks
(Correct)


Streams
Explanation
Refer Link - https://docs.snowflake.com/en/user-guide/tasks.html
Question 61: Incorrect
Snowflake task executions are associated with the user who created the task, i.e. in the
query history we can see that the task is executed under the username of the user who
created that task

FALSE
(Correct)


TRUE
(Incorrect)

Explanation
Snowflake runs tasks with the privileges of the task owner (i.e. the role that has OWNERSHIP
privilege on the task), but task runs are not associated with a user. Instead, each run is executed
by a system service. Tasks are decoupled from specific users to avoid complications that can
arise when users are dropped, locked due to authentication issues, or have roles removed.
Because task runs are decoupled from a user, the query history for task runs are associated with
the system service. SYSTEM is not a user in the account; it is a behind-the-scenes service. As
such, there are no user credentials for this service, and no individual (from Snowflake or in your
account) can assume its identity. Activity for the system service is limited to your account. The
same encryption protections and other security protocols are built into this service as are
enforced for other operations.
Question 62: Correct
Snowflake minimizes the amount of storage required for historical data by maintaining
only the information required to restore the individual table rows that were updated or
deleted

FALSE


TRUE
(Correct)

Explanation
The fees are calculated for each 24-hour period (i.e. 1 day) from the time the data changed. The
number of days historical data is maintained is based on the table type and the Time Travel
retention period for the table. Also, Snowflake minimizes the amount of storage required for
historical data by maintaining only the information required to restore the individual table rows
that were updated or deleted. As a result, storage usage is calculated as a percentage of the table
that changed. Full copies of tables are only maintained when tables are dropped or truncated.
Question 63: Correct
Fail Safe is configurable by the user

FALSE
(Correct)


TRUE
Explanation
Fail-safe provides a (non-configurable) 7-day period during which historical data is recoverable
by Snowflake. This period starts immediately after the Time Travel retention period ends.
Question 64: Correct
Which parameter is used to check if the stream contains change data for a table or not?

SYSTEM$STREAM_HAS_DATA
(Correct)


SYSTEM$STREAM_HAS_NEW_DATA


SYSTEM$STREAM_HAS_CHANGED_DATA


SYSTEM$STREAM_HAS_MODIFIED_DATA
Explanation
Tasks may optionally use table streams to provide a convenient way to continuously process new
or changed data. A task can transform new or changed rows that a stream surfaces. Each time a
task is scheduled to run, it can verify whether a stream contains change data for a table (using
SYSTEM$STREAM_HAS_DATA) and either consume the change data or skip the current run
if no change data exists.
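The pattern described above can be sketched as a task with a WHEN clause (warehouse, stream, and table names are illustrative). The task is scheduled every 5 minutes but the run is skipped whenever the stream has no change data.

```sql
CREATE OR REPLACE TASK process_changes
  WAREHOUSE = my_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('MY_STREAM')
AS
  INSERT INTO my_target
    SELECT id, name FROM my_stream;   -- consuming the stream advances its offset
```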
Question 65: Correct
An append-only table stream tracks row inserts only. Update and delete operations
(including table truncates) are not recorded

TRUE
(Correct)


FALSE
Explanation
An append-only table stream tracks row inserts only. Update and delete operations (including
table truncates) are not recorded. For example, if 10 rows are inserted into a table and then 5 of
those rows are deleted before the offset for an append-only stream is advanced, the stream
records 10 rows. An append-only stream returns the appended rows only and therefore can be
much more performant than a standard stream for extract, load, transform (ELT) and similar
scenarios that depend exclusively on row inserts. For example, the source table can be truncated
immediately after the rows in an append-only stream are consumed, and the record deletions do
not contribute to the overhead the next time the stream is queried or consumed.
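An append-only stream is created by setting APPEND_ONLY = TRUE (names are illustrative):

```sql
-- Tracks inserts only; updates, deletes, and truncates are not recorded
CREATE OR REPLACE STREAM my_append_stream
  ON TABLE my_table
  APPEND_ONLY = TRUE;
```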
Question 66: Correct
Many logical data models refer to entities, attributes and values. In JSON, and some other
semi-structured types, the word "attribute" is often replaced with what word?

Keys
(Correct)


Values


Rows


Columns


Relationships
Explanation
Refer Snowflake Document section of Semi Structure - https://docs.snowflake.com/en/user-
guide/semistructured-concepts.html
Question 67: Incorrect
When a table's data changes (e.g. a few records are updated and a few are deleted) and the
table is within the Time Travel retention period, which of the following statements is true?

Snowflake maintains only the information pertaining to these changed records (i.e. updated & deleted
ones)
(Correct)


Snowflake takes a full copy of the table every time any change is made to the table
(Incorrect)

Explanation
Snowflake minimizes the amount of storage required for historical data by maintaining only the
information required to restore the individual table rows that were updated or deleted. As a
result, storage usage is calculated as a percentage of the table that changed. Full copies of tables
are only maintained when tables are dropped or truncated.
Question 68: Correct
You can view the amount of storage used by Fail-safe under the Account -> Billing & Usage
section

TRUE
(Correct)

FALSE
Explanation
Refer Link - https://docs.snowflake.net/manuals/user-guide/data-failsafe.html
Question 69: Correct
Which two table types can have a maximum retention period of 1 day?

MATERIALIZED


TRANSIENT
(Correct)


TEMPORARY
(Correct)


PERMANENT
Explanation
To help manage the storage costs associated with Time Travel and Fail-safe, Snowflake provides
two table types, temporary and transient, which do not incur the same fees as standard (i.e.
permanent) tables: Transient tables can have a Time Travel retention period of either 0 or 1 day.
Temporary tables can also have a Time Travel retention period of 0 or 1 day; however, this
retention period ends as soon as the table is dropped or the session in which the table was created
ends. Transient and temporary tables have no Fail-safe period. As a result, the maximum
additional fees incurred for Time Travel and Fail-safe by these types of tables is limited to 1 day
Question 70: Correct
What is the default table type when a CREATE TABLE is executed?

TRANSIENT


PERMANENT
(Correct)


MATERIALIZED


TEMPORARY
Explanation
The default type for tables is permanent. To define a table as temporary or transient, you must
explicitly specify the type during table creation: CREATE [ OR REPLACE ] [ TEMPORARY |
TRANSIENT ] TABLE <name> …
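The three table types can be sketched as follows (table names are illustrative); omitting the keyword creates a permanent table:

```sql
CREATE TABLE t_perm (c1 INT);             -- permanent (default): Time Travel + 7-day Fail-safe
CREATE TEMPORARY TABLE t_temp (c1 INT);   -- session-scoped, no Fail-safe
CREATE TRANSIENT TABLE t_tran (c1 INT);   -- persists across sessions, no Fail-safe
```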
Question 71: Correct
Which of the following roles can use SQL to view the task history within a specified date
range

Accountadmin
(Correct)


Public


Task owner (i.e. role that has the OWNERSHIP privilege on a task)
(Correct)


Any role can view


Any role that has the global MONITOR EXECUTION privilege
(Correct)

Explanation
Refer Link - https://docs.snowflake.com/en/user-guide/tasks-intro.html
Question 72: Correct
Under what circumstances would Fail-Safe NOT be an effective method for data recovery?

To recover data that may have been lost or damaged due to extreme operational failures


As a means for accessing historical data after the Time Travel retention period has ended
(Correct)


To protect data in the event of a system failure or other catastrophic event e.g. a hardware failure or
security breach
Explanation
Fail Safe is something that is managed by Snowflake, and the end user cannot directly query data
from Fail Safe.

If a table's Time Travel period is over and we still need to restore the data within 7 days
of Time Travel ending, then the Snowflake Support team can help us get that data.

However, as per Snowflake "Fail-safe is not provided as a means for accessing historical data
after the Time Travel retention period has ended. It is for use only by Snowflake to recover data
that may have been lost or damaged due to extreme operational failures."

You can refer to this link for the details - https://docs.snowflake.com/en/user-guide/data-failsafe.html
Question 73: Correct
What is the column datatype used for ingesting semi-structured data?

DATETIME


STRING


VARIANT
(Correct)


BINARY


VARCHAR
Explanation
Refer Snowflake Document section of Semi Structure - https://docs.snowflake.com/en/user-
guide/semistructured-concepts.html
Question 74: Correct
Currently, streams cannot track changes in materialized views.

TRUE
(Correct)


FALSE
Question 75: Correct
True or false: A table in Snowflake can contain both a VARIANT column and a TIMESTAMP column.

TRUE
(Correct)


FALSE
Explanation
Refer Snowflake Document section of Semi Structure - https://docs.snowflake.com/en/user-
guide/semistructured-concepts.html
Question 76: Incorrect
A single task in Snowflake can execute multiple SQL statements

FALSE
(Correct)


TRUE
(Incorrect)

Explanation
Currently, a task can execute a single SQL statement, including a call to a stored
procedure. Tasks can be combined with table streams for continuous ELT workflows to process
recently changed table rows. Streams ensure exactly once semantics for new or changed data in a
table. Tasks can also be used independently to generate periodic reports by inserting or merging
rows into a report table or perform other periodic work.
Question 77: Incorrect
Data retention period of database is set to 2 days. Data retention period of schema inside
that database is set to 4 days. Data retention of a table inside the schema is set to 7 days.
Now if the database is dropped, the table that was present in that database can be restored
back within how many days?

2 days
(Correct)


4 days


Since the Database itself is dropped, the underlying objects cannot be restored


7 days
(Incorrect)

Explanation
Currently, when a database is dropped, the data retention period for child schemas or tables, if
explicitly set to be different from the retention of the database, is not honored. The child schemas
or tables are retained for the same period of time as the database. Similarly, when a schema is
dropped, the data retention period for child tables, if explicitly set to be different from the
retention of the schema, is not honored. The child tables are retained for the same period of time
as the schema. To honor the data retention period for these child objects (schemas or tables),
drop them explicitly before you drop the database or schema.
Question 78: Incorrect
Table stream also contains actual table data

TRUE
(Incorrect)


FALSE
(Correct)

Explanation
A stream itself does not contain any table data. A stream only stores the offset for the source
table and returns CDC records by leveraging the versioning history for the source table. When
the first stream for a table is created, a pair of hidden columns are added to the source table and
begin storing change tracking metadata. These columns consume a small amount of storage. The
CDC records returned when querying a stream rely on a combination of the offset stored in the
stream and the change tracking metadata stored in the table.
Question 79: Correct
Which command can be executed to get the version history of table "Test1" in schema
"mytestdb.myschema" within the time travel period in snowflake?

show history of table like 'Test1' in mytestdb.myschema;


show tables version like 'Test1' in mytestdb.myschema;


show version of table like 'Test1' in mytestdb.myschema;


show tables history like 'Test1' in mytestdb.myschema;
(Correct)

Explanation
Refer Link - https://docs.snowflake.com/en/user-guide/data-time-travel.html#data-retention-
period
Question 80: Correct
A stream object records data manipulation language (DML) changes made to tables,
including inserts and updates but cannot record deletes made to tables

FALSE
(Correct)


TRUE
Explanation
A stream object records data manipulation language (DML) changes made to tables, including
inserts, updates, and deletes, as well as metadata about each change, so that actions can be taken
using the changed data. This process is referred to as change data capture (CDC). An individual
table stream tracks the changes made to rows in a source table. A table stream (also referred to as
simply a “stream”) makes a “change table” available of what changed, at the row level, between
two transactional points of time in a table. This allows querying and consuming a sequence of
change records in a transactional fashion.
Question 81: Correct
What is the maximum number of child tasks, a given task can have?

10


Unlimited

100
(Correct)


1000


10000
Explanation
A simple tree of tasks is limited to a maximum of 1000 tasks total (including the root task) in a
resumed state. An individual task in the tree is limited to a single predecessor task; however, a
task can have a maximum of 100 child tasks (i.e. other tasks that identify the task as a
predecessor).
Question 82: Skipped
The CDC records returned when querying a stream rely on a combination of -

Offset stored in Table


Offset Stored in Stream
(Correct)


Change Tracking Metadata Stored in Stream


The change tracking metadata stored in the table
(Correct)

Explanation
Note that a stream itself does not contain any table data. A stream only stores the offset for the
source table and returns CDC records by leveraging the versioning history for the source table.
When the first stream for a table is created, a pair of hidden columns are added to the source
table and begin storing change tracking metadata. These columns consume a small amount of
storage. The CDC records returned when querying a stream rely on a combination of the offset
stored in the stream and the change tracking metadata stored in the table.
Question 83: Correct
If you have a table with a 10-day retention period and increase the period to 20 days, then
the data that was moved to Fail-safe on the 11th day will be moved back to Time Travel and
will have 9 more days left before its Time Travel retention ends

TRUE


FALSE
(Correct)
Explanation
Data that has moved to Fail-safe cannot be moved back into the Time Travel retention period.
Question 84: Correct
True or false: You can point Snowflake at any S3 bucket to directly query the files in that
bucket as long as the files are in Parquet or ORC format.

FALSE
(Correct)


TRUE
Explanation
Refer Snowflake Document section of Semi Structure - https://docs.snowflake.com/en/user-
guide/semistructured-concepts.html
Question 85: Correct
If you change the data retention period for a table, the new retention period impacts all
data that is active, as well as any data currently in Time Travel

TRUE
(Correct)


FALSE
Explanation
Refer Link - https://docs.snowflake.com/en/user-guide/data-time-travel.html
Question 86: Correct
Tasks cannot be chained together for successive execution to support more complex
periodic processing

TRUE


FALSE
(Correct)

Explanation
A task object defines a recurring schedule for executing a SQL statement, including statements
that call stored procedures. Tasks can be chained together for successive execution to support
more complex periodic processing.
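Chaining is done with the AFTER clause; only the root task carries a schedule (warehouse and object names below are illustrative). Note that tasks are created suspended and must be resumed (children before the root) before they run.

```sql
CREATE TASK root_task
  WAREHOUSE = my_wh
  SCHEDULE = '60 MINUTE'
AS
  INSERT INTO stage_table SELECT * FROM raw_table;

CREATE TASK child_task
  WAREHOUSE = my_wh
  AFTER root_task                -- runs after root_task completes
AS
  INSERT INTO final_table SELECT * FROM stage_table;

ALTER TASK child_task RESUME;    -- resume children first
ALTER TASK root_task RESUME;
```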
Question 87: Correct
A table named "MY_TABLE" has a retention period of 7 days. The table is dropped, and a
new table with the same name "MY_TABLE" has been created. The new table that is
created is also dropped. Now retrieving the original version (i.e. the 1st version) of the table
is not possible. This is True or False?

TRUE

FALSE
(Correct)

Explanation
If an object has been dropped more than once, each version of the object is included as a separate
row in the output, and hence any previous version of the object can be restored till the time it is
in the retention period window
Question 88: Correct
Check all true statements about Fail-safe:

Only a Snowflake employee can recover data from fail-safe storage
(Correct)


Fail-safe is a reliable way to create Dev/Test/QA and other environments


The data stored as part of fail-safe is part of storage costs charged to customers
(Correct)


Fail-safe is not available for tables that have Time Travel.
Question 89: Correct
Snowflake Tasks can be event-based as well as time-based?

TRUE


FALSE
(Correct)

Explanation
There is no event source that can trigger a task; instead, a task runs on a schedule, which can be
defined when creating a task (using CREATE TASK) or later (using ALTER TASK). Snowflake
ensures only one instance of a task with a schedule (i.e. a standalone task or the root task in a tree
of tasks) is executed at a given time. If a task is still running when the next scheduled execution
time occurs, then that scheduled time is skipped.
Question 90: Correct
Which two SQL Extension clause are provided by Snowflake to support time travel?

BEFORE
(Correct)


AT
(Correct)

DURING


PRIOR
Explanation
To support Time Travel, the following SQL extensions have been implemented: AT | BEFORE
clause which can be specified in SELECT statements and CREATE … CLONE commands
(immediately after the object name). The clause uses one of the following parameters to pinpoint
the exact historical data you wish to access: TIMESTAMP OFFSET (time difference in seconds
from the present time) STATEMENT (identifier for statement, e.g. query ID) UNDROP
command for tables, schemas, and databases
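The three parameters can be sketched as follows (the table name and query ID are illustrative placeholders):

```sql
-- TIMESTAMP: data as of a specific point in time
SELECT * FROM my_table AT(TIMESTAMP => '2021-05-26 10:00:00'::timestamp_tz);

-- OFFSET: time difference in seconds from the present (here, 5 minutes ago)
SELECT * FROM my_table AT(OFFSET => -60*5);

-- STATEMENT: state just before a given query ID executed
SELECT * FROM my_table BEFORE(STATEMENT => '<query_id>');
```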
Question 91: Incorrect
select * from my_table at(offset => -60*5); This query will select historical data from
"my_table" as of -

5 hours ago
(Incorrect)


5 seconds ago


5 minutes ago
(Correct)


5 days ago
Explanation
Refer link - https://docs.snowflake.com/en/user-guide/data-time-travel.html
Question 92: Correct
Storage fees are incurred for maintaining historical data during both the Time Travel and
Fail-safe periods

FALSE


TRUE
(Correct)

Question 93: Correct


The DATA_RETENTION_TIME_IN_DAYS parameter is defined as 1 day at the account level, and
the same parameter is set as 7 days at the database level. Now a table is created inside the
given database without specifying any explicit DATA_RETENTION_TIME_IN_DAYS
parameter. What would be the data retention period for the table in this case?

7 days
(Correct)


0 day


1 day
Explanation
If a retention period is specified for a database or schema, the period is inherited by default for
all objects created in the database/schema.
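The inheritance described above can be sketched as follows (database and table names are illustrative):

```sql
-- Account-level default is 1 day; override it at the database level
CREATE DATABASE mydb DATA_RETENTION_TIME_IN_DAYS = 7;

-- No explicit retention: inherits 7 days from the database
CREATE TABLE mydb.public.t1 (c1 INT);

-- Verify the effective setting and where it comes from
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN TABLE mydb.public.t1;
```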
Question 94: Correct
Which of the following parameters are provided by Snowflake to be used with AT &
BEFORE clause to pinpoint the exact historical data you wish to access

TIMESTAMP


All options are correct
(Correct)


OFFSET (time difference in seconds from the present time)


STATEMENT (identifier for statement, e.g. query ID)
Explanation
Refer Link - https://docs.snowflake.com/en/user-guide/data-time-travel.html
Question 95: Correct
Extended Time Travel of 90 days is supported starting with which Snowflake edition?

Virtual Private Snowflake


Premier


Enterprise
(Correct)


Business Critical


Standard
Explanation
Standard Time Travel is 1 day. Extended Time Travel (up to 90 days) requires Snowflake
Enterprise Edition.
Question 96: Correct
Fail Safe and time travel are one and the same thing

FALSE
(Correct)


TRUE
Explanation
Separate and distinct from Time Travel, Fail-safe ensures historical data is protected in the event
of a system failure or other catastrophic event, e.g. a hardware failure or security breach.
Question 97: Skipped
Fail Safe is not supported in Standard Snowflake edition

FALSE
(Correct)


TRUE
Explanation
Fail-safe is a default feature available with all Snowflake editions, without any extra license cost
Question 98: Correct
To help manage the storage costs associated with Time Travel and Fail-safe, Snowflake
provides which two table types

TEMPORARY
(Correct)


PERMANENT


MATERIALIZED


TRANSIENT
(Correct)

Explanation
To help manage the storage costs associated with Time Travel and Fail-safe, Snowflake provides
two table types, temporary and transient, which do not incur the same fees as standard (i.e.
permanent) tables: Transient tables can have a Time Travel retention period of either 0 or 1 day.
Temporary tables can also have a Time Travel retention period of 0 or 1 day; however, this
retention period ends as soon as the table is dropped or the session in which the table was created
ends. Transient and temporary tables have no Fail-safe period. As a result, the maximum
additional fees incurred for Time Travel and Fail-safe by these types of tables is limited to 1 day
Question 99: Correct
In Snowflake Enterprise edition, the maximum retention period for a transient or
temporary table is?

Transient & Temporary table cannot be configured for time travel


7 days


1 day
(Correct)


90 days
Explanation
For Snowflake Enterprise Edition (and higher): For transient databases, schemas, and tables, the
retention period can be set to 0 (or unset back to the default of 1 day). The same is also true for
temporary tables. For permanent databases, schemas, and tables, the retention period can be set
to any value from 0 up to 90 days
Question 100: Correct
When a database or schema that contains a source table and stream is cloned, any
unconsumed records in the stream (in the clone) are inaccessible

TRUE
(Correct)


FALSE
Explanation
Currently, when a database or schema that contains a source table and stream is cloned, any
unconsumed records in the stream (in the clone) are inaccessible. This behavior is consistent
with Time Travel for tables. If a table is cloned, historical data for the table clone begins at the
time/point when the clone was created
Question 101: Correct
What is the data type provided by Snowflake to store semi structure data?

Varchar


Text


Semi-structure

Variant
(Correct)

Explanation
Snowflake loads semi-structured data into a single VARIANT column
Question 102: Correct
The cron expression in a task definition supports specifying a time zone

FALSE


TRUE
(Correct)

Explanation
The cron expression in a task definition supports specifying a time zone. A scheduled task runs
according to the specified cron expression in the local time for a given time zone. Special care
should be taken with regard to scheduling tasks for time zones that recognize daylight saving
time. Tasks scheduled during specific times on days when the transition from standard time to
daylight saving time (or the reverse) occurs can have unexpected behaviors.
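A time zone is appended to the cron expression inside the SCHEDULE string (warehouse, task, and table names are illustrative):

```sql
-- Runs at 02:00 daily in America/Los_Angeles local time
CREATE OR REPLACE TASK nightly_cleanup
  WAREHOUSE = my_wh
  SCHEDULE = 'USING CRON 0 2 * * * America/Los_Angeles'
AS
  DELETE FROM my_table WHERE load_date < CURRENT_DATE - 30;
```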
