Oracle Autonomous Data Warehouse:
These Are the Droids You’re Looking For
             November 1, 2018
                 Jim Czuprynski
                    President
           Zero Defect Computing, Inc.
                @JimTheWhyGuy
Jim Czuprynski: Credentials
• 35+ years of database-centric IT experience
• Oracle DBA since 2001
• Oracle 9i, 10g, 11g, 12c OCP
• Oracle ACE Director
• 100+ articles on databasejournal.com and IOUG SELECT
• Co-author of 4 Oracle books
• Oracle-centric blog (Generally, It Depends)
• Regular speaker at Oracle OpenWorld, IOUG
  COLLABORATE, Hotsos Symposium, and Regional OUGs
        E-mail me at jczuprynski@zerodefectcomputing.com
        Follow me on Twitter (@JimTheWhyGuy)
        Connect with me on LinkedIn (Jim Czuprynski)
Our Agenda
• Autonomous Data Warehouse (ADW): The Basics
• Creating, Controlling, and Monitoring an ADW Instance
• Monitoring ADW Performance in Multiple Dimensions
• What Happens If My ADW Instance Becomes [Over]whelmed?
• Loading and Migrating Data Into ADW
• Conclusions, Use Cases, and What It Means For Oracle DBAs
500+ Technical Experts Helping Peers Globally
3 Membership Tiers:
• Oracle ACE Director
• Oracle ACE
• Oracle ACE Associate
Learn more: bit.ly/OracleACEProgram
Connect: oracle-ace_ww@oracle.com | Facebook.com/oracleaces | @oracleace
Nominate yourself or someone you know: acenomination.oracle.com
COLLABORATE is the largest user-led, user-focused event for the Oracle community.
 Featuring 1,000+ education and networking opportunities. Discover solutions for
                  wherever you are in your technology journey.
                           COLLABORATE 19
                               April 7-11, 2019
                          Gonzalez Convention Center
                              San Antonio, Texas
                  Registration is now open!
             http://collaborate.ioug.org/register
            Call for Presentations Now Open
    Submit your abstract by November 2 for the opportunity to share your expertise and
                 receive complimentary registration to COLLABORATE 19.
Submission Tracks
•   Data Science & Analytics
•   Database
•   DevOps and Deploy
•   Personal Growth & Leadership
                                          collaborate.ioug.org/c19callforspeakers
Is Autonomosity Threatening Your Job? Well … You Are Not Alone.
If you are nervous about whether Autonomous Database is a threat to your Oracle DBA job …
… then you are not the only one worried about automation’s impact.
Moving to ADW: A Suggested Business Process Flow
• Assess: Is my application workload really ready to move to ADW?
• Plan: What migration strategy is most appropriate? How long of an outage can my production application afford?
• Migrate: Transfer data using the chosen migration strategy, and keep it synchronized.
• Monitor: Watch for any unexpected service outage, performance degradation, or user complaints.
• Tweak: Should any application workloads shift to a different ADW instance service?
As an evolving Oracle Data Enterprise Architect, it’s crucial to recognize and embrace the main thrust of ADW:
No More Knobs!
ADW: Creating a New Instance (1)
1 Access your cloud account …
2 … and get logged in
3 Specify your Cloud Dashboard …
4 … and choose what kind of instance to create
5 Build a new compartment for your ADWC instance …
6 … and check out the other compartments available
ADW: Creating a New Instance (2)
1 Specify a compartment and administrator credentials …
2 … and ADW instance creation begins!
3 ADW instance now shows up in chosen compartment …
4 … and your first ADW instance is now ready to access
ADW: Creating a New Instance (3)
1 Connect to the new instance using the ADMIN account …
2 Here’s your first look at the ADW Service Console!
3 Request new credentials for access …
4 … supply a robust password …
5 … and save the new credentials in the TNSNAMES home
ADW: Some Unexpected Surprises! (Well, At Least At First.)
1 Creating a non-unique, “unconstrained” index:

   SQL> CREATE INDEX tpcds.store_sales_cdemo_idx
            ON tpcds.store_sales (ss_cdemo_sk);
            ON tpcds.store_sales (ss_cdemo_sk)
               *
   ERROR at line 2:
   ORA-01031: insufficient privileges

   Wait … what? Hmmm … that’s interesting!

2 Building a partitioned table:

   SQL> CREATE TABLE tpcds.customer_credit_ratings (
         ccr_customer_number NUMBER(7)
        ,ccr_last_reported DATE
        ,ccr_credit_rating NUMBER(5)
        ,ccr_missed_payments NUMBER(3)
        ,ccr_credit_maximum NUMBER(7)
        )
        STORAGE (INITIAL 8M NEXT 4M)
        PARTITION BY RANGE (ccr_last_reported)
        INTERVAL(NUMTOYMINTERVAL(3, 'MONTH'))
        (PARTITION ccr_oldest
            VALUES LESS THAN (TO_DATE('1998-04-01', 'yyyy-mm-dd')));
         ,ccr_last_reported DATE
          *
   ERROR at line 3:
   ORA-00439: feature not enabled: Partitioning

   But this is a data warehouse!
ADW: Some Unexpected Surprises - Satisfied! (2)
1 Creating a non-unique, “unconstrained” index:

   SQL> CREATE INDEX tpcds.store_sales_cdemo_idx
            ON tpcds.store_sales (ss_cdemo_sk);

   Index TPCDS.STORE_SALES_CDEMO_IDX created.

   Good! Secondary indexes are now permitted.

2 Building a partitioned table:

   SQL> CREATE TABLE tpcds.customer_credit_ratings (
         ccr_customer_number NUMBER(7)
        ,ccr_last_reported DATE
        ,ccr_credit_rating NUMBER(5)
        ,ccr_missed_payments NUMBER(3)
        ,ccr_credit_maximum NUMBER(7)
        )
        STORAGE (INITIAL 8M NEXT 4M)
        PARTITION BY RANGE (ccr_last_reported)
        INTERVAL(NUMTOYMINTERVAL(3, 'MONTH'))
        (PARTITION ccr_oldest
            VALUES LESS THAN (TO_DATE('1998-04-01', 'yyyy-mm-dd')));

   Table TPCDS.CUSTOMER_CREDIT_RATINGS created.

   Aha! So this works now! After all, this is a data warehouse!
ADW: Monitoring Instance and Statement Performance
1 How is the ADW instance performing right now, and are there any evident “pushbacks” against a running workload?
2 Performance can also be viewed for a particular, narrower time period
3 Viewing the performance of running as well as completed individual statements
4 Viewing an individual SQL statement’s performance …
5 … and the statement’s execution plan
6 Terminating a “wayward” session
ADW: Automatic Monitoring of SQL Statement Performance
SQL statements are automatically monitored for performance when they exceed 5 seconds of CPU or I/O activity …
1 Individual aspects of SQL statement activity are easy to review …
2 … and the actual operations consuming the most resources are easy to identify while the statement’s still running …
3 … as well as specific CPU, memory, and I/O metrics
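(Not on the original slide: the Service Console surfaces this automatically, but as a rough cross-check a minimal sketch of a query you could run yourself is shown below. It simply ranks recent statements by elapsed time using the standard V$SQL view; the column choices and the 5-row limit are illustrative assumptions, not part of the console.)

   -- Illustrative only: top 5 recently active statements by elapsed time,
   -- roughly mirroring what the console's SQL monitoring page highlights
   SELECT sql_id
         ,ROUND(elapsed_time / 1e6, 1)           AS elapsed_secs
         ,ROUND(cpu_time     / 1e6, 1)           AS cpu_secs
         ,buffer_gets
         ,ROUND(physical_read_bytes / 1048576)   AS phys_read_mb
         ,sql_text
     FROM v$sql
    ORDER BY elapsed_time DESC
    FETCH FIRST 5 ROWS ONLY;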
ADW: Automatically Provided Database Services
Once an 18c ADW database is created, three database services are also automatically provisioned:
• Format: [adw instance name]_{HIGH | MEDIUM | LOW}
• Each service provides different levels of application resource provisioning
• Leverages Database Resource Manager (DBRM) and I/O Resource Manager (IORM)
Service Name    Resources Per Statement   Concurrent SQL Statements            Implications of These Limitations
TPCDS_HIGH      High                      3                                    Any one statement can use all CPUs and all I/O resources
TPCDS_MEDIUM    Medium                    Multiple of OCPUs (linear scaling)   Any one statement can use all CPUs and multiple I/O resources
TPCDS_LOW       Low                       2X OCPUs                             Any one statement can use a single CPU and multiple I/O resources
        See the detailed documentation for complete information on how these database services work.
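(Not on the original slide: a minimal sketch of what the pre-generated TNS alias for one of these services looks like inside the downloaded client credentials wallet. The host, service name prefix, and certificate DN below are placeholders, not values from this presentation.)

   tpcds_high =
     (description=
       (address=(protocol=tcps)(port=1522)(host=adb.us-ashburn-1.oraclecloud.com))
       (connect_data=(service_name=<tenancy_prefix>_tpcds_high.adwc.oraclecloud.com))
       (security=(ssl_server_cert_dn="CN=<regional certificate DN>")))

Applications simply choose the alias that matches the workload (for example, tpcds_high for heavy ad hoc analytics), and DBRM/IORM enforce the corresponding resource shares.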
ADW: But What If I Run Out of Resources?
Workload Exhaustion Demonstration using TPC-DS Top 15 “Ugly” Queries
Workload distribution: 3 via HIGH, 6 via MEDIUM, 8 via LOW
[Chart: OCPU allocation during the demonstration: start-out at 4 OCPUs, ramp-up to 6 OCPUs, then ramp-down to 2 OCPUs]
ADW: Database Feature Limitations
Several database features normally available for an OCI-resident Oracle database are restricted for ADW instances:

Object / Permission / Feature        Restrictions
Tablespaces                          Cannot be added, removed, or modified
Parallelism                          Enabled by default, based on the number of OCPUs and the database service chosen for the application connection
Compression                          HCC compression is enabled by default; the compression method is changeable
Result Caching                       Enabled by default for all statements; cannot be changed
Node File System and OS              No direct access permitted
Database Links to Other Databases    Prohibited to preserve security features
PL/SQL Calls Using DB Links          Likewise, prohibited
Parallel DML                         Enabled by default, but can be disabled at session level: ALTER SESSION DISABLE PARALLEL DML;
               See Restrictions for Database Features for a complete list of these ADW limitations.
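(Not on the original slide: a minimal sketch of the table- and session-level adjustments the table above alludes to. The table name is reused from the earlier TPC-DS examples purely for illustration, and the chosen HCC method is an arbitrary example.)

   -- Illustrative only: move an existing table to a different HCC compression method
   ALTER TABLE tpcds.store_sales MOVE COMPRESS FOR QUERY HIGH;

   -- Parallel DML is on by default; it can be toggled per session
   ALTER SESSION DISABLE PARALLEL DML;
   ALTER SESSION ENABLE PARALLEL DML;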
ADW: Unavailable SQL Commands
       The following SQL commands cannot be executed against an ADW instance:
SQL Command                        Reason for Unavailability
ADMINISTER KEY MANAGEMENT          PDB-level security tightly enforced
CREATE / ALTER / DROP TABLESPACE   Tablespaces are strictly controlled
ALTER PROFILE                      Resource limits and security restraints tightly enforced
CREATE DATABASE LINK               Self-containment and security
  See Restrictions for SQL Commands for complete information on these unavailable SQL commands.
ADW: Permitted Changes to Initialization Parameters
            Only the following database initialization parameters may be modified:
                                Initialization Parameters That Can Be Modified
  APPROX_FOR_AGGREGATION                          OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES*
  APPROX_FOR_COUNT_DISTINCT                       OPTIMIZER_IGNORE_HINTS
  APPROX_FOR_PERCENTILE                           OPTIMIZER_IGNORE_PARALLEL_HINTS
  AWR_PDB_AUTOFLUSH_ENABLED                       PLSCOPE_SETTINGS
                                                  PLSQL_DEBUG
                                                  PLSQL_WARNINGS
                                                  PLSQL_OPTIMIZE_LEVEL
                                                  PLSQL_CCFLAGS
  Most NLS parameters                             TIME_ZONE*
                                 * Only via ALTER SESSION
    See Restrictions to Database Initialization Parameters for more information on permissible changes.
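(Not on the original slide: a minimal sketch of modifying a few of the parameters listed above at the session level. The values shown are arbitrary illustrations, not recommendations.)

   -- Approximate query processing (listed above as modifiable)
   ALTER SESSION SET approx_for_count_distinct = TRUE;
   ALTER SESSION SET approx_for_aggregation    = TRUE;

   -- Optimizer hint handling (listed above as modifiable)
   ALTER SESSION SET optimizer_ignore_hints = FALSE;

   -- Starred parameters may only be changed via ALTER SESSION
   ALTER SESSION SET time_zone = 'America/Chicago';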
ADW: Unavailable Options and Packs
  The following database options and packs are not enabled for ADW instances:
Non-Usable Database Options and Packs
• Oracle Application Express
• Oracle Tuning Pack
• Oracle Real Application Testing
• Oracle Database Vault
• Oracle Data Masking and Subsetting Pack
• Oracle Database Lifecycle Management Pack
• Oracle Cloud Management Pack for Oracle Database
• Oracle OLAP
• Oracle Workspace Manager
• Oracle Spatial and Graph
• Oracle R capabilities of Oracle Advanced Analytics
• Oracle Industry Data Models
• Oracle Text
• Oracle Multimedia
• Java in DB
• Oracle XML DB
• Context
  See Restrictions for Database Features for complete information on unusable database options and packs.
Manual vs. Autonomous Database: Comparison of Capabilities
Am I Empowered To …                                              18c               ADW
Add my own schemas?                                              Yes               Yes
Connect applications directly via TNSNAMES?                      Yes               Yes
Elastically upsize or downsize CPUs, memory, and storage?        Yes               Yes
Create my own CDBs and PDBs?                                     Yes               No
Clone a PDB to the same or another CDB?                          Yes               No
Build my own tablespaces?                                        Yes               No
Modify memory pool sizes (e.g. SGA_SIZE)?                        Yes               No
Modify security settings (e.g. keystores)?                       Yes               No
Connect directly as SYS?                                         Yes               No
Build a PDB using RMAN backups?                                  Yes               No
Connect with Enterprise Manager Cloud Control for monitoring?    Via Proxy Agent   No
ADW: Migrating and Loading Data
Am I Empowered To …                               18c     ADW
Load data with SQL*Loader or SQLDeveloper?        Yes     Yes, but source files reside in object storage
Load data with Data Pump Import?                  Yes     Yes, but export dump set resides in object storage
Export data with Data Pump Export?                Yes     Yes, but export dump set resides in object storage
Synchronize data with GoldenGate*?                Yes     Yes, within certain limits
           *See this documentation for complete information on GoldenGate capabilities
                                  for Autonomous Databases.
ADW: Loading Data Via DBMS_CLOUD.COPY_DATA (1)
1 Creating credentials for accessing the file system:

   SQL> BEGIN
           DBMS_CLOUD.CREATE_CREDENTIAL(
              credential_name => 'extb_tpcds'
             ,username        => 'IOUGCloudTrial@ioug.org'
             ,password        => '(;n<T1#-MpY>4u>_yilK'
           );
        END;
        /

2 Creating the new table:

   SQL> CREATE TABLE tpcds.customer_credit_ratings (
         ccr_customer_number NUMBER(7)
        ,ccr_last_reported DATE
        ,ccr_credit_rating NUMBER(5)
        ,ccr_missed_payments NUMBER(3)
        ,ccr_credit_maximum NUMBER(7)
        )
        STORAGE (INITIAL 8M NEXT 4M)
        PARTITION BY RANGE (ccr_last_reported)
        INTERVAL(NUMTOYMINTERVAL(3, 'MONTH'))
        (PARTITION ccr_oldest
            VALUES LESS THAN (TO_DATE('1998-04-01', 'yyyy-mm-dd')));

   Table created.

3 Loading data with DBMS_CLOUD.COPY_DATA:

   SQL> BEGIN
           DBMS_CLOUD.COPY_DATA(
              table_name      => 'CUSTOMER_CREDIT_RATINGS'
             ,credential_name => 'EXTB_TPCDS'
             ,file_uri_list   =>
                'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudtrial/ADWExternalTables/CreditScoring_Current.dat'
             ,schema_name     => 'TPCDS'
             ,field_list      => 'ccr_customer_number CHAR(08),ccr_last_reported CHAR(10)
                                 ,ccr_credit_rating CHAR(05),ccr_missed_payments CHAR(03)
                                 ,ccr_credit_maximum CHAR(07)'
             ,format          => '{"delimiter" : "|" , "dateformat" : "YYYY-MM-DD"}');
        EXCEPTION
           WHEN OTHERS THEN
              DBMS_OUTPUT.PUT_LINE('ERROR:' || SQLCODE || ' ' || SQLERRM);
        END;
        /

   PL/SQL procedure successfully completed.
ADW: Loading Data Via DBMS_CLOUD.COPY_DATA (2)

1 Show status of running (and completed) load operations:

   SET LINESIZE 132
   SET PAGESIZE 20000
   COL owner_name     FORMAT A08 HEADING "Owner"
   COL table_name     FORMAT A24 HEADING "Table|Loaded"
   COL type           FORMAT A08 HEADING "Operation"
   COL status         FORMAT A10 HEADING "Status"
   COL start_dtm      FORMAT A19 HEADING "Started At"
   COL update_dtm     FORMAT A19 HEADING "Finished At"
   COL logfile_table  FORMAT A12 HEADING "LOGFILE|Table"
   COL badfile_table  FORMAT A12 HEADING "BADFILE|Table"
   SELECT
         owner_name
        ,table_name
        ,type
        ,status
        ,TO_CHAR(start_time,'YYYY-MM-DD HH24:MI:SS') start_dtm
        ,TO_CHAR(update_time,'YYYY-MM-DD HH24:MI:SS') update_dtm
        ,logfile_table
        ,badfile_table
     FROM user_load_operations
    WHERE type = 'COPY'
    ORDER BY start_time DESC;

2 Monitoring a running load task … even when it fails to complete successfully!

                                                                                                 LOGFILE      BADFILE
   Owner    Table Loaded             Operation Status     Started At          Finished At        Table        Table
   -------- ------------------------ --------- ---------- ------------------- ------------------- ------------ ------------
   TPCDS    CUSTOMER_CREDIT_RATINGS  COPY      COMPLETED  2018-10-08 11:00:59 2018-10-08 11:03:12 COPY$38_LOG  COPY$38_BAD
   TPCDS    CUSTOMER_CREDIT_RATINGS  COPY      FAILED     2018-10-08 10:51:09 2018-10-08 10:53:16 COPY$37_LOG  COPY$37_BAD
   TPCDS    CUSTOMER_CREDIT_RATINGS  COPY      FAILED     2018-10-08 10:50:49 2018-10-08 10:50:49
   TPCDS    CUSTOMER_CREDIT_RATINGS  COPY      FAILED     2018-10-08 10:50:03 2018-10-08 10:50:03
   TPCDS    CUSTOMER_CREDIT_RATINGS  COPY      FAILED     2018-10-08 10:34:33 2018-10-08 10:35:56 COPY$34_LOG  COPY$34_BAD

3 Show the resulting LOG file:

   SQL> SELECT * FROM copy$38_log;

   LOG file opened at 10/08/18 16:03:01
   Total Number of Files=1
   Data File: https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudt…
   Log File: COPY$38_144722.log
   LOG file opened at 10/08/18 16:03:01
   Bad File: COPY$38_355882.bad
   Field Definitions for table COPY$WQPDD1Q3X2892USR6RY7
     Record format DELIMITED BY
     Data in file has same endianness as the platform
     Rows with all null fields are accepted
   Fields in Data Source:
     CCR_CUSTOMER_NUMBER   CHAR (8)
       Terminated by "|"
     CCR_LAST_REPORTED     CHAR (10)
       Date datatype DATE, date mask YYYY-MM-DD
       Terminated by "|"
     CCR_CREDIT_RATING     CHAR (5)
       Terminated by "|"
     CCR_MISSED_PAYMENTS   CHAR (3)
       Terminated by "|"
     CCR_CREDIT_MAXIMUM    CHAR (7)
       Terminated by "|"
   Date Cache Statistics for table COPY$WQPDD1Q3X2892USR6RY7
     Date conversion cache disabled due to overflow (default size: 1000)
ADW: Migrating Data Via DataPump Export and Import
1 Export data from source database:

   $> expdp vevo/vevo@pdbvevo parfile=ADW_VEVO.expdp

   Contents of parameter file ADW_VEVO.expdp:

   #####
   # File:    ADW_VEVO.expdp
   # Purpose: DataPump Export parameter file for VEVO schema
   #   1.) Exclude all:
   #       - Clusters
   #       - Database Links
   #       - Indexes and Index Types
   #       - Materialized Views, Logs, and Zone Maps
   #   2.) For partitioned tables, unload all table data in a single operation (rather
   #       than unload each table partition as a separate operation) for faster loading
   #   3.) Use 4 degrees of parallelism and write to multiple dump files
   #####
   EXCLUDE=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link
   DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA
   PARALLEL=4
   SCHEMAS=vevo
   DUMPFILE=export%u.dmp

   Export: Release 18.0.0.0.0 - Production on Sat Sep 1 19:12:37 2018
   Version 18.1.0.0.0
   Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.
   Starting "VEVO"."SYS_EXPORT_SCHEMA_01":  vevo/********@pdbvevo parfile=ADW_VEVO.expdp
   Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
   Processing object type SCHEMA_EXPORT/TABLE/TABLE
   Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
   Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
   Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
   . . .
   . . exported "VEVO"."T_STAFF"             13.86 KB      26 rows
   . . exported "VEVO"."T_CANVASSING"        27.77 MB  539821 rows
   . . exported "VEVO"."T_CAMPAIGN_ORG"      8.031 KB      25 rows
   . . exported "VEVO"."T_VOTING_RESULTS"    80.90 MB 1887761 rows
   . . exported "VEVO"."T_VOTERS"            84.84 MB  180000 rows
   ORA-39173: Encrypted data has been stored unencrypted in dump file set.
   Master table "VEVO"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
   ******************************************************************************
   Dump file set for VEVO.SYS_EXPORT_SCHEMA_01 is:
     /u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export01.dmp
     /u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export02.dmp
     /u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export03.dmp
     /u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export04.dmp
   Job "VEVO"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Sep 1 19:13:17 2018 elapsed 0 00:00:39

2 Transfer the export dump set to the Object Storage container.

3 Set up credentials for access:

   SQL> CONNECT admin/"N0M0reKn0bs#"@tpcds_high

   SQL> BEGIN
           DBMS_CLOUD.CREATE_CREDENTIAL(
              credential_name => 'DPI_VEVO'
             ,username        => 'jczuprynski@zerodefectcomputing.com'
             ,password        => 'N0M0reKn0bs#'
           );
        END;
        /

   SQL> ALTER DATABASE PROPERTY
          SET default_credential = 'ADMIN.DPI_VEVO';

4 Add the new schema into the ADW instance:

   CREATE USER vevo
     IDENTIFIED BY N0M0reKn0bs#
     TEMPORARY TABLESPACE temp
     PROFILE DEFAULT;
   GRANT RESOURCE TO vevo;
   GRANT CREATE PROCEDURE TO vevo;
   GRANT CREATE PUBLIC SYNONYM TO vevo;
   GRANT CREATE SEQUENCE TO vevo;
   GRANT CREATE SESSION TO vevo;
   GRANT CREATE SYNONYM TO vevo;
   GRANT CREATE TABLE TO vevo;
   GRANT CREATE VIEW TO vevo;
   GRANT DROP PUBLIC SYNONYM TO vevo;
   GRANT EXECUTE ANY PROCEDURE TO vevo;
   GRANT READ,WRITE ON DIRECTORY data_pump_dir TO vevo;

5 Import data into the ADW instance:

   $> ./impdp admin/IOUG1sAwesome@TPCDS_HIGH \
      DIRECTORY=DATA_PUMP_DIR \
      VERSION=18.0.0 \
      REMAP_SCHEMA=vevo:vevo \
      DUMPFILE=default_credential:https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudtrial/DP_VEVO/export%U.dmp \
      PARALLEL=4 \
      PARTITION_OPTIONS=MERGE \
      TRANSFORM=SEGMENT_ATTRIBUTES:N \
      TRANSFORM=DWCS_CVT_IOTS:Y \
      TRANSFORM=CONSTRAINT_USE_DEFAULT_INDEX:Y \
      EXCLUDE=INDEX,CLUSTER,INDEXTYPE,MATERIALIZED_VIEW,MATERIALIZED_VIEW_LOG,MATERIALIZED_ZONEMAP,DB_LINK

   Import: Release 12.2.0.1.0 - Production on Sun Sep 2 21:51:32 2018
   Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
   Connected to: Oracle Database 18c EE Extreme Perf Release 18.0.0.0.0 - Production
   Master table "ADMIN"."SYS_IMPORT_FULL_02" successfully loaded/unloaded
   Starting "ADMIN"."SYS_IMPORT_FULL_02":  admin/********@TPCDS_HIGH DIRECTORY=DATA_PUMP_DIR VERSION=18.0.0 …
   Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
   Processing object type SCHEMA_EXPORT/TABLE/TABLE
   Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
   . . imported "VEVO"."T_STAFF"             13.86 KB      26 rows
   . . imported "VEVO"."T_CANVASSING"        27.77 MB  539821 rows
   . . imported "VEVO"."T_CAMPAIGN_ORG"      8.031 KB      25 rows
   . . imported "VEVO"."T_VOTING_RESULTS"    80.90 MB 1887761 rows
   . . imported "VEVO"."T_VOTERS"            84.84 MB  180000 rows
   Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
   Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
   Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
   Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
   Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
   Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
   Job "ADMIN"."SYS_IMPORT_FULL_02" successfully completed at Mon Sep 3 02:52:42 2018 elapsed 0 00:01:06
ADW: Advantages of “No More Knobs“
Remember, ADW is all about no more knobs … and that’s really advantageous!
• Service instance can be stopped and restarted as necessary
   • Useful for conserving Cloud credits
• Easy to connect to
   • Only a few entries in the SQLNET.ORA and TNSNAMES.ORA files are required (see the sketch after this list)
• RMAN backups are taken automatically on a regular nightly schedule
• No instance tuning required
   • Memory pool sizes are already locked in
   • Parallelism is automatically derived from the number of OCPUs and the service name selected for connection
• Only appropriately licensed options are included
   • No worries about accidentally incurring additional licensing fees
• Direct-path loads are fully supported
• DataPump Export and Import provide for rapid provisioning from existing databases
• Support for GoldenGate has been added as well
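(Not on the original slide: a minimal sketch of the client-side configuration alluded to above, assuming the downloaded credentials wallet has been unzipped into a local directory. The directory path is a placeholder; the TNS aliases themselves come pre-built inside the same wallet zip file.)

   # sqlnet.ora -- point the client at the unzipped ADW credentials wallet
   WALLET_LOCATION = (SOURCE = (METHOD = file)
                      (METHOD_DATA = (DIRECTORY = "/home/oracle/adw_wallet")))
   SSL_SERVER_DN_MATCH = yes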
Under the Covers of ADW: Some FAQs
Q: Is ADW really using all of Exadata’s key features for most efficient execution of queries?
A: Yes, absolutely! Database In-Memory, Smart Scan, storage indexes, and Smart Flash Cache are definitely in use.

Q: Does ADW automatically partition tables and indexes?
A: Ummm … no, not yet.

Q: Can I review the results of the AI and Machine Learning algorithms that ADW has applied to my workloads and objects?
A: Not really, at least not yet in 18c … but stay tuned!

Q: Wow, ADW seems to be a bit primitive right now! Is it going to get more advanced?
A: Several new features are in the works for 19c ADW, including a reporting API that will show what action has been taken by workload optimization AI/ML.
ADW: Summary of Appropriate Use Cases
ADW is most appropriate for the following application workload requirements and environments:
• Read-only or read-mostly application workloads focused on leveraging analytical processing
   •   All features of Exadata storage cell software – Smart Scan, Smart Storage, storage indexes - brought to bear on processing workloads
   •   Tables are also cached in columnar format automatically with Exadata Smart Flash Cache
   •   All query results are automatically saved within SQL Query Result Caches, thus limiting future re-read of same data
   •   Up to 128 OCPUs and 128 TB of storage can be requested per ADW instance (subject to availability within the instance’s region)
• Ideally, data warehousing application workload(s) should already be well-tuned to avoid surprises
• Virtually no DBA resources required for database management
    • No instance tuning is necessary
    • Selection of appropriate database service for the workload is really the only choice required
    • Parallelism derived from database service selected and number of OCPUs available
    • Scale-up and scale-down requires just a single button push
• Database migration and transformation only limited by desired / appropriate transferal methods
   •   Fresh load: DBMS_CLOUD.COPY_DATA, SQL*Loader, or INSERT INTO … SELECT FROM an EXTERNAL table (see the sketch after this list)
   •   Existing schema(s): DataPump Import
   •   Tight synchronization required: GoldenGate
   •   Extremely large data transfers possible via Oracle Cloud Infrastructure Data Transfer Appliance
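(Not on the original slides: a minimal sketch of the external-table load path mentioned above, assuming an object-storage credential like the EXTB_TPCDS one created earlier. The external table name is hypothetical and the file URI simply reuses the one from the earlier DBMS_CLOUD.COPY_DATA example for illustration.)

   BEGIN
      -- Illustrative only: expose a pipe-delimited object storage file as an external table
      DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
         table_name      => 'CUSTOMER_CREDIT_RATINGS_EXT'
        ,credential_name => 'EXTB_TPCDS'
        ,file_uri_list   =>
           'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudtrial/ADWExternalTables/CreditScoring_Current.dat'
        ,format          => '{"delimiter" : "|" , "dateformat" : "YYYY-MM-DD"}'
        ,column_list     => 'ccr_customer_number NUMBER(7)
                            ,ccr_last_reported   DATE
                            ,ccr_credit_rating   NUMBER(5)
                            ,ccr_missed_payments NUMBER(3)
                            ,ccr_credit_maximum  NUMBER(7)');
   END;
   /

   -- Then a conventional set-based load into the target table
   INSERT /*+ APPEND */ INTO tpcds.customer_credit_ratings
      SELECT * FROM customer_credit_ratings_ext;
   COMMIT;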
Either Drive a School Bus, or Program Its Replacement. Your Choice.
You can be a school bus driver kind of DBA, oblivious to the elimination of your job in just a few years …

… or become a Data Enterprise Architect, ahead of the curve, ready to take advantage of Autonomous Database and the future it portends.

It’s all up to you.
Useful Resources and Documentation
• ADW Documentation:
   https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/index.html
• Oracle Autonomous and Secure Cloud Services Blog:
   https://blogs.oracle.com/autonomous-and-secure-cloud-services
• Oracle Cloud Infrastructure Data Transfer Appliance:
   https://blogs.oracle.com/cloud-infrastructure/introducing-oracle-cloud-infrastructure-data-transfer-appliance
• Using GoldenGate to Replicate Data to ADW Cloud:
   https://docs.oracle.com/goldengate/c1230/gg-winux/GGODB/replicating-data-oracle-autonomous-data-warehouse-
   cloud.htm#GGODB-GUID-660E754E-B9A6-48DD-AA66-0D6B66A022CD