DUL User's and Configuration
Guide V3.2.0.0
ORACLE CONFIDENTIAL
DUL and this documentation are Oracle Confidential and for internal use only.
Table of contents
       Principles and feature list
       UNLOAD statement overview
       Using DUL
            o Data Dictionary available
            o without Data Dictionary
       Output format
       Some DUL internals
       DDL - DUL Dump Language specification
       DDL description
       DUL startup behaviour
       Configuring DUL
             o DUL parameter file: init.dul
            o Port specific parameters
            o Control file: control.dul
       Sample unload sessions
            o Data Dictionary available
            o Without Data Dictionary
       SQL*Loader problems and work arounds
       Script to unload Data Dictionary
DUL's PRINCIPLES and FEATURE LIST
STANDALONE C-PROGRAM
DUL is a standalone C program that directly retrieves rows from tables in data files. The Oracle
RDBMS software is NOT used at all. DUL does dirty reads: it assumes that every transaction has
been committed. Nor does it check or require that media recovery has been done.
LAST RESORT
DUL is intended to retrieve data that cannot be retrieved otherwise. It is NOT an alternative for
EXP, SQL*Plus etc. It is meant to be a last resort, not for normal production usage.
Before you use DUL you must be aware that the rdbms has many
hidden features to force a bad database open. Undocumented
parameters and events can be used to skip roll forward, to disable
rollback and more.
DATABASE CORRUPT - BLOCKS OK
The database can be corrupted, but each individual data block used must be 100% correct. During
all unloading, checks are made to ensure that blocks are not corrupted and belong to the
correct segment. If a bad block is encountered during a scan, an error message is printed in the
loader file and to standard output. Unloading continues with the next row or block.
ROWS in CLUSTERS/TABLES
DUL can and will only unload table/cluster data. It will NOT dump triggers, stored procedures
nor create scripts for tables or views. (But the data dictionary tables describing them can be
unloaded). The data will be unloaded in a format suitable for SQL*Loader or IMP. A matching
control file for SQL*Loader is generated as well.
Cross Platform Unloading
Cross-platform unloading is supported. The database can be copied from a different operating
system than the DUL-host. (Databases/systems done so far: Sequent/ptx, Vax Vms, Alpha Vms,
MVS, HP9000/8xx, IBM AIX, SCO Unix, Alpha OSF/1).
ROBUST
DUL will not dump core, spin, or hang, no matter how badly corrupted the database is.
(NEARLY) ALL ORACLE FEATURES SUPPORTED
Full support for all database constructs: row chaining, row migration, hash/index clusters, longs,
raws, rowids, dates, numbers, multiple free list groups, segment high water mark, NULLS,
trailing NULL columns, and unlimited extents.
SUPPORTED RDBMS VERSIONS
DUL should work with all versions 6 and 7. DUL has been tested with versions from 6.0.26 up to
7.3.2. Even the old block header layout (pre 6.0.27.2) is supported.
MULTI BYTE SUPPORT
DUL is essentially a single-byte application. The command parser does not understand multibyte
characters, but it is possible to unload any multibyte database. For all possible caveats there is a
workaround.
RESTRICTIONS
MLSLABELS
Multi Level Security labels of Trusted Oracle are not supported.
(LONG) RAW
DUL can unload (long) raws, but there is no way to reload these 1-to-1 with SQL*Loader. There
is no suitable format in SQL*Loader to preserve all long raws. Use the export mode instead or
write a Pro*C program to load the data.
PORTABLE
DUL can be ported to any operating system with an ANSI-C compiler. DUL has been ported to
many UNIX variants, VMS and WindowsNT.
RDBMS INTERNALS
A good knowledge of the Oracle RDBMS internals is a prerequisite to be able to use DUL
successfully. Andre Bakker's two-day internals course is a minimum.
DDL (DUL Description Language) UNLOAD STATEMENT
      OVERVIEW
DUL uses an SQL like command interface. There are DDL statements to unload extents, tables,
users or the entire database. Data dictionary information required can be specified in the ddl
statements or taken from the previously unloaded data dictionary. The following three statements
will unload the DEPT table. The most common form is used when the data dictionary and the
extent map are available:
           UNLOAD TABLE scott.dept;
All relevant information can be specified in the statement as well:
           REM Columns with type in the correct order
           REM The segment header location in the storage clause
           UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
                  STORAGE( EXTENTS ( FILE 1 BLOCK 1205 ));
Oracle version 6:
           REM version 6 data blocks have segment header location in each block
           ALTER SESSION SET USE_SCANNED_EXTENT_MAP = TRUE;
           UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
                   STORAGE( EXTENTS ( FILE 1 BLOCK 1205 ));
Oracle7:
           REM Oracle7 data blocks have object id in each block
           ALTER SESSION SET USE_SCANNED_EXTENT_MAP = TRUE;
           UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
                   STORAGE( OBJNO 1501 );
CONFIGURATION FILES
There are two configuration files for DUL. "init.dul" contains all configuration parameters (sizes
of caches, details of header layout, Oracle block size, output file format). In the control file,
"control.dul", the data file names and the Oracle file numbers must be specified.
USING DUL WHEN USER$, OBJ$, TAB$ and COL$ can be
    unloaded
Steps to follow:
    1. configure DUL for the target database. This means creating a correct init.dul and
        control.dul.
    2. Unload the four data dictionary tables. Use "dul dictv7.ddl" for Oracle7 or "dul
        dictv6.ddl" for version 6 of Oracle.
    3. unload the tables. Use one of the following commands:
           o "UNLOAD TABLE owner.table ; (do not forget the semicolon).
           o "UNLOAD USER user name ;
           o "UNLOAD DATABASE ;
USING DUL WITHOUT SYSTEM TABLESPACE
Steps to follow:
    1. configure DUL for the target database. This means creating a correct init.dul and
        control.dul. (See Port specific parameters ).
    2. SCAN DATABASE; : scan the database, build extent and segment map
    3. SCAN TABLES; or SCAN EXTENTS; : gather row statistics
    4. Identify the lost tables from the output of step 3.
    5. UNLOAD the identified tables.
Identifying the tables can be an overwhelming task. But it can be (and has been) done. You need
in-depth knowledge of your application and the application tables. Column types can be
guessed by DUL, but table and column names are lost. Any old SYSTEM tablespace from the
same database, even weeks old, can be of great help!
AUTOMATED SEARCH
To ease the hunt for the lost tables: the scanned statistical information in seen_tab.dat and
seen_col.dat can be loaded into a fresh database. If you recreate the tables (hopefully the create
table scripts are still available), then the structure information of a "lost" table can be matched to
the scanned information of the "seen" tables with two SQL*Plus scripts (fill.sql and getlost.sql).
HINTS WHEN LOOKING FOR TABLES:
       Names are not really relevant for DUL, only for the person who must load the data. But
        the unloaded data does not have any value if you do not know which table it came from.
       The guessed column types can be wrong, although the algorithm is conservative and
        decides UNKNOWN if not sure.
       Trailing NULL columns are not stored in the database. So if the last columns contain only
        NULLs, then the scanner will NOT find them. (During unload, trailing NULL columns are
        handled correctly.)
       When a table is dropped, the description is removed from the data dictionary only. The
        data blocks are not overwritten unless they are reused for a new segment. So the scanner
        software can see a table that has been dropped.
       Tables without rows will go unnoticed.
       Newer objects have a higher object id than older objects. If a table is recreated, or if
        there is a test and a production version of the same table, the object id can be used to
        decide which is which.
DUL'S OUTPUT FORMAT.
There are three different modes of output format.
    Export mode
    SQL*Loader mode: stream data files
       SQL*Loader mode: Fixed physical record data files
Export mode
Export mode is a feature available only in DUL version 3. To enable export mode, you must set
the init.dul parameter EXPORT_MODE to TRUE.
For each table a separate IMP-loadable file will be generated. The
generated file is completely different from a table mode export
generated by EXP! The file is in the minimal format that IMP can load:
a single-table dump file with only an insert table statement and the
table data. Table grants, storage clauses, and triggers will not be
included. An optional create table statement is included if the
COMPATIBLE parameter has been set to 6 or 7. The character set
indication in the generated file header is V6 style; it is set to
mean an ASCII-based character set.
Extreme care has been taken that the dump file can always be loaded
with IMP. Only complete good rows are written to the dump file. For
this each row is buffered. The size of this buffer can be changed with
the init.dul parameter BUFFER. Incomplete or bad rows are not written out.
SQL*Loader modes
For both SQL*Loader output formats the columns will be space separated and enclosed in double
quotes. Any double quote in the data will be doubled. SQL*Loader recognizes this and will load
only one. The character used to enclose the columns can be changed from double quote to any
character you like with the init.dul parameter LDR_ENCLOSE_CHAR.
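As a sketch of the enclosing-and-doubling rule just described (illustrative Python, not DUL's actual code; the function names are invented):

```python
def ldr_field(value, enclose='"'):
    # Enclose the column value and double any embedded enclose
    # character; SQL*Loader recognizes the doubled character and
    # loads only one.
    return enclose + value.replace(enclose, enclose * 2) + enclose

def ldr_record(row, enclose='"'):
    # Columns are space separated in both SQL*Loader output formats.
    return " ".join(ldr_field(v, enclose) for v in row)

# ldr_record(['7369', 'SM"ITH'])  ->  '"7369" "SM""ITH"'
```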
There are two styles of physical record organization:
Stream mode
Nothing special is done in stream mode, a newline is printed after each record. This is a compact
format and can be used if the data does not contain newline characters. To enable stream mode set
LDR_PHYS_REC_SIZE = 0 in init.dul.
Fixed physical records
This mode is essential if the data can contain newlines. One logical record, one complete row, can
be composed of multiple physical records. The default record length is 81; this fits nicely on
the screen of a VT220. The physical record size can be specified with LDR_PHYS_REC_SIZE in
init.dul.
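The splitting of one logical record over fixed physical records can be sketched as follows (assuming a simple split with space padding of the last record; DUL's exact padding rules may differ):

```python
def physical_records(logical_record, phys_rec_size=81):
    # Cut the logical record (one complete row) into fixed-size
    # chunks and pad the last chunk with spaces so every physical
    # record is exactly phys_rec_size characters long.
    chunks = [logical_record[i:i + phys_rec_size]
              for i in range(0, len(logical_record), phys_rec_size)]
    if chunks:
        chunks[-1] = chunks[-1].ljust(phys_rec_size)
    return chunks
```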
Output file names
The file names generated are: owner_name_table_name.ext. The extension is ".dmp" for IMP-
loadable files. ".dat" and ".ctl" are used for the SQL*Loader data file and the control file. To
prevent variable substitution and other unwanted side effects, strange characters are stripped.
(Only alphanumerics and '_' are allowed.)
If the FILE parameter is set, the generated names will be FILEnnn.ext.
This is a workaround for file systems that do not support sufficiently
long file names.
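The naming scheme can be sketched like this (illustrative only; DUL's real sanitization may differ in detail):

```python
import re

def dul_file_name(owner, table, ext):
    # Strip everything but alphanumerics and '_' to prevent variable
    # substitution and other unwanted side effects.
    clean = lambda s: re.sub(r"[^A-Za-z0-9_]", "", s)
    return "%s_%s.%s" % (clean(owner), clean(table), ext)

def dul_file_name_short(file_base, n, ext):
    # With the FILE parameter set: FILEnnn.ext
    return "%s%03d.%s" % (file_base, n, ext)

# dul_file_name("SCOTT", "EMP", "dat")  ->  'SCOTT_EMP.dat'
```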
SOME DUL INTERNALS
REQUIRED INFORMATION
To unload table data from a database block the following information must be known:
    1. Column/Cluster Information: The number and type of the columns. For char or varchar
       columns the maximum length as well. The number of cluster columns and the table
       number in the cluster. This information can be supplied in the unload statement or it can
       be taken from the previously unloaded USER$, OBJ$, TAB$ and COL$.
    2. Segment/Extent information: When unloading a table the extent table in the data segment
       header block is used to locate all data blocks. The location of this segment header block
       (file number and block number) is taken from the data dictionary or can be specified in
       the unload statement. If the segment header is not correct/available then another method
       must be used. DUL can build its own extent map by scanning the whole database. (in a
       separate run of DUL with the scan database statement.)
BINARY HEADERS
C structs in block headers are not copied directly; they are retrieved with specialized functions.
All offsets of structure members are programmed into DUL. This approach makes it possible to
cross-unload (unload an MVS-created data file on an HP). Apart from byte order, only four layout
types have been found so far.
    1. Vax VMS and Netware : No alignment padding between structure members.
    2. Korean Ticom Unix machines : 16 bit alignment of structure members.
    3. MS/DOS 16 bit alignment and 16 bit wordsize.
    4. Rest of the world (Including Alpha VMS) structure member alignment on member size.
MACHINE DEPENDENCIES
Machine dependencies (of the database) are configurable with parameters:
    Order of bytes in a word (big/little endian).
    Number of bits for the low part of the FILE# in a DBA (Block Address).
    Alignment of members in a C-struct.
    Number of blocks or bytes before the oracle file header block.
    Size of a word used in the segment header structure.
UNLOADING THE DATA DICTIONARY
DUL can use the data dictionary (USER$, OBJ$, TAB$ and COL$). For the data dictionary to be
used, these internal tables must be unloaded first. (Sample DDL scripts are available: "dictv6.ddl" and
"dictv7.ddl".) These scripts differ between DUL versions.
DDL ( DUL DESCRIPTION LANGUAGE ) SPECIFICATION
ALTER SESSION SET init.dul parameter = value ;
DESCRIBE owner_name . table_name ;
DUMP [ FILE file_no ] [ BLOCK block_no ]
[ LEVEL level_no ] ;
UNLOAD DATABASE;
UNLOAD USER user_name;
UNLOAD TABLE
       [ owner_name . ]table_name
       [ ( column_definitions ) ]
       [ cluster_clause ]
       [ storage_clause ] ;
UNLOAD EXTENT table_name
       [ ( column_definitions ) ]
       storage_clause ;
storage_clause ::=
         STORAGE ( storage_specification
         [ more_storage_specs ] )
storage_specification ::=
         OBJNO object_id_number
|        TABNO cluster_table_number
|        SEGOBJNO cluster_object_number
|        EXTENTS ( FILE data_segment_header_file_number
                    BLOCK data_segment_header_block_number
                  [ BLOCKS extent_size_in_oracle_blocks ] )
|    any_normal_storage_specification_but_silently_ignored
SCAN DATABASE;
       Scans all blocks of all data files.
       Two files are generated:
       1: seg.dat information of found segment headers (index/cluster/table):
                 (object id, file number, and block number).
       2: ext.dat information of contiguous table/cluster data blocks.
                 (object id(V7), file and block number of segment header (V6),
                 file number and block number of first block,
                 number of blocks, number of tables)
SCAN TABLES;
       Uses seg.dat and ext.dat as input.
       Scans all tables in all data segments (a header block and at least one
       matching extent with at least 1 table).
SCAN EXTENTS;
       Uses seg.dat and ext.dat as input.
       Scans all extents for which no corresponding segment header has been
       found. (Only useful if a tablespace is not complete, or a segment
       header is corrupt).
REM any_text_you_like_till_End_Of_Line : comment
 NOT allowed inside ddl statements. ( To avoid a two layer lexical scan).
EXIT QUIT and EOF all cause DUL to terminate.
DDL ( DUL DESCRIPTION LANGUAGE ) DESCRIPTION
Rules for UNLOAD EXTENT and UNLOAD TABLE:
Extent Map
UNLOAD TABLE requires an extent map. In 99.99% of the cases the extent map in the segment
header is available. In the rare 0.01% that the segment header is lost, an extent map can be built
with the scan database command. The self-built extent map will only be used during an unload if
the parameter USE_SCANNED_EXTENT_MAP is set to TRUE.
All data blocks have some ID of the segment they belong to. But there
is a difference between V6 and V7. Data blocks created by Oracle
version 6 have the address of the segment header block. Data blocks
created by Oracle7 have the segment object id in the header.
Column Specification
The column definitions must be specified in the order the columns are stored in the segment, that
is, ordered by col$.segcol#. This is not necessarily the same order as the columns were specified
in the create table statement. Cluster columns are moved to the front, longs to the end. Columns
added to the table with the alter table command are always stored last.
Unloading a single extent
UNLOAD EXTENT can be used to unload one or more adjacent blocks. The extent to be unloaded
must be specified with the STORAGE clause. To specify a single extent use: STORAGE
( EXTENTS( FILE fno BLOCK bno BLOCKS #blocks) ). (FILE and BLOCK specify the first
block, BLOCKS the size of the extent.)
DUL specific column types
There are two extra DUL specific data types:
   1. UNKNOWN: a heuristic guess will be made for each column.
   2. IGNORE: the column will be skipped as if it was not there at all.
Identifying USER$, OBJ$, TAB$ and COL$
There is a "hidden" trick with file and object numbers that is used to locate the data dictionary
tables. The trick is based on the fact that object numbers are fixed for OBJ$, COL$, USER$ and
TAB$ due to the rigid nature of sql.bsq. This will not be documented because I myself could not
understand my first attempt to describe it.
DESCRIPTION OF SCAN COMMANDS
SCAN TABLES and SCAN EXTENTS scan for the same information and produce similar
output. ALL columns of ALL rows are inspected. For each column the following statistics are
gathered:
     How often the column is seen in a data block.
     The maximum internal column length.
     How often the column IS NULL.
     How often the column consists of at least 75% printable ascii.
     How often the column consists of 100% printable ascii.
     How often the column is a valid oracle number.
     How often the column is a nice number (not many leading or trailing zeros).
     How often the column is a valid date.
     How often the column is a possible valid rowid.
These statistics are combined and a column type is suggested. Using this suggestion five rows are
unloaded to show the result. These statistics are dumped to two files (seen_tab.dat and
seen_col.dat). There are SQL*Loader and SQL*Plus scripts available to automate a part of the
identification process. (Currently known as the getlost option).
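A few of these statistics are easy to sketch in Python (illustrative only; DUL's real heuristics for numbers, dates and rowids are more involved):

```python
def column_stats(values):
    # values: the raw (internal format) column bytes seen in data blocks
    stats = {"seen": len(values), "max_len": 0, "null": 0,
             "ascii75": 0, "ascii100": 0}
    for v in values:
        if not v:                      # empty -> counted as NULL
            stats["null"] += 1
            continue
        stats["max_len"] = max(stats["max_len"], len(v))
        printable = sum(32 <= b < 127 for b in v)
        if printable == len(v):
            stats["ascii100"] += 1
        if printable >= 0.75 * len(v):
            stats["ascii75"] += 1
    return stats
```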
DESCRIBE
There is a describe command. It shows the dictionary information for the table that is available in
DUL's dictionary cache.
DUL STARTUP SEQUENCE
During startup DUL goes through the following steps:
    the parameter file "init.dul" is processed.
    the DUL control file (default "control.dul") is scanned.
    Try to load dumps of the USER$, OBJ$, TAB$ and COL$ if available into DUL's data
       dictionary cache.
    Try to load seg.dat and ext.dat.
    Accept DDL-statements or run the DDL script specified as first arg.
DUL parameters to be specified in init.dul:
ALIGN_FILLER
      OBSOLETE
      Replaced by OSD_C_STRUCT_ALIGNMENT
ASCII2EBCDIC
      BOOLEAN
      Whether (var)char fields must be translated from EBCDIC to ASCII (for unloading an MVS
      database on an ASCII host).
BIG_ENDIAN_FLAG
      OBSOLETE
      Replaced by OSD_BIG_ENDIAN_FLAG
BLOCKS_TO_SKIP
      NUMBER
      Number of data blocks to skip before starting the unload. This is useful for unloading huge
      tables in smaller portions. See also the parameter MAX_UNLOAD_BLOCKS.
BUFFER
     NUMBER (bytes)
      Row output buffer size, used in export mode only. In export mode each row is first stored
      in this buffer before it is written to the dump file.
COMPATIBLE
     NUMBER
      Database version; valid values are 6 or 7. If set, you can dump file header blocks. Only if
      this parameter is set is a create table statement included in the dump file.
CONTROL_FILE
     TEXT
     Name of the DUL control file (default: "control.dul").
DBA_FILE_BITS
     OBSOLETE
     Replaced by OSD_DBA_FILE_BITS
DB_BLOCK_SIZE
     NUMBER
     Oracle block size in bytes (Maximal 32 K)
DB_LEADING_OFFSET
     OBSOLETE
     Replaced by OSD_FILE_LEADER_SIZE
DC_COLUMNS
     NUMBER
DC_OBJECTS
     NUMBER
DC_TABLES
     NUMBER
DC_USERS
     NUMBER
     Sizes of dul dictionary caches. If one of these is too low startup will fail.
EXPORT_MODE
     BOOLEAN
      EXPort-like output mode or SQL*Loader format.
FILE
     TEXT
     Base for (dump or data) file name generation
LDR_ENCLOSE_CHAR
     TEXT
     The character to enclose fields in SQL*Loader mode.
LDR_PHYS_REC_SIZE
     NUMBER
     Physical record size for the generated loader datafile.
     LDR_PHYS_REC_SIZE = 0 No fixed records, each record is terminated with a newline.
     LDR_PHYS_REC_SIZE > 2: Fixed record size.
MAX_OPEN_FILES
     Maximum # files that are concurrently kept open at the OS level.
MAX_UNLOAD_BLOCKS
      Maximum number of data blocks that are read. This is meant to break a huge table into
      pieces. See also the parameter BLOCKS_TO_SKIP.
OSD_BIG_ENDIAN_FLAG
      Byte order in a machine word. Big endian is also known as MSB first. For an explanation
      of why this is called big endian, you should read Gulliver's Travels.
OSD_DBA_FILE_BITS
      File number size in a DBA in bits; or, to be more precise, the size of the low-order part of
      the file number.
OSD_FILE_LEADER_SIZE
      bytes/blocks added before the real oracle file header block
OSD_C_STRUCT_ALIGNMENT
      C structure member alignment (0, 16 or 32). Must be set to 32 for most ports.
OSD_WORD_SIZE
      Size of a machine word; always 32, except for MS/DOS (16).
TICOM_FILLER
      OBSOLETE
      The new parameter is OSD_C_STRUCT_ALIGNMENT
USE_SCANNED_EXTENT_MAP
      BOOLEAN
      Use the scanned extent map in ext.dat in unload table. This parameter is only useful if
      some segment headers are missing or incorrect.
SAMPLE init.dul :
# sample init.dul configuration parameters
# these must be big enough for the database in question
# the cache must hold all entries from the dollar tables.
dc_columns = 200000
dc_tables = 10000
dc_objects = 10000
dc_users = 40
# OS specific parameters
osd_big_endian_flag = false
osd_dba_file_bits = 6
osd_c_struct_alignment = 32
osd_file_leader_size = 1
# database parameters
db_block_size = 2048
# loader format definitions
LDR_ENCLOSE_CHAR = "
LDR_PHYS_REC_SIZE = 81
Configuring the port dependent parameters
big_endian_flag
big endian or little endian (byte order in machine words): HP, SUN and mainframes are generally
big endian: OSD_BIG_ENDIAN_FLAG = TRUE. DEC and Intel platforms are little endian:
OSD_BIG_ENDIAN_FLAG = FALSE.
There is no standard trick for this, the following might work on a unix
system:
 echo dul | od -x
 If the output is like:
    0000000 6475 6c0a
    0000004
 You are on a big endian machine.
 If you see:
    0000000 7564 0a6c
    0000004
 This is a little endian machine.
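If Python happens to be available, the same check can be done without od. Note that this inspects the host that runs it; OSD_BIG_ENDIAN_FLAG must describe the platform that wrote the database files, which is not necessarily the DUL host:

```python
import struct
import sys

# Pack a 32-bit word in native order and look at which byte comes first,
# exactly what the od trick above shows for the string "dul\n".
word = struct.pack("=I", 0x01020304)
big_endian = (word[0] == 0x01)
print("host is", "big" if big_endian else "little", "endian")
assert big_endian == (sys.byteorder == "big")
```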
dba_file_bits
The number of bits in a dba used for the low order part of file number. Perform the following
query:
 SQL> select dump(chartorowid('0.0.1')) from dual;
 Typ=69  Len=6:  8,0,0,0,0,0   ->  dba_file_bits =  5  (SCO)
 Typ=69  Len=6:  4,0,0,0,0,0   ->  dba_file_bits =  6  (Sequent, HP)
 Typ=69  Len=6:  1,0,0,0,0,0   ->  dba_file_bits =  8  (NCR, AIX)
 Typ=69  Len=6:  0,16,0,0,0,0  ->  dba_file_bits = 12  (MVS)
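The mapping in this table can also be computed: the rowid '0.0.1' addresses file 1, so the single bit set in the first two dump bytes marks where the file number field starts. The helper below merely reverse-engineers the table above; it is not taken from DUL's sources:

```python
def dba_file_bits(byte0, byte1):
    # First two bytes of dump(chartorowid('0.0.1')), big endian.
    # The DBA for file 1 has exactly one bit set; its position
    # determines how many bits the file number part occupies.
    word = (byte0 << 8) | byte1
    return 17 - word.bit_length()

# dba_file_bits(8, 0)  -> 5   (SCO)
# dba_file_bits(0, 16) -> 12  (MVS)
```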
OSD_C_STRUCT_ALIGNMENT
Structure layout in data file headers. 0: No padding between members in a C-struct (VAX/VMS
only) 16: Some korean ticom machines and MS/DOS 32: Structure members are member size
aligned. (All others including ALPHA/VMS) Check the following query:
SELECT *
FROM v$type_size
WHERE type IN ( 'KCBH', 'KTNO', 'KTBBH', 'KTBIT', 'KDBH'
        , 'KTECT', 'KTETB', 'KTSHC') ;
In general osd_c_struct_alignment = 32 and the following output is expected:
COMPONEN TYPE     DESCRIPTION                      SIZE
-------- -------- -------------------------------- ----------
K        KTNO     TABLE NUMBER IN CLUSTER                   1
KCB      KCBH     BLOCK COMMON HEADER                      20
KTB      KTBIT    TRANSACTION VARIABLE HEADER              24
KTB      KTBBH    TRANSACTION FIXED HEADER                 48
KDB      KDBH     DATA HEADER                              14
KTE      KTECT    EXTENT CONTROL                           44
KTE      KTETB    EXTENT TABLE                              8
KTS      KTSHC    SEGMENT HEADER                            8
8 rows selected.
For VAX/VMS and Netware ONLY osd_c_struct_alignment = 0 and this output is expected:
COMPONEN TYPE     DESCRIPTION                      SIZE
-------- -------- -------------------------------- ----------
K        KTNO     TABLE NUMBER IN CLUSTER                   1
KCB      KCBH     BLOCK COMMON HEADER                      20
KTB      KTBIT    TRANSACTION VARIABLE HEADER              23
KTB      KTBBH    TRANSACTION FIXED HEADER                 42
KDB      KDBH     DATA HEADER                              14
KTE      KTECT    EXTENT CONTROL                           39
KTE      KTETB    EXTENT TABLE                              8
KTS      KTSHC    SEGMENT HEADER                            7
8 rows selected.
If there is a different list this will require some major hacking and sniffing and possibly a major
change to DUL. (Email bduijnen@nl.oracle.com)
osd_file_leader_size
Number of blocks/bytes before the Oracle file header. Unix data files have an extra leading block
(file size, block size, magic number). A large number (> 100) is interpreted as a byte offset, a small
number as a number of Oracle blocks.
Unix    :             osd_file_leader_size = 1
Vms     :             osd_file_leader_size = 0
Desktop :             osd_file_leader_size = 512
Others    :           Unknown ( Use Andre Bakker's famous PATCH utility to find out)
                      An Oracle7 file header block starts with the pattern 0X0B010000.
You can add an additional byte offset in control.dul in the optional third field (for instance for
AIX or DEC UNIX data files on raw device)
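The interpretation rule for osd_file_leader_size (large value = byte offset, small value = count of Oracle blocks) can be sketched as:

```python
def leader_offset_bytes(osd_file_leader_size, db_block_size):
    # A large number (> 100) is taken as a byte offset, a small
    # number as a count of Oracle blocks before the file header.
    if osd_file_leader_size > 100:
        return osd_file_leader_size
    return osd_file_leader_size * db_block_size

# Unix:    leader_offset_bytes(1, 2048)   -> 2048 (one leading block)
# Desktop: leader_offset_bytes(512, 2048) -> 512  (byte offset)
```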
Control file specification
A control file (default name "control.dul") is used to translate the file numbers to file names. The
format of the control file is simple: Each entry on a separate line, first the file number and then
the file name. A third optional field is an extra positive or negative byte offset, that will be added
to all fseek() operations for that datafile. This makes it possible to skip over the extra block for
AIX on raw devices or to unload from fragments of a datafile.
For instance:
  1 /usr/oracle/dbs/system.dbf
  8 /usr/oracle/dbs/data.dbf 4096
The file header blocks are NOT verified. This would make it impossible to unload files with a
corrupted header block. For debugging it is possible to dump the file header.
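Parsing such an entry is straightforward; a minimal sketch (not DUL's parser):

```python
def parse_control_line(line):
    # Each entry: file number, file name, optional extra byte offset
    # added to all fseek() operations for that data file.
    parts = line.split()
    if not parts:
        return None                    # blank line
    offset = int(parts[2]) if len(parts) > 2 else 0
    return int(parts[0]), parts[1], offset

# parse_control_line("8 /usr/oracle/dbs/data.dbf 4096")
#   -> (8, '/usr/oracle/dbs/data.dbf', 4096)
```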
Sample unload session: data dictionary usable for DUL
    1. create a suitable "init.dul"
    2. create a control.dul
    3. sqldba
    4.   connect internal
    5.   startup mount
    6.   spool control.dul
    7.   select * from v$dbfile;
    8.   exit
    9. edit the result
    10. dul dictv7.ddl
    11. $ dul dictv7.ddl
    12.
    13. UnLoader: Version 2.0.0.0 - Very Restricted on Tue May 16 11:04:12 1995
    14.
    15. Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights
        reserved.
    16.
    17. . unloading table              OBJ$ 1487 rows unloaded
    18. . unloading table              TAB$    197 rows unloaded
    19. . unloading table              COL$ 5566 rows unloaded
    20. . unloading table             USER$      13 rows unloaded
    21. restart dul
    22. UnLoader: Version 2.0.0.0 - Very Restricted on Tue May 16 11:05:00 1995
    23.
  24. Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights
      reserved.
  25.
  26. Loaded 1487 objects
  27. Loaded 197 tables
  28. Loaded 5566 columns
  29. Loaded 13 users
  30. DUL> unload table scott.emp;
  31. About to unload SCOTT's tables ...
  32. . unloading table             EMP     14 rows unloaded
  33. DUL>
Example unload session: data dictionary UNUSABLE for
    DUL
  1. create a suitable "init.dul" (See config guide)
  2. create a control.dul See above
  3. scan the database for segment headers and extents:
  4. $ dul
  5. UnLoader: Version 2.0.0.0 - Very Restricted on Tue May 16 11:10:16 1995
  6.
  7. Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights
      reserved.
  8.
  9. DUL> scan database;
  10. data file 1 20480 blocks scanned
  11. data file 4 7680 blocks scanned
  12. data file 5 512 blocks scanned
  13. DUL>quit
  14. Restart DUL and scan the found tables for column statistics this creates a huge amount of
      output:
  15. echo scan tables \; | dul > scan.out&
  16.
  17. [ many lines here]
  18.
  19. Object id 1601 table number 0
  20.
  21. UNLOAD TABLE T1601_0 ( C1 NUMBER, C2 UNKNOWN, C3 UNKNOWN, C4
      NUMBER, C5 DATE
  22.      , C6 NUMBER, C7 NUMBER, C8 NUMBER )
  23. STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
  24. Colno Seen MaxIntSz Null% C75% C100 Num% NiNu% Dat% Rid%
  25. 1 14          3 0% 0% 0% 100% 100% 0% 0%
  26. 2 14          6 0% 100% 100% 100% 14% 0% 21%
  27. 3 14          9 0% 100% 100% 100% 14% 0% 0%
  28. 4 14          3 7% 0% 0% 100% 100% 0% 0%
  29. 5 14          7 0% 0% 0% 0% 0% 100% 0%
  30. 6 14          3 0% 0% 0% 100% 100% 0% 0%
  31. 7 14          2 71% 0% 0% 100% 100% 0% 0%
  32. 8 14          2 0% 0% 0% 100% 100% 0% 0%
  33. "7369" "SMITH" "CLERK" "7902" "17-DEC-1980 AD 00:00:00" "800" "" "20"
  34. "7499" "-0.000025253223" "SALESMAN" "7698" "20-FEB-1981 AD 00:00:00"
      "1600" "30+
  35. 0" "30"
   36. "7521" "WARD" "SALESMAN" "7698" "22-FEB-1981 AD 00:00:00" "1250" "500"
       "30"
   37. "7566" "JONES" "MANAGER" "7839" "02-APR-1981 AD 00:00:00" "2975" ""
       "20"
   38. "7654" "MARTIN" "SALESMAN" "7698" "28-SEP-1981 AD 00:00:00" "1250"
       "1400" "30"
   39.
   40. [ many more lines here ]
       This looks familiar, use the above information and your knowledge of the emp table to
       compose:
       UNLOAD TABLE emp ( empno number, ename char, job char, mgr number,
        hiredate date, sal number, comm number, deptno number)
       STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
   41. use this statement to unload emp:
   42. $ dul
   43.
   44. UnLoader: Version 2.0.0.0 - Very Restricted on Tue May 16 11:46:33 1995
   45.
   46. Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights
       reserved.
   47.
   48. Loaded 350 segments
   49. Loaded 204 extents
   50. Extent map sorted
   51. DUL> UNLOAD TABLE emp ( empno number, ename char, job char, mgr
       number,
    52. DUL 2> hiredate date, sal number, comm number, deptno number)
   53. DUL 3> STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
   54. . unloading table              EMP      14 rows unloaded
   55. DUL>quit
SQL*Loader problems and work arounds
233827 LOADER REJECTS MULTI-LINE RECORDS IF USING 'TERMINATED BY
WHITESPACE'
       DESCRIPTION: Load of multi line records fails if the continuation is between columns.
       WORK AROUND: Use larger physical records (init.dul: LDR_PHYS_REC_SIZE)
       FIXED: 7.3
250325 CANNOT LOAD CHAR THAT'S USED AS ENCLOSING CHARACTER
       DESCRIPTION: The stutter syntax of a column that starts with an enclosing character is
       not parsed correctly. ("""string itself enclosed in quotes""" is not parsed correctly).
       WORK AROUND: Use other enclosing character. (init.dul: LDR_ENCLOSE_CHAR)
       FIXED: 7.1.6
Script to unload USER$, OBJ$, TAB$ and COL$
REM DDL Script to unload the dictionary cache for DUL V3 (Oracle 7)
REM force the settings, so I know what happens
alter session set export_mode = false;
alter session set ldr_phys_rec_size = 0;
alter session set ldr_enclose_char = """" ;
alter session set file = "" ;
alter session set max_unload_blocks = 0;
alter session set blocks_to_skip = 0;
unload table OBJ$ ( OBJ# number, OWNER# number, NAME varchar2(30),
         NAMESPACE ignore, TYPE number)
         storage( objno 17 file 1);
unload table TAB$( OBJ# number, TS# ignore, FILE# number, BLOCK# number,
         CLU# ignore, TAB# number, COLS number, CLUCOLS number)
         cluster C_OBJ#(OBJ#)
         storage ( tabno 1 segobjno 1 file 1)
;
unload table COL$ ( OBJ# number, COL# number , SEGCOL# number,
         SEGCOLLENGTH ignore, OFFSET ignore, NAME char(30),
         TYPE# number, LENGTH number)
         cluster C_OBJ#(OBJ#)
         storage( tabno 5 segobjno 1 file 1)
;
unload table USER$( USER# number, NAME varchar2(30))
         cluster C_USER#(USER#)
         storage(tabno 1 segobjno 9 file 1)
;
REM restart and load the dictionary in the cache
exit
Bernard van Duijnen (bduijnen.nl)