How to clone the Oracle database software
On the source (devastator)
ORACLE_HOME location: /u01/app/oracle/product/11.2.0/dbhome_1
On the target (vixen)
Create the directory /u01/app/oracle/product/11.2.0 if it does not exist
*To send from source to target (run on the source; using -C keeps the archive relative so it extracts directly under the target home)
]# tar -czf - -C /u01/app/oracle/product/11.2.0 dbhome_1 | ssh 192.168.0.30 "tar -xzf - -C /u01/app/oracle/product/11.2.0/"
*To receive on the target (pull from the source; run on the target from /u01/app/oracle/product/11.2.0)
]# ssh root@192.168.0.10 "(cd /u01/app/oracle/product/11.2.0; tar -cf - dbhome_1|gzip )" |
gunzip | tar -xf -
* On the target
- set kernel parameters (a sketch follows this list)
- chown -R oracle:oinstall /u01/app/oracle/
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH:.
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
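The "set kernel parameters" step has no values in these notes; the entries below are the usual 11gR2 starting points for /etc/sysctl.conf (illustrative only — check the install guide and size shmmax/shmall for your RAM):
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Apply them without a reboot with sysctl -p.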
*We should do the following because, if the OS DBA group has changed on the target, sqlplus will not work or connect:
rename config.o and edit the OS group name in config.c (Linux) or config.s (Solaris), then
relink all
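A minimal sketch of those steps, assuming the target's DBA group is named dba (adjust the group names to your environment):
cd $ORACLE_HOME/rdbms/lib
mv config.o config.o.orig
vi config.c        # change the SS_DBA_GRP / SS_OPER_GRP group names to the target's groups
relink all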
*If our software is RAC enabled and we do not have CRS running on the target machine, turn the RAC option off and relink:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_off
make -f ins_rdbms.mk ioracle
relink all
Clone the Oracle Home using the perl script clone.pl, which is located under
$ORACLE_HOME/clone/bin:
vixen:$ cd /u01/app/oracle/product/11.2.0/dbhome_1/clone/bin
vixen:$ perl clone.pl ORACLE_BASE=/u01/app/oracle \
        ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 \
        ORACLE_HOME_NAME=11GR2_HOME
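clone.pl normally finishes by asking for the root script of the cloned home to be run; a minimal follow-up, assuming the home above (run as root on the target):
vixen:# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh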
How to clone the Oracle database from RAC to NON-RAC (User managed)
1. Take a hot backup of the source (snapshot backup)
A snapshot backup is faster than any other type of backup.
alter database begin backup;
Instead of an OS-level cp, tell the storage admin to perform a snapshot backup; from the
storage prompt a snapshot can be taken in about 30 seconds.
alter database end backup;
(Note: if you do not put the database in backup mode before taking the snapshot,
it would be a fuzzy backup.)
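To confirm the datafiles actually entered and left backup mode around the snapshot, a quick check (not in the original notes) is:
SQL> select status, count(*) from v$backup group by status;
Online datafiles should show ACTIVE between begin and end backup, and NOT ACTIVE afterwards.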
2. Take a trace file backup of controlfile
alter database backup controlfile to trace;
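The trace goes to the diagnostic trace directory; to capture it in a file that is easy to find, the TO TRACE AS form can be used instead (path is illustrative):
SQL> alter database backup controlfile to trace as '/tmp/pclone_ctl.sql';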
Copy the CREATE CONTROLFILE statement from the trace into a script (contt.ctl here), editing the database name and file paths for the target:
[oracle@localhost dbs]$ cat >> contt.ctl
CREATE CONTROLFILE SET DATABASE "PCLONE" RESETLOGS ARCHIVELOG
MAXLOGFILES 192
MAXLOGMEMBERS 3
MAXDATAFILES 1024
MAXINSTANCES 32
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '+DATA/pclone/redo01.log' SIZE 50M BLOCKSIZE 512,
GROUP 2 '+DATA/pclone/redo02.log' SIZE 50M BLOCKSIZE 512,
GROUP 3 '+DATA/pclone/redo03.log' SIZE 50M BLOCKSIZE 512,
GROUP 4 '+DATA/pclone/redo04.log' SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
'+DATA/pclone/system01.dbf',
'+DATA/pclone/sysaux01.dbf',
'+DATA/pclone/undotbs01.dbf',
'+DATA/pclone/users01.dbf',
'+DATA/pclone/example01.dbf',
'+DATA/pclone/undotbs02.dbf',
'+DATA/pclone/tbs01.dbf'
CHARACTER SET WE8MSWIN1252
;
3. Copy the datafiles and required archived logs to the target host
4. Create a pfile for the clone instance (a sketch follows)
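Step 4 has no example in these notes; a minimal init.ora sketch for the non-RAC clone (sizes and paths are illustrative; the essential points are cluster_database=FALSE and a single undo tablespace):
db_name=PCLONE
control_files='/u01/oracle/pclone/control01.ctl'
compatible=11.2.0
memory_target=800M
cluster_database=FALSE
undo_tablespace=UNDOTBS1
db_recovery_file_dest='/u01/oracle/pclone/fra'
db_recovery_file_dest_size=4G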
5. Start the instance in NOMOUNT and create the controlfile, making the necessary changes (database name, file paths) in the script
SQL> @contt.ctl
Control file created.
6. Do a thread-merge recovery and sanity checks
SQL> recover database using backup controlfile until cancel;
ORA-00279: change 1080427 generated at 04/19/2013 06:31:35 needed for
thread 1
ORA-00289: suggestion :
/u01/oracle/pclone/fra/PCLONE/archivelog/2013_04_21/o1_mf_1_17_%u_.arc
ORA-00280: change 1080427 for thread 1 is in sequence #17
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u01/oracle/pclone/thread_1_seq_17.287.813136907
ORA-00279: change 1080427 generated at needed for thread 2
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u01/oracle/pclone/thread_2_seq_14.288.813136929
ORA-00279: change 1100697 generated at 04/19/2013 07:21:45 needed for
thread 1
ORA-00289: suggestion :
/u01/oracle/pclone/fra/PCLONE/archivelog/2013_04_21/o1_mf_1_18_%u_.arc
ORA-00280: change 1100697 for thread 1 is in sequence #18
ORA-00278: log file '/u01/oracle/pclone/thread_1_seq_17.287.813136907' no longer
needed for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
Media recovery cancelled.
Note:-
Recovery is expecting change 1080427 generated by thread 1 and believes that this change is
available in log sequence 17.
alter system switch logfile;        -- run on all nodes
or
alter system archive log current;   -- initiates a log switch on all nodes
To find which archived log contains a given change number:
select name from v$archived_log where &1 between first_change# and
next_change#;
If switching the log file takes too much time, check the alert log.
- corrupt redo log block header:
alter database clear unarchived logfile group 4;
7. Open the database
alter database open resetlogs;
8. Do sanity checks
SQL> select name from v$datafile where name like '%MISS%';
no rows selected
SQL> select * from v$recover_file;
no rows selected
SQL> select tablespace_name, contents from dba_tablespaces;
TABLESPACE_NAME CONTENTS
------------------------------ ---------
SYSTEM PERMANENT
SYSAUX PERMANENT
UNDOTBS1 UNDO
TEMP TEMPORARY
USERS PERMANENT
UNDOTBS2 UNDO
EXAMPLE PERMANENT
TBS PERMANENT
8 rows selected.
SQL> sho parameter undo_tab
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_tablespace                      string      UNDOTBS1
SQL> drop tablespace UNDOTBS2;
Tablespace dropped.
SQL> select tablespace_name, contents from dba_tablespaces;
TABLESPACE_NAME CONTENTS
------------------------------ ---------
SYSTEM PERMANENT
SYSAUX PERMANENT
UNDOTBS1 UNDO
TEMP TEMPORARY
USERS PERMANENT
EXAMPLE PERMANENT
TBS PERMANENT
7 rows selected.
SQL> select name from v$tempfile;
no rows selected
SQL> alter tablespace temp add tempfile
'/u01/oracle/pclone/temp01.dbf' size 20m autoextend on next 10m
maxsize 2g;
Tablespace altered.
SQL> select name from v$database;
NAME
---------
PCLONE
SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
pclone
9. Delete unwanted components
Sanity check:
- how many datafiles are there (see the queries below)
- check log members and groups
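Quick queries for those checks (any equivalent query works):
SQL> select count(*) from v$datafile;
SQL> select group#, members, status from v$log;
SQL> select group#, member from v$logfile order by group#;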
How to share a mount point over the network in Linux
1. On the source
#service nfs start
#system-config-nfs
A new window will open; enter the folder or mount point to be shared and the hostname of the
target. (The same share can also be set up by hand, as shown below.)
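The manual equivalent of system-config-nfs (path and hostname are illustrative):
add to /etc/exports on the source:
/u01    target_host(rw,sync)
then re-export and verify:
#exportfs -a
#showmount -e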
2. On the target
# service nfs start
#mount 147.43.0.10:/u01 /u01/share_point
3. To unmount
umount /u01/share_point
---------------------------------------------------------------------------------------------------------------------
Adding a Hard Disk in Linux
1. Create the Partition
fdisk /dev/sdb
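fdisk is interactive; an illustrative dialogue for a single primary partition covering the disk (exact prompts vary by fdisk version):
n   - new partition
p   - primary, partition number 1, accept the default first and last cylinders
w   - write the table and exit
Then have the kernel re-read the partition table without a reboot:
partprobe /dev/sdb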
2. Format the partition
mkfs -t ext3 /dev/sdb1
3. Create the mount point and mount the filesystem
mkdir /software
mount -t ext3 /dev/sdb1 /software
4. Edit fstab
vi /etc/fstab
and add following line
/dev/sdb1 /software ext3 defaults 1 1
How to share a mount point in Solaris
To share a filesystem, edit /etc/dfs/dfstab and add a line of the following form:
share -F nfs -o rw=client mountpoint-of-shared-filesystem
Note
rw means read/write permission; use the permission that serves your need. It could also be
ro (read only).
client is the client machine.
mountpoint-of-shared-filesystem is the mount point where the resource you intend to
share lives,
e.g. /export/home or /var/mail.
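A concrete dfstab entry, assuming the client is named dbclient and the share is /export/home (both illustrative):
share -F nfs -o rw=dbclient /export/home
After editing dfstab you can also publish the shares with shareall instead of restarting the daemons.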
Finally, stop and start the NFS daemons to make the change take effect:
cd /usr/lib/nfs
./mountd
./nfsd
You could also stop and start NFS by issuing
/etc/init.d/nfs.server stop
followed by
/etc/init.d/nfs.server start
Make sure the NFS daemons are running by issuing
ps -ef | grep nfs
This should show statd and lockd running; statd and lockd must be running on both the server and
the client.
To mount in Linux
#mount 147.43.0.10:/u01 /u01/share_point