Wednesday, April 20, 2011

Cloning ORACLE HOME in Oracle 10g


Cloning Oracle 10g ORACLE_HOME
I had to apply a PSU patch to a few databases running out of a common ORACLE_HOME, while a few other databases in the same home were not supposed to be patched.
My first thought was to install a new Oracle home for them, but that is time consuming. When I googled, I found that in 10g there is an option to clone an existing home.
Below are the steps to clone the ORACLE_HOME:
1. Copy the ORACLE_HOME to the destination location (on the same server or on a remote one; see the sketch just below for the remote case). If you need to rename the directory where the current installation is located, you can do that at the same time.
If necessary, copy the central inventory as well.
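For the remote-server case, a simple tar-over-ssh copy is enough; a minimal sketch using the source home from this example (the target host name "remotehost" is just a placeholder):
$ cd /local/oracle/product
$ tar cf - 10.2.0.5.2 | ssh remotehost "cd /local/oracle/product && tar xf -"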
2. Check the existence of the ORACLE_HOME in the Central Inventory
$ pwd
/local/oracle/product/oraInventory/ContentsXML
$grep "HOME NAME" inventory.xml
<HOME NAME="OUIHome3" LOC="/local/oracle/product/10.2.0.4" TYPE="O" IDX="1"/>
<HOME NAME="OUIHome7" LOC="/local/oracle/product/10.2.0.5.2" TYPE="O" IDX="2"/>
$
3. (Conditional) If needed, detach the ORACLE_HOME using runInstaller from the copied ORACLE_HOME
$ORACLE_HOME/oui/bin/runInstaller -detachhome ORACLE_HOME=/local/oracle/product/10.2.0.5.2
Starting Oracle Universal Installer...
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /var/opt/oracle/oraInst.loc
The inventory is located at /local/oracle/product/oraInventory
'DetachHome' was successful.
$
4. Check the inventory to see if the ORACLE_HOME was removed (the 'HOME NAME' entry should either be gone, as below, or have the REMOVED="T" attribute added)
$ cd /local/oracle/product/oraInventory/ContentsXML
$ grep "HOME NAME" inventory.xml
$
5. Copy the home under its new name
cd /local/oracle/product
cp -r 10.2.0.5.2 10.2.0.5.2a
6. Re-register the ORACLE_HOME
Make sure nothing owned by the current user is running out of this home, because a relink is done as part of the procedure.
$ cd $ORACLE_HOME/clone/bin
$ perl clone.pl ORACLE_HOME="/local/oracle/product/10.2.0.5.2a" ORACLE_HOME_NAME="OraDb10g_home1"
./runInstaller -silent -clone -waitForCompletion "ORACLE_HOME=/local/oracle/product/10.2.0.5.2a" "ORACLE_HOME_NAME=OraDb10g_home1" -noConfig -nowait
Starting Oracle Universal Installer...
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-11-07_03-21-26PM.
Please wait ...Oracle Universal Installer, Version 10.2.0.3.0 Production
Copyright (C) 1999, 2006, Oracle. All rights reserved.
You can find a log of this install session at:
/local/oracle/product/oraInventory/logs/cloneActions2008-11-07_03-21-26PM.log
....................................................................................................
100% Done.
Installation in progress (Fri Nov 07 15:21:37 CET 2008)
.................................................................................
81% Done.
Install successful
Linking in progress (Fri Nov 07 15:21:49 CET 2008)
Link successful
Setup in progress (Fri Nov 07 15:22:56 CET 2008)
Setup successful
End of install phases.(Fri Nov 07 15:23:03 CET 2008)
WARNING:
The following configuration scripts need to be executed as the "root" user.
#!/bin/sh
#Root script to run
/local/oracle/product/10.2.0.5.2a/root.sh
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts
The cloning of OraDb10g_home1 was successful.
Please check '/local/oracle/product/oraInventory/logs/cloneActions2008-11-07_03-21-26PM.log' for more details.
$

7. Run root.sh
$ su -
Password:
#
# /local/oracle/product/10.2.0.5.2a/root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /local/oracle/product/10.2.0.5.2a
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin
...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
#
8. The clone is ready now.
NOTE: If the source ORACLE_HOME has a number of patches applied (as mine did with the 10.2.0.5.2 patch set), these are automatically carried over with the clone.
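To double-check the result, you can confirm that the new home is registered in the central inventory and that the patches came across with it; a small sketch using the paths from this example:
$ grep "HOME NAME" /local/oracle/product/oraInventory/ContentsXML/inventory.xml
$ /local/oracle/product/10.2.0.5.2a/OPatch/opatch lsinventory -oh /local/oracle/product/10.2.0.5.2a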

Steps to speed up your export/import process


Steps to speed up your export/import process:

Oracle provides different ways and tools to move/migrate data from one database to another, or within the same database. With later versions such as 9i, 10g and 11g you have RMAN and other tools to help carry out the data migration/move, but there are still times when we need to use export/import to move/migrate the data.

Traditionally, export/import is the last option used for migrating/moving data, and one of the important reasons is its performance. Below are various options that can be used to improve the performance, along with related tips such as how to do a compressed export.

Use direct export.

direct=y is one of the options that can be used to speed up the export process.
Eg. exp system/ file=myexport.dmp full=y direct=y feedback=10000 log=myexport.log

Use the buffer parameter to improve the exp/imp process.

Note that buffer only affects conventional-path operations (it has no effect when direct=y), so it is most useful for imports and conventional exports.
Eg. exp system/ file=myexport.dmp full=y feedback=10000 buffer=65536 log=myexport.log
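On the import side, buffer (together with commit=y) is where most of the gain shows up; a sketch, reusing the dump file from the export example above (the log file name is just illustrative):
Eg. imp system/ file=myexport.dmp full=y buffer=10485760 commit=y feedback=10000 log=myimport.log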

Steps to do compress export.

export DIR=/opt/oracle/exports/QCTRP1
export ORACLE_SID=QCTRP1
export ORACLE_HOME=/opt/oracle/product/9.2
export PATH=$ORACLE_HOME/bin:$PATH

# Create a pipe

rm -f $DIR/exp_pipe.dmp

mknod $DIR/exp_pipe.dmp p

# Start compress export

/usr/bin/compress < $DIR/exp_pipe.dmp >$DIR/exp_pipe.dmp.Z &

# Sleep for some time

#sleep 5

# Start the export

exp system@TDPROD file=$DIR/exp_pipe.dmp owner=mytest log=$DIR/exp_$ORACLE_SID.log
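To load that compressed dump back in without first uncompressing it on disk, the same pipe trick works in reverse; a minimal sketch along the lines of the export above (the fromuser/touser values are just placeholders):

# Re-create the pipe and feed it with the uncompressed stream
rm -f $DIR/exp_pipe.dmp
mknod $DIR/exp_pipe.dmp p
/usr/bin/uncompress < $DIR/exp_pipe.dmp.Z > $DIR/exp_pipe.dmp &

# Import reads from the pipe
imp system@TDPROD file=$DIR/exp_pipe.dmp fromuser=mytest touser=mytest log=$DIR/imp_$ORACLE_SID.log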

Steps to do export/import using single thread.

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk singletask
# make -f ins_rdbms.mk expst
# make -f ins_rdbms.mk impst
# make -f ins_rdbms.mk sqlldrst
# mv expst $ORACLE_HOME/bin/
# mv impst $ORACLE_HOME/bin/
# mv sqlldrst $ORACLE_HOME/bin/

$ORACLE_HOME/bin/expst system file=exp.dmp full=y log=exp.log
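The matching single-task import binary can be used in the same way; a sketch, assuming impst has been moved into $ORACLE_HOME/bin as shown above:

$ORACLE_HOME/bin/impst system file=exp.dmp full=y log=imp.log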

Please note that the above steps/procedure are not recommended/supported by Oracle. Test this in a test environment first and do it at your own risk. I have tried and tested it, and it helped improve the performance of my export.

Script to do export/import directly without writing to disk.

cat expfinal.sh

#!/bin/bash -x

export DIR=/opt/oracle/exports/QCTRP1
export ORACLE_HOME=/opt/oracle/product/9.2
export PATH=$ORACLE_HOME/bin:$PATH

rm -f $DIR/exp_pipe

mknod $DIR/exp_pipe p

echo invoke import
imp mytest@TDPROD file=$DIR/exp_pipe fromuser=mytest touser=second log=imp_test.log &

exp mytest@TDPROD owner=MYTEST file=$DIR/exp_pipe statistics=none

# wait for the background import to drain the pipe and finish
wait

Please note that this has been tried and tested by doing the export and import against the target database. The above script can be modified, and there are various other things that can be done with it. Even though it has been tested, I advise testing it in a test environment first. The advantages of using these steps are as follows:

· The export/import process was approximately 70% faster than a conventional export/import.
· This process does not require space on the filesystem. This is of great use when the database is very big and filesystem space is a major constraint, since the size of the export dump would be in the GBs.

ORACLE DATA PUMP in Oracle 10g


Oracle Data Pump is a feature of Oracle Database 10g that enables very fast bulk data and metadata movement between Oracle databases. Oracle Data Pump provides new high-speed, parallel Export and Import utilities (expdp and impdp) as well as a Web-based Oracle Enterprise Manager interface.
EXPORT USING DATAPUMP.
User SCOTT (password TIGER) is used for this example.
By default, dumps and logs are created in the $ORACLE_HOME/rdbms/log area.
To avoid using the disk on which the ORACLE_HOME resides:

1. Create 2 directories on the SERVER, for example:
mkdir /u01/ORACLE/bozo/datapump /u01/ORACLE/bozo/pumplogs
2. Create directory objects with the same paths in the database.
(The physical directories on the server must exist for the two commands below to work.)
sql> create directory dump_dir as '/u01/ORACLE/bozo/datapump';
...All dumps are sent to this area.
sql> create directory log_dir as '/u01/ORACLE/bozo/pumplogs';
...All logs are sent to this area.

sql> grant read,write on directory dump_dir to SCOTT;
-- the user exporting needs write privilege and the user importing needs read privilege on the directory.
Grant succeeded.

sql> grant read,write on directory log_dir to SCOTT;
Grant succeeded.

ESTIMATE SIZE OF EXPORT
-------NO EXPORT OF DATA,ONLY ESTIMATES SIZE OF DUMP:

test10:/opt/oracle> expdp scott/tiger logfile=log_dir:full1.log estimate_only=y
Export: Release 10.2.0.1.0 - Production on Monday, 17 October, 2005 9:10:14
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": SCOTT/******** logfile=log_dir:full1.log estimate_only=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. estimated "SCOTT"."DEPT" 64 KB
. estimated "SCOTT"."DEPTBKUP" 64 KB
. estimated "SCOTT"."EMPBKUP" 64 KB
Total estimation using BLOCKS method: 384 KB
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 09:10:28
The above command does not export any data; it only estimates the size of the export dump.

SCHEMA EXPORT: EXPORTS DATA.

test10:/opt/oracle>expdp scott/tiger logfile=log_dir:full1i.log dumpfile=dump_dir:schema.dmp parallel=2 filesize=2G
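On the target side, the matching Data Pump import of that schema dump might look like the following (a sketch; the log file name is just illustrative, and it assumes the same dump_dir/log_dir directory objects exist there):

test10:/opt/oracle>impdp system/temp schemas=SCOTT directory=dump_dir dumpfile=schema.dmp logfile=log_dir:imp_schema.log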

EXPORT OVER DATABASE LINK
test11:/u02/ORACLE/test10/pumplogs>expdp scott/tiger@test10 full=y directory=dump_dir NETWORK_LINK=bozo dumpfile=dump_dir:full.dmp logfile=log_dir:full.log

FULL DATABASE EXPORT

expdp system/temp full=y dumpfile=dump_dir:full.dmp logfile=log_dir:full3.log parallel=2 filesize=2G

TO REMOVE A JOB FROM ANOTHER SESSION:

sql> select * from dba_datapump_jobs;

expdp scott/tiger attach=SYS_EXPORT_FULL_01
Export: Release 10.1.0.2.0 - Production on Thursday, 30 December, 2004 17:29
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options
Job: SYS_EXPORT_FULL_01
Owner: BH
Operation: EXPORT
Creator Privs: FALSE
GUID: 45DD2D9C04D8457C874C4AF0ADC93E6E
Start Time: Thursday, 30 December, 2004 17:29
Mode: SCHEMA
Instance: orcl
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND bh/******** DIRECTORY=MYDIR DUMPFILE=e.dmp LOGFILE=e.log JOB_NAME=BHEXP
DATA_ACCESS_METHOD AUTOMATIC
ESTIMATE BLOCKS
INCLUDE_METADATA 1
LOG_FILE_DIRECTORY MYDIR
LOG_FILE_NAME e.log
TABLE_CONSISTENCY 0
USER_METADATA 1
State: IDLING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: D:\DP\e.dmp
bytes written: 4,096
Worker 1 Status:
State: UNDEFINED
Export> kill_job
Are you sure you wish to stop this job ([y]/n): y
D:\>
sql> select * from dba_datapump_jobs;
no rows selected
If you do not have the dump file, or the dump file is corrupted, then the only way left is to drop the master table of the job from the schema.
SQL> conn scott/tiger
Connected.
SQL> drop table SCOTT.SYS_EXPORT_FULL_01 PURGE;
SQL> select * from dba_datapump_jobs;
no rows selected

How to Perform a Split IMPORT in a UNIX environment?


#Split_Import
############################# BEFORE YOU START ################################################
#Purpose: If the export dump files are in split state you can run this script to do the import#
#Please put the password for SYSTEM #
#check the directory where you have kept the export dump files #
#check the directory where import logfiles will be created #
###############################################################################################
#!/bin/ksh
#
# splitimport
#
# Tue Dec 14 10:04:07 NFT 1999 Olle Welinder
#
trap 'rm -f $PIPES' EXIT QUIT
#
set -x
export NLS_LANG=AMERICAN_AMERICA.UTF8
export ORACLE_SID=ORADB
PREFIX=ORADB
DUMPDIR=/export
FILELIST=$(ls ${DUMPDIR}/${PREFIX}*)
SPLITP="/tmp/split_pipe"
COMPRP="/tmp/compress_pipe"
PIPES="$SPLITP $COMPRP"
ORAUSER=system
ORAPASSWD=
#
# Create named pipes
for a in $PIPES;do
mknod $a p
done
#
cat $FILELIST > $SPLITP &
sleep 5
uncompress < ${SPLITP} > ${COMPRP} &
sleep 60
#
imp ${ORAUSER}/${ORAPASSWD} file=${COMPRP} fromuser=USER1,USER2 touser=USER1,USER2 commit=y analyze=n ignore=n rows=y log=/export/USER_imp.log
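Usage note: with the variables above, the script expects the split, compressed dump pieces to already be in /export with names starting with ORADB (as produced by the split-export script in the next section), and the SYSTEM password filled in at ORAPASSWD=; it is then run simply as:

$ ls /export/ORADB*
$ ksh splitimport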

How to Perform a Split EXPORT in a UNIX environment?


#Split_Export
#!/bin/ksh
#
# splitexport
# Tue Dec 14 08:48:13 NFT 1999 Olle Welinder
#
set -x
trap 'rm -f $PIPES' EXIT QUIT
#
#
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
export ORACLE_SID=CFORD01
ORAUSER=system
ORAPASSWD=$1
SPLITP="/tmp/split_pipe"
COMPRP="/tmp/compress_pipe"
PIPES="$SPLITP $COMPRP"
INFILE=$1
FILESIZE=${2:-500}
PREFIX=${3:-"expEBCORDER01"}
DUMPDIR=${4:-"/export/54682"}
#if [ "${INFILE}" = "" ];then
# read INFILE?"Enter name of expfile==>"
#fi
#
# Create named pipes
for a in $PIPES;do
mknod $a p
done
#
#
nohup split -b ${FILESIZE}m ${SPLITP} "$DUMPDIR/${PREFIX}" &
nohup compress < ${COMPRP} > ${SPLITP} &
exp ${ORAUSER}/${ORAPASSWD} file=${COMPRP} compress=n statistics=none owner=ebcorder01 direct=y recordlength=65535 buffer=10000000 log=/export/54682/exp_ebcorder01.log
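For reference, the export script takes the SYSTEM password as its first argument, followed by optional piece size in MB, file prefix and dump directory (which default to 500, expEBCORDER01 and /export/54682); a sample invocation under those assumptions:

$ ksh splitexport <system_password> 500 expEBCORDER01 /export/54682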