Friday, November 18, 2011

Exadata Concept - Storage Index, Smart Scan - Demo


Exadata is a database appliance whose storage servers are database aware: they reduce I/O by ensuring that only the required blocks travel from the storage servers to the database servers. The storage servers and database servers communicate over InfiniBand, which provides 40 Gbps of bandwidth.

In the test case below I will demonstrate the following:

1) Storage Server Access via CELLCLI
2) Smart Scan in Exadata
3) Storage Indexes in Exadata

Let's first look at the hardware side of Exadata.

Physical Structure          Command                Utility
-------------------         --------------------   -------
Physical Disk & LUN         list physicaldisk      CellCLI
Cell Disks                  list celldisk          CellCLI
Grid Disks                  list griddisk          CellCLI
ASM Disks                   lsdsk -G DATA          ASMCMD
ASM Disk Groups             lsdg -G DATA           ASMCMD
ASM Files                   ls -s                  ASMCMD



A physical disk can be carved into one or more LUNs depending on the striping configuration; each LUN (or whole physical disk) is then mapped to a cell disk inside Exadata. Cell disks in turn are partitioned and grouped into grid disks.

When you create an ASM disk group, the ASM disks map to grid disks on the storage server. Since the Exadata Storage Server runs Oracle Enterprise Linux, you can log in to it as root and use the CellCLI utility, which lets you query cell disks and grid disks, create flash disks, and manage IORM.


[root@qr01cel01 ~]# cellcli
CellCLI: Release 11.2.2.1.0 - Production on Thu Oct 27 03:44:36 EDT 2011

Copyright (c) 2007, 2009, Oracle. All rights reserved.
Cell Efficiency Ratio: 11M

CellCLI> list physicaldisk;
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK00

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK00 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK01

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK01 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK02

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK02 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK03

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK03 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK04

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK04 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK05

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK05 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK06

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK06 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK07

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK07 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK08

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK08 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK09

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK09 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK10

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK10 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK11

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK11 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH00

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH00 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH01

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH01 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH02

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH02 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH03

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH03 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH04

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH04 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH05

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH05 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH06

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH06 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH07

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH07 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH08

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH08 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH09

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH09 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH10

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH10 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH11

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH11 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH12

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH12 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH13

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH13 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH14

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH14 normal
/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH15

/opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/FLASH15 normal


You can list the cell disks as well; remember that cell disks map to physical disks or LUNs.

CellCLI> list celldisk;
CD_00_qr01cel01 normal
CD_01_qr01cel01 normal
CD_02_qr01cel01 normal
CD_03_qr01cel01 normal
CD_04_qr01cel01 normal
CD_05_qr01cel01 normal
CD_06_qr01cel01 normal
CD_07_qr01cel01 normal
CD_08_qr01cel01 normal
CD_09_qr01cel01 normal
CD_DISK10_qr01cel01 normal
CD_DISK11_qr01cel01 normal
FD_00_qr01cel01 normal
FD_01_qr01cel01 normal
FD_02_qr01cel01 normal
FD_03_qr01cel01 normal
FD_04_qr01cel01 normal
FD_05_qr01cel01 normal
FD_06_qr01cel01 normal
FD_07_qr01cel01 normal
FD_08_qr01cel01 normal
FD_09_qr01cel01 normal
FD_10_qr01cel01 normal
FD_11_qr01cel01 normal
FD_12_qr01cel01 normal
FD_13_qr01cel01 normal
FD_14_qr01cel01 normal
FD_15_qr01cel01 normal

Check the details of an individual cell disk to see the underlying device being used:


CellCLI> list celldisk CD_00_qr01cel01 detail;
name: CD_00_qr01cel01
comment:
creationTime: 2010-12-09T21:10:33-05:00
deviceName: /opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK00
devicePartition: /opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK00
diskType: HardDisk
errorCount: 0
freeSpace: 0
id: 0000012c-ce0b-412f-0000-000000000000
interleaving: none
lun: /opt/oracle/cell11.2.2.1.0_LINUX.X64_100929/disks/raw/DISK00
raidLevel: "RAID 0"
size: 1.203125G
status: normal


As mentioned earlier, grid disks are carved out of cell disks; you can list them with the command below.

CellCLI> list griddisk;
DATA_CD_00_qr01cel01 active
DATA_CD_01_qr01cel01 active
DATA_CD_02_qr01cel01 active
DATA_CD_03_qr01cel01 active
DATA_CD_04_qr01cel01 active
DATA_CD_05_qr01cel01 active
DATA_CD_06_qr01cel01 active
DATA_CD_07_qr01cel01 active
DATA_CD_08_qr01cel01 active
DATA_CD_09_qr01cel01 active
highredint1 active
highredint2 active
highredint3 active
jgd active
RECO_CD_00_qr01cel01 active
RECO_CD_01_qr01cel01 active
RECO_CD_02_qr01cel01 active
RECO_CD_03_qr01cel01 active
RECO_CD_04_qr01cel01 active
RECO_CD_05_qr01cel01 active
RECO_CD_06_qr01cel01 active
RECO_CD_07_qr01cel01 active
RECO_CD_08_qr01cel01 active
RECO_CD_09_qr01cel01 active
SYSTEMDG_CD_00_qr01cel01 active
SYSTEMDG_CD_01_qr01cel01 active
SYSTEMDG_CD_02_qr01cel01 active
SYSTEMDG_CD_03_qr01cel01 active
SYSTEMDG_CD_04_qr01cel01 active
SYSTEMDG_CD_05_qr01cel01 active
SYSTEMDG_CD_06_qr01cel01 active
SYSTEMDG_CD_07_qr01cel01 active
SYSTEMDG_CD_08_qr01cel01 active
SYSTEMDG_CD_09_qr01cel01 active

The details of an individual grid disk, including the cell disk it sits on, can be displayed with the command below.

CellCLI> list griddisk DATA_CD_00_qr01cel01 detail
name: DATA_CD_00_qr01cel01
availableTo:
cellDisk: CD_00_qr01cel01
comment:
creationTime: 2010-12-09T21:11:49-05:00
diskType: HardDisk
errorCount: 0
id: 0000012c-ce0c-6c96-0000-000000000000
offset: 128M
size: 688M
status: active

CellCLI>


Now let's switch to ASMCMD on the database machine and see how our disk groups map to the storage servers.

ASMCMD> lsdsk -G DATA
Path
o/192.168.1.11/DATA_CD_00_qr01cel01
o/192.168.1.11/DATA_CD_01_qr01cel01
o/192.168.1.11/DATA_CD_02_qr01cel01
o/192.168.1.11/DATA_CD_03_qr01cel01
o/192.168.1.11/DATA_CD_04_qr01cel01
o/192.168.1.11/DATA_CD_05_qr01cel01
o/192.168.1.11/DATA_CD_06_qr01cel01
o/192.168.1.11/DATA_CD_07_qr01cel01
o/192.168.1.11/DATA_CD_08_qr01cel01
o/192.168.1.11/DATA_CD_09_qr01cel01
o/192.168.1.12/DATA_CD_00_qr01cel02
o/192.168.1.12/DATA_CD_01_qr01cel02
o/192.168.1.12/DATA_CD_02_qr01cel02
o/192.168.1.12/DATA_CD_03_qr01cel02
o/192.168.1.12/DATA_CD_04_qr01cel02
o/192.168.1.12/DATA_CD_05_qr01cel02
o/192.168.1.12/DATA_CD_06_qr01cel02
o/192.168.1.12/DATA_CD_07_qr01cel02
o/192.168.1.12/DATA_CD_08_qr01cel02
o/192.168.1.12/DATA_CD_09_qr01cel02
o/192.168.1.13/DATA_CD_00_qr01cel03
o/192.168.1.13/DATA_CD_01_qr01cel03
o/192.168.1.13/DATA_CD_02_qr01cel03
o/192.168.1.13/DATA_CD_03_qr01cel03
o/192.168.1.13/DATA_CD_04_qr01cel03
o/192.168.1.13/DATA_CD_05_qr01cel03
o/192.168.1.13/DATA_CD_06_qr01cel03
o/192.168.1.13/DATA_CD_07_qr01cel03
o/192.168.1.13/DATA_CD_08_qr01cel03
o/192.168.1.13/DATA_CD_09_qr01cel03
ASMCMD> exit

You can see that all the ASM disks map to grid disks, and that the DATA disk group is built on top of them.
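
The same mapping can also be cross-checked from SQL*Plus against the ASM views (a minimal sketch; run it on the ASM or database instance):

-- Each ASM disk in the DATA disk group points at a grid disk on one of the cells
select g.name as diskgroup, d.path, d.total_mb
from   v$asm_diskgroup g
join   v$asm_disk d on d.group_number = g.group_number
where  g.name = 'DATA'
order  by d.path;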


Let's move on to the Storage Index and Smart Scan demonstration by logging in to the database machine and the Oracle database.
For the test case I created a user and granted it the necessary privileges.

[oracle@qr01db01 ~]$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Thu Oct 27 07:46:52 2011

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create user ritesh identified by pass;

User created.

Elapsed: 00:00:00.27
SQL> grant dba to ritesh;

Grant succeeded.

Elapsed: 00:00:00.01


Let's log in as the RITESH user and create a table with roughly 10 million rows. Since we are using CONNECT BY LEVEL to generate the
rows, the session needs a large sort area; instead of resizing the PGA target, I switched workarea_size_policy to MANUAL and set
sort_area_size to 1 GB.


SQL> conn ritesh/pass

Connected.

SQL> alter session set sort_area_size=1073741824;

Session altered.

Elapsed: 00:00:00.00

SQL> alter session set workarea_size_policy=MANUAL;

Session altered.

Elapsed: 00:00:00.00

SQL> create table oracledba_emp1 as
  2  select rownum t1, rownum+1 t2, 'Testing for Storage Indexes' t3
  3  from dual connect by level < 10000000;

Table created.

Elapsed: 00:00:43.48

SQL> select bytes/1024/1024 as mb from user_segments where segment_name='ORACLEDBA_EMP1';

        MB
----------
       496

Elapsed: 00:00:00.41


The table I created is roughly 500 MB. The query below demonstrates the use of storage indexes and Smart Scan: whenever we run an operation in a session and want to verify whether it is using these Exadata features, we can run this query and see how much physical I/O was avoided and how much data actually travelled over the interconnect.

SQL> select
2 name,value/1024/1024 as stat_value
3 from v$mystat s, v$statname n
4 where s.statistic# = n.statistic# and n.name in (
5 'cell physical IO bytes saved by storage index',
6 'cell physical IO interconnect bytes returned by smart scan')
/

NAME STAT_VALUE
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 0
cell physical IO interconnect bytes returned by smart scan .000328064

Elapsed: 00:00:00.33

SQL> save sess_wait;
Created file sess_wait.sql
SQL> @ sess_wait

NAME STAT_VALUE
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 0
cell physical IO interconnect bytes returned by smart scan .000328064

Elapsed: 00:00:00.16

Now let's run a query against the table. Note that no index has been created on it; let's see how the query behaves.


SQL> select * from ORACLEDBA_EMP1 where T1=4711;

T1 T2 T3
---------- ---------- ---------------------------
4711 4712 Testing for Storage Indexes

Elapsed: 00:00:20.01
SQL> @ sess_wait

NAME STAT_VALUE
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 0
cell physical IO interconnect bytes returned by smart scan .072860718

You can see that the storage index was not used and the query took nearly 20 seconds to finish. Now let's run the same query again.

SQL> select * from ORACLEDBA_EMP1 where T1=4711;

T1 T2 T3
---------- ---------- ---------------------------
4711 4712 Testing for Storage Indexes

Elapsed: 00:00:00.06
SQL> @ sess_wait

NAME STAT_VALUE
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 485.015625
cell physical IO interconnect bytes returned by smart scan .147186279

When we executed the query the second time, it finished in only 0.06 seconds instead of roughly 20: a query against a 500 MB table without an index improved by a factor of several hundred on Exadata because of storage indexes. The 485 MB visible in the session statistics shows that the storage index saved at least that much data from being physically read.

Note: storage indexes take time to build and are created on the storage servers themselves; this is why the Exadata Storage Server is called database-aware storage.
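
A related check is the execution plan: on Exadata an offloadable full scan typically shows up as "TABLE ACCESS STORAGE FULL" with a storage(...) entry in the predicate section (a minimal sketch against the demo table):

explain plan for select * from oracledba_emp1 where t1 = 4711;
select * from table(dbms_xplan.display);
-- Look for "TABLE ACCESS STORAGE FULL" and a storage("T1"=4711) predicate,
-- which indicate the filter can be offloaded to the storage cells.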

Now let's disable the storage index feature and see how the same query behaves, as it would in a non-Exadata environment. I am doing this by setting an underscore (hidden) parameter.

SQL> conn / as sysdba
Connected.
SQL> col NAME format a40
SQL> col VALUE format a10
SQL> col DESCRIPTION format a40
SQL> col TYPE format a10
SQL> col DEFLT format a10
SQL> set lines 140
SQL> select a.ksppinm name,
2 b.ksppstvl value,
3 b.ksppstdf deflt,
4 decode(a.ksppity, 1,'boolean', 2,'string', 3,'number', 4,'file', a.ksppity) type,
5 a.ksppdesc description
6 from
7 sys.x$ksppi a,sys.x$ksppcv b
8 where
9 a.indx = b.indx
10 and (a.ksppinm like '\_%storageidx_disabled%' escape '\' or
11 a.ksppinm like '\_%cell_smart_scan%' escape '\'
12 ) order by name;

NAME                                 VALUE      DEFLT      TYPE       DESCRIPTION
----------------------------------   ---------- ---------- ---------- --------------------------
_allow_cell_smart_scan_attr          TRUE       TRUE       boolean    Allow checking smart_scan_capable Attr
_kcfis_storageidx_disabled           FALSE      TRUE       boolean    Don't use storage index optimization on the storage cell

Now let's disable the storage index feature by setting this undocumented parameter and see how the system behaves.

dbm1> conn ritesh/pass
Connected.
dbm1> select * from ORACLEDBA_EMP1 where T1=4711; -- With Storage Index
T1 T2 T3
---------- ---------- ---------------------------
4711 4712 Testing for Storage Indexes

Elapsed: 00:00:00.14

dbm1> @ sess_wait
NAME STAT_VALUE
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 485.015625
cell physical IO interconnect bytes returned by smart scan .001792908

Elapsed: 00:00:00.01

dbm1> alter session set "_kcfis_storageidx_disabled"=TRUE; -- Storage Index Disabled Here

Session altered.

Elapsed: 00:00:00.00
dbm1> select * from ORACLEDBA_EMP1 where T1=4711; -- Without Storage Index
T1 T2 T3
---------- ---------- ---------------------------
4711 4712 Testing for Storage Indexes

Elapsed: 00:00:11.36

dbm1> @ sess_wait
NAME STAT_VALUE
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 485.015625
cell physical IO interconnect bytes returned by smart scan .074325562

Elapsed: 00:00:00.00


dbm1> alter session set "_kcfis_storageidx_disabled"=FALSE; -- Enable Storage Indexes

Session altered.

Elapsed: 00:00:00.00
dbm1> select * from ORACLEDBA_EMP1 where T1=4711; -- With Storage Indexes

T1 T2 T3
---------- ---------- ---------------------------
4711 4712 Testing for Storage Indexes

Elapsed: 00:00:00.06
dbm1> @ sess_wait

NAME STAT_VALUE
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 970.03125
cell physical IO interconnect bytes returned by smart scan .076118469

Elapsed: 00:00:00.01

Storage indexes also work with bind variables; let's verify that.

dbm1> @ sess_wait

NAME STAT_VALUE
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 970.03125
cell physical IO interconnect bytes returned by smart scan .076118469

Elapsed: 00:00:00.01

dbm1> variable b number
dbm1> exec :b:=1541;

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03

dbm1> select * from ORACLEDBA_EMP1 where T1=:b;
T1 T2 T3
---------- ---------- ---------------------------
1541 1542 Testing for Storage Indexes

Elapsed: 00:00:00.05

dbm1> @ sess_wait
NAME STAT_VALUE
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 1,455.0469
cell physical IO interconnect bytes returned by smart scan .0779

Elapsed: 00:00:00.01

Now let's take a quick look at the Smart Scan feature.

dbm1> create table oracledba_emp2 as select * from oracledba_emp1;

Table created.

Elapsed: 00:00:40.34
dbm1> @ sess_wait
NAME STAT_VALUE
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 1,455.0469
cell physical IO interconnect bytes returned by smart scan 440.5917

Elapsed: 00:00:00.00
dbm1>
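
The jump of roughly 440 MB in "interconnect bytes returned by smart scan" shows that the full scan driven by the CTAS was offloaded to the cells. The same counters can also be checked instance-wide, together with how many bytes were eligible for offload in the first place (a minimal sketch; cross-check the statistic names against v$statname on your release):

select name, round(value/1024/1024, 2) as mb
from   v$sysstat
where  name in ('cell physical IO bytes eligible for predicate offload',
                'cell physical IO bytes saved by storage index',
                'cell physical IO interconnect bytes returned by smart scan');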


Sunday, October 9, 2011

Truncate privilege to user B on a table of user A


Request: Need to provide truncate privilege to user B on table A.test_table.

1) Connect to user A

2) Create procedure to truncate table as below:

create or replace procedure truncate_test_table
as
l_sql varchar2(2500); 
begin
l_sql := 'TRUNCATE TABLE A.TEST_TABLE'; 
execute immediate l_sql; 
end; 

3) Grant execute on the procedure to user B:

grant execute on A.truncate_test_table to b;

4) Execute the procedure from user B as below:

begin
A.truncate_test_table; 
end; 

Sharing a Folder from HostOS to Vmware Guest OS


I tried to share a folder from my Windows machine (host OS) to Linux (VMware guest OS).
I racked my brain for more than an hour before finding an article on the VMware site.

1) In Windows, share the folder you want to expose to Linux (right-click the folder, click Share, and grant access to a Windows user).
2) In the Linux machine (VMware guest OS), log in as root and issue the commands below.

mkdir /mnt/docs
mount -t smbfs -o username=<Windows system user account>,password=<password> //<Win_Hostname>/docs /mnt/docs

Now you can see the shared folder under /mnt/docs.

Even though this is simple, I wasted over an hour on it, so I thought I would share it.

Wednesday, August 10, 2011

Oracle 11g: Access Control List and ORA-24247


From 11g onwards, Oracle offers fine-grained control over which users can access external network resources, through ACLs.

Prior to 11g:
The packages UTL_MAIL, UTL_SMTP, UTL_HTTP, UTL_TCP, etc. allow communication from the database server to the outside world, but once execute is granted, any host can be reached.
This can be seen as a security flaw, since no login is required when using UTL_TCP.
For this reason, DBAs are advised to revoke the execute privilege on these kinds of packages from PUBLIC.
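
For example, that hardening step typically looks like this (a quick sketch; revoke only after confirming that no application depends on the PUBLIC grants):

revoke execute on utl_tcp  from public;
revoke execute on utl_smtp from public;
revoke execute on utl_http from public;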

Since Oracle 11g:
The access control list (ACL) is introduced. You not only control who has access to these packages through grants, but also which network resources they can reach.
For instance, when a user is allowed to send e-mail using UTL_MAIL, you can restrict them to a specific mail server.
At first this looks like an obstacle (ORA-24247), but after the Voyager worm struck Oracle databases it was introduced as an extra security measure.

I faced this scenario with a client:
the client complained that a package stopped working after the database was upgraded to 11gR2 and threw the error below.


ORA-24247: network access denied by access control list (ACL)
Here are the steps I followed to resolve this issue.
NOTE: package, DB, and host names have been changed.

[oracle@unixprod]:../oracle>
[oracle@unixprod]:../oracle>. oraenv
ORACLE_SID = [PROD1] ? PROD1
The Oracle base for ORACLE_HOME=/local/oracle/product/11.2.0.2 is /local/oracle/product/11.2.0.2
[oracle@unixprod]:../oracle>sqlplus '/as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Tue Aug 9 15:32:24 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select name from v$database;

NAME
---------
PROD1

SQL> conn user1/******
Connected.
SQL> show user
USER is "USER1"

SQL> exec END_REPORT.send_email('1111','ramshankar80@gmail.com','123','123');
BEGIN END_REPORT.send_email('1011','ram.shankar@am.sony.com','123','123'); END;

*
ERROR at line 1:
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at "USER1.END_REPORT", line 4275
ORA-06512: at line 1
SQL> desc END_REPORT
..............
PROCEDURE SEND_EMAIL
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
I_FISCAL_DATE                  VARCHAR2                IN
I_EMAIL_ID                     VARCHAR2                IN
I_ERROR_CODE                   VARCHAR2                IN
I_ERROR_MESSAGE                VARCHAR2                IN
..................
SQL> !

[oracle@ubetud17]:../oracle>oerr ora 24247
24247, 00000, "network access denied by access control list (ACL)"
// *Cause:    No access control list (ACL) has been assigned to the target host or the privilege necessary to access the target host has not
//                been granted to the user in the access control list.
// *Action:   Ensure that an access control list (ACL) has been assigned to the target host and the privilege necessary to access the
//            target host has been granted to the user.
[oracle@ubetud17]:../oracle>exit
SQL> show user
USER is "USER1"
SQL> conn /as sysdba
Connected.
SQL> begin
dbms_network_acl_admin.create_acl (
acl         => 'utl_mail.xml',
description => 'Allow mail to be send',
principal   => 'USER1',
is_grant    => TRUE,
privilege   => 'connect'
);
commit;
end;
/
  2    3    4    5    6    7    8    9   10   11

PL/SQL procedure successfully completed.

SQL> SQL> begin
dbms_network_acl_admin.add_privilege (
acl       => 'utl_mail.xml',
principal => 'USER1',
is_grant  => TRUE,
privilege => 'resolve'
);
commit;
end;
/
  2    3    4    5    6    7    8    9   10

PL/SQL procedure successfully completed.

SQL> SQL> begin
dbms_network_acl_admin.assign_acl(
acl  => 'utl_mail.xml',
host => '<smtp host name or IP address>'
);
commit;
end;
/
  2    3    4    5    6    7    8
PL/SQL procedure successfully completed.


SQL> conn user1
Connected.
SQL> show user
USER is "USER1"

SQL> exec END_REPORT.send_email('1111','ramshankar80@gmail.com','123','123');

PL/SQL procedure successfully completed.

SQL>

Cleaning Up the Access Control List
BEGIN
  DBMS_NETWORK_ACL_ADMIN.unassign_acl (
    acl         => 'utl_mail.xml',
    host        => 'smtp_host_name');
  COMMIT;
END;
/

Delete a privilege from Access Control List
BEGIN
  DBMS_NETWORK_ACL_ADMIN.delete_privilege (
    acl         => 'utl_mail.xml',
    principal   => 'utl_mail_p',
    is_grant    => TRUE,
    privilege   => 'connect');
  COMMIT;
END;
/
Drop an Access Control List
BEGIN
  DBMS_NETWORK_ACL_ADMIN.DROP_ACL (
    acl         => 'utl_mail.xml');
  COMMIT;
END;
/

Query to view the Access Control List details:

col principal for a40
col acl for a40
set pages 340
set lines 300
select acl , principal , privilege , is_grant from DBA_NETWORK_ACL_PRIVILEGES;
select acl , host , lower_port , upper_port from DBA_NETWORK_ACLS;
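
To confirm that the grant actually took effect for a given user, the ACL can also be probed with DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE (a small sketch; it returns 1 when granted, 0 when denied, and NULL when the user is not named in the ACL):

select dbms_network_acl_admin.check_privilege('utl_mail.xml', 'USER1', 'connect') as is_granted
from   dual;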

Send_mail Procedure

CREATE PROCEDURE send_mail_test
  (sender    IN VARCHAR2
  ,recipient IN VARCHAR2
  ,subject   IN VARCHAR2
  ,message   IN VARCHAR2)
IS
  mailhost  VARCHAR2(30) := 'smtp_host_server';
  mail_conn utl_smtp.connection;
  crlf      VARCHAR2(2) := CHR(13) || CHR(10);
  mesg      VARCHAR2(32000);
BEGIN
  mail_conn := utl_smtp.open_connection(mailhost, 25);
  mesg := 'Date: ' || TO_CHAR(SYSDATE, 'dd Mon yy hh24:mi:ss') || crlf ||
          'From: <' || sender || '>' || crlf ||
          'Subject: ' || subject || crlf ||
          'To: ' || recipient || crlf ||
          '' || crlf || message;
  utl_smtp.helo(mail_conn, mailhost);
  utl_smtp.mail(mail_conn, sender);
  utl_smtp.rcpt(mail_conn, recipient);
  utl_smtp.data(mail_conn, mesg);
  utl_smtp.quit(mail_conn);
EXCEPTION
  WHEN OTHERS THEN
    NULL;
END;
/

exec send_mail_test (' oracle@host_name.com','ramshankar80@gmail.com','test subject','test body');

Thursday, June 23, 2011

ORA-01031: insufficient privileges error even after granting select to the view.

Today I had a peculiar issue: a user was not able to access a view in another schema.
Let me explain the scenario.
Say there are three users:
user_A
user_B
user_C
The requirement is to grant select on a view in user_B, whose base table is in user_A, to user_C.
Even though I had issued 'grant select on user_a.table to user_c;' and 'grant select on user_b.view to user_c;',
when user_C issued
select * from user_b.view; it threw ORA-01031: insufficient privileges

SQL> select * from user_b.view;
select * from user_b.view
                  *
ERROR at line 1:
ORA-01031: insufficient privileges
Finally I got it to work. The access fails when the view is queried from a third schema; for that to work, the view owner must hold the privilege on the base table with the grant option:
grant select on user_a.table to user_b with grant option;
After this my view worked fine.
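
To see the whole chain in one place, here is a hypothetical three-schema sketch (object names are illustrative, not from the original scenario):

-- user_A owns the base table and passes the privilege on with GRANT OPTION
conn user_a
grant select on user_a.base_table to user_b with grant option;

-- user_B builds the view on top of user_A's table and grants it onwards
conn user_b
create or replace view user_b.my_view as select * from user_a.base_table;
grant select on user_b.my_view to user_c;

-- user_C can now query the view without ORA-01031
conn user_c
select count(*) from user_b.my_view;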

Tuesday, June 14, 2011

ORA-00600: internal error code, arguments: [2662], [a], [b], [c], [d], [], [], []


Recently I faced an ORA-600 error; the details are below.

ORA-00600: internal error code, arguments: [2662], [a], [b], [c], [d], [], [], []
ORA-00600: internal error code, arguments: [2662], [2728], [3808184538], [2728], [3808343262], [4207314], [], []
ORA-00600: internal error code, arguments: [2662], [2728], [3808384312], [2728], [3808417933], [8388617], [], []
ORA-00600: internal error code, arguments: [2663], [2728], [3808404316], [2728], [3808417933], [], [], []

This is due to a mismatch of SCNs (the current SCN and the one stored in the UGA).

If you open the database using RESETLOGS, the SCN in argument [d] will change, so open without RESETLOGS.

If recovery is needed, just run RECOVER DATABASE instead of RECOVER DATABASE USING BACKUP CONTROLFILE.

Bounce the database multiple times if the difference is small, or set the event below to increment the SCN so that a few bounces let the database open without any errors.

alter session set events '10015 trace name adjust_scn level 10';
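
While working through this, it can help to watch how far the SCN advances between bounces; the checkpoint SCN can be read from the controlfile while the database is mounted (a minimal sketch):

-- Current checkpoint SCN recorded in the controlfile (visible in MOUNT state)
select checkpoint_change# from v$database;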

Wednesday, May 25, 2011

Running Segment Advisor for table and tablespace

Here the table is SCOTT.EMP and the tablespace is USERS.

Steps to run Segment Advisor for SCOTT.EMP table

DECLARE
  l_object_id  NUMBER;
BEGIN
  -- Create a segment advisor task for the SCOTT.EMP table.
  DBMS_ADVISOR.create_task (
    advisor_name      => 'Segment Advisor',
    task_name         => 'EMP_SEGMENT_ADVISOR',
    task_desc         => 'Segment Advisor For EMP');


  DBMS_ADVISOR.create_object (
    task_name   => 'EMP_SEGMENT_ADVISOR',
    object_type => 'TABLE',
    attr1       => 'SCOTT',
    attr2       => 'EMP',
    attr3       => NULL,
    attr4       => 'null',
    attr5       => NULL,
    object_id   => l_object_id);


  DBMS_ADVISOR.set_task_parameter (
    task_name => 'EMP_SEGMENT_ADVISOR',
    parameter => 'RECOMMEND_ALL',
    value     => 'TRUE');


  DBMS_ADVISOR.execute_task(task_name => 'EMP_SEGMENT_ADVISOR');
END;
/

Steps to run Segment Advisor for USERS tablespace.

DECLARE
  l_object_id  NUMBER;
BEGIN
  -- Create a segment advisor task for the USERS tablespace.
  DBMS_ADVISOR.create_task (
    advisor_name      => 'Segment Advisor',
    task_name         => 'USERS_SEGMENT_ADVISOR',
    task_desc         => 'Segment Advisor For USERS');


  DBMS_ADVISOR.create_object (
    task_name   => 'USERS_SEGMENT_ADVISOR',
    object_type => 'TABLESPACE',
    attr1       => 'USERS',
    attr2       => NULL,
    attr3       => NULL,
    attr4       => 'null',
    attr5       => NULL,
    object_id   => l_object_id);


  DBMS_ADVISOR.set_task_parameter (
    task_name => 'USERS_SEGMENT_ADVISOR',
    parameter => 'RECOMMEND_ALL',
    value     => 'TRUE');


  DBMS_ADVISOR.execute_task(task_name => 'USERS_SEGMENT_ADVISOR');
END;
/


Execute below query for the findings.
-- Display the findings.
SET LINESIZE 250
COLUMN task_name FORMAT A20
COLUMN object_type FORMAT A20
COLUMN schema FORMAT A20
COLUMN object_name FORMAT A30
COLUMN message FORMAT A40
COLUMN more_info FORMAT A40

SELECT f.task_name,
       f.impact,
       o.type AS object_type,
       o.attr1 AS schema,
       o.attr2 AS object_name,
       f.message,
       f.more_info
FROM   dba_advisor_findings f
       JOIN dba_advisor_objects o ON f.object_id = o.object_id AND f.task_name = o.task_name
WHERE  f.task_name IN ('EMP_SEGMENT_ADVISOR', 'USERS_SEGMENT_ADVISOR')
ORDER BY f.task_name, f.impact DESC;
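
If the advisor reports reclaimable space, a typical follow-up is to shrink the segment online (a hypothetical sketch; read the actual recommendation text and test first):

alter table scott.emp enable row movement;
alter table scott.emp shrink space cascade;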

Wednesday, April 20, 2011

Cloning ORACLE HOME in Oracle 10g


Cloning Oracle 10g ORACLE_HOME
I had to apply a PSU patch for a few databases running in a common ORACLE_HOME, while other databases in the same home should not be patched.
I first thought of installing a new Oracle home for this, but found it time consuming; when I googled, I found that 10g offers an option to clone an existing home.
Below are the steps to clone the ORACLE_HOME:
1. Copy the ORACLE_HOME to the destination location (whether on the same or a remote server). If you need to rename the directory where the current installation is located, you can do that as well.
If necessary, take the central inventory as well.
2. Check the existence of the ORACLE_HOME in the Central Inventory
$ pwd
/local/oracle/product/oraInventory/ContentsXML
$grep "HOME NAME" inventory.xml
<HOME NAME="OUIHome3" LOC="/local/oracle/product/10.2.0.4" TYPE="O" IDX="1"/>
<HOME NAME="OUIHome7" LOC="/local/oracle/product/10.2.0.5.2" TYPE="O" IDX="2"/>
$
3. (Conditional) Detach the ORACLE_HOME using runInstaller from the copied ORACLE_HOME (Do this if needed)
$ORACLE_HOME/oui/bin/runInstaller -detachhome ORACLE_HOME=/local/oracle/product/10.2.0.5.2
Starting Oracle Universal Installer...
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /var/opt/oracle/oraInst.loc
The inventory is located at /local/oracle/product/oraInventory
'DetachHome' was successful.
$
4. Check the Inventory to see if the ORACLE_HOME was removed (verify the REMOVED=”T” option is added to the ‘HOME NAME’ tag)
$ cd /local/oracle/product/oraInventory/ContentsXML
$ grep "HOME NAME" inventory.xml
$
5. cd /local/oracle/product
cp -r 10.2.0.5.2 10.2.0.5.2a
6. Reregister the ORACLE_HOME
Make sure nothing is running out of this home as the current user, because the clone performs a relink as part of the procedure.
$ cd $ORACLE_HOME/clone/bin
$ perl clone.pl ORACLE_HOME="/local/oracle/product/10.2.0.5.2a" ORACLE_HOME_NAME="OraDb10g_home1"
./runInstaller -silent -clone -waitForCompletion "ORACLE_HOME=/local/oracle/product/10.2.0.5.2a" "ORACLE_HOME_NAME=OraDb10g_home1" -noConfig -nowait
Starting Oracle Universal Installer...
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-11-07_03-21-26PM.
Please wait ...Oracle Universal Installer, Version 10.2.0.3.0 Production
Copyright (C) 1999, 2006, Oracle. All rights reserved.
You can find a log of this install session at:
/local/oracle/product/oraInventory/logs/cloneActions2008-11-07_03-21-26PM.log
....................................................................................................
100% Done.
Installation in progress (Fri Nov 07 15:21:37 CET 2008)
.................................................................................
81% Done.
Install successful
Linking in progress (Fri Nov 07 15:21:49 CET 2008)
Link successful
Setup in progress (Fri Nov 07 15:22:56 CET 2008)
Setup successful
End of install phases.(Fri Nov 07 15:23:03 CET 2008)
WARNING:
The following configuration
scripts need to be executed as the "root" user.
#!/bin/sh
#Root script to run
/local/oracle/product/10.2.0.5.2a/root.sh
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts
The cloning of OraDb10g_home1 was successful.
Please check '/local/oracle/product/oraInventory/logs/cloneActions2008-11-07_03-21-26PM.log' for more details.
$

7. Run Root.sh
$ su -
Password:
#
# /local/oracle/product/10.2.0.5a/root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /local/oracle/product/10.2.0.5a
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin
...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
#
8. The clone is ready now.
NOTE : If you have an ORACLE_HOME with a number of patches (like I had with the 10.2.0.5.2 patch set) these will automatically be taken with the clone.

Steps to speed up your export/import process


Steps to speed up your export/import process:

There are different ways and tools provided by Oracle to move or migrate data from one database to another, or within a database. With recent versions such as 9i, 10g, and 11g you have RMAN and other tools to help carry out the migration, but there are still times when we need to use export/import to move the data.

Traditionally export/import has been the last option used for data migration, one of the main reasons being its performance. Below are various options that can be used to improve its performance, along with related techniques such as compressed exports.

Use direct export.

direct=y is one of the options that can be used to speed up the export process.
Eg. Exp system/ file=myexport.dmp full=y direct=y feedback=10000 log=myexport.log

Use the buffer parameter to improve the export/import process.

Eg. Exp system/ file=myexport.dmp full=y direct=y feedback=10000 buffer=65536 log=myexport.log

Steps to do compress export.

export DIR=/opt/oracle/exports/QCTRP1
export ORACLE_SID=QCTRP1
export ORACLE_HOME=/opt/oracle/product/9.2
export PATH=$ORACLE_HOME/bin/$PATH

# Create a pipe

rm -f $DIR/exp_pipe.dmp

mknod $DIR/exp_pipe.dmp p

# Start compress export

/usr/bin/compress < $DIR/exp_pipe.dmp >$DIR/exp_pipe.dmp.Z &

# Sleep for some time

#sleep 5

# Start the export

exp system@TDPROD file=$DIR/exp_pipe.dmp owner=mytest log=$DIR/exp_$ORACLE_SID.log

Steps to do export/import using single thread.

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk singletask
# make -f ins_rdbms.mk expst
# make -f ins_rdbms.mk impst
# make -f ins_rdbms.mk sqlldrst
# mv expst $ORACLE_HOME/bin/
# mv impst $ORACLE_HOME/bin/
# mv sqlldrst $ORACLE_HOME/bin/

$ORACLE_HOME/bin/expst system file=exp.dmp full=y log=exp.log

Please note that the above steps are not recommended or supported by Oracle. Test this in a test environment and do it at your own risk. I have tried and tested it, and it helped improve the performance of my export.

Script to do export/import directly without writing to disk.

cat expfinal.sh

#!/bin/bash -x

export DIR=/opt/oracle/exports/QCTRP1
export ORACLE_HOME=/opt/oracle/product/9.2
export PATH=$ORACLE_HOME/bin:$PATH

rm -f $DIR/exp_pipe

mknod $DIR/exp_pipe p

mknod $DIR/imp_pipe.dmp p

echo invoke import
imp mytest@TDPROD file=$DIR/exp_pipe fromuser=mytest touser=second log=imp_test.log imp_log.out &

exp mytest@TDPROD owner=MYTEST file=$DIR/exp_pipe statistics=none

Please note that this has been tried and tested by doing the export and import against the target database. The above script can be modified, and various other things can be done with it. Even though it has been tested, I advise trying it in a test environment first. The advantages of using these steps are as follows:

· The export/import process was approximately 70% faster than a conventional export/import.
· The process does not require space on the filesystem. This is of great use when the database is very big and filesystem space is a constraint, since the export dump would otherwise be many GB in size.

ORACLE DATA PUMP in Oracle 10g


Oracle Data Pump is a feature of Oracle Database 10g that enables very fast bulk data and metadata movement between Oracle databases. Oracle Data Pump provides new high-speed, parallel Export and Import utilities (expdp and impdp) as well as a Web-based Oracle Enterprise Manager interface.
EXPORT USING DATAPUMP.
USER SCOTT for this example..password TIGER.
By default dumps and logs created in $ORACLE_HOME/rdbms/log area.
To avoid using the disk on which the ORACLE_HOME resides...

1. Create 2 directories on SERVER...example
mkdir /u01/ORACLE/bozo/datapump and /u01/ORACLE/bozo/pumplogs
2. Create directories with same path in database.
( The physical directories on server must exist for the 2 commands below to work.)
sql> create directory dump_dir as '/u01/ORACLE/bozo/datapump';
...All dumps are sent to this area.
sql> create directory log_dir as '/u01/ORACLE/bozo/pumplogs';
...All logs are sent to this area.

sql> grant read,write on directory dump_dir to SCOTT;
---user exporting needs write priv and user importing needs read priv.
Grant succeeded.

sql> grant read,write on directory log_dir to SCOTT;
Grant succeeded.

ESTIMATE SIZE OF EXPORT
-------NO EXPORT OF DATA,ONLY ESTIMATES SIZE OF DUMP:

test10:/opt/oracle> expdp scott/tiger logfile=log_dir:full1.log estimate_only=y
Export: Release 10.2.0.1.0 - Production on Monday, 17 October, 2005 9:10:14
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": SCOTT/******** logfile=log_dir:full1.log estimate_only=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. estimated "SCOTT"."DEPT" 64 KB
. estimated "SCOTT"."DEPTBKUP" 64 KB
. estimated "SCOTT"."EMPBKUP" 64 KB
Total estimation using BLOCKS method: 384 KB
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 09:10:28
The above command does not export any data; it only estimates the size of the export dump.

SCHEMA EXPORT: EXPORTS DATA.

test10:/opt/oracle>expdp scott/tiger logfile=log_dir:full1i.log dumpfile=dump_dir:schema.dmp parallel=2 filesize=2G

EXPORT OVER DATABASE LINK
test11:/u02/ORACLE/test10/pumplogs>expdp scott/tiger@test10 full=y directory=dump_dir NETWORK_LINK=bozo dumpfile=dump_dir:full.dmp logfile=log_dir:full.log

FULL DATABASE EXPORT

expdp system/temp full=y dumpfile=dump_dir:full.dmp logfile=log_dir:full3.log parallel=2 filesize=2G

TO REMOVE JOB FROM ANOTHER SESSION..

sql> select * from dba_datapump_jobs;

expdp scott/tiger attach=SYS_EXPORT_FULL_01
Export: Release 10.1.0.2.0 - Production on Thursday, 30 December, 2004 17:29
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options
Job: SYS_EXPORT_FULL_01
Owner: BH
Operation: EXPORT
Creator Privs: FALSE
GUID: 45DD2D9C04D8457C874C4AF0ADC93E6E
Start Time: Thursday, 30 December, 2004 17:29
Mode: SCHEMA
Instance: orcl
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND bh/******** DIRECTORY=MYDIR DUMPFILE=e.dmp LOGFILE=e.log JOB_NAME=BHEXP
DATA_ACCESS_METHOD AUTOMATIC
ESTIMATE BLOCKS
INCLUDE_METADATA 1
LOG_FILE_DIRECTORY MYDIR
LOG_FILE_NAME e.log
TABLE_CONSISTENCY 0
USER_METADATA 1
State: IDLING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: D:\DP\e.dmp
bytes written: 4,096
Worker 1 Status:
State: UNDEFINED
Export> kill_job
Are you sure you wish to stop this job ([y]/n): y
D:\>
sql> select * from dba_datapump_jobs;
no rows selected
If you do not have the dump file or the dump file is corrupted then the only way left is to drop the master table of the job from the schema.
SQL> conn scott/tiger
Connected.
SQL> drop table SCOTT.SYS_EXPORT_FULL_01 PURGE;
SQL> select * from dba_datapump_jobs;
no rows selected
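
To watch the progress of a running Data Pump job from another session, v$session_longops can also be queried (a minimal sketch; the OPNAME filter assumes the default SYS_EXPORT job naming, adjust for your job name):

select opname, target_desc, sofar, totalwork,
       round(sofar/totalwork*100, 1) as pct_done
from   v$session_longops
where  opname like 'SYS_EXPORT%'
and    totalwork > 0;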

How to Perform Split IMPORT in Unix environment?


#Split_Import
############################# BEFORE YOU START ################################################
#Purpose: If the export dump files are in split state you can run this script to do the import#
#Please put the password for SYSTEM #
#check the directory where you have kept the export dump files #
#check the directory where import logfiles will be created #
###############################################################################################
#!/bin/ksh
#
# splitimport
#
# Tue Dec 14 10:04:07 NFT 1999 Olle Welinder
#
trap 'rm -f $PIPES' EXIT QUIT
#
set -x
export NLS_LANG=AMERICAN_AMERICA.UTF8
export ORACLE_SID=ORADB
PREFIX=ORADB
DUMPDIR=/export
FILELIST=$(ls ${DUMPDIR}/${PREFIX}*)
SPLITP="/tmp/split_pipe"
COMPRP="/tmp/compress_pipe"
PIPES="$SPLITP $COMPRP"
ORAUSER=system
ORAPASSWD=
#
# Create named pipes
for a in $PIPES;do
mknod $a p
done
#
cat $FILELIST > $SPLITP &
sleep 5
uncompress < ${SPLITP} > ${COMPRP} &
sleep 60
#
imp ${ORAUSER}/${ORAPASSWD} file=${COMPRP} fromuser=USER1,USER2 touser=USER1,USER2 commit=y analyze=n ignore=n rows=y log=/export/USER_imp.log

How to Perform Split EXPORT in UNIX environment?


#Split_Export
#!/bin/ksh
#
# splitexport
# Tue Dec 14 08:48:13 NFT 1999 Olle Welinder
#
set -x
trap 'rm -f $PIPES' EXIT QUIT
#
#
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
export ORACLE_SID=CFORD01
ORAUSER=system
ORAPASSWD=$1
SPLITP="/tmp/split_pipe"
COMPRP="/tmp/compress_pipe"
PIPES="$SPLITP $COMPRP"
INFILE=$1
FILESIZE=${2:-500}
PREFIX=${3:-"expEBCORDER01"}
DUMPDIR=${4:-"/export/54682"}
#if [ "${INFILE}" = "" ];then
# read INFILE?"Enter name of expfile==>"
#fi
#
# Create named pipes
for a in $PIPES;do
mknod $a p
done
#
#
nohup split -b ${FILESIZE}m ${SPLITP} "$DUMPDIR/${PREFIX}" &
nohup compress < ${COMPRP} > ${SPLITP} &
exp ${ORAUSER}/${ORAPASSWD} file=${COMPRP} compress=n statistics=none owner=ebcorder01 direct=true recordlength=65535 buffer=10000000 log=/export/54682/exp_ebcorder01.log

Wednesday, March 23, 2011

Formatted CRS_STAT output

Here is a useful way to format crs_stat output, courtesy of my Oracle tutor.


Many times we need to see the full content of the CRS resources; unfortunately the resource information is truncated to the column widths
used by crs_stat.

Create this script in $ORA_CRS_HOME/bin/crs_stat_new

#!/usr/bin/ksh
export `which crs_stat|xargs cat |grep ORA_CRS_HOME=`
RSC_KEY=$1
QSTAT=-u
AWK=/usr/bin/awk
$AWK \
'BEGIN {printf "%-45s %-10s %-18s\n", "HA Resource", "Target", "State";
printf "%-45s %-10s %-18s\n", "-----------", "------", "-----";}'
$ORA_CRS_HOME/bin/crs_stat $QSTAT | $AWK \
'BEGIN { FS="="; state = 0; }
$1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
state == 0 {next;}
$1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
$1~/STATE/ && state == 2 {appstate = $2; state=3;}
state == 3 {printf "%-45s %-10s %-18s\n", appname, apptarget, appstate; state=0;}'
Execute it on any of the Nodes.

chmod 755 $ORA_CRS_HOME/bin/crs_stat_new

See the change in output..


Saturday, January 15, 2011

Spooling the query result in Excel format


set term off
set echo off
set pages 0 head on emb on newp none
set markup html on spool on
spool <Name_of_Excel_File>.xls
select * from tab;   -- replace with your own select query
spool off
set markup html off spool off
exit

Getting the explain plan of the SQL currently running in a session


Run sid.sql and then run plans.sql <SID> to check the explain plan
--------------------------------------sid.sql ----------------------------------------
-- List currently running session IDs
SET LINES 80 LONG 65536
CLEAR column
COLUMN username FORMAT A10 WRAP
COLUMN prog_event FORMAT A35 WRAP
COLUMN run_time FORMAT A10 JUSTIFY RIGHT
COLUMN sid FORMAT A4 NEW_VALUE sid
COLUMN status FORMAT A10
ACCEPT search_string PROMPT "Search for: "
SELECT  to_char(s.sid) AS sid
,       s.username || chr(10) || s.osuser AS username
,       s.status || chr(10) || 'PID:' || p.spid AS status
,       lpad(
                to_char(
                        trunc(24*(sysdate-s.logon_time))
                ) ||
                to_char(
                        trunc(sysdate) + (sysdate-s.logon_time)
                ,       ':MI:SS'
                )
        , 10, ' ') AS run_time
,       s.program ||  chr(10) || s.event AS prog_event
FROM    v$session s
JOIN    v$process p ON (p.addr = s.paddr)
WHERE   s.username <> 'DBSNMP'
AND     audsid != sys_context('USERENV','SESSIONID')
AND     upper(
                s.osuser || '|' ||
                s.program || '|' ||
                s.event || '|' ||
                s.sid || '|' ||
                s.username || '|' ||
                p.spid
        ) LIKE upper('%&search_string.%')
ORDER BY
        sid
/
Only the SELECT statement above is necessary if running from a GUI tool. The other commands and the chr(10) concatenation simply format the output for SQL*Plus.
<EOF>--------------------------------------sid.sql----------------------------------------
--------------------------------------plans.sql----------------------------------------
-- List the EXPLAIN PLAN for a currently running session
SELECT  p.plan_table_output
FROM    v$session s
,       table(dbms_xplan.display_cursor(s.sql_id, s.sql_child_number)) p
where   s.sid = &1
/
<EOF>--------------------------------------plans.sql----------------------------------------

Friday, January 14, 2011

11g RAC Administration and Maintenance Tasks and Utilities

Task List:



Checking CRS Status:

The two commands below are generally used to check the status of CRS. The first lists the status of CRS
on the local node, whereas the second shows the CRS status across all nodes in the cluster.

crsctl check crs <<-- for the local node
crsctl check cluster <<-- for remote nodes in the cluster
[root@node1-pub ~]# crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
[root@node1-pub ~]#
Checking Viability of CSS across nodes:

crsctl check cluster
For this command to run, CSS needs to be running on the local node. An "ONLINE" status for a remote node means CSS is running on that node;
when CSS is down on a remote node, "OFFLINE" is displayed for it.


[root@node1-pub ~]# crsctl check cluster
node1-pub    ONLINE
node2-pub    ONLINE
Viewing Cluster name:

I use the command below to get the name of the cluster. You can also dump the OCR and read the name from the dump file.

ocrdump -stdout -keyname SYSTEM | grep -A 1 clustername | grep ORATEXT | awk '{print $3}'
[root@node1-pub ~]# ocrdump -stdout -keyname SYSTEM | grep -A 1 clustername | grep ORATEXT | awk '{print $3}'
test-crs
[root@node1-pub ~]#
OR
ocrconfig -export /tmp/ocr_exp.dat -s online
for i in `strings /tmp/ocr_exp.dat | grep -A 1 clustername` ; do if [ $i != 'SYSTEM.css.clustername' ]; then echo $i; fi; done


[root@node1-pub ~]# ocrconfig -export /tmp/ocr_exp.dat -s online
[root@node1-pub ~]# for i in `strings /tmp/ocr_exp.dat | grep -A 1 clustername` ; do if [ $i != 'SYSTEM.css.clustername' ]; then echo $i; fi; done
test-crs
[root@node1-pub ~]#
OR
Oracle creates a directory with the same name as the cluster under $ORA_CRS_HOME/cdata; you can get the cluster name from this directory as well.

[root@node1-pub ~]# ls /u01/app/crs/cdata
localhost  test-crs
Viewing No. Of Nodes configured in Cluster:

The command below can be used to find out how many nodes are registered in the cluster.
It also displays each node's public name, private name, and virtual name along with its node number.

olsnodes -n -p -i
[root@node1-pub ~]# olsnodes -n -p -i
node1-pub       1       node1-prv       node1-vip
node2-pub       2       node2-prv       node2-vip
Viewing Votedisk Information:

The below command is used to view the no. of Votedisks configured in the Cluster.
crsctl query css votedisk

[root@node1-pub ~]# crsctl query css votedisk
 0.     0    /u02/ocfs2/vote/VDFile_0
 1.     0    /u02/ocfs2/vote/VDFile_1
 2.     0    /u02/ocfs2/vote/VDFile_2
Located 3 voting disk(s).
[root@node1-pub ~]#
[root@node1-pub ~]# crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
[root@node1-pub ~]#
Viewing OCR Disk Information:

The command below is used to view the OCR files configured in the cluster. It also displays the OCR version
and space usage. You can have at most two OCR files (the OCR and its mirror).

ocrcheck
[root@node1-pub ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3848
         Available space (kbytes) :     258272
         ID                       :  744414276
         Device/File Name         : /u02/ocfs2/ocr/OCRfile_0
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/ocfs2/ocr/OCRfile_1
                                    Device/File integrity check succeeded
 
         Cluster registry integrity check succeeded
 
Various Timeout Settings in Cluster:

Disktimeout:
    Disk latency in seconds from node to votedisk. Default value is 200. (Disk I/O)
Misscount:
    Network latency in seconds from node to node over the interconnect. Default value is 60 seconds on Linux and 30 seconds on other Unix platforms. (Network I/O)
    Misscount < Disktimeout

NOTE: Do not change these without contacting Oracle Support. Doing so may cause logical corruption of the data.

IF
  (Disk IO Time > Disktimeout) OR (Network IO time > Misscount)
THEN
   REBOOT NODE
ELSE
   DO NOT REBOOT
END IF;

crsctl get css disktimeout
crsctl get css misscount
crsctl get css  reboottime

[
root@node1-pub ~]# crsctl get css disktimeout
200

[root@node1-pub ~]# crsctl get css misscount
Configuration parameter misscount is not defined. <<<<< This message indicates that misscount has not been set manually and is at its
default value; on Linux the default is 60 seconds. If you want to change it, you can do so as below. (Not recommended)

[root@node1-pub ~]# crsctl set css misscount 100
Configuration parameter misscount is now set to 100.
[root@node1-pub ~]# crsctl get css misscount
100
The below command sets the value of misscount back to its Default values:
 crsctl unset css misscount
[root@node1-pub ~]# crsctl unset css misscount
[root@node1-pub ~]# crsctl get css  reboottime
3
Add/Remove OCR file in Cluster:

Removing OCR File
(1) Get the Existing OCR file information by running ocrcheck utility.
[root@node1-pub ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3852
         Available space (kbytes) :     258268
         ID                       :  744414276
         Device/File Name         : /u02/ocfs2/ocr/OCRfile_0 <-- OCR
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/ocfs2/ocr/OCRfile_1 <-- OCR Mirror
                                    Device/File integrity check succeeded
 
         Cluster registry integrity check succeeded
 
(2) The first command removes the OCR mirror (/u02/ocfs2/ocr/OCRfile_1). If you want to remove the OCR file itself
(/u02/ocfs2/ocr/OCRfile_0), run the second command.

ocrconfig -replace ocrmirror
ocrconfig -replace ocr
[root@node1-pub ~]# ocrconfig -replace ocrmirror
[root@node1-pub ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3852
         Available space (kbytes) :     258268
         ID                       :  744414276
         Device/File Name         : /u02/ocfs2/ocr/OCRfile_0 <<-- OCR File
                                    Device/File integrity check succeeded

                                    Device/File not configured  <-- OCR mirror no longer exists

         Cluster registry integrity check succeeded
Adding OCR

You need to add an OCR or OCR mirror file when you want to move the existing OCR to a different device.
The command below adds the OCR mirror file when the OCR file already exists.

(1) Get the Current status of OCR:

 
[root@node1-pub ~]# ocrconfig -replace ocrmirror
[root@node1-pub ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3852
         Available space (kbytes) :     258268
         ID                       :  744414276
         Device/File Name         : /u02/ocfs2/ocr/OCRfile_0 <<-- OCR File
                                    Device/File integrity check succeeded

                                    Device/File not configured  <-- OCR Mirror does not exist

         Cluster registry integrity check succeeded

As you can see, I have only one OCR file and no OCR mirror.
So I can add the second OCR (the mirror) with the command below.

ocrconfig -replace ocrmirror <File name>
[root@node1-pub ~]# ocrconfig -replace ocrmirror /u02/ocfs2/ocr/OCRfile_1
[root@node1-pub ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3852
         Available space (kbytes) :     258268
         ID                       :  744414276
         Device/File Name         : /u02/ocfs2/ocr/OCRfile_0
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/ocfs2/ocr/OCRfile_1
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded
You can have at most two OCR devices (the OCR itself and a single mirror) in a cluster; adding an extra mirror gives the error message below.
 
[root@node1-pub ~]# ocrconfig -replace ocrmirror /u02/ocfs2/ocr/OCRfile_2
PROT-21: Invalid parameter
[root@node1-pub ~]#
Add/Remove Votedisk file in Cluster:

Adding Votedisk:

Get the existing votedisks associated with the cluster. To be safe, bring the CRS stack down on all nodes
except the one from which you are going to add the votedisk.

(1) Stop CRS on all the nodes in cluster but one.


[root@node2-pub ~]# crsctl stop crs

(2) Get the list of Existing Vote Disks

crsctl query css votedisk

[root@node1-pub ~]# crsctl query css votedisk
 0.     0    /u02/ocfs2/vote/VDFile_0
 1.     0    /u02/ocfs2/vote/VDFile_1
 2.     0    /u02/ocfs2/vote/VDFile_2
Located 3 voting disk(s).
(3) Backup the VoteDisk file

Backup the existing votedisks as below as oracle:
dd if=/u02/ocfs2/vote/VDFile_0 of=$ORACLE_BASE/bkp/vd/VDFile_0
[root@node1-pub ~]# su - oracle
[oracle@node1-pub ~]$ dd if=/u02/ocfs2/vote/VDFile_0 of=$ORACLE_BASE/bkp/vd/VDFile_0
41024+0 records in
41024+0 records out
[oracle@node1-pub ~]$
(4) Add an Extra Votedisk into the Cluster:

    If the votedisk is on OCFS, touch the file as the oracle user. On raw devices, initialize the raw device using the "dd" command (a sketch follows).
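
For the raw-device case, a minimal sketch of the initialization (the raw device name /dev/raw/raw5 and the size shown are hypothetical; match them to your own votedisk devices):

dd if=/dev/zero of=/dev/raw/raw5 bs=1M count=20   <<-- as root, zeroes the hypothetical raw device before it is added as a votedisk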

touch /u02/ocfs2/vote/VDFile_3 <<-- as oracle
crsctl add css votedisk /u02/ocfs2/vote/VDFile_3 <<-- as oracle
crsctl query css votedisk
[root@node1-pub ~]# su - oracle
[oracle@node1-pub ~]$ touch /u02/ocfs2/vote/VDFile_3
[oracle@node1-pub ~]$ crsctl add css votedisk /u02/ocfs2/vote/VDFile_3
Now formatting voting disk: /u02/ocfs2/vote/VDFile_3.
Successful addition of voting disk /u02/ocfs2/vote/VDFile_3.
(5) Confirm that the file has been added successfully:

[root@node1-pub ~]# ls -l /u02/ocfs2/vote/VDFile_3
-rw-r-----  1 oracle oinstall 21004288 Oct  6 16:31 /u02/ocfs2/vote/VDFile_3
[root@node1-pub ~]# crsctl query css votedisks
Unknown parameter: votedisks
[root@node1-pub ~]# crsctl query css votedisk
 0.     0    /u02/ocfs2/vote/VDFile_0
 1.     0    /u02/ocfs2/vote/VDFile_1
 2.     0    /u02/ocfs2/vote/VDFile_2
 3.     0    /u02/ocfs2/vote/VDFile_3
Located 4 voting disk(s).
[root@node1-pub ~]#
Removing Votedisk:

Removing a votedisk from the cluster is very simple. The below command removes the given votedisk from the cluster configuration.

crsctl delete css votedisk /u02/ocfs2/vote/VDFile_3

[root@node1-pub ~]# crsctl delete css votedisk /u02/ocfs2/vote/VDFile_3
Successful deletion of voting disk /u02/ocfs2/vote/VDFile_3.
[root@node1-pub ~]#

[root@node1-pub ~]# crsctl query css votedisk
 0.     0    /u02/ocfs2/vote/VDFile_0
 1.     0    /u02/ocfs2/vote/VDFile_1
 2.     0    /u02/ocfs2/vote/VDFile_2
Located 3 voting disk(s).
[root@node1-pub ~]#
Backing Up OCR

Oracle automatically takes a physical backup of the OCR every 4 hours under the default backup directory $ORA_CRS_HOME/cdata/<CLUSTER_NAME>
and then rolls those backups forward into daily and weekly copies. You can get the backup information by executing the below command.

ocrconfig -showbackup

[root@node1-pub ~]# ocrconfig -showbackup
node2-pub     2007/09/03 17:46:47     /u01/app/crs/cdata/test-crs/backup00.ocr
 
node2-pub     2007/09/03 13:46:45     /u01/app/crs/cdata/test-crs/backup01.ocr
 
node2-pub     2007/09/03 09:46:44     /u01/app/crs/cdata/test-crs/backup02.ocr
 
node2-pub     2007/09/03 01:46:39     /u01/app/crs/cdata/test-crs/day.ocr
 
node2-pub     2007/09/03 01:46:39     /u01/app/crs/cdata/test-crs/week.ocr
[root@node1-pub ~]#

Manually backing up the OCR
ocrconfig -manualbackup <<--Physical Backup of OCR
The above command backs up the OCR under the default backup directory. You can export the contents of the OCR using the below command (logical backup).
ocrconfig -export /tmp/ocr_exp.dat -s online <<-- Logical Backup of OCR
Restoring OCR

The below command is used to restore the OCR from a physical backup. Shut down CRS on all nodes before restoring.
ocrconfig -restore <file name>

Locate the available backups

[root@node1-pub ~]# ocrconfig -showbackup
 
node2-pub     2007/09/03 17:46:47     /u01/app/crs/cdata/test-crs/backup00.ocr
 
node2-pub     2007/09/03 13:46:45     /u01/app/crs/cdata/test-crs/backup01.ocr
 
node2-pub     2007/09/03 09:46:44     /u01/app/crs/cdata/test-crs/backup02.ocr
 
node2-pub     2007/09/03 01:46:39     /u01/app/crs/cdata/test-crs/day.ocr
 
node2-pub     2007/09/03 01:46:39     /u01/app/crs/cdata/test-crs/week.ocr
 
node1-pub     2007/10/07 13:50:41     /u01/app/crs/cdata/test-crs/backup_20071007_135041.ocr

Perform Restore from previous Backup
[root@node2-pub ~]# ocrconfig -restore /u01/app/crs/cdata/test-crs/week.ocr

The above command restores the OCR from the week-old backup.
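
Once the restore completes, it is a good idea to re-verify the registry before bringing the rest of the stack up; a minimal verification sketch using the standard clusterware utilities:

[root@node2-pub ~]# ocrcheck
[root@node2-pub ~]# cluvfy comp ocr -n all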
If you have a logical backup of the OCR (taken using the export option), then you can import it with the below command.

ocrconfig -import /tmp/ocr_exp.dat
Restoring Votedisks

(1) Shut down CRS on all the nodes in the cluster.
(2) Locate the current location of the votedisks.
(3) Restore each of the votedisks using the "dd" command from a previous good backup of the votedisk taken with the same "dd" command.
(4) Start CRS on all the nodes.
crsctl stop crs
crsctl query css votedisk
dd if=<backup of Votedisk> of=<Votedisk file> <<-- do this for all the votedisks
crsctl start crs
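
For example, restoring VDFile_0 from the backup taken with "dd" earlier (same paths as in the backup step above; run as oracle with CRS down on all nodes):

dd if=$ORACLE_BASE/bkp/vd/VDFile_0 of=/u02/ocfs2/vote/VDFile_0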
Changing Public and Virtual IP Address:

Current Config                                               Changed to
Node 1:

Public IP:       216.160.37.154                              192.168.10.11
VIP:             216.160.37.153                              192.168.10.111
subnet:          216.160.37.159                              192.168.10.0
Netmask:         255.255.255.248                             255.255.255.0
Interface used:  eth0                                        eth0
Hostname:        node1-pub.hingu.net                         node1-pub.hingu.net

Node 2:

Public IP:       216.160.37.156                              192.168.10.22
VIP:             216.160.37.157                              192.168.10.222
subnet:          216.160.37.159                              192.168.10.0
Netmask:         255.255.255.248                             255.255.255.0
Interface used:  eth0                                        eth0
Hostname:        node2-pub.hingu.net                         node2-pub.hingu.net
=======================================================================
(A)
Take the services, database, ASM instances and nodeapps down on both nodes in the cluster. Also disable the nodeapps, ASM and database instances to prevent them from restarting in case a node gets rebooted during this process.
srvctl stop service -d test
srvctl stop database -d test
srvctl stop asm -n node1-pub
srvctl stop asm -n node2-pub
srvctl stop nodeapps -n node1-pub
srvctl stop nodeapps -n node2-pub
srvctl disable instance -d test -i test1,test2
srvctl disable asm -n node1-pub
srvctl disable asm -n node2-pub
srvctl disable nodeapps -n node1-pub
srvctl disable nodeapps -n node2-pub

(B)
Modify the /etc/hosts and/or DNS, and ifcfg-eth0 (on the local node), with the new IP values on all the nodes (a sketch follows).
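
A minimal sketch of what the node1-pub entries could look like after the change (IP values are taken from the table above; the VIP hostname node1-vip.hingu.net is hypothetical, and node2-pub gets the equivalent entries):

# /etc/hosts
192.168.10.11    node1-pub.hingu.net   node1-pub
192.168.10.111   node1-vip.hingu.net   node1-vip

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.10.11
NETMASK=255.255.255.0
ONBOOT=yes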
(C)
Restart the specific network interface in order to use the new IP.
ifconfig eth0 down
ifconfig eth0 up

Or, you can restart the network service (see the sketch below).
CAUTION: on NAS, restarting the entire network may cause the node to be rebooted.
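
A sketch of restarting the whole network service on Enterprise Linux (run as root; keep the NAS caution above in mind):

service network restart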
(D)
Update the OCR with the New Public IP.
In the case of the public IP, you have to delete the interface first and then add it back with the new subnet.
As the oracle user, issue the below commands:
oifcfg delif -global eth0
oifcfg setif -global eth0/192.168.10.0:public
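
To double-check what is now registered, you can list the interfaces back from the OCR (a quick sanity check; it prints the interface, subnet and role pairs):

oifcfg getif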

(E)
Update the OCR with the New Virtual IP.
The virtual IP is part of the nodeapps, so you can modify the nodeapps to update the virtual IP information.
As the privileged user (root), issue the below commands:
srvctl modify nodeapps -n node1-pub -A 192.168.10.111/255.255.255.0/eth0 <-- for Node 1
srvctl modify nodeapps -n node2-pub -A 192.168.10.222/255.255.255.0/eth0 <-- for Node 2
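
You can then cross-check the new VIP registration for each node; a minimal sketch, assuming the same srvctl release used elsewhere in this note (-a prints the VIP configuration):

srvctl config nodeapps -n node1-pub -a
srvctl config nodeapps -n node2-pub -a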

(F)
Enable the nodeapps, ASM and database instances on all the nodes.
srvctl enable instance -d test -i test1,test2
srvctl enable asm -n node1-pub
srvctl enable asm -n node2-pub
srvctl enable nodeapps -n node1-pub
srvctl enable nodeapps -n node2-pub

(G)
Update the listener.ora file on each node with the correct IP addresses in case it uses IP addresses instead of hostnames (see the sketch below).
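
A minimal sketch of the kind of entry to adjust, assuming a listener.ora that hard-codes the address (the listener name LISTENER_NODE1 is hypothetical; only the HOST value needs to change to the new VIP):

LISTENER_NODE1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.111)(PORT = 1521))
    )
  )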
(H)
Restart the nodeapps, ASM and database instances.
srvctl start nodeapps -n node1-pub
srvctl start nodeapps -n node2-pub
srvctl start asm -n node1-pub
srvctl start asm -n node2-pub
srvctl start database -d test
=======================================================================