Tuesday, November 16, 2010

MASTER NOTE ON ORACLE DATAPUMP

Master Note for Data Pump (Doc ID 1264715.1)

Concepts/Definitions

Oracle Data Pump technology enables very high-speed movement of data and metadata from one database to another. Oracle Data Pump is made up of three distinct parts:

• The command-line clients, expdp and impdp
• The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API)
• The DBMS_METADATA PL/SQL package (also known as the Metadata API)
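As a minimal sketch of the Data Pump API, the following anonymous PL/SQL block runs a schema-mode export through DBMS_DATAPUMP. The schema name SCOTT, the file names, and the DATA_PUMP_DIR directory object are illustrative assumptions.

```sql
-- Minimal schema-mode export via the Data Pump API (illustrative sketch).
-- Assumes the SCOTT schema and the DATA_PUMP_DIR directory object exist.
DECLARE
  h          NUMBER;
  job_state  VARCHAR2(30);
BEGIN
  -- Open a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');

  -- Name the dump file and log file in an existing directory object
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott.dmp',
                         directory => 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott.log',
                         directory => 'DATA_PUMP_DIR',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Restrict the job to a single schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                value  => '= ''SCOTT''');

  -- Start the job and wait for completion
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
END;
/
```

The expdp client is essentially a front end that makes the same calls for you.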

Data Pump is a server-based utility, whereas the traditional Export / Import (exp / imp) are client-based utilities. The two are not compatible: dump files produced by Data Pump cannot be read by the original Import, and vice versa.

Data Pump can also use the NETWORK_LINK parameter to move data and metadata directly from a remote database over a database link, without writing an intermediate dump file.
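A network-mode import over a database link might look like the following sketch; the link name REMOTE_DB, the schema, and the DATA_PUMP_DIR directory object are illustrative assumptions (a directory object is still needed for the log file):

```shell
# Illustrative network-mode import: rows and metadata are pulled directly
# over the database link REMOTE_DB, so no dump file is written or read.
# Link name, schema and directory object are assumptions.
impdp system/password \
  NETWORK_LINK=REMOTE_DB \
  SCHEMAS=scott \
  DIRECTORY=DATA_PUMP_DIR \
  LOGFILE=net_import.log
```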

Diagnosing

This Master Note is not intended to be a complete diagnostic guide for Data Pump. However, a few key articles are referenced in the sections below:


Setup / Configuration / Syntax

Users need certain privileges to run Data Pump. See Note 351598.1 for details on minimum requirements for this utility. All messages regarding work in progress, completed, and errors encountered can be written to a log file using the Data Pump LOGFILE parameter. This log file should be your first place to check for problems with the Data Pump.

You can also include the undocumented parameter METRICS=y to have the log file record the number of objects processed in each category and the time it took to process them.
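A typical export command naming an explicit log file, with METRICS enabled, might look like this sketch (schema, directory object, and file names are illustrative):

```shell
# Schema export with an explicit log file; METRICS=y adds per-object-type
# counts and timings to the log (undocumented parameter; names illustrative).
expdp system/password \
  SCHEMAS=scott \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=scott.dmp \
  LOGFILE=scott_exp.log \
  METRICS=y
```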

Error messages reported in the log file do not automatically indicate failures in a Data Pump job; some are warnings or purely informational, for example messages about objects that already exist and are being skipped. In that case, you may need to adjust the impdp command parameters to recreate those objects or to append data to the existing ones.
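For instance, to append rows into tables that already exist instead of skipping them, the import could be rerun with TABLE_EXISTS_ACTION (a sketch; schema, directory object, and file names are illustrative):

```shell
# Re-run the import, appending rows to pre-existing tables rather than
# skipping them, so "table exists" messages become informational.
impdp system/password \
  SCHEMAS=scott \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=scott.dmp \
  LOGFILE=scott_imp.log \
  TABLE_EXISTS_ACTION=APPEND
```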

Other resources:

Note 266875.1 Export/Import DataPump Parameter DIRECTORY - How to Specify a Directory
Note 430221.1 How To Reload Datapump Utility EXPDP/IMPDP

Search the Oracle Knowledge Base using keywords "how to datapump" for a variety of How To articles to help with parameter syntax for commonly required Data Pump jobs.

In some cases, the performance of the EXPDP or IMPDP utilities may be slower than the traditional EXP / IMP utilities.

Refer to Note 286496.1 for pointers on tracing long running Data Pump jobs to investigate where bottlenecks or performance issues are occurring. The Data Pump can parallelize certain operations. However, this does not always mean the job will be faster than the traditional EXP / IMP tools. Refer to Note 365459.1 for more details on performance improvements and restrictions on parallelism with Data Pump.
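A parallel export typically pairs the PARALLEL parameter with a %U substitution variable so each worker process can write its own dump file piece (degree and names here are illustrative assumptions):

```shell
# PARALLEL=4 with a %U wildcard so up to four workers can each write
# their own dump file piece (scott_01.dmp, scott_02.dmp, ...).
expdp system/password \
  SCHEMAS=scott \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=scott_%U.dmp \
  LOGFILE=scott_par.log \
  PARALLEL=4
```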


Characterset and Compatibility Issues

Unlike the original exp / imp tools, Data Pump always exports in the character set of the source database, with conversion to the target character set happening on import. There can still be character loss in the conversion process.

Note 227332.1 NLS considerations in Import/Export - Frequently Asked Questions
Note 457526.1 Possible data corruption with Impdp if importing NLS_CHARACTERSET is different from exporting NLS_CHARACTERSET
Note 436240.1 ORA-2375 ORA-12899 ORA-2372 Errors While Datapump Import Done From Single Byte Characterset to Multi Byte Characterset Database
Note 553337.1 Export/Import DataPump Parameter VERSION - Compatibility of Data Pump Between Different Oracle Versions [Video]
Note 864582.1 Examples using Data pump VERSION parameter and its relationship to database compatible parameter
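As a sketch of the VERSION parameter, the following export from a newer database produces a dump file readable by an older release's impdp (the release number and names are illustrative):

```shell
# Export in a downward-compatible format: VERSION=10.2 limits the
# dump file contents to features of that release so a 10.2 impdp
# can read it (names and version are illustrative).
expdp system/password \
  SCHEMAS=scott \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=scott_v102.dmp \
  VERSION=10.2
```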


Performance Issues

Data Pump may perform slower than expected. When investigating performance issues like this, first eliminate factors outside the Data Pump utility. Is overall performance on the database slow? If so, that could be the root cause of the slow Data Pump job. Is overall performance on the server currently impacted? Again, the slow Data Pump job could simply be a "victim" of CPU or memory pressure on the server.

Determine whether there are bottlenecks in CPU utilization and/or memory usage at the operating system level. Pressure on these resources can lead to paging, which in turn can be the root cause of the Data Pump performance issues.

Investigating this area is different from platform to platform. Check with your OS Administrator for assistance on analysis at the server level.

At the database level, investigate performance using tools like AWR or ADDM. Refer to Note 748642.1 for pointers on generating and using AWR reports. Note 842884.1 provides high-level tips on interpreting the data in an AWR report; focus on the top wait events and the I/O information. Note 22908.1 discusses latch contention.
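As a quick sketch, AWR snapshots can be taken immediately before and after the Data Pump job, then a report generated between the two. This assumes a Diagnostics Pack license; the script path follows the standard ORACLE_HOME layout.

```sql
-- Take AWR snapshots around the Data Pump job, then report between them.
-- Requires the Diagnostics Pack license.
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- ... run the expdp/impdp job here ...
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Generate the report interactively (prompts for the two snapshot IDs):
@?/rdbms/admin/awrrpt.sql
```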

If the performance issue is narrowed down to the Data Pump utility itself, refer to the articles below:

Note 552424.1 Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data ?
Note 457526.1 Possible data corruption with Impdp if importing NLS_CHARACTERSET is different from exporting NLS_CHARACTERSET
Note 227332.1 NLS considerations in Import/Export - Frequently Asked Questions
Note 436240.1 ORA-2375 ORA-12899 ORA-2372 Errors While Datapump Import Done From Single Byte Characterset to Multi Byte Characterset Database
Note 553337.1 Export/Import DataPump Parameter VERSION - Compatibility of Data Pump Between Different Oracle Versions [Video]
Note 864582.1 Examples using Data pump VERSION parameter and its relationship to database compatible parameter

Common Issues

Bug 7362589 indicates that on 11g, export performance can be very slow when exporting a small proportion of objects relative to the number in the database.
Bug 8845859 indicates 11g performance issues on partition exports. See Note 1050907.1.

Note 812864.1 discusses problems prior to 10.2.0.4 with REMAP_SCHEMA

Data Pump is slower when using TABLE_EXISTS_ACTION=TRUNCATE on 10g when Index Organized Table (IOT) objects exist. This is expected behavior; see Note 780784.1 for more details.

Prior to 10.2.0.4, Data Pump export loses global indexes for local domain indexes. Bug 5152232 is documented in Note 781759.1.

There is some confusion about parallel operations with indexes and Data Pump. Note 402511.1 documents how this works more completely.

Exporting Flashback archive tables will throw ORA-1426 errors. See Note 742739.1.

Sunday, October 31, 2010

Understanding the role of SHARED_POOL_RESERVED_SIZE

(SHARED POOL FRAGMENTATION (ORA-04031), applicable to Oracle 9i)

An ORA-04031 error is returned when a single contiguous chunk of free space in the shared pool, large enough for the requested allocation, is not available.

The message you will get is the following:

04031, 00000, "unable to allocate %s bytes of shared memory"

This error usually happens due to inadequate sizing of SHARED_POOL_SIZE / SHARED_POOL_RESERVED_SIZE, or due to heavy fragmentation of the shared pool.

SQL> select * from v$sgastat where name like '%free%';

POOL         NAME                            BYTES
-----------  -----------------------------  ----------
shared pool  free memory                      14668032
java pool    free memory                      67108864


Even though we have 14MB of free memory available in the shared pool, we still get ORA-04031. This is because we are trying to make a large allocation and cannot find a large enough contiguous chunk: the shared pool is fragmented.

In the following case, we are trying to allocate 3.59MB (LAST_FAILURE_SIZE) while only .92MB of free space remains in the shared pool reserved area. This information can be obtained from the V$SHARED_POOL_RESERVED view using the following query:

We need to look at request_failures, last_failure_size & free_space.

SELECT FREE_SPACE/1024/1024,USED_SPACE/1024/1024,LAST_FAILURE_SIZE/1024/1024,REQUESTS,REQUEST_FAILURES
FROM V$SHARED_POOL_RESERVED;

FREE_SPACE  USED_SPACE  LAST_FAILURE_SIZE  REQUESTS   REQUEST_FAILURES
----------  ----------  -----------------  ---------  ----------------
.92MB       3.88MB      3.59MB             141111624  453882


Our goal is to bring the value of REQUEST_FAILURES down to 0.

Find out the current values of the SHARED_POOL_SIZE and SHARED_POOL_RESERVED_SIZE parameters in the init.ora file.

SQL> show parameter shared_pool%
NAME                       TYPE         VALUE
-------------------------  -----------  ----------------
shared_pool_reserved_size  big integer  4194304   (4MB)
shared_pool_size           big integer  83886080  (80MB)

In order to achieve this, Oracle recommends setting SHARED_POOL_RESERVED_SIZE to at least 10% of SHARED_POOL_SIZE. In this case it should be set to 8MB instead of 4MB.

If we set shared_pool_reserved_size to 8MB in the init.ora file, 8MB of the 80MB shared pool will be reserved for large allocations.
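As a sketch, with an spfile the change can be made as follows; since the parameter is static, it takes effect at the next instance restart (the 8MB value comes from the example above):

```sql
-- Raise the reserved area to 10% of the shared pool (8MB of 80MB here).
-- SHARED_POOL_RESERVED_SIZE is static, so the change takes effect at
-- the next instance restart.
ALTER SYSTEM SET shared_pool_reserved_size = 8M SCOPE = SPFILE;
-- (With a plain init.ora, instead edit the file:
--    shared_pool_reserved_size = 8388608
--  and bounce the instance.)
```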

Conclusion:

Oracle checks the shared pool reserved list for space only when there is no contiguous chunk of sufficient size on the shared pool free list. This means that until the shared pool becomes relatively full (or fragmented), Oracle will not use the reserved list. So if you size SHARED_POOL_RESERVED_SIZE properly you may avoid ORA-04031 errors, but if you allocate too much space to it, you may end up wasting memory.