Thursday, October 29, 2015

Datapump scenarios

A few scenarios on datapump

Scenario: We are trying to export a database / schema / table and the data pump directory doesn't have enough space to hold the dump files. If someone asks you for the size of the dump file before the export runs, how do we find that out? We use the following syntax

Syntax: expdp schema/password estimate_only=y directory=directoryname schemas=schemaname 

In the above syntax, I want to know the size of the dump file that would be created when I export the whole schema; the same check works at the database, schema, or table level
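A minimal sketch, assuming a schema named world and the default DATA_PUMP_DIR directory object (no dump file is written in this mode):

expdp system/password directory=DATA_PUMP_DIR schemas=world estimate_only=y estimate=blocks

By default the estimate is derived from block counts (estimate=blocks); estimate=statistics derives it from optimizer statistics instead. The total estimation is printed in the job log.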

Scenario: We are exporting a database and are advised to perform a consistent export to avoid data consistency errors (ORA-01555). We use flashback_scn to pin the export to a single point in time

Syntax: expdp schema/password directory=directoryname dumpfile=dumpfile.dmp logfile=logfile.log schemas=schemaname flashback_scn=scnnumberfromdatabase 
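To get the SCN to pass in, query the database just before starting the export, for example:

select current_scn from v$database;

(flashback_time is the alternative if you would rather pin the export to a timestamp than to an SCN.)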

Scenario: An application DBA is running an export job on the database. We received a few alerts stating that database CPU went high, and were asked to check the status of the export / datapump job on the database. What do you do in this scenario?

Here we have to ask: is there any data dictionary view which tells us about datapump operations? Yes, of course:
we get this information from the views dba_datapump_jobs and user_datapump_jobs

dba_datapump_jobs lists all active datapump jobs on the database; user_datapump_jobs is limited to the current user's jobs

Syntax: expdp schema/password directory=directoryname dumpfile=dumpfile.dmp logfile=logfile.log schemas=world flashback_scn=1234567 job_name=export_schema

we can identify individual datapump jobs using job_name (keep the job name a valid identifier, hence the underscore rather than a hyphen)
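For example, a quick look at the view shows each job's mode, state, and degree of parallelism:

select owner_name, job_name, operation, job_mode, state, degree
from dba_datapump_jobs;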

We can monitor datapump jobs and check their progress using v$session_longops:
select sid, serial#, sofar, totalwork from v$session_longops;

A more detailed query ties each job to its sessions, current SQL, and progress:

select x.job_name, b.state, b.job_mode, b.degree,
       x.owner_name, z.sql_text, p.message,
       p.totalwork, p.sofar,
       round((p.sofar / p.totalwork) * 100, 2) done,
       p.time_remaining
from dba_datapump_jobs b
left join dba_datapump_sessions x on (x.job_name = b.job_name)
left join v$session y on (y.saddr = x.saddr)
left join v$sql z on (y.sql_id = z.sql_id)
left join v$session_longops p on (p.sql_id = y.sql_id)
where y.module = 'Data Pump Worker'
and p.time_remaining > 0;

To map datapump sessions to their sid and serial# in v$session:

select s.sid, s.serial#
from v$session s, dba_datapump_sessions d
where s.saddr = d.saddr;
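Once we know the job name, we can also attach to a running job from another session and check it interactively (assuming the job_name used above):

expdp system/password attach=export_schema

At the Export> prompt, the STATUS command shows per-worker progress; STOP_JOB, START_JOB, and KILL_JOB are available from the same prompt.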


Scenario: We are exporting a schema and a few tables in the world schema, and we want to improve the performance of the export jobs. This is the challenge we have on board

To handle this scenario we used the parallel option to improve the performance of datapump jobs

syntax: expdp system/password@tns directory=directoryname dumpfile=dumpfile_%U.dmp logfile=logfile.log schemas=world parallel=4

parallel must be an integer, chosen depending on the number of cores available on the server. Note the %U in the dump file name above: with parallel workers, each worker needs its own file to write to, otherwise they serialize on a single dump file.
In most scenarios this will improve the performance of datapump jobs. But we faced another scenario in our environment: even with the parallel option, export of one table was very slow. We gathered stats on the table and the performance of the datapump job improved :)  :) Can someone explain how stats and datapump jobs are related?
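For reference, gathering table stats is a one-liner from SQL*Plus (WORLD and CITY below are placeholder owner/table names):

exec dbms_stats.gather_table_stats(ownname => 'WORLD', tabname => 'CITY', cascade => true);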


Scenario: When we export a schema, depending on the requirement we export the whole schema, or only its DDL, or only its data. Many of us get confused about what a DDL-only backup and a data-only backup mean in datapump. We use the following CONTENT parameter values to control what is exported

content=all  | ALL exports both the metadata (object definitions) and the row data. This is the default.
content=metadata_only | METADATA_ONLY exports only the object definitions (DDL); no row data is written.
content=data_only | DATA_ONLY exports only the row data; no object definitions are written.

For example, to export only the DDL of the world schema, the syntax is as follows

expdp world/world directory=directoryname dumpfile=dumpfile.dmp logfile=logfile.log schemas=world content=metadata_only 
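And the data-only counterpart of the same export (the dump and log file names are just placeholders):

expdp world/world directory=directoryname dumpfile=data.dmp logfile=data.log schemas=world content=data_only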

----Nikhil Tatineni--
