Saturday, October 31, 2015

Oracle GoldenGate: Defgen utility

Hi Guys, welcome back 

Defgen Utility: The defgen utility creates a definitions file for tables on the source database. Replicat uses this definitions file to map source columns to target columns. The defgen utility lives in the Oracle GoldenGate home.

Scenarios where a definitions file is needed:

The emp table in schema world on database db11g has 42 columns, and you want to replicate only 5 of them to the emp table in schema india on database oval9p.

Replicat param file without a definitions file:

The Replicat param file is as follows:

-- GGSCI> add replicat rnlggsm1, exttrail /opt/app/db11g/ggs/trails/m9, checkpointtable ggsuser.nspckpt
replicat rnlggsm1
SETENV (ORACLE_HOME="/usr/local/opt/oracle/product/11204")
SETENV (ORACLE_SID="db11g")
userid ggsuser, password xxxxxx
assumetargetdefs
reperror (default, discard)
discardfile /opt/app/oval009p/ggs/dsc/rnlggsm1.dsc, megabytes 200, append
discardrollover at 21:00
----REPERROR (1403, DISCARD)
----BATCHSQL BATCHERRORMODE, BATCHTRANSOPS 100000
map world.emp, target world.emp;

assumetargetdefs: This parameter tells Replicat to assume that the source and target tables have identical structures, so no definitions file is needed for mapping.

Replicat param file with a definitions file:

The Replicat param file is as follows:

-- GGSCI> add replicat rnlggsm1, exttrail /opt/app/db11g/ggs/trails/m9, checkpointtable ggsuser.nspckpt
replicat rnlggsm1
SETENV (ORACLE_HOME="/usr/local/opt/oracle/product/11204")
SETENV (ORACLE_SID="db11g")
userid ggsuser, password xxxxxx
sourcedefs ./dirdef/def-world.def
reperror (default, discard)
discardfile /opt/app/ggs/dsc/rnlggsm1.dsc, megabytes 200, append
discardrollover at 21:00
---REPERROR (1403, DISCARD)
---BATCHSQL BATCHERRORMODE, BATCHTRANSOPS 100000
map world.emp, target india.emp;

In the above param file, the emp table in the world schema has 42 columns while the target emp table in the india schema has only 5. Because the source and target structures differ, Replicat must map them using a definitions file.
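When only a subset of columns exists on the target, the MAP statement can also spell the mapping out with COLMAP. A minimal sketch with a hypothetical renamed column (emp_name and ename are illustrative, not the real table definition):

map world.emp, target india.emp,
colmap (usedefaults,
emp_name = ename);   -- target column = source column

USEDEFAULTS maps all identically named columns automatically; source columns with no match on the target are simply not applied.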

------When we create the definitions file with the defgen utility, the definition of the source table is written to a .def file
------Steps to create a definitions file

Step1: Create a .prm file in the dirprm directory under the GoldenGate home

vi  /opt/app/db11g/ggs/dirprm/world.prm 

defsfile ./dirdef/def-world.def
userid ggsuser password xxxxxx
table world.emp 

Step2: Run the defgen utility from the GoldenGate home

./defgen paramfile /opt/app/db11g/ggs/dirprm/world.prm 

When you run the defgen utility, the .def file is created in the dirdef directory under the GoldenGate home. Then scp or sftp the .def file from the source dirdef directory to the dirdef directory in the target GoldenGate home (an example copy command is shown below), and add the following parameters to the Replicat param file:

userid ggsuser, password xxxxxx
sourcedefs ./dirdef/def-world.def
map world.emp, target india.emp;

sourcedefs: points Replicat to the definitions file that was created on the source side
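The copy itself can be done with scp or sftp; for example (hostname and target path here are placeholders for your environment):

scp /opt/app/db11g/ggs/dirdef/def-world.def oracle@targethost:/opt/app/ggs/dirdef/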


---Nikhil Tatineni ---
---Oracle in memory ---
Oracle is my realm. Living in it 

How to Trace a GoldenGate Process?

Scenario: A GoldenGate process is running very slowly, or GoldenGate appears to be blocking itself, and we want to trace the Replicat. When tracing is on, the process writes detailed processing information to a trace file.


Turn on Tracing 

Here in this scenario, I am tracing a Replicat:

GGSCI (tnc301) 5> send replicat RNLGGSM1, trace rnlggsm1.trc

Sending trace request to REPLICAT RNLGGSM1 ...
Trace file /usr/local/opt/oracle//112106/rnlggsm1.trc opened.

Turn off Tracing 

GGSCI (tnc301) 6> send replicat RNLGGSM1, trace off

Sending trace request to REPLICAT RNLGGSM1 ...
Closing all trace files..
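Tracing can also be switched on from startup by putting the TRACE parameter in the Replicat param file instead of using SEND. A sketch only; the trace file location here is an assumption, so check the Reference Guide for your OGG version:

trace ./dirrpt/rnlggsm1.trc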


--------- 
Trace file output -- 

22:56:10.308 (19549288) exited READ_EXTRACT_RECORD (stat=0, seqno=1658, rba=270679289)
22:56:10.308 (19549288) processing record for M71152.NSP_LU
22:56:10.308 (19549288) * --- entering READ_EXTRACT_RECORD --- *
22:56:10.308 (19549288) exited READ_EXTRACT_RECORD (stat=0, seqno=1658, rba=270679663)
22:56:10.308 (19549288) processing record for M71152.NSP_LU
22:56:10.308 (19549288) * --- entering READ_EXTRACT_RECORD --- *
22:56:10.308 (19549288) exited READ_EXTRACT_RECORD (stat=0, seqno=1658, rba=270680037)
22:56:10.308 (19549288) processing record for M71152.NSP_LU
22:56:10.308 (19549288) * --- entering READ_EXTRACT_RECORD --- *
22:56:10.308 (19549288) exited READ_EXTRACT_RECORD (stat=0, seqno=1658, rba=270680413)
22:56:10.308 (19549288) processing record for M71152.NSP_LU
22:56:10.308 (19549288) mapping record
22:56:10.308 (19549288) entering perform_sql_statements (normal)
22:56:10.308 (19549288) entering execute_statement (op_type=15,NSPDBA.NSP_LU_GIS_GLID_STATUS)
22:56:10.308 (19549288) iotype:15 mode:0
22:56:10.308 (19549288) BIND val for col:3  :45 31
22:56:10.308 (19549288) BIND val for col:5  :0 0
22:56:10.308 (19549288) BIND val for col:0 :35 30 len:6 blen:0
22:56:10.308 (19549288) BIND val for col:1 :31 35 len:7 blen:0
22:56:10.308 (19549288) BIND val for col:2 :4e 0 len:1 blen:0
22:56:10.312 (19549291) OCI statement executed successfully...
22:56:10.312 (19549291) executed stmt (sql_err=0)
22:56:10.312 (19549291) exited perform_sql_statements (sql_err=0,recs output=1256695)
22:56:10.312 (19549291) * --- entering READ_EXTRACT_RECORD --- *
22:56:10.312 (19549291) exited READ_EXTRACT_RECORD (stat=0, seqno=1658, rba=270680791)
22:56:10.312 (19549291) processing record for M71152.NSP_LU
22:56:10.312 (19549291) * --- entering READ_EXTRACT_RECORD --- *
22:56:10.312 (19549291) exited READ_EXTRACT_RECORD (stat=0, seqno=1658, rba=270681169)
22:56:10.312 (19549291) processing record for M71152.NSP_LU
22:56:10.312 (19549291) * --- entering READ_EXTRACT_RECORD --- *
22:56:10.312 (19549291) exited READ_EXTRACT_RECORD (stat=0, seqno=1658, rba=270681541)
22:56:10.312 (19549291) processing record for M71152.NSP_LU
22:56:10.312 (19549291) * --- entering READ_EXTRACT_RECORD --- *
22:56:10.312 (19549291) exited READ_EXTRACT_RECORD (stat=0, seqno=1658, rba=270681915)
22:56:10.312 (19549291) processing record for M71152.NSP_LU
22:56:10.312 (19549291) mapping record
22:56:10.312 (19549291) entering perform_sql_statements (normal)
22:56:10.312 (19549291) entering execute_statement (op_type=15,NSPDBA.NSP_LU_GIS_GLID_STATUS)
22:56:10.312 (19549291) iotype:15 mode:0
22:56:10.312 (19549291) BIND val for col:3  :4e 31
22:56:10.312 (19549291) BIND val for col:5  :0 0
22:56:10.312 (19549291) BIND val for col:0 :38 31 len:6 blen:0
22:56:10.312 (19549291) BIND val for col:1 :31 37 len:7 blen:0
22:56:10.312 (19549291) BIND val for col:2 :4e 0 len:1 blen:0
22:56:10.334 (19549314) OCI statement executed successfully...
22:56:10.334 (19549314) executed stmt (sql_err=0)
22:56:10.334 (19549314) exited perform_sql_statements (sql_err=0,recs output=1256696)
22:56:10.336 (19549315) accepted connection from mipd301.sfdc.sbc.com:60190
22:56:10.336 (19549315) IPC_read from mipd301.sfdc.sbc.com:60190(144.155.227.120)

SUMMARY STATISTICS

General statistics:
 99.26%  Checking messages (includes checkpointing)
  0.00%  Checking periodic tasks
  0.00%  Waiting for more data
  0.00%  Converting ASCII header to internal
  0.00%  Converting ASCII data to internal
  0.02%  Reading input records
  0.64%  Writing output records (replicate_io)
  0.00%  Mapping columns
  0.00%  Outputting data records
  0.63%  Performing SQL statements
  0.00%  Performing BATCHSQL statements
  0.61%  Performing actual DB op
  0.00%  Preparing SQL statements
  0.00%  Performing transaction commits
  0.00%  Checkpointing

Closing Trace File at 2015-09-10 Thu EDT 22:56:10

------Tracing helps analyze what a GoldenGate process is doing internally
------When you raise an Oracle SR for a GoldenGate issue, Oracle Support will typically ask you to send the trace output of the affected process

---NIKHIL TATINENI 
---Oracle in memory ---- 


Friday, October 30, 2015

WHAT IS LAG IN OGG?



GGSCI (tnc61.ffdc.com) 11> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     ENTNEW      3702:00:10    00:00:02
EXTRACT     STARTING    EXTORD7     00:00:02      4582:06:49
EXTRACT     STOPPED     PEXTORD7    00:00:00      4582:07:01
EXTRACT     STOPPED     PXTNEW      00:00:00      4582:07:01
REPLICAT    STOPPED     REORD117    00:00:00      4582:08:55

LAG IN GOLDENGATE: lag applies to the Extract, Pump, and Replicat processes

Lag at extract: the difference, in seconds, between the time a record was processed by Extract and that record's timestamp in the data source (its commit timestamp in the redo).


Time since chkpt on extract: the time since Extract last recorded a checkpoint. A long-running open transaction on the database can delay checkpointing, so this value can keep growing.

There are different types of checkpoints that GoldenGate maintains for each process. You can see them using INFO <EXTRACT/PUMP/REPLICAT>, SHOWCH.

For example, for an extract, you will see the following read checkpoints:
- startup checkpoint
- recovery checkpoint
- current checkpoint
- BR recovery checkpoints


Lag at replicat: the difference, in seconds, between the time a record was applied to the target table and that record's timestamp in the remote trail.


Time since chkpt on replicat: the time since Replicat last checkpointed, i.e. since it applied the last transaction to the target table.


Lag at pump: the difference, in seconds, between the time a record was read from the local trail by the pump and that record's timestamp in the local trail.
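You can also check lag interactively from GGSCI with the LAG and GETLAG commands (the exact output format varies by version):

GGSCI> lag extract ENTNEW
GGSCI> send replicat REORD117, getlag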





Oracle GoldenGate: logdump utility


Logdump: It's time to learn logdump now

Scenario: We are using GoldenGate to replicate data at the transaction level across instances. If there is a data mismatch between source and target tables, we can use the logdump utility to analyze the records in the trail that the Extract process wrote after reading them from the online redo logs.

Because we enabled supplemental logging on the table, for every committed transaction the primary key columns are logged into the online redo along with the changed columns. Extract collects committed transactions from the online redo logs and writes them to the local trails in the local trail directory.

Using the logdump utility we can verify that the primary keys, unique keys, and changed columns were indeed written to the local trail in the local trail directory.
We will investigate using logdump.
As you know, the logdump utility lives in the Oracle GoldenGate home.

For the investigation we need the Extract's checkpoint information, which we get as follows:

GGSCI (tnc61.ffdc.com) 3> info ENTNEW, showch

EXTRACT    ENTNEW    Last Started 2015-09-30 02:59   Status STOPPED
Checkpoint Lag       3605:41:54 (updated 95:30:07 ago)
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2015-05-02 21:17:54
                     SCN 0.2410959 (2410959)


Current Checkpoint Detail:
Read Checkpoint #1
  Oracle Integrated Redo Log
  Startup Checkpoint (starting position in the data source):
    Timestamp: 2015-03-19 07:42:58.000000
    SCN: Not available

  Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
    Timestamp: 2015-05-02 21:17:54.000000
    SCN: 0.2410958 (2410958)
  Current Checkpoint (position of last record read in the data source):
    Timestamp: 2015-05-02 21:17:54.000000
    SCN: 0.2410959 (2410959)
  BR Previous Recovery Checkpoint:
    Timestamp: 2015-03-26 18:47:36.861436
    SCN: Not available
  BR Begin Recovery Checkpoint:
    Timestamp: 2015-03-27 02:47:52.000000
    SCN: 0.2315296 (2315296)
  BR End Recovery Checkpoint:
    Timestamp: 2015-03-27 02:47:52.000000
    SCN: 0.2315296 (2315296)

Write Checkpoint #1
  GGS Log Trail
  Current Checkpoint (current write position):
    Sequence #: 5
    RBA: 1470
    Timestamp: 2015-09-30 02:59:48.283266
    Extract Trail: /u01/app/trails/ba
    Trail Type: EXTTRAIL

Header:
  Version = 2
  Record Source = A
  Type = 13
  # Input Checkpoints = 1
  # Output Checkpoints = 1

File Information:
  Block Size = 2048
  Max Blocks = 100
  Record Length = 2048
  Current Offset = 0

Configuration:
  Data Source = 3
  Transaction Integrity = 1
  Task Type = 0

Status:
  Start Time = 2015-09-30 02:59:11
  Last Update Time = 2015-09-30 02:59:48
  Stop Status = G
  Last Result = 0

From the above output, Extract is writing to the trail with seqno 5 at RBA 1470, in Extract trail /u01/app/trails/ba.

GGSCI (tnc61.ffdc.com) 5> sh ls -ltr /u01/app/trails/ba*

-rw-r----- 1 oracle dba 1470 Sep 30 02:59 /u01/app/trails/ba000005
-rw-r----- 1 oracle dba 1470 Sep 30 02:59 /u01/app/trails/ba000004

From the above I have the information required to investigate the trail using logdump.

Open trail ba with seqno 5 and position to RBA 1470, using the write checkpoint information from the Extract.

GHDR ON: displays the record header information from the trail
DETAIL DATA: displays the column data of each record

[oracle@tnc61 goldengate]$ ls -ltr logdump
-rwxr-x--- 1 oracle dba 11622063 Aug  7  2014 logdump
[oracle@tnc61 goldengate]$ 
[oracle@tnc61 goldengate]$ ./logdump

Oracle GoldenGate Log File Dump Utility for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1
Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

Logdump 1 >open /u01/app/trails/ba000005
Current LogTrail is /u01/app/trails/ba000005 
Logdump 2 >
Logdump 2 >ghdr on
Logdump 3 >detail data
Logdump 4 >
Logdump 4 >pos 1470
Reading forward from RBA 1470 
Logdump 5 >n

This is my test server and I don't have enough data in the trail to show you a full record dump.
I will update this document with a complete logdump walkthrough later.
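In the meantime, here are a few other logdump commands that are commonly useful when scanning a trail (all from the standard logdump command set):

Logdump> fileheader on                      -- show the trail file header
Logdump> usertoken on                       -- show user tokens stored in records
Logdump> pos 0                              -- position to the start of the trail
Logdump> count                              -- count the records in the trail
Logdump> filter include filename world.emp  -- show only records for one table
Logdump> n                                  -- step to the next record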


---Oracle in Memory --
---Nikhil Tatineni ------








Oracle GoldenGate: How to find out Transaction INFO?


We want to know what kind of transaction Extract is currently working on in the database, and which program issued it.

Step1:
-------
Find the Extract's open transactions and get the XID using the following command from GGSCI:

GGSCI> send extract extractname, showtrans duration 30 MIN
Sending SHOWTRANS request to EXTRACT ENTNEW ...
Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 6, RBA 2239791
-----------------------------------------------------------
XID: 1.8.209025
Items: 0
Extract: ENTNEW
Redo Thread: 1
Start Time: 2015-26-22:00:10:06

SCN: 0.2239791 (2239791)
Redo Seq: 6
Redo RBA: 1470

Status: Running


Step2: 
-------
Using the XID (format usn.slot.sqn, so 1.8.209025 means xidusn=1, xidslot=8, xidsqn=209025), find the transaction's ADDR in v$transaction on the database:
select addr from v$transaction where xidusn=1 and xidslot=8 and xidsqn=209025;


STEP3: 
--------
Using the ADDR from step 2 in place of 'ADDR' below, find the SID from the database:

select sid,serial#,event,machine,sql_id,seconds_in_wait,prev_sql_id,module,program,action from v$session where taddr='ADDR';


STEP4: 
---------
Using the SQL_ID from step 3, find the SQL that is taking a long time in the GoldenGate environment:

select
hash_value, address,
executions,buffer_gets, disk_reads,
round(buffer_gets/decode(executions, 0, 1, executions), 1) avg_gets,
round(disk_reads/decode(executions, 0, 1, executions), 1) avg_disk,
last_load_time,
module,
sql_fulltext
from v$sqlarea
where sql_id='&sql_id';

From step 4 we get the SQL text and SQL_ID of the long-running transaction :)
Depending on the situation, either skip the transaction or wait for it to complete on the database:

GGSCI > send extract ENTNEW, skiptrans 1.8.209025
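As a side note, steps 2 and 3 can be combined into a single lookup (a sketch; plug in the XID components from SHOWTRANS, e.g. 1.8.209025 gives xidusn=1, xidslot=8, xidsqn=209025):

select s.sid, s.serial#, s.sql_id, s.event, s.module, s.program
from v$transaction t, v$session s
where s.taddr = t.addr
and t.xidusn = 1
and t.xidslot = 8
and t.xidsqn = 209025;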



------Oracle in memory ---
------Nikhil Tatineni ---

GoldenGate Replicat is running very slowly


Hi Friends :) Hope you are doing well! Today I want to discuss the performance of a Replicat running from the Oracle GoldenGate home.

Scenario: Replicat is running very slowly

Initial Investigation:
Replicat works on the target database, applying records from the remote trail directory to the target tables. As it reads each record from the trail, it uses the information in the record to build the SQL it applies to the target table. If that SQL is not using indexes, if the optimizer is generating execution plans that exclude indexes, or if there are blocking sessions or wait events on the database, Replicat slows down. These are the areas we need to look into.

There were no blocking sessions.
I immediately took an AWR report and found DB TIME was quite a bit higher than usual; under "Top foreground events by total wait time" I found db file sequential read. Hint: maybe Replicat is using a bad index.
I then checked "SQL ordered by elapsed time" and found my Replicat's statement at the top.
I got the SQL_ID and ran the SQL Tuning Advisor on it. Results as follows:

-----------
Plan hash value: 748041570

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |      |     1 |    39 |   184K  (2)| 00:36:49 |
|   1 |  UPDATE            |      |       |       |            |          |
|*  2 |   TABLE ACCESS FULL|      |     1 |    39 |   184K  (2)| 00:36:49 |
---------------------------------------------------------------------------

Root cause: Replicat was performing a full table scan for every update, creating lag in the environment.

Potential fix: creating an index on the predicate columns, as suggested by the SQL Tuning Advisor, eliminated the full scans and restored Replicat performance.
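As a sketch of the kind of fix, with a purely hypothetical table and column (the real names come from the SQL Tuning Advisor findings for your statement):

-- hypothetical: index the column used in the WHERE clause of Replicat's UPDATE
create index india.emp_upd_ix on india.emp (empno);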


-----When we add supplemental logging, the key columns are logged into the online redo. The keys must be the same on both the source and target tables; if they differ, Replicat may not have a usable key on the target and you can see these kinds of issues in the environment-----

---Happy weekend ---
--Oracle in memory --
--Nikhil Tatineni ---

GoldenGate Replicats blocking each other

Scenario: Multiple Replicats working on the same table are blocking each other.
The Replicats are not moving, creating a 14-hour lag, and data is out of sync between the instances.


Root cause:
We are replicating some very high-volume tables, and to avoid Replicat lag we split one Replicat into 6 using the @RANGE function. All 6 Replicats therefore work on the same table and perform concurrent operations on the same blocks. Concurrent transactions per block are limited by the table's INITRANS storage parameter, which on our database was only 1. That prevented the Replicats from working on a block concurrently, resulting in a 14 to 17 hour lag.
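For context, the split is done with the @RANGE filter in each of the six Replicat param files, something like this (table names are placeholders):

map world.big_tab, target india.big_tab, filter (@range (1, 6));   -- replicat 1 of 6
map world.big_tab, target india.big_tab, filter (@range (2, 6));   -- replicat 2 of 6
-- ... and so on, up to @range (6, 6) in the sixth Replicat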

Potential fix:
We increased the table's INITRANS storage parameter from 1 to 10 so the Replicats can perform concurrent operations on the table, which resolved the issue.
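A sketch of the change (table name is a placeholder). Note that ALTER TABLE ... INITRANS only affects newly formatted blocks, so to apply the new setting to existing blocks the table has to be rebuilt:

alter table india.big_tab initrans 10;
-- optional, to rewrite existing blocks with the new INITRANS:
-- alter table india.big_tab move;   (then rebuild its indexes)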

When splitting a single Replicat into n parallel Replicats to handle high-volume tables, I strongly recommend checking the table's INITRANS storage parameter and making sure it is at least n, as shown above.


---Nikhil Tatineni
--Oracle in memory 

GoldenGate Extract: Bounded Recovery


Scenario: We placed lag-monitoring scripts on the GoldenGate servers, and one of our Extracts is running with a 17-hour lag! We are engaged :)

Initial investigation: checked the database for long-running transactions and blocking sessions; the database was clear.
Then started looking at the GoldenGate components.
Checked the report file generated by the Extract and found process information as follows:

2015-10-05 07:59:03  INFO    OGG-01738  BOUNDED RECOVERY: CHECKPOINT: for object pool 1: p8606_Redo Thread 1: start=SeqNo: 118841, RBA: 25235984, SCN: 3326.2439638626 (14287500865122), Timestamp: 2015-10-05 07:59:02.000000, end=SeqNo: 118841, RBA: 25249280, SCN: 3326.2439638675 (14287500865171), Timestamp: 2015-10-05 07:59:02.000000, Thread: 1.
2015-10-05 07:59:03  INFO    OGG-01738  BOUNDED RECOVERY: CHECKPOINT: for object pool 2: p8606_Redo Thread 2: start=SeqNo: 117802, RBA: 84960272, SCN: 3326.2439638649 (14287500865145), Timestamp: 2015-10-05 07:59:02.000000, Thread: 2, end=SeqNo: 117802, RBA: 84960768, SCN: 3326.2439638649 (14287500865145), Timestamp: 2015-10-05 07:59:02.000000, Thread: 2.
2015-10-05 07:59:03  INFO    OGG-01738  BOUNDED RECOVERY: CHECKPOINT: for object pool 3: p8606_Redo Thread 3: start=SeqNo: 118013, RBA: 16379408, SCN: 3326.2439638623 (14287500865119), Timestamp: 2015-10-05 07:59:01.000000, Thread: 3, end=SeqNo: 118013, RBA: 16379904, SCN: 3326.2439638623 (14287500865119), Timestamp: 2015-10-05 07:59:01.000000, Thread: 3.

Bounded Recovery checkpoints happen when there are long-running transactions or batch jobs that run beyond the bounded recovery interval (4 hours by default).
Extract collects only committed transactions from the database, but batch jobs can run on the database for a long time and hold a mix of committed and uncommitted work.
While Extract waits for the commit, the committed/uncommitted redo may meanwhile be flushed out to the archive logs / flash recovery area.
If those archive logs are then deleted by RMAN, the transactions performed by the batch jobs would be lost to Extract.
To avoid this, Oracle added Bounded Recovery, which persists long-running transaction state to disk in the BR directory under the GoldenGate home.
It saves the state of all long-running / uncommitted transactions in the BR directory, so Extract does not have to re-read redo from before the bounded recovery interval whenever it restarts.
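The BR directory and checkpoint interval can be tuned with the BR parameter in the Extract param file. A sketch only; the exact option syntax should be checked against the Reference Guide for your OGG version:

br brdir ./BR, brinterval 4H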

Read checkpoints on the database:
The current checkpoint and the recovery checkpoint are read checkpoints that record redo log positions for the Extract process:
- current checkpoint: up to here, all committed transactions have been written to the trail by Extract
- recovery checkpoint: the position where the oldest open (long-running or batch) transaction starts
Write checkpoints are maintained in the trail.
When Extract stops, its cached data is lost. When Extract restarts it has to go back to the recovery checkpoint in the redo log files, discard committed transactions already written before the current checkpoint, and rebuild the cached data for open transactions; Bounded Recovery bounds this work by restoring the persisted transaction state from the BR directory instead.

Root Cause:
I believe the Extract got stuck on a record while reading the persisted state from the BR directory. We restarted the Extract, it performed bounded recovery, and the lag cleared:

GGSCI (tnc569) 3> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXTGRP      00:00:00      00:00:03


Oracle in memory --- :) 
Nikhil Tatineni --- 



Thursday, October 29, 2015

Datapump scenarios

A few scenarios on Data Pump

Scenario: We are exporting a database / schema / table and the Data Pump directory doesn't have enough space to hold the dump files. If someone asks for the size of the dump file before exporting, how do we find out? We use the following syntax:

Syntax: expdp schema/password estimate_only=y directory=directoryname schemas=schemaname 

In the above syntax, estimate_only=y reports the estimated dump file size for the whole schema without writing any data; the same check works at the database, schema, or table level.

Scenario: We are exporting a database and are advised to perform a consistent export, so the data is consistent as of a single SCN (avoiding data consistency errors such as ORA-1555):

Syntax: expdp schema/password directory=directoryname dumpfile=dumpfile.dmp logfile=logfile.log schemas=schemaname flashback_scn=scnnumberfromdatabase 
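To get the SCN to pass to flashback_scn, query it on the source database just before starting the export:

select current_scn from v$database;
-- or: select dbms_flashback.get_system_change_number from dual;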

Scenario: An application DBA is running an export job on the database. We received alerts that database CPU went high and were asked to check the status of the Data Pump export job. What do you do in this scenario?

Here we have to ask: is there a data dictionary view that reports Data Pump operations? Yes, of course.
We get the information from the views dba_datapump_jobs and user_datapump_jobs.

These views report all active Data Pump jobs on the database.

Syntax: expdp schema/password directory=directoryname dumpfile=dumpfile.dmp logfile=logfile.log schemas=world flashback_scn=1234567 job_name=export_schema

We can identify different Data Pump jobs by their job_name (use a valid identifier such as export_schema; a hyphenated name would have to be quoted).
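You can also attach to a running job by its job_name and check it from the Data Pump interactive console:

expdp schema/password attach=export_schema
Export> status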

We can also monitor Data Pump jobs and check their progress using v$session_longops:
select sid,serial#,sofar,totalwork  from v$session_longops;

select x.job_name,b.state,b.job_mode,b.degree
, x.owner_name,z.sql_text, p.message
, p.totalwork, p.sofar
, round((p.sofar/p.totalwork)*100,2) done
, p.time_remaining
from dba_datapump_jobs b
left join dba_datapump_sessions x on (x.job_name = b.job_name)
left join v$session y on (y.saddr = x.saddr)
left join v$sql z on (y.sql_id = z.sql_id)
left join v$session_longops p ON (p.sql_id = y.sql_id)
WHERE y.module='Data Pump Worker'
AND p.time_remaining > 0;

select sid, serial#
from v$session s, dba_datapump_sessions d
where s.saddr = d.saddr;


Scenario: We are exporting the world schema plus a few individual tables and want to improve the performance of the export job. This is the challenge we have on board.

To handle it we used the parallel option to improve Data Pump performance:

syntax: expdp system/password@tns directory=directoryname dumpfile=dumpfile_%U.dmp logfile=logfile.log schemas=world parallel=4

The parallel value should be an integer sized to the number of cores available on the server, and with parallel the dumpfile name should include the %U substitution variable so each worker can write to its own file.
In most scenarios this improves Data Pump performance. But we hit another case in our environment: even with the parallel option, the export of one table was very slow. We gathered stats on the table and the Data Pump job sped up :) :) Can someone explain how stats and Data Pump performance are related?


Scenario: When exporting a schema, depending on the requirement we export the whole schema, only the metadata (DDL), or only the data. Many of us get confused about what a DDL-only versus data-only export means in Data Pump. We use the following content parameter values:

content=all | exports both metadata and row data (the default).
content=metadata_only | exports only object definitions (DDL); no row data is written.
content=data_only | exports only row data; no object definitions are written.

For example, to export only the DDL of the world schema, the syntax is as follows:

expdp world/world directory=directoryname dumpfile=dumpfile.dmp logfile=logfile.log schemas=world content=metadata_only 

----Nikhil Tatineni--

Oracle In Memory --
