Saturday, March 21, 2015

Cardinality & Cardinality Feedback!

What is cardinality ?

In database modeling, cardinality describes the relationship between tables: one-to-one, one-to-many, or many-to-many. In the optimizer's context, cardinality refers to the number of unique values in a column, or the number of rows a given operation is expected to return.
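As a quick sketch of the column-level meaning, using hypothetical table and column names:

```sql
-- Actual cardinality: number of distinct values in the column
SELECT COUNT(DISTINCT department_id) AS cardinality
FROM   employees;

-- The optimizer's recorded estimate, gathered with statistics
SELECT num_distinct
FROM   dba_tab_col_statistics
WHERE  table_name  = 'EMPLOYEES'
AND    column_name = 'DEPARTMENT_ID';
```

If statistics are stale, the two numbers can diverge, which is one source of bad cardinality estimates.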

What is Cardinality Feedback?

Cardinality Feedback is a process whereby the optimizer automatically improves plans for repeatedly executed queries where it may not have been able to generate a completely accurate cardinality estimate. The optimizer may miscalculate cardinality estimates for a variety of reasons, such as missing or inaccurate statistics, or complex predicates. Whatever the reason, cardinality feedback may be able to help.

When a SQL query is parsed, the optimizer uses the available statistics, the predicates (columns used in the WHERE clause), and the joins to generate an execution plan. Complex predicates and joins can cause the estimated cardinality to be inaccurate.

On the first execution of a SQL statement, the optimizer generates an execution plan. During plan optimization, certain estimates are noted and the resulting cursor is monitored by the optimizer. After execution, the actual cardinality is compared with the predicted cardinality and stored for later use. When the query is executed again, the optimizer uses the stored, accurate values and may generate a different, more accurate execution plan.

After a few executions of the query, one plan is picked and used for all subsequent executions.
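To check whether cardinality feedback triggered a re-optimization for a particular statement, the child cursors can be inspected; a minimal sketch (the SQL_ID below is a placeholder):

```sql
-- USE_FEEDBACK_STATS = 'Y' means the child cursor was created
-- because cardinality feedback forced a re-optimization
SELECT s.child_number, c.use_feedback_stats
FROM   v$sql s
JOIN   v$sql_shared_cursor c
       ON  s.sql_id       = c.sql_id
       AND s.child_number = c.child_number
WHERE  s.sql_id = '74hnd835n81yv';
```

A new child cursor flagged 'Y' after the second execution is the visible footprint of cardinality feedback at work.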

optimizer_dynamic_sampling = 0 disables dynamic sampling; the default value is 2 on our Oracle databases.
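The sampling level can be checked and changed at session or system scope; a minimal sketch (the level shown is illustrative):

```sql
-- Check the current setting
SHOW PARAMETER optimizer_dynamic_sampling;

-- Raise the sampling level for the current session only
ALTER SESSION SET optimizer_dynamic_sampling = 4;
```

Higher levels make the optimizer sample more blocks at parse time, improving estimates at the cost of longer parses.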

----- Nikhil Tatineni -------

SQL Plan Baselines

SQL Plan Baselines are a feature introduced in Oracle Database 11g that helps prevent repeatedly executed SQL statements from switching to worse execution plans.

The DBA_SQL_PLAN_BASELINES view shows the baselines stored in the database:

select sql_handle, sql_text, plan_name, origin, enabled, accepted, fixed, autopurge from dba_sql_plan_baselines;

For automatic capture of SQL plan baselines, we can enable the following parameter on the database:
OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES = TRUE;
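The parameter is dynamic and can be set without a restart; for example:

```sql
-- Enable automatic plan capture database-wide
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;

-- Or for the current session only, to limit capture to a test workload
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;
```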

When we set this parameter to TRUE, automatic plan capture is enabled and the SPM repository is populated for every repeated SQL statement. To identify repeated SQL, the optimizer logs each statement into the statement log the first time it is compiled. If the same SQL is fired again, the optimizer recognizes it in the statement log and records its plan history, reproducing the execution plan from the SQL text, outline, bind variables, and compilation environment. The plan of the SQL statement is added to the SQL plan baseline and marked as accepted.

If the statistics on the table change and the same SQL is fired again, a new plan is loaded into the SPM and marked for verification; it is not used until it has been verified and accepted.
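Unaccepted plans can be verified and promoted with DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE, which in 11g returns a CLOB report; a sketch (the SQL handle below is a placeholder):

```sql
SET LONG 100000
SET SERVEROUTPUT ON

DECLARE
  report CLOB;
BEGIN
  -- Verify unaccepted plans for one statement and accept
  -- those that perform better than the existing baseline
  report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
              sql_handle => 'SYS_SQL_bf5c9b08f72bde3e',
              verify     => 'YES',
              commit     => 'YES');
  DBMS_OUTPUT.PUT_LINE(report);
END;
/
```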

SQL IS ISSUED
        |
GENERATE EXECUTION PLAN
        |
IS THIS SQL TRACKED? --(yes)--> DOES A SQL PLAN BASELINE EXIST? --(yes)--> EXECUTE BASELINE PLAN
        | (no)                              | (no)
ADD ENTRY TO SQL LOG               CREATE SQL PLAN BASELINE
        |                                   |
EXECUTE THIS PLAN                  EXECUTE THIS PLAN

Starting from 11g, baselines can be loaded manually for statements directly from the cursor cache into the SQL Management Base. Loading plans directly from the cursor cache can be extremely useful if application SQL has been tuned by hand using hints. Since it is unlikely the application SQL can be changed to include the hint, by capturing the tuned execution plan as a SQL plan baseline you can ensure that the application SQL will use that plan in the future.

First we need to identify the SQL statements whose plans we want to load from the cursor cache into the SQL Management Base. When a SQL statement is fired on the database, it is loaded into the shared SQL area in the SGA and a cursor is opened at the same time. Find the SQL_ID of the statement in the V$SQL view, then use DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE to load its baseline into the SQL Management Base.
SQL> SELECT sql_id, sql_fulltext FROM V$SQL WHERE sql_text LIKE '%SELECT prod_name, SUM(%';

SQL_ID             SQL_FULLTEXT
-------------      -----------------------------------------
74hnd835n81yv      select SQL_ID, SQL_FULLTEXT from v$SQL
chj6q8z7ykbyy      SELECT PROD_NAME,SUM(AMOUNT_SOLD)

SQL> variable cnt number;
SQL> EXECUTE :cnt := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'chj6q8z7ykbyy');

We can enable and disable plans loaded in the SQL Management Base using DBMS_SPM.ALTER_SQL_PLAN_BASELINE.
In order to disable a plan, we first need the SQL_HANDLE and PLAN_NAME from dba_sql_plan_baselines:

SQL> variable cnt number;
SQL> exec :cnt := DBMS_SPM.ALTER_SQL_PLAN_BASELINE( -
       SQL_HANDLE      => 'SYS_SQL_bf5c9b08f72bde3e', -
       PLAN_NAME       => 'SQL_PLAN_byr4v13vkrrjy42949306', -
       ATTRIBUTE_NAME  => 'enabled', -
       ATTRIBUTE_VALUE => 'NO');

SQL> SELECT sql_handle, sql_text, plan_name, enabled FROM dba_sql_plan_baselines; 


To check the detailed execution plan for any SQL plan baseline you can use the table function DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE.

select * from table(dbms_xplan.display_sql_plan_baseline(
   sql_handle => 'SYS_SQL_bf5c9b08f72bde3e',
   plan_name  => 'SQL_PLAN_byr4v13vkrrjy42949306',
   format     => 'basic'));

The SQL management base (SMB) is a part of the data dictionary that resides in the SYSAUX tablespace. It stores statement logs, plan histories, SQL plan baselines, and SQL profiles. To allow weekly purging of unused plans and logs, the SMB uses automatic space management.

The percentage of space allocated in the SYSAUX tablespace for SQL plan baselines and SQL profiles, and how long they are retained, can be found in the DBA_SQL_MANAGEMENT_CONFIG view:

SQL> SELECT parameter_name, parameter_value FROM dba_sql_management_config;

We can configure plan_retention_weeks and space_budget_percent to manage SQL plan baselines and profiles in the SYSAUX tablespace.

1) Disk Space Usage:-

Disk space used by the SMB is regularly checked against a limit based on the size of the SYSAUX tablespace. By default, the limit for the SMB is no more than 10% of the size of SYSAUX. The allowable range for this limit is between 1% and 50%.
A weekly background process measures the total space occupied by the SMB. When the defined limit is exceeded, the process writes a warning to the alert log. The database generates alerts weekly until one of the following conditions is met:
  • The SMB space limit is increased
  • The size of the SYSAUX tablespace is increased
  • The disk space used by the SMB is decreased by purging SQL management objects (SQL plan baselines or SQL profiles)
To change the percentage limit, use the CONFIGURE procedure of the DBMS_SPM package. The following example changes the space limit to 30%:

SQL> EXECUTE DBMS_SPM.CONFIGURE('space_budget_percent', 30);
2) Purge Policy :-

A weekly scheduled purging task manages the disk space used by SQL plan management. The task runs as an automated task in the maintenance window.

The database purges plans not used for more than 53 weeks, as identified by the LAST_EXECUTED timestamp stored in the SMB for that plan. The 53-week period ensures plan information is available during any yearly SQL processing. The unused plan retention period can range between 5 and 523 weeks (a little more than 10 years).

To configure the retention period, use the CONFIGURE procedure of the DBMS_SPM PL/SQL package. The following example changes the retention period to 105 weeks:

BEGIN 
  DBMS_SPM.CONFIGURE('plan_retention_weeks', 105);
END; 
/

---- Nikhil Tatineni -----

Tuesday, February 17, 2015

ERROR OGG-00446 e_###.prm: error 13 (Permission denied) opening redo log


Scenario: We installed GoldenGate on the database server z11.kc.com as the oracle-goldengate user ggs01, and configured an extract and pump against the source database and replicats on the target database. The replicats and pump came up successfully on the initial START, but the extracts abended with the following error:


2014-12-14 10:48:21  ERROR   OGG-00446  Oracle GoldenGate Capture for Oracle, e_###.prm: error 13 (Permission denied) opening redo log /fs01/talent9/redo01.log for sequence 76. Not able to establish initial position for begin time 2014-12-14 10:11:38.
2014-12-14 10:48:21  ERROR   OGG-01668  Oracle GoldenGate Capture for Oracle, e_###.prm: PROCESS ABENDING.


During the initial investigation, I found the above error in the report file. Since we had configured a classic extract, which reads the online redo/archive logs directly, the extract began accessing the transaction (redo) logs as soon as it was started.


Resolution: The error says permission denied, so I looked closely at the permissions on the online redo log file directory, which was owned by a different user (oracle) and the dba group. We then worked with the SA and changed the permissions on the redo log file directory to 665. After that, the extract came up automatically from where it had abended :)

----Nikhil Tatineni -----




Goldengate Extract is not moving?

Hi guys,
What do you do when your extract is not capturing transactions against the source table, or its checkpoint is not moving?


To find out resolution..Kindly investigate following steps
step1: check that trandata is enabled for your source table:
ggsci> dblogin userid ggsuser password XXXXX
ggsci> info trandata schema.tablename
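If supplemental logging is not enabled for the table, it can be added from the same GGSCI session (schema and table names here are placeholders):

```text
ggsci> dblogin userid ggsuser password XXXXX
ggsci> add trandata schema.tablename
ggsci> info trandata schema.tablename
```

INFO TRANDATA should now report supplemental logging as enabled for the table's key columns.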


step2: check whether there are any long-running transactions or blocking sessions on your database; if needed, skip the stuck transaction using its XID.
After verifying these two steps, use the logdump utility to validate that the extract is collecting data and writing it to the local trail.
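A minimal logdump session to inspect the local trail might look like this (the trail path is a placeholder):

```text
logdump> open ./dirdat/lt000001
logdump> ghdr on
logdump> detail on
logdump> next
```

GHDR ON and DETAIL ON show the record header and column data; stepping through records with NEXT should show the RBA advancing if the extract is writing to the trail.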

Thank you --- Nikhil Tatineni ---

Sunday, February 1, 2015

SPLITTING Oracle GoldenGate REPLICAT

Hey guys !! Welcome back..

I have seen many of my friends facing issues replicating high-volume tables with GoldenGate. Here I want to discuss how splitting a GoldenGate replicat helps improve replication performance.

In order to split one replicat into many, the table involved in replication should have its primary key and unique key on the same columns. If a table has its primary key and unique key on different columns, the replicat will abend and generate a lot of discards, because the replicats involved in the split will compete with each other to apply the same trail records to the target table.

Resolution:
we need to add HANDLECOLLISIONS to the replicat parameter file and restart the replicat from the abend

To split one replicat into many, we use the FILTER and @RANGE parameters. In this scenario we split one replicat into four, so each replicat logically works on (performs DML for) one quarter of the rows of the base table.

steps to split replicat as follows

STEP1: STOP REPLICAT

GGSCI (eniya.varman.com) 4> STOP RTECVE
Sending STOP request to REPLICAT RTECVE ...
Request processed.

STEP2: collect the RBA and sequence number of the remote trail from the old replicat !!

GGSCI (eniya.varman.com) 19> INFO RTECVE

REPLICAT   RTECVE    Last Started 2015-02-01 13:51   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:00:23 ago)
Log Read Checkpoint  File /u01/app/goldengate/remotetrail/zz000004
                     2015-02-01 19:02:16.537343  RBA 2292

STEP3: PREPARE THE NEW OGG REPLICAT PROCESSES 

-- replicat rtecve_1: applies range 1 of 4 via @RANGE
-- GGSCI> add replicat rtecve_1, exttrail /u01/app/goldengate/remotetrail/zz, checkpointtable ggsuser.ckpt
-- GGSCI> alter replicat rtecve_1, extseqno 4, extrba 2292
replicat rtecve_1
assumetargetdefs
-- structure of source table and target table is the same
userid ggsuser, password oracle
reportcount every 24 hours, rate
discardfile /u01/app/goldengate/discardtrail/rtecve_1.dsc, megabytes 10, append
discardrollover at 06:00
map nspdba.teacher, target nspdba.teacher, FILTER(@RANGE(1,4));

-- replicat rtecve_2: applies range 2 of 4 via @RANGE
-- GGSCI> add replicat rtecve_2, exttrail /u01/app/goldengate/remotetrail/zz, checkpointtable ggsuser.ckpt
-- GGSCI> alter replicat rtecve_2, extseqno 4, extrba 2292
replicat rtecve_2
assumetargetdefs
-- structure of source table and target table is the same
userid ggsuser, password oracle
reportcount every 24 hours, rate
discardfile /u01/app/goldengate/discardtrail/rtecve_2.dsc, megabytes 10, append
discardrollover at 06:00
map nspdba.teacher, target nspdba.teacher, FILTER(@RANGE(2,4));

-- replicat rtecve_3: applies range 4 of 4 via @RANGE
-- GGSCI> add replicat rtecve_3, exttrail /u01/app/goldengate/remotetrail/zz, checkpointtable ggsuser.ckpt
-- GGSCI> alter replicat rtecve_3, extseqno 4, extrba 2292
replicat rtecve_3
assumetargetdefs
-- structure of source table and target table is the same
userid ggsuser, password oracle
reportcount every 24 hours, rate
discardfile /u01/app/goldengate/discardtrail/rtecve_3.dsc, megabytes 10, append
discardrollover at 06:00
map nspdba.teacher, target nspdba.teacher, FILTER(@RANGE(4,4));

-- replicat rtecve_4: applies range 3 of 4 via @RANGE
-- GGSCI> add replicat rtecve_4, exttrail /u01/app/goldengate/remotetrail/zz, checkpointtable ggsuser.ckpt
-- GGSCI> alter replicat rtecve_4, extseqno 4, extrba 2292
replicat rtecve_4
assumetargetdefs
-- structure of source table and target table is the same
userid ggsuser, password oracle
reportcount every 24 hours, rate
discardfile /u01/app/goldengate/discardtrail/rtecve_4.dsc, megabytes 10, append
discardrollover at 06:00
map nspdba.teacher, target nspdba.teacher, FILTER(@RANGE(3,4));

REPLICAT    STOPPED     RTECVE      00:00:00      00:12:35    
REPLICAT    STOPPED     RTECVE_1    00:00:00      00:04:57    
REPLICAT    STOPPED     RTECVE_2    00:00:00      00:03:39    
REPLICAT    STOPPED     RTECVE_3    00:00:00      00:02:53    
REPLICAT    STOPPED     RTECVE_4    00:00:00      00:00:25  

STEP 4: START NEW REPLICAT’S

GGSCI (eniya.varman.com) 48> start RTECVE_1
Sending START request to MANAGER ...
REPLICAT RTECVE_1 starting

GGSCI (eniya.varman.com) 49> start RTECVE_2
Sending START request to MANAGER ...
REPLICAT RTECVE_2 starting

GGSCI (eniya.varman.com) 50> start RTECVE_3
Sending START request to MANAGER ...
REPLICAT RTECVE_3 starting


GGSCI (eniya.varman.com) 51> start RTECVE_4
Sending START request to MANAGER ...
REPLICAT RTECVE_4 starting

STEP 5: VALIDATE ALL PROCESSES AND PARAMETERS USED IN THE REPLICATS !!

REPLICAT    RUNNING     RTECVE_1    00:00:00      00:00:04    
REPLICAT    RUNNING     RTECVE_2    00:00:00      00:00:09    
REPLICAT    RUNNING     RTECVE_3    00:00:00      00:00:07    
REPLICAT    RUNNING     RTECVE_4    00:00:00      00:00:02 

STEP 6: REMOVE THE OLD REPLICAT 

GGSCI (eniya.varman.com) 8> dblogin userid ggsuser@eniya, password oracle
Successfully logged into database.

GGSCI (eniya.varman.com) 9> delete RTECVE
Deleted REPLICAT RTECVE.


-------Thank you


I went through this PDF; it is really interesting, kindly go through it:

http://www.oracle11ggotchas.com/articles/Defining%20Multiple%20Replicats%20to%20Increase%20GoldenGate%20Performance.pdf


ERROR OGG-01028 Formatting error on: table

Scenario:
Configured brand new GoldenGate processes for the new table EWT_RWT_EMP and started the configured replicat, pump, and extract. The extract abended in the source Oracle GoldenGate home:

EXTRACT     ABENDED     EWTEMP        00:00:00      00:00:02

During initial investigation, we found following error in report file

2014-08-24 15:02:04  INFO-OGG-01517  Position of first record processed Sequence 10140, RBA 500086800, SCN 3230. 3920648400, Aug 22, 2014 1:21:38 PM.
TABLE resolved (entry hr.ewt_rwt_id,emp):
table "HR"."EWT_RWT_EMP", keycols (ewt_rwt_id,emp);
Using the following key columns for source table hr.ewt_rwt_id,emp: ewt_rwt_id,emp

Source Context :
SourceModule            : [er.redo.ora]
SourceID: [/scratch/aime1/adestore/views/aime1_staxj04/oggcore/OpenSys/src/app/er/redo/oracle/redoora.c]
SourceFunction        : [REDOORA_handle_format_error(file_def *, hdr_info_t *, char *)]
SourceLine              : [9058]

2014-08-24 15:02:04  ERROR   OGG-01028  Formatting error on: table name ewt_rwt_id,emp, rowid AADlmAAAHAAIQ6EAAA, XID 12.8.385427, position (Seqno 10140, RBA 500103096). Error converting timestamp with timezone from Oracle to ASCII format for column DISC_DT.
***********************************************************************
*                   ** Run Time Statistics **                         *
***********************************************************************
Report at 2014-08-24 15:02:04 (activity since 2014-08-24 15:02:04)
Output to u02/app/ggs/trails/bb:
No records extracted.

Resolution:

Added the following parameter to the extract parameter file and restarted the extract:
TRANLOGOPTIONS INCLUDEREGIONID


EXTRACT     RUNNING     EWTEMP       00:00:00      00:00:02
started successfully !!



ERROR OGG-01232 Receive TCP params error: TCP/IP error 73

Scenario:
We had a complaint from the application team that data was not being replicated for one of the critical tables to the target instances. During the initial investigation, we found the OGG pump abended on the source side and the following error in its report file (OGG monitoring is disabled on these servers).




EXTRACT     ABENDED     PEMPRI     00:00:00      09:26:59

GGSCI> view report PEMPRI

2014-10-05 10:38:06  INFO    OGG-01226  Socket buffer size set to 27985 (flush size 27985).
Source Context :  SourceModule            : [ggnet.tpcrtn]
  SourceID                : [/scratch/aime1/adestore/views/aime1_staxj04/oggcore/OpenSys/src/gglib/ggnet/tcprtn.c]
  SourceFunction          : [opt_settle(int, char *, int32_t, char *)]
  SourceLine              : [3129]
2014-10-05 10:43:06  ERROR   OGG-01232  Receive TCP params error: TCP/IP error 73 (Connection reset by peer), endpoint:sfo.com.
2014-10-05 10:43:06  ERROR   OGG-01668  PROCESS ABENDING.

Resolution
Check whether the GoldenGate manager process is running in the target GoldenGate home:

GGSCI> info mgr
  • If the manager is not running in the target GoldenGate home, start the manager and then start the abended pump on the source side.
  • If the manager is running and the pump is still abended, I recommend bouncing the associated replicat in the target GoldenGate home; a stale replicat process can hold a lock on the remote trail.
  • If the pump still does not come up, compare the RBA of the pump with the size of the local trail on the source server.

GGSCI (sfo) 19> info PEMPRI
EXTRACT    PEMPRI   Last Started 2014-10-05 10:38   Status ABENDED
Checkpoint Lag       00:00:00 (updated 09:18:45 ago)
Log Read Checkpoint  File /u02/app/ggs/trails/ba000083
                     2014-10-05 01:18:48.000000  RBA 447538549

GGSCI (sfo) 20> exit
ggs01@sfo> cd /u02/app/ggs/trails/
ggs01@
sfo> ls -ltr ba*


-rw-r-----    1 ggs01    ggsoper   447538549 Oct 05 01:45 ba000083
-rw-r-----    1 ggs01    ggsoper    10543247 Oct 05 10:38 ba000084

When the size of the trail file and the RBA are the same, we can conclude that the pump was unable to perform an ETROLLOVER. Now you can force the pump to roll over to the next trail file:

GGSCI> alter PEMPRI, ETROLLOVER

GGSCI (sfo) 19> info PEMPRI
EXTRACT    PEMPRI   Last Started 2014-10-05 10:38   Status RUNNING
Checkpoint Lag       00:00:00 (updated 09:18:45 ago)
Log Read Checkpoint  File /u02/app/ggs/trails/ba000084
                     2014-10-05 01:18:48.000000  RBA 1168

Now we can see that the pump is up and running, and the RBA (1168) is increasing!
