Saturday, August 27, 2016

ERROR OGG-01161 Bad column index (17)

Scenario # Replicat abended with "ERROR OGG-01161 Bad column index (17)"

Source Context :
SourceModule   : [ggstd.conv.endian]
SourceID       : [/scratch/aime1/adestore/views/aime1_staxj04/oggcore/OpenSys/src/gglib/ggstd/lecnv.c]
SourceFunction : [convCompSQL(char *, file_def *, rowlen_t)]
SourceLine     : [533]
2016-07-13 06:48:42  ERROR   OGG-01161  Bad column index (17) specified for table world.andhra_pradesh, max columns = 17.
Last log location read:
     FILE      : /opt/ggs/trails/yn000063
     SEQNO     : 63
     RBA       : 93183
     TIMESTAMP : 2016-07-09 05:06:28.026375
     EOF       : NO
     READERR   : 0

Resolution #
The error typically means the Replicat is mapping trail data without an accurate source table definition, so it references a column index beyond the table's column count. Create a definitions file for the source table with the defgen utility; the table definition is written to a .def file, which is then placed on the target.

Steps to create the definition file
Step 1: Create a .prm file in the dirprm directory under the GoldenGate home

vi /opt/app/db11g/ggs/dirprm/world.prm

defsfile ./dirdef/def-world.def
userid ggsuser, password xxxxxx
table world.andhra_pradesh;

Step 2: Run defgen from the GoldenGate home
./defgen paramfile /opt/app/db11g/ggs/dirprm/world.prm

When defgen runs, the .def file is created in the dirdef directory under the GoldenGate home. Copy the .def file (scp or sftp) from the source dirdef directory to the dirdef directory under the target GoldenGate home, as shown below, then add the SOURCEDEFS parameter to the Replicat parameter file.
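The copy step might look like this (a sketch; target-host is a placeholder, and the source and target GoldenGate homes are taken from this post):

# run from the source GoldenGate host
scp /opt/app/db11g/ggs/dirdef/def-world.def oracle@target-host:/opt/ggs/dirdef/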

Replicat param file with definition file:

-- GGSCI> add replicat rnlggsm1, exttrail /opt/ggs/trails/yn, checkpointtable ggsuser.nspckpt
replicat rnlggsm1
SETENV (ORACLE_HOME="/usr/local/opt/oracle/product/11204")
SETENV (ORACLE_SID="db11g")
userid ggsuser, password xxxxxx
sourcedefs ./dirdef/def-world.def

reperror (default, discard)
discardfile /opt/app/ggs/dsc/rnlggsm1.dsc, megabytes 200, append
discardrollover at 21:00
---REPERROR (1403, DISCARD)
---BATCHSQL BATCHERRORMODE, BATCHTRANSOPS 100000
map world.andhra_pradesh, target india.andhra_pradesh;
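Once the parameter file points at the definitions file, bounce and check the Replicat from GGSCI (a sketch using the group name from this post):

GGSCI> stop replicat rnlggsm1
GGSCI> start replicat rnlggsm1
GGSCI> info replicat rnlggsm1, detail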

Issue is resolved --

--Nikhil Tatineni--
--GoldenGate--


Friday, August 26, 2016

RAC Patching # opatch auto

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Applying a patch with opatch auto using a response file # 

[root@tnc1 stage]# opatch auto /u01/stage/23274134 -ocmrf /u01/stage/first_node_file.rsp
Executing /u01/app/11.2.0.4/grid/perl/bin/perl /u01/app/11.2.0.4/grid/OPatch/crs/patch11203.pl -patchdir /u01/stage -patchn 23274134 -ocmrf /u01/stage/first_node_file.rsp -paramfile /u01/app/11.2.0.4/grid/crs/install/crsconfig_params

This is the main log file: /u01/app/11.2.0.4/grid/cfgtoollogs/opatchauto2016-08-26_13-35-20.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u01/app/11.2.0.4/grid/cfgtoollogs/opatchauto2016-08-26_13-35-20.report.log
2016-08-26 13:35:20: Starting Clusterware Patch Setup
Using configuration parameter file: /u01/app/11.2.0.4/grid/crs/install/crsconfig_params
Stopping RAC /u01/app/oracle/product/11.2.0.4/db_1 ...
Stopped RAC /u01/app/oracle/product/11.2.0.4/db_1 successfully
patch /u01/stage/23274134/23054359  apply successful for home  /u01/app/oracle/product/11.2.0.4/db_1 
patch /u01/stage/23274134/23054319/custom/server/23054319  apply successful for home  /u01/app/oracle/product/11.2.0.4/db_1 
Stopping CRS...
Stopped CRS successfully
patch /u01/stage/23274134/23054359  apply successful for home  /u01/app/11.2.0.4/grid 
patch /u01/stage/23274134/23054319  apply successful for home  /u01/app/11.2.0.4/grid 
patch /u01/stage/23274134/22502505  apply successful for home  /u01/app/11.2.0.4/grid 
Starting CRS...
Installing Trace File Analyzer
CRS-4123: Oracle High Availability Services has been started.
Starting RAC /u01/app/oracle/product/11.2.0.4/db_1 ...
Started RAC /u01/app/oracle/product/11.2.0.4/db_1 successfully
opatch auto succeeded. 
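To confirm that the sub-patches (23054359, 23054319, 22502505) landed in each home, opatch lsinventory can be run against both homes (a sketch; run as the oracle user):

$ /u01/app/11.2.0.4/grid/OPatch/opatch lsinventory -oh /u01/app/11.2.0.4/grid
$ /u01/app/oracle/product/11.2.0.4/db_1/OPatch/opatch lsinventory -oh /u01/app/oracle/product/11.2.0.4/db_1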

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
This is the report log file: /u01/app/11.2.0.4/grid/cfgtoollogs/opatchauto2016-08-26_13-35-20.report.log

Steps executed by "opatch auto" on the RAC cluster are as follows # 

[root@tnc1 ~]# tail -100f /u01/app/11.2.0.4/grid/cfgtoollogs/opatchauto2016-08-26_13-35-20.report.log
***********  Configuration Data  ***********
* It shows only those targets that will be patched in this session *

crs_home=/u01/app/11.2.0.4/grid      owner=oracle      opatch_ver=11.2.0.3.12
rac_home=/u01/app/oracle/product/11.2.0.4/db_1      owner=oracle      opatch_ver=11.2.0.3.12

*********** Steps to be executed as owner unless specified as root ***********

1: /u01/app/oracle/product/11.2.0.4/db_1/OPatch/opatch prereq CheckComponents -ph /u01/stage/23274134/23054359 -oh /u01/app/oracle/product/11.2.0.4/db_1

2: /u01/app/oracle/product/11.2.0.4/db_1/OPatch/opatch prereq CheckComponents -ph /u01/stage/23274134/23054319/custom/server/23054319 -oh /u01/app/oracle/product/11.2.0.4/db_1

3: /u01/app/oracle/product/11.2.0.4/db_1/OPatch/opatch prereq CheckConflictAgainstOH -ph /u01/stage/23274134/23054359 -oh /u01/app/oracle/product/11.2.0.4/db_1

4: /u01/app/oracle/product/11.2.0.4/db_1/OPatch/opatch prereq CheckConflictAgainstOH -ph /u01/stage/23274134/23054319/custom/server/23054319 -oh /u01/app/oracle/product/11.2.0.4/db_1

5: /u01/app/11.2.0.4/grid/OPatch/opatch prereq CheckComponents -ph /u01/stage/23274134/23054359 -oh /u01/app/11.2.0.4/grid

6: /u01/app/11.2.0.4/grid/OPatch/opatch prereq CheckComponents -ph /u01/stage/23274134/23054319 -oh /u01/app/11.2.0.4/grid

7: /u01/app/11.2.0.4/grid/OPatch/opatch prereq CheckComponents -ph /u01/stage/23274134/22502505 -oh /u01/app/11.2.0.4/grid

8: /u01/app/11.2.0.4/grid/OPatch/opatch prereq CheckConflictAgainstOH -ph /u01/stage/23274134/23054359 -oh /u01/app/11.2.0.4/grid

9: /u01/app/11.2.0.4/grid/OPatch/opatch prereq CheckConflictAgainstOH -ph /u01/stage/23274134/23054319 -oh /u01/app/11.2.0.4/grid

10: /u01/app/11.2.0.4/grid/OPatch/opatch prereq CheckConflictAgainstOH -ph /u01/stage/23274134/22502505 -oh /u01/app/11.2.0.4/grid

11: /u01/app/oracle/product/11.2.0.4/db_1/bin/emctl stop dbconsole

12: /u01/app/oracle/product/11.2.0.4/db_1/bin/emctl stop agent

13: /u01/app/oracle/product/11.2.0.4/db_1/OPatch/opatch prereq CheckApplicable -ph /u01/stage/23274134/23054359 -oh /u01/app/oracle/product/11.2.0.4/db_1

14: /u01/app/oracle/product/11.2.0.4/db_1/OPatch/opatch prereq CheckApplicable -ph /u01/stage/23274134/23054319/custom/server/23054319 -oh /u01/app/oracle/product/11.2.0.4/db_1

15: /u01/app/oracle/product/11.2.0.4/db_1/bin/srvctl stop home -o /u01/app/oracle/product/11.2.0.4/db_1 -s /u01/app/oracle/product/11.2.0.4/db_1/srvm/admin/stophome.txt -n tnc1 -f

16: /u01/stage/23274134/23054319/custom/server/23054319/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/11.2.0.4/db_1

17: /u01/app/oracle/product/11.2.0.4/db_1/OPatch/opatch napply /u01/stage/23274134/23054359 -local -silent -ocmrf /u01/stage/first_node_file.rsp -oh /u01/app/oracle/product/11.2.0.4/db_1 -invPtrLoc /u01/app/oracle/product/11.2.0.4/db_1/oraInst.loc

18: /u01/app/oracle/product/11.2.0.4/db_1/OPatch/opatch napply /u01/stage/23274134/23054319/custom/server/23054319 -local -silent -ocmrf /u01/stage/first_node_file.rsp -oh /u01/app/oracle/product/11.2.0.4/db_1 -invPtrLoc /u01/app/oracle/product/11.2.0.4/db_1/oraInst.loc

19: /u01/stage/23274134/23054319/custom/server/23054319/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/11.2.0.4/db_1

20: /u01/app/11.2.0.4/grid/crs/install/rootcrs.pl -unlock : run as root

21: /sbin/fuser -k /u01/app/11.2.0.4/grid/bin/crsctl.bin : run as root

22: /u01/app/11.2.0.4/grid/OPatch/opatch prereq CheckApplicable -ph /u01/stage/23274134/23054359 -oh /u01/app/11.2.0.4/grid

23: /u01/app/11.2.0.4/grid/OPatch/opatch prereq CheckApplicable -ph /u01/stage/23274134/23054319 -oh /u01/app/11.2.0.4/grid

24: /u01/app/11.2.0.4/grid/OPatch/opatch prereq CheckApplicable -ph /u01/stage/23274134/22502505 -oh /u01/app/11.2.0.4/grid

25: /u01/app/11.2.0.4/grid/OPatch/opatch napply /u01/stage/23274134/23054359 -local -silent -ocmrf /u01/stage/first_node_file.rsp -oh /u01/app/11.2.0.4/grid -invPtrLoc /u01/app/11.2.0.4/grid/oraInst.loc

26: /u01/app/11.2.0.4/grid/OPatch/opatch napply /u01/stage/23274134/23054319 -local -silent -ocmrf /u01/stage/first_node_file.rsp -oh /u01/app/11.2.0.4/grid -invPtrLoc /u01/app/11.2.0.4/grid/oraInst.loc

27: /u01/app/11.2.0.4/grid/OPatch/opatch napply /u01/stage/23274134/22502505 -local -silent -ocmrf /u01/stage/first_node_file.rsp -oh /u01/app/11.2.0.4/grid -invPtrLoc /u01/app/11.2.0.4/grid/oraInst.loc

28: /u01/app/11.2.0.4/grid/bin/emctl start dbconsole

29: /u01/app/11.2.0.4/grid/rdbms/install/rootadd_rdbms.sh : run as root

30: /u01/app/11.2.0.4/grid/crs/install/rootcrs.pl -patch : run as root

31: /u01/app/oracle/product/11.2.0.4/db_1/bin/emctl start dbconsole

32: /u01/app/oracle/product/11.2.0.4/db_1/bin/emctl start agent

33: /u01/app/oracle/product/11.2.0.4/db_1/bin/srvctl start home -o /u01/app/oracle/product/11.2.0.4/db_1 -s /u01/app/oracle/product/11.2.0.4/db_1/srvm/admin/stophome.txt -n tnc1

### Post Installation Steps ### After ...
Patch 23274134 - Oracle Grid Infrastructure Patch Set Update 11.2.0.4.160719 (Jul2016) (Includes Database PSU 11.2.0.4.160719)

SQL> @$ORACLE_HOME/rdbms/admin/catbundle.sql psu apply
PL/SQL procedure successfully completed.

" catbundle.sql " will update "dba_registry_history" View 

SQL> column version format a10
SQL> column action_time format a30
SQL> column action format a10
SQL> select action_time,action,version,id from dba_registry_history;
ACTION_TIME                    ACTION     VERSION            ID
------------------------------ ---------- ---------- ----------
24-AUG-13 12.03.45.119862 PM   APPLY      11.2.0.4            0
19-JUL-16 03.56.57.399886 PM   APPLY      11.2.0.4            0
25-AUG-16 03.42.09.382630 PM   APPLY      11.2.0.4       160419
26-AUG-16 10.04.27.756443 PM   APPLY      11.2.0.4       160719
4 rows selected.

[oracle@tnc1 ~]$ cd $ORACLE_BASE/cfgtoollogs/catbundle
[oracle@tnc1 catbundle]$ ls -ltr
total 32
-rw-r--r-- 1 oracle oinstall 26377 Aug 26 22:04 catbundle_PSU_TEST09_GENERATE_2016Aug26_22_04_24.log
-rw-r--r-- 1 oracle oinstall  2476 Aug 26 22:04 catbundle_PSU_TEST09_APPLY_2016Aug26_22_04_27.log
[oracle@tnc1 catbundle]$

[oracle@tnc1 catbundle]$ grep ERROR *
catbundle_PSU_TEST09_APPLY_2016Aug26_22_04_27.log:SQL> Rem    SQL_IGNORABLE_ERRORS: NONE
catbundle_PSU_TEST09_GENERATE_2016Aug26_22_04_24.log: 82        RAISE_APPLICATION_ERROR(-20000, 'INVALID_PATH during openScriptFile');
catbundle_PSU_TEST09_GENERATE_2016Aug26_22_04_24.log: 84        RAISE_APPLICATION_ERROR(-20000, 'INVALID_MODE during openScriptFile');
catbundle_PSU_TEST09_GENERATE_2016Aug26_22_04_24.log: 86        RAISE_APPLICATION_ERROR(-20000,
catbundle_PSU_TEST09_GENERATE_2016Aug26_22_04_24.log: 89        RAISE_APPLICATION_ERROR(-20000,
catbundle_PSU_TEST09_GENERATE_2016Aug26_22_04_24.log:107        RAISE_APPLICATION_ERROR(-20000,
catbundle_PSU_TEST09_GENERATE_2016Aug26_22_04_24.log:110        RAISE_APPLICATION_ERROR(-20000,
catbundle_PSU_TEST09_GENERATE_2016Aug26_22_04_24.log:112      WHEN UTL_FILE.WRITE_ERROR THEN
catbundle_PSU_TEST09_GENERATE_2016Aug26_22_04_24.log:113        RAISE_APPLICATION_ERROR(-20000,
catbundle_PSU_TEST09_GENERATE_2016Aug26_22_04_24.log:114                                'WRITE_ERROR during insertScriptFile');
catbundle_PSU_TEST09_GENERATE_2016Aug26_22_04_24.log:123        RAISE_APPLICATION_ERROR(-20000,
catbundle_PSU_TEST09_GENERATE_2016Aug26_22_04_24.log:125      WHEN UTL_FILE.WRITE_ERROR THEN
catbundle_PSU_TEST09_GENERATE_2016Aug26_22_04_24.log:126        RAISE_APPLICATION_ERROR(-20000,
catbundle_PSU_TEST09_GENERATE_2016Aug26_22_04_24.log:127                                'WRITE_ERROR during closeScriptFile');

# Compile Invalid Objects # 

SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql 
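After utlrp.sql completes, a quick sanity check that recompilation left nothing invalid (a sketch):

SQL> select count(*) from dba_objects where status = 'INVALID';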

+++Nikhil Tatineni +++

TFA Collector # 11.2.0.4 RAC new Feature

using TFA Collector - Tool for Enhanced Diagnostic Gathering (Doc ID 1513912.2)
Trace File Analyzer Collector (TFA) is a diagnostic collection utility that simplifies diagnostic data collection on Oracle Clusterware/Grid Infrastructure, RAC, and single-instance database systems. TFA is similar to the diag collection utility packaged with Oracle Clusterware in that it collects and packages diagnostic data.

TFA simplifies diagnostic data collection: a single command executed from a single node performs a clusterwide diagnostic collection, the collected diagnostics are consolidated on that node, and the diagnostic files are "trimmed" around the incident time. Diagnostic uploads to Support are reduced by a factor of 10x or more in most cases, which increases the efficiency of admin staff.
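For example, a single clusterwide collection covering the last hour might look like this (a sketch; available flags vary by TFA version):

[oracle@tnc1 bin]$ $GRID_HOME/bin/tfactl diagcollect -since 1h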

To check TFA Status #
[oracle@tnc1 bin]$ $GRID_HOME/bin/tfactl print status

[oracle@tnc1 bin]$ cd ..
[oracle@tnc1 grid]$ cd crs/
[oracle@tnc1 crs]$ cd install/
[oracle@tnc1 install]$ ls -ltr

To install the TFA tool, run tfa_setup.sh # 
[oracle@tnc1 install]$ ls -ltr tfa_setup.sh
-rwxr-xr-x 1 root oinstall 15343311 Mar 11 04:35 tfa_setup.sh

To print configuration # 
[oracle@tnc1 install]$ $GRID_HOME/bin/tfactl print config


To check Errors in RAC Cluster # 
[oracle@tnc1 install]$ $GRID_HOME/bin/tfactl print errors
Total Errors found in database: 0
DONE

--References # 
https://juliandontcheff.wordpress.com/2013/08/28/oracle-database-11-2-0-4-new-features/
http://www.hhutzler.de/blog/trace-file-analyzer-collector-tfa-collector/
Linux: Oracle RAC Node Reboot Hangs on TFA Process (Doc ID 1983567.1)

--Nikhil Tatineni--
-- RAC 11.2.0.4 # 

Sunday, August 21, 2016

Extract DDL of Table

Use the DBMS_METADATA package to extract the DDL of a table:

SQL> set long 9000
SQL> select dbms_metadata.get_ddl ('TABLE','EMP_INFO') from dual;
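The same package extracts DDL for other object types as well; for example an index (a sketch; EMP_INFO_IDX is a hypothetical index name, and the optional third argument is the owning schema):

SQL> select dbms_metadata.get_ddl ('INDEX','EMP_INFO_IDX','DELTA') from dual;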

--Nikhil Tatineni--
--Database --

Global Index and Local Index

Local indexes are created on partitioned tables; a table can be partitioned by range, hash, or list.
When you create a local index on a partitioned table, Oracle automatically creates as many index partitions as there are table partitions, and whenever a new table partition is added, a matching index partition is created automatically.
A local partitioned index has a one-to-one relationship with the table partitions: the key value for the table partition and the index partition must be identical.

Advantages #
> Local indexes are very easy to maintain
> Local indexes suit tables where old partitions are dropped and new partitions are added
> Faster execution plans through partition pruning

step 1 # create partition table # populate data 
step 2 # create local index on partition key 

CREATE TABLE emp_info
(
emp_id NUMBER NOT NULL,
join_date DATE NOT NULL,
email VARCHAR2(100)
)
PARTITION BY RANGE (join_date)
(
PARTITION emp_info_p0 VALUES LESS THAN (TO_DATE('01-JAN-2011', 'DD-MON-YYYY')) TABLESPACE tnc_stage,
PARTITION emp_info_p1 VALUES LESS THAN (TO_DATE('01-JAN-2012', 'DD-MON-YYYY')) TABLESPACE tnc_stage,
PARTITION emp_info_p2 VALUES LESS THAN (TO_DATE('01-JAN-2013', 'DD-MON-YYYY')) TABLESPACE tnc_stage,
PARTITION emp_info_p3 VALUES LESS THAN (TO_DATE('01-JAN-2014', 'DD-MON-YYYY')) TABLESPACE tnc_stage,
PARTITION emp_info_p4 VALUES LESS THAN (TO_DATE('01-JAN-2015', 'DD-MON-YYYY')) TABLESPACE tnc_stage
);

populate the data into partition table #

declare
begin
for i in 1..100000
loop
insert into emp_info values (i,'13-APR-2010','xyz'||i);
end loop;
end;
/

declare
begin
for i in 100001..200000
loop
insert into emp_info values (i,'13-APR-2011','xyz'||i);
end loop;
end;
/

declare
begin
for i in 200001..300000
loop
insert into emp_info values (i,'13-APR-2012','xyz'||i);
end loop;
end;
/

declare
begin
for i in 300001..400000
loop
insert into emp_info values (i,'13-APR-2013','xyz'||i);
end loop;
end;
/

declare
begin
for i in 400001..500000
loop
insert into emp_info values (i,'13-APR-2014','xyz'||i);
end loop;
end;
/

Populating data into the partition table is complete #
Check the row count of each partition in USER_TAB_PARTITIONS.NUM_ROWS.
A NULL row count means statistics have not been gathered: online statistics gathering for bulk loads is off in 11g (it arrived in 12c), so in 11g stats must be gathered manually.

> select TABLE_NAME,PARTITION_NAME,NUM_ROWS from user_tab_partitions where table_name='EMP_INFO';
TABLE_NAME       PARTITION_NAME NUM_ROWS
------------------------------ ------------------------------ ----------
EMP_INFO       EMP_INFO_P0
EMP_INFO       EMP_INFO_P1
EMP_INFO       EMP_INFO_P2
EMP_INFO       EMP_INFO_P3
EMP_INFO       EMP_INFO_P4

gather stats manually in 11g as follows
SQL> EXEC DBMS_STATS.gather_table_stats('DELTA','EMP_INFO');
PL/SQL procedure successfully completed.


SQL>  select TABLE_NAME,PARTITION_NAME,NUM_ROWS from user_tab_partitions where table_name='EMP_INFO';
TABLE_NAME       PARTITION_NAME NUM_ROWS
------------------------------ ------------------------------ ----------
EMP_INFO       EMP_INFO_P0   100000
EMP_INFO       EMP_INFO_P1   100000
EMP_INFO       EMP_INFO_P2   100000
EMP_INFO       EMP_INFO_P3   100000
EMP_INFO       EMP_INFO_P4   100000

step 2 # 
Create a local index on the partition table
>create index partition_local_idx on EMP_INFO (join_date) local;

Queries used to check partitioned indexes
>select INDEX_NAME,INDEX_TYPE,GLOBAL_STATS from user_indexes where index_name='PARTITION_LOCAL_IDX';
>select index_name, partition_name from user_ind_partitions where index_name='PARTITION_LOCAL_IDX';

SQL> select index_name, partition_name from user_ind_partitions where index_name='PARTITION_LOCAL_IDX';
INDEX_NAME       PARTITION_NAME
------------------------------ ------------------------------
PARTITION_LOCAL_IDX       EMP_INFO_P0
PARTITION_LOCAL_IDX       EMP_INFO_P1
PARTITION_LOCAL_IDX       EMP_INFO_P2
PARTITION_LOCAL_IDX       EMP_INFO_P3
PARTITION_LOCAL_IDX       EMP_INFO_P4
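With the local index in place, a query filtered on the partition key should prune to a single partition, visible in the PSTART/PSTOP columns of the plan (a sketch):

SQL> explain plan for select count(*) from emp_info where join_date = to_date('13-APR-2012','DD-MON-YYYY');
SQL> select * from table(dbms_xplan.display);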

Index statistics are gathered automatically when an index is created or rebuilt (the default behavior since 10g), so there is no need to gather index stats separately in recent versions of Oracle.

Global Index # To be continued -- 

References #
http://www.oraclebuffer.com/oracle/oracle-12c-global-index-maintenance-is-now-asynchronous/

logdump: Scan for Timestamp
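Before scanning, open the trail in logdump with the record header and column data displayed (a sketch; the trail path is taken from the OGG-01161 post above):

$ cd /opt/ggs
$ ./logdump
Logdump 1 >open ./trails/yn000063
Logdump 2 >ghdr on
Logdump 3 >detail data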



Logdump 267 >sfts 2016-07-21 11:00:00

Scan for timestamp >= 2016/07/21 15:00:00.000.000 GMT
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    59  (x003b)   IO Time    : 2016/07/21 11:00:31.999.422
IOType     :    15  (x0f)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :     179323       AuditPos   : 111691280
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/07/21 11:00:31.999.422 FieldComp            Len    59 RBA 161945260
Name: GGADMIN.HEARTBEAT
After  Image:                                             Partition 4   GU s
 0001 000c 0000 0008 6f76 616c 3030 3970 000e 0004 | ........delta09....
 ffff 0000 000f 001f 0000 3230 3136 2d30 372d 3231 | ..........2016-07-21
 3a31 313a 3030 3a33 322e 3233 3136 3936 3030 30   | :11:00:32.231696000
Column     1 (x0001), Len    12 (x000c)
 0000 0008 6f76 616c 3030 3970                     | ....delta09
Column    14 (x000e), Len     4 (x0004)
 ffff 0000                                         | ....
Column    15 (x000f), Len    31 (x001f)
 0000 3230 3136 2d30 372d 3231 3a31 313a 3030 3a33 | ..2016-07-21:11:00:3
 322e 3233 3136 3936 3030 30                       | 2.231696000


--Nikhil Tatineni--

--Goldengate -- 

Thursday, August 18, 2016

RAC Instance Specific Parameters

Parameter file #
*.audit_file_dest='/u01/app/oracle/admin/mipd09/adump'
*.audit_trail='db'
*.cluster_database=TRUE
*.compatible='11.2.0.4.0'
*.control_files='+STAGE/mipd09/controlfile/current.261.917625349','+STAGE/mipd09/controlfile/current.260.917625349'
*.db_block_size=8192
*.db_create_file_dest='+STAGE'
*.db_domain=''
*.db_name='mipd09'
*.db_recovery_file_dest='+STAGE'
*.db_recovery_file_dest_size=3145728000
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=mipd09XDB)'
mipd091.instance_number=1
mipd092.instance_number=2
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=314572800
*.processes=150
*.local_listener=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.90)(PORT=1521))
*.remote_listener='tnc-scan.localdomain:1521'
*.remote_login_passwordfile='exclusive'
*.sga_target=629145600
mipd091.thread=1
mipd092.thread=2
mipd091.undo_tablespace='UNDOTBS2'
mipd092.undo_tablespace='UNDOTBS1'

Instance-Specific Parameters # in the file above, parameters prefixed with an instance name (mipd091., mipd092.) apply only to that instance: instance_number, thread, and undo_tablespace. Parameters prefixed with * apply to all instances.
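One way to verify the per-instance values on a running cluster is gv$parameter (a sketch):

SQL> select inst_id, name, value from gv$parameter where name in ('instance_number','thread','undo_tablespace') order by inst_id, name;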


--Nikhil Tatineni--



Sunday, August 14, 2016

Adaptive query optimization : 12c

Adaptive Query Optimization or Real Time Query Optimization # 

After parse and transformation, the query enters the optimizer. The optimizer generates different execution plans based on selectivity, cardinality, and cost, i.e. the statistics available in the data dictionary, and the server process executes the query using the plan the optimizer delivered. New in 12c, the optimizer instruments the initial plan with statistics collectors and, depending on the type of SQL query and the joins made on the base tables, compares the actual row counts flowing from the base tables against its estimates during execution. When it discovers a cardinality mismatch, Oracle delays the final execution plan decision and adjusts the plan at run time using the statistics collector. This new 12c behavior lets the optimizer correct execution plans on the fly; the optimizer is smarter in 12c.

In 10g and 11g OPTIMIZER_DYNAMIC_SAMPLING defaults to 2, which enables dynamic sampling when a table doesn't have stats on it.
In 12c that functionality remains the same, and in addition, whenever there is a cardinality mismatch during execution of a SQL query, the optimizer uses adaptive statistics and changes the execution plan.

Set the following parameters on the database to enable adaptive query optimization #
optimizer_features_enable=12.1.0.1
optimizer_adaptive_reporting_only=true (if set to true, the optimizer will not execute the adaptive plan; it sticks with the execution plan generated by the cost-based optimizer, but it records what adaptive query optimization would have done, which helps gauge the effect of enabling it)
optimizer_dynamic_sampling=11 in 12c (this lets the optimizer decide whether to use automatic dynamic sampling)
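For example, at the system level (a sketch; scope=both assumes an spfile):

SQL> alter system set optimizer_adaptive_reporting_only = true scope=both;
SQL> alter system set optimizer_dynamic_sampling = 11 scope=both;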

For more on dynamic sampling, refer to the following link


Friday, August 12, 2016

Managing Oracle Physical Standby Databases


Managing Physical Standby database #
The MRP (Managed Recovery Process) is started on the standby database while it is in mount stage.
Start MRP on the standby
sql> alter database recover managed standby database disconnect from session;
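A common variant (a sketch; real-time apply requires standby redo logs) applies redo as it arrives instead of waiting for the archived log:

sql> alter database recover managed standby database using current logfile disconnect from session;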

To check whether MRP is running on the standby:
SELECT PROCESS from V$MANAGED_STANDBY where PROCESS like 'MRP%';

To Stop MRP # 
Sql> alter database recover managed standby database cancel; 

Monitoring the STANDBY database
To check which background processes are running on the standby, query v$managed_standby:
Select 
   PROCESS,  
   SEQUENCE#, 
   STATUS 
From 
   V$MANAGED_STANDBY;


Which logs have been shipped from the primary and applied on the standby database?
SELECT 'Last Applied  : ' Logs,
       TO_CHAR (next_time, 'DD-MON-YY:HH24:MI:SS') Time
  FROM v$archived_log
 WHERE sequence# = (SELECT MAX (sequence#)
                      FROM v$archived_log
                     WHERE applied = 'YES')
UNION
SELECT 'Last Received : ' Logs,
       TO_CHAR (next_time, 'DD-MON-YY:HH24:MI:SS') Time
  FROM v$archived_log
 WHERE sequence# = (SELECT MAX (sequence#) FROM v$archived_log);

On the standby site, query v$archive_dest_status to find the last archived log received and applied on this site:
Select 
   ARCHIVED_THREAD#, 
   ARCHIVED_SEQ#, 
   APPLIED_THREAD#,
   APPLIED_SEQ#
From 
   V$ARCHIVE_DEST_STATUS;

Queries used to find the archive gap between primary and standby
Views used # v$archived_log & v$log_history
sql> SELECT ARCH.THREAD# "Thread",
            ARCH.SEQUENCE# "Last Sequence Received",
            APPL.SEQUENCE# "Last Sequence Applied",
            (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
       FROM (SELECT THREAD#, SEQUENCE#
               FROM V$ARCHIVED_LOG
              WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME)
                                                FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
            (SELECT THREAD#, SEQUENCE#
               FROM V$LOG_HISTORY
              WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME)
                                                FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
      WHERE ARCH.THREAD# = APPL.THREAD#
      ORDER BY 1;

Views used to check database errors (v$archive_gap & v$dataguard_status)
set pages 300 lines 300
column Timestamp Format a20
column Facility  Format a24
column Severity  Format a13
column Message   Format a80 trunc
Select to_char(timestamp,'YYYY-MON-DD HH24:MI:SS') Timestamp,Facility,Severity,error_code,message_num,Message from v$dataguard_status where severity in ('Error','Fatal') order by Timestamp;

sql > select  *  from v$ARCHIVE_GAP;
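v$dataguard_stats reports the lag directly on the standby (a sketch):

SQL> select name, value, unit from v$dataguard_stats where name in ('apply lag','transport lag');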

--Nikhil Tatineni--
--Standby -- 

Queries to monitor RAC

The following queries will help find the culprits - a query to check long-running transactions from the last 8 hours  Col Sid Fo...