Monday, May 9, 2016

Applying the April PSU (Patch 22738793)


This procedure applies to the April PSU, Patch 22646198 (11.2.0.4.160419 DB PSU + GI PSU 11.2.0.4.160419, Apr 2016), and equally to
Patch 23274134 - Oracle Grid Infrastructure Patch Set Update 11.2.0.4.160719 (Jul 2016), which includes Database PSU 11.2.0.4.160719.

The procedure is the same for both.


Note: when you unzip the combo patch, if you don't see the individual patches listed in readme.html, download them separately.
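For reference, a minimal unzip-and-verify sketch; the zip file name follows Oracle's usual p&lt;patch&gt;_&lt;release&gt;_&lt;platform&gt;.zip pattern and is an assumption, while the sub-patch numbers are the ones that appear in the opatch auto log later in this post:

```shell
# Unzip the combo patch into the staging area (zip name is assumed)
cd /stage
unzip -q p22738793_112040_Linux-x86-64.zip

# The patch directory should contain readme.html and the sub-patches
ls /stage/22646198
# expect sub-patch directories such as 22502456, 22502549 and 22502505
```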

The patch can be applied in a rolling fashion.

STEP1: Before applying any patch

Take a backup of the Grid home and all database homes on every node of the cluster.
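A minimal backup sketch, run as root on each node, assuming the home paths used in this post and an assumed /backup staging area:

```shell
# Back up the Grid home and the database home (paths from this post;
# /backup is an assumed staging location - adjust to your environment)
tar -czf /backup/grid_home_$(hostname)_$(date +%Y%m%d).tar.gz -C / u01/app/11.2.0/grid
tar -czf /backup/db_home_$(hostname)_$(date +%Y%m%d).tar.gz -C / u01/app/oracle/product/11.2.0/db_1

# Back up the central inventory too, since opatch updates it
tar -czf /backup/oraInventory_$(hostname)_$(date +%Y%m%d).tar.gz -C / u01/app/oraInventory
```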

STEP2:
Make sure an OCM response file (ocm.rsp) has been created on all nodes of the cluster; opatch auto needs it to apply the patch.
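If the response file does not exist yet, it can be generated with the emocmrsp utility shipped under OPatch; the output file name below matches the one passed to opatch auto later in this post:

```shell
# Run as the Oracle software owner; the email prompt can be skipped
# by pressing Enter and confirming with Y
cd /stage
/u01/app/oracle/product/11.2.0/db_1/OPatch/ocm/bin/emocmrsp -no_banner -output /stage/apply_april_psu.rsp
```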

For Patch 22646198
(11.2.0.4.160419 DB PSU + GI PSU 11.2.0.4.160419, Apr 2016):
this patch can be applied in a rolling fashion, so no downtime is required.

STEP3: Stop the EM agent processes prior to patching (and prior to rolling back the patch) as the ORACLE_HOME owner:
$ <ORACLE_HOME>/bin/emctl stop dbconsole


Also validate the Oracle inventory - check the consistency of the inventory information for the GI home and each database home:
$ opatch lsinventory -detail -oh <GRID_HOME>
$ opatch lsinventory -detail -oh <ORACLE_HOME>

STEP4:
Check for one-off patch conflicts and resolve them before patching.
Run the checks as the Oracle user for the database home and as the Grid user for the Grid home:

For the ORACLE_HOME:
/u01/app/11.2.0/grid/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -oh /u01/app/oracle/product/11.2.0/db_1 -phBaseDir /stage/22646198

For the GRID_HOME:
/u01/app/11.2.0/grid/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -oh /u01/app/11.2.0/grid -phBaseDir /stage/22646198
 
STEP5: Once you confirm there are no conflicts, go ahead and apply the patch to the database home and Grid home using the ocm.rsp file.


If the GI home and the database homes are not shared and no ACFS file system is configured, apply the patch as the root user as follows:
Syntax: opatch auto <Patch Location> -ocmrf <ocm response file>

# opatch auto /stage/22646198 -ocmrf /stage/apply_april_psu.rsp

Output as follows:

Executing /u01/app/11.2.0/grid/perl/bin/perl /u01/app/oracle/product/11.2.0/db_1/OPatch/crs/patch11203.pl -patchdir /stage -patchn 22646198 -ocmrf /stage/apply_april_psu.rsp -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params

This is the main log file: /u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatchauto2016-05-09_10-41-31.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatchauto2016-05-09_10-41-31.report.log

2016-05-09 10:41:31: Starting Clusterware Patch Setup
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Stopping RAC /u01/app/oracle/product/11.2.0/db_1 ...
Stopped RAC /u01/app/oracle/product/11.2.0/db_1 successfully

patch /stage/22646198/22502456  apply successful for home  /u01/app/oracle/product/11.2.0/db_1
patch /stage/22646198/22502549/custom/server/22502549  apply successful for home  /u01/app/oracle/product/11.2.0/db_1

Stopping CRS...
Stopped CRS successfully

patch /stage/22646198/22502456  apply successful for home  /u01/app/11.2.0/grid
patch /stage/22646198/22502549  apply successful for home  /u01/app/11.2.0/grid
patch /stage/22646198/22502505  apply successful for home  /u01/app/11.2.0/grid

Starting CRS...
Installing Trace File Analyzer
CRS-4123: Oracle High Availability Services has been started.

Starting RAC /u01/app/oracle/product/11.2.0/db_1 ...
Started RAC /u01/app/oracle/product/11.2.0/db_1 successfully
opatch auto succeeded.

While node 1 is being patched, here is what you can observe on node 2 and in its alert log:

[root@tnc2 tnc2]# crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.GRID.dg
               ONLINE  ONLINE       tnc2                                        
ora.LISTENER.lsnr
               ONLINE  ONLINE       tnc2                                        
ora.OVAL009P.dg
               ONLINE  ONLINE       tnc2                                        
ora.asm
               ONLINE  ONLINE       tnc2                     Started            
ora.gsd
               OFFLINE OFFLINE      tnc2                                        
ora.net1.network
               ONLINE  ONLINE       tnc2                                        
ora.ons
               ONLINE  ONLINE       tnc2                                        
ora.registry.acfs
               ONLINE  ONLINE       tnc2                                        
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       tnc2                                        
ora.cvu
      1        ONLINE  ONLINE       tnc2                                        
ora.oc4j
      1        ONLINE  ONLINE       tnc2                                        
ora.oval009.db
      1        OFFLINE OFFLINE                               Instance Shutdown  
      2        ONLINE  ONLINE       tnc2                     Open               
ora.scan1.vip
      1        ONLINE  ONLINE       tnc2                                        
ora.tnc1.vip
      1        ONLINE  INTERMEDIATE tnc2                     FAILED OVER        
ora.tnc2.vip
      1        ONLINE  ONLINE       tnc2                                

[oracle@tnc2 tnc2]$ pwd
/u01/app/11.2.0/grid/log/tnc2
[oracle@tnc2 tnc2]$ tail -100f alerttnc2.log
[crsd(15372)]CRS-2772:Server 'tnc1' has been assigned to pool 'ora.oval009'.
2016-05-09 09:32:35.749:
[crsd(15372)]CRS-2772:Server 'tnc2' has been assigned to pool 'Generic'.
2016-05-09 09:32:35.750:
[crsd(15372)]CRS-2772:Server 'tnc2' has been assigned to pool 'ora.oval009'.
2016-05-09 10:49:02.615:
[cssd(14888)]CRS-1625:Node tnc1, number 1, was manually shut down
2016-05-09 10:49:02.627:
[cssd(14888)]CRS-1601:CSSD Reconfiguration complete. Active nodes are tnc2 .
2016-05-09 10:49:02.792:
[crsd(15372)]CRS-5504:Node down event reported for node 'tnc1'.
2016-05-09 10:49:02.805:
[crsd(15372)]CRS-2773:Server 'tnc1' has been removed from pool 'Generic'.
2016-05-09 10:49:02.806:
[crsd(15372)]CRS-2773:Server 'tnc1' has been removed from pool 'ora.oval009'.

[root@tnc1 ~]# ps -ef | grep pmon
oracle   15691     1  0 10:59 ?        00:00:00 asm_pmon_+ASM1
oracle   17336     1  0 11:00 ?        00:00:00 ora_pmon_oval0091
root     18623 14908  0 11:04 pts/4    00:00:00 grep pmon

Once the patch is successfully applied to both the ORACLE_HOME and GRID_HOME, make sure services are up and running on the first node of the cluster.
Then repeat the same steps on all remaining nodes of the cluster.


STEP6:
Run the catbundle.sql script.
For each database running on the Oracle home being patched, connect to the database using SQL*Plus as SYSDBA and run catbundle.sql as follows:
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT
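catbundle.sql writes its log files under $ORACLE_HOME/cfgtoollogs/catbundle; it is worth scanning them for errors before moving on:

```shell
# List the catbundle APPLY logs and check them for ORA- errors
ls -ltr $ORACLE_HOME/cfgtoollogs/catbundle/
grep -i "ORA-" $ORACLE_HOME/cfgtoollogs/catbundle/catbundle_PSU_*_APPLY_*.log
```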

STEP7:
Check the list of patches applied to the database. In 11.2 the PSU is recorded in DBA_REGISTRY_HISTORY (DBA_REGISTRY_SQLPATCH exists only from 12c onwards):
SQL> select action_time, action, namespace, version, id, bundle_series, comments from dba_registry_history order by action_time;

STEP8:
After running catbundle.sql, validate the patch installation across all nodes of the cluster:
[oracle@tnc1 ~]$ opatch lsinventory -all_nodes


---Nikhil Tatineni--
