Zero-Downtime Oracle Grid Infrastructure Patching (ZDOGIP) (Doc ID 2635015.1)


In this Document

  Goal
  Solution
  Overview
  1) Installing and Patching the Grid Infrastructure (software only)
  2) Switching the Grid Infrastructure Home
  References

 

APPLIES TO: 

Oracle Database - Enterprise Edition - Version 19.8.0.0.0 and later
Information in this document applies to any platform.

GOAL

This document describes a procedure for installing patches on the Grid Infrastructure home with zero downtime.

SOLUTION

Overview

This document provides a procedure for patching Grid Infrastructure with zero impact on the databases running on the cluster.

This is a new feature introduced in the 19.8 Release Update (RU). The source Grid Infrastructure (GI) version must be at least 19.8, and the databases running on the cluster must be 19c.
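Before starting, you can confirm both prerequisites from the existing home (the same commands appear later in this note; /u01/app/19c/grid is the source home in the example environment):

/u01/app/19c/grid/OPatch/opatch lspatches
/u01/app/19c/grid/bin/crsctl query crs activeversion -f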

 

If Grid Infrastructure OS drivers (AFD, ACFS, ADVM) are in use and the patch being applied updates those drivers, a database instance stop and restart on the patched server is required. In that case, use the rolling patch method instead.
For example, the ACFS sub-patch of the Grid Infrastructure Release Update updates the AFD, ACFS, and ADVM drivers, as do one-off patches for ACFS, so implementing the driver changes requires a database instance stop/restart.
It is still possible to use this patching method when AFD/ACFS/ADVM component updates are involved, but running with the older version of the drivers is not supported for an extended period (e.g. the restart should be completed within 24 hours of patching).
To update the OS drivers, the command "rootcrs.sh -updateosfiles" performs the restart of the Clusterware, which requires a DB instance restart on the local node. To determine whether the drivers are in use, see the check below.
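A quick way to check whether the AFD and ACFS drivers are installed and loaded on a node is the driver state utilities shipped in the GI home's bin directory (a sketch; paths assume the example environment):

/u01/app/19c/grid/bin/afddriverstate installed
/u01/app/19c/grid/bin/afddriverstate loaded
/u01/app/19c/grid/bin/acfsdriverstate installed
/u01/app/19c/grid/bin/acfsdriverstate loaded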

In general, the following steps are involved:

1) Installing and Patching the Grid Infrastructure (software only)

2) Switching the Grid Infrastructure Home

Existing environment:

Grid Infrastructure 19.8 running on Linux x86-64 with no ACFS/AFD configured.

 

[oracle@node1 crsconfig]$ opatch lspatches

31335188;TOMCAT RELEASE UPDATE 19.0.0.0.0 (31335188)
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31304218;ACFS RELEASE UPDATE 19.8.0.0.0 (31304218)
31281355;Database Release Update : 19.8.0.0.200714 (31281355)


 

[oracle@node2 ~]$ crsctl stat res -t 
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1            STABLE
               ONLINE  ONLINE       node2            STABLE
ora.chad
               ONLINE  ONLINE       node1            STABLE
               ONLINE  ONLINE       node2            STABLE
ora.helper
               ONLINE  ONLINE       node1            IDLE,STABLE
               ONLINE  ONLINE       node2            IDLE,STABLE
ora.net1.network
               ONLINE  ONLINE       node1            STABLE
               ONLINE  ONLINE       node2            STABLE
ora.ons
               ONLINE  ONLINE       node1            STABLE
               ONLINE  ONLINE       node2            STABLE
ora.proxy_advm
               OFFLINE OFFLINE      node1            STABLE
               OFFLINE OFFLINE      node2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       node1            STABLE
      2        ONLINE  ONLINE       node2            STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2            STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       node2            169.254.15.144 10.64
                                                             .222.238,STABLE
ora.OCRVFDG.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1            STABLE
      2        ONLINE  ONLINE       node2            STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       node1            Started,STABLE
      2        ONLINE  ONLINE       node2            Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       node1            STABLE
      2        ONLINE  ONLINE       node2            STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.node1.vip
      1        ONLINE  ONLINE       node1            STABLE
ora.node2.vip
      1        ONLINE  ONLINE       node2            STABLE
ora.cvu
      1        ONLINE  ONLINE       node2            STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       node2            Open,STABLE
ora.orcl.db
      1        ONLINE  ONLINE       node1            Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             home_1,STABLE
      2        ONLINE  ONLINE       node2            Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             home_1,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       node2            STABLE
ora.rhpserver
      1        ONLINE  ONLINE       node2            STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       node2            STABLE
--------------------------------------------------------------------------------

[oracle@node1 crsconfig]$ ps -ef | grep d.bin
root      1257     1  0 11:12 ?        00:04:30 /u01/app/19c/grid/bin/ohasd.bin reboot
root      1607     1  0 11:12 ?        00:02:18 /u01/app/19c/grid/bin/orarootagent.bin
oracle    1752     1  0 11:12 ?        00:02:26 /u01/app/19c/grid/bin/oraagent.bin
oracle    1801     1  0 11:12 ?        00:01:17 /u01/app/19c/grid/bin/mdnsd.bin
oracle    1803     1  0 11:12 ?        00:03:07 /u01/app/19c/grid/bin/evmd.bin
oracle    1897     1  0 11:12 ?        00:01:22 /u01/app/19c/grid/bin/gpnpd.bin

 

 

1) Installing and Patching the Grid Infrastructure (software only)

Download the 19.3 base release from OTN:

https://www.oracle.com/database/technologies/oracle19c-linux-downloads.html

LINUX.X64_193000_grid_home.zip

Download the 19.9 RU:

 31750108 GI RELEASE UPDATE 19.9.0.0.0

Unzip the software to the destination Grid home on node1:

unzip LINUX.X64_193000_grid_home.zip -d /u01/app/19.9.0.0/grid

Ensure the directory "/u01/app/19.9.0.0/grid" exists and is writable by the Grid Infrastructure owner on all other nodes.
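For example, assuming the GI owner is "oracle" in group "oinstall" (names from the example environment) and root SSH access to node2:

ssh root@node2 "mkdir -p /u01/app/19.9.0.0/grid && chown oracle:oinstall /u01/app/19.9.0.0/grid"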

Apply the patch and configure the software:

/u01/app/19.9.0.0/grid/gridSetup.sh -applyRU  31750108
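Note that -applyRU expects the location of the unzipped RU rather than the bare patch number. A sketch, assuming the RU zip (p31750108_190000_Linux-x86-64.zip, following the usual My Oracle Support naming, is an assumption here) was staged under a hypothetical /u01/stage directory:

unzip -q p31750108_190000_Linux-x86-64.zip -d /u01/stage
/u01/app/19.9.0.0/grid/gridSetup.sh -applyRU /u01/stage/31750108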

Choose the option "Install Software Only" and select all the nodes.

2) Switching the Grid Infrastructure Home

1. Run gridSetup.sh from the target home:

/u01/app/19.9.0.0/grid/gridSetup.sh -SwitchGridhome

This launches the GUI.

During this phase, do not select the automated root.sh option; root.sh must be run manually with the additional options listed below.
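If no GUI is available, the same switch can be attempted in silent mode (a sketch; -silent is the standard gridSetup.sh non-interactive mode, but verify the exact syntax for your version):

/u01/app/19.9.0.0/grid/gridSetup.sh -SwitchGridhome -silent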

2. When prompted, run "root.sh -transparent -nodriverupdate" instead of root.sh.

When a patch contains driver updates, as any RU does, the -nodriverupdate option must be used. With this option the drivers are not updated at patch time; to update them later, run the root script with the "-updateosfiles" option, which shuts down everything on the node (databases included) to patch the drivers. Note that this is only needed if the drivers (ACFS/AFD/OKA/OLFS) are in use. This feature is recommended for configurations that do *not* use ACFS/AFD/OKA/OLFS.

 

If you do not use the "-nodriverupdate" flag, the root script fails with the error below:

[root@node1 ~]# /u01/app/19.9.0.0/grid/root.sh -transparent
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.9.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
LD_LIBRARY_PATH='/u01/app/19c/grid/lib:/u01/app/19.9.0.0/grid/lib:'
Using configuration parameter file: /u01/app/19.9.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/node1/crsconfig/rootcrs_node1_2020-02-02_07-23-34PM.log
Using configuration parameter file: /u01/app/19.9.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/node1/crsconfig/rootcrs_node1_2020-02-02_07-23-34PM.log
Using configuration parameter file: /u01/app/19.9.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/node1/crsconfig/crs_prepatch_node1_2020-02-02_07-23-35PM.log
Using configuration parameter file: /u01/app/19.9.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/node1/crsconfig/crs_prepatch_node1_2020-02-02_07-23-37PM.log
2020/04/20 19:23:59 CLSRSC-347: Successfully unlock /u01/app/19.9.0.0/grid
2020/04/20 19:24:00 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
Using configuration parameter file: /u01/app/19.9.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/node1/crsconfig/crs_postpatch_node1_2020-02-02_07-24-01PM.log
2020/04/20 19:24:28 CLSRSC-908: The -nodriverupdate option is required for out-of-place Zero-Downtime Oracle Grid Infrastructure Patching.
Died at /u01/app/19.9.0.0/grid/crs/install/crspatch.pm line 548.
The command '/u01/app/19.9.0.0/grid/perl/bin/perl -I/u01/app/19.9.0.0/grid/perl/lib -I/u01/app/19.9.0.0/grid/crs/install /u01/app/19.9.0.0/grid/crs/install/rootcrs.pl  -transparent -dstcrshome /u01/app/19.9.0.0/grid -postpatch' execution failed

This error is due to the ACFS RU in the target home. Zero-downtime patching will not update the ACFS/AFD/OKA/OLFS drivers.

 

Using Oracle Zero Downtime Patching with ACFS: 

When using Oracle Zero-Downtime Patching, the OPatch inventory displays the new patch number; however, only the Oracle Grid Infrastructure user-space binaries are actually patched. Oracle Grid Infrastructure OS kernel modules such as ACFS are not updated and continue to run the pre-patch version. The procedure is not applicable for installing an ACFS one-off fix in the target home.
The updated ACFS drivers are installed automatically if the nodes in the cluster are rebooted as part of an OS kernel update.
Otherwise, to update the ACFS drivers, the user must stop the CRS stack, run "rootcrs.sh -updateosfiles", and restart the CRS stack on each node of the cluster; to subsequently verify the installed and running driver versions, run "crsctl query driver softwareversion" and "crsctl query driver activeversion" on each node.
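Per node, as root, the sequence just described looks like the following sketch (paths are from the example environment; the local database instances are down while the stack is stopped):

/u01/app/19.9.0.0/grid/bin/crsctl stop crs
/u01/app/19.9.0.0/grid/crs/install/rootcrs.sh -updateosfiles
/u01/app/19.9.0.0/grid/bin/crsctl start crs
/u01/app/19.9.0.0/grid/bin/crsctl query driver softwareversion
/u01/app/19.9.0.0/grid/bin/crsctl query driver activeversion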

 

Rerun root.sh with the -nodriverupdate option:

[root@node1 ~]# /u01/app/19.9.0.0/grid/root.sh -transparent -nodriverupdate
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.9.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
LD_LIBRARY_PATH='/u01/app/19c/grid/lib:/u01/app/19.9.0.0/grid/lib:'
Using configuration parameter file: /u01/app/19.9.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/node1/crsconfig/rootcrs_node1_2020-02-02_07-30-48PM.log
Using configuration parameter file: /u01/app/19.9.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/node1/crsconfig/rootcrs_node1_2020-02-02_07-30-48PM.log
Using configuration parameter file: /u01/app/19.9.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/node1/crsconfig/crs_prepatch_node1_2020-02-02_07-30-49PM.log
2020/04/20 19:30:50 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
Using configuration parameter file: /u01/app/19.9.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/node1/crsconfig/crs_postpatch_node1_2020-02-02_07-30-51PM.log
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [3253194715].
2020/04/20 19:31:10 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd_dummy.service'
2020/04/20 19:31:10 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [3253194715].
2020/04/20 19:39:11 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2020/04/20 19:39:12 CLSRSC-672: Post-patch steps for patching GI home successfully completed.

While "root.sh -transparent -nodriverupdate" was running, we verified that the ORCL database stayed up.
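The manual checks below can also be scripted; a minimal watch loop (the [p] bracket pattern keeps grep from matching its own process):

while sleep 5; do date; ps -ef | grep '[p]mon'; done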

 

[oracle@node1 crsconfig]$ ps -ef | grep d.bin
root      1257     1  0 11:12 ?        00:04:30 /u01/app/19c/grid/bin/ohasd.bin reboot
root      1607     1  0 11:12 ?        00:02:18 /u01/app/19c/grid/bin/orarootagent.bin
oracle    1752     1  0 11:12 ?        00:02:26 /u01/app/19c/grid/bin/oraagent.bin
oracle    1801     1  0 11:12 ?        00:01:17 /u01/app/19c/grid/bin/mdnsd.bin
oracle    1803     1  0 11:12 ?        00:03:07 /u01/app/19c/grid/bin/evmd.bin
oracle    1897     1  0 11:12 ?        00:01:22 /u01/app/19c/grid/bin/gpnpd.bin
oracle    1952  1803  0 11:12 ?        00:01:14 /u01/app/19c/grid/bin/evmlogger.bin -o /u01/app/19c/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /u01/app/19c/grid/log/[HOSTNAME]/evmd/evmlogger.log
oracle    1975     1  0 11:12 ?        00:03:12 /u01/app/19c/grid/bin/gipcd.bin
root      2233     1  0 11:12 ?        00:01:31 /u01/app/19c/grid/bin/cssdmonitor
root      2520     1  0 11:12 ?        00:01:32 /u01/app/19c/grid/bin/cssdagent
oracle    2539     1  1 11:12 ?        00:05:08 /u01/app/19c/grid/bin/ocssd.bin
root      2999     1  0 11:13 ?        00:03:16 /u01/app/19c/grid/bin/octssd.bin reboot
root      3106     1  0 11:13 ?        00:03:50 /u01/app/19c/grid/bin/crsd.bin reboot
root      3192     1  1 11:13 ?        00:04:59 /u01/app/19c/grid/bin/orarootagent.bin
oracle    3266     1  1 11:13 ?        00:05:48 /u01/app/19c/grid/bin/oraagent.bin
root     30618 29407  0 19:31 pts/1    00:00:00 /u01/app/19c/grid/bin/crsctl.bin stop crs -tgip -f
oracle   30700  4877  0 19:31 pts/4    00:00:00 grep --color=auto d.bin
[oracle@node1 crsconfig]$ ps -ef | grep pmon
oracle    4688     1  0 11:14 ?        00:00:02 asm_pmon_+ASM1
oracle    4926     1  0 11:14 ?        00:00:02 ora_pmon_ORCL1
oracle   30728  4877  0 19:31 pts/4    00:00:00 grep --color=auto pmon
[oracle@node1 crsconfig]$ ps -ef | grep pmon
oracle    4688     1  0 11:14 ?        00:00:02 asm_pmon_+ASM1
oracle    4926     1  0 11:14 ?        00:00:02 ora_pmon_ORCL1
oracle   30735  4877  0 19:31 pts/4    00:00:00 grep --color=auto pmon
[oracle@node1 crsconfig]$ ps -ef | grep pmon
oracle    4688     1  0 11:14 ?        00:00:02 asm_pmon_+ASM1
oracle    4926     1  0 11:14 ?        00:00:02 ora_pmon_ORCL1
oracle   30740  4877  0 19:31 pts/4    00:00:00 grep --color=auto pmon
[oracle@node1 crsconfig]$ ps -ef | grep pmon
oracle    4926     1  0 11:14 ?        00:00:02 ora_pmon_ORCL1==================>ASM1 instance is down.
oracle   30820  4877  0 19:31 pts/4    00:00:00 grep --color=auto pmon
[oracle@node1 crsconfig]$ ps -ef | grep pmon
oracle    4926     1  0 11:14 ?        00:00:02 ora_pmon_ORCL1==================>Database is up and running.
oracle   30838  4877  0 19:31 pts/4    00:00:00 grep --color=auto pmon

Grid Infrastructure started from the new home:

[oracle@node1 crsconfig]$ ps -ef | grep d.bin
root     30811     1  0 19:31 ?        00:00:00 /u01/app/oracle/crsdata/node1/csswd/oracsswd.bin
root     31170 29407  1 19:32 pts/1    00:00:00 /u01/app/19.9.0.0/grid/bin/crsctl.bin start crs -wait -tgip
root     31188     1 23 19:32 ?        00:00:03 /u01/app/19.9.0.0/grid/bin/ohasd.bin reboot CRS_AUX_DATA=CRS_AUXD_TGIP=yes;_ORA_BLOCKING_STACK_LOCALE=AMERICAN_AMERICA.AL32UTF8
oracle   31273     1  4 19:32 ?        00:00:00 /u01/app/19.9.0.0/grid/bin/oraagent.bin
root     31283     1  4 19:32 ?        00:00:00 /u01/app/19.9.0.0/grid/bin/orarootagent.bin
oracle   31320     1  2 19:32 ?        00:00:00 /u01/app/19.9.0.0/grid/bin/mdnsd.bin
oracle   31323     1  4 19:32 ?        00:00:00 /u01/app/19.9.0.0/grid/bin/evmd.bin
oracle   31324     1  4 19:32 ?        00:00:00 /u01/app/19.9.0.0/grid/bin/gpnpd.bin
oracle   31431     1  9 19:32 ?        00:00:00 /u01/app/19.9.0.0/grid/bin/gipcd.bin
oracle   31505 31323  2 19:32 ?        00:00:00 /u01/app/19.9.0.0/grid/bin/evmlogger.bin -o /u01/app/19.9.0.0/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /u01/app/19.9.0.0/grid/log/[HOSTNAME]/evmd/evmlogger.log
root     31534     1  6 19:32 ?        00:00:00 /u01/app/19.9.0.0/grid/bin/cssdagent
oracle   31552     1 16 19:32 ?        00:00:00 /u01/app/19.9.0.0/grid/bin/ocssd.bin -P
oracle   31618  4877  0 19:32 pts/4    00:00:00 grep --color=auto d.bin
[oracle@node1 crsconfig]$ ps -ef | grep pmon
oracle    4926     1  0 11:14 ?        00:00:02 ora_pmon_ORCL1
oracle   31634  4877  0 19:32 pts/4    00:00:00 grep --color=auto pmon
[oracle@node1 crsconfig]$ ps -ef | grep pmon
oracle    4926     1  0 11:14 ?        00:00:02 ora_pmon_ORCL1
oracle   31961  4877  0 19:32 pts/4    00:00:00 grep --color=auto pmon
[oracle@node1 crsconfig]$ ps -ef | grep pmon
oracle    4926     1  0 11:14 ?        00:00:02 ora_pmon_ORCL1
oracle   32034  4877  0 19:32 pts/4    00:00:00 grep --color=auto pmon
[oracle@node1 crsconfig]$ ps -ef | grep pmon
oracle    4926     1  0 11:14 ?        00:00:02 ora_pmon_ORCL1
oracle   32081  4877  0 19:32 pts/4    00:00:00 grep --color=auto pmon
[oracle@node1 crsconfig]$ ps -ef | grep pmon
oracle    4926     1  0 11:14 ?        00:00:02 ora_pmon_ORCL1
oracle   32119     1  0 19:32 ?        00:00:00 asm_pmon_+ASM1
oracle   32195  4877  0 19:32 pts/4    00:00:00 grep --color=auto pmon

 

[oracle@node2 ~]$ crsctl stat res -t 
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1            STABLE
               ONLINE  ONLINE       node2            STABLE

 

ora.proxy_advm
               OFFLINE OFFLINE      node1            STABLE
               OFFLINE OFFLINE      node2            STABLE

 

Run "root.sh -transparent -nodriverupdate" on the other nodes.

On node2:

 

[root@node2 ~]# /u01/app/19.9.0.0/grid/root.sh -transparent -nodriverupdate
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.9.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
LD_LIBRARY_PATH='/u01/app/19c/grid/lib:/u01/app/19.9.0.0/grid/lib:'
Using configuration parameter file: /u01/app/19.9.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/node2/crsconfig/rootcrs_node2_2020-02-02_07-42-45PM.log
Using configuration parameter file: /u01/app/19.9.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/node2/crsconfig/rootcrs_node2_2020-02-02_07-42-45PM.log
Using configuration parameter file: /u01/app/19.9.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/node2/crsconfig/crs_prepatch_node2_2020-02-02_07-42-46PM.log
Using configuration parameter file: /u01/app/19.9.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/node2/crsconfig/crs_prepatch_node2_2020-02-02_07-42-48PM.log
2020/04/20 19:43:08 CLSRSC-347: Successfully unlock /u01/app/19.9.0.0/grid
2020/04/20 19:43:10 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
Using configuration parameter file: /u01/app/19.9.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/node2/crsconfig/crs_postpatch_node2_2020-02-02_07-43-11PM.log
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [3253194715].
2020/04/20 19:43:40 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd_dummy.service'
2020/04/20 19:47:08 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [3633918477].


Validating logfiles...done
Patch 30557433 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/30557433/23305305/30557433_apply__MGMTDB_CDBROOT_2020Feb02_19_53_44.log (no errors)
Patch 30557433 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/30557433/23305305/30557433_apply__MGMTDB_PDBSEED_2020Feb02_19_56_03.log (no errors)
Patch 30557433 apply (pdb GIMR_DSCREP_10): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/30557433/23305305/30557433_apply__MGMTDB_GIMR_DSCREP_10_2020Feb02_19_56_02.log (no errors)
SQL Patching tool complete on Sun Feb  2 19:58:43 2020
2020/04/20 20:00:57 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2020/04/20 20:01:02 CLSRSC-672: Post-patch steps for patching GI home successfully completed.

 

While "root.sh -transparent -nodriverupdate" was running on the second node, we verified that the ORCL database stayed up.

 

[oracle@node2 ~]$ ps -ef | grep pmon
oracle    4621     1  0 11:07 ?        00:00:02 asm_pmon_+ASM2
oracle    6238     1  0 11:08 ?        00:00:02 ora_pmon_ORCL2
oracle   19278 10010  0 19:46 pts/2    00:00:00 grep --color=auto pmon

[oracle@node2 ~]$ ps -ef | grep pmon
oracle    6238     1  0 11:08 ?        00:00:02 ora_pmon_ORCL2===========>ASM2 down
oracle   20437 10010  0 19:46 pts/2    00:00:00 grep --color=auto pmon


[oracle@node2 ~]$ ps -ef | grep d.bin
root      1252     1  1 11:02 ?        00:06:55 /u01/app/19c/grid/bin/ohasd.bin reboot
root      1603     1  0 11:02 ?        00:02:32 /u01/app/19c/grid/bin/orarootagent.bin
oracle    1749     1  0 11:03 ?        00:03:21 /u01/app/19c/grid/bin/oraagent.bin
root     13335 12134  0 19:43 pts/1    00:00:00 /u01/app/19c/grid/bin/crsctl.bin stop crs -tgip -f
root     20528     1  0 19:47 ?        00:00:00 /u01/app/oracle/crsdata/node2/csswd/oracsswd.bin
oracle   20715 10010  0 19:47 pts/2    00:00:00 grep --color=auto d.bin

[oracle@node2 ~]$ ps -ef | grep pmon
oracle    6238     1  0 11:08 ?        00:00:02 ora_pmon_ORCL2
oracle   20767 10010  0 19:47 pts/2    00:00:00 grep --color=auto pmon
[oracle@node2 ~]$ ps -ef | grep d.bin=================================================================>CRS Down.

oracle   21010 10010  0 19:47 pts/2    00:00:00 grep --color=auto d.bin

[oracle@node2 ~]$ ps -ef | grep d.bin
root     20528     1  0 19:47 ?        00:00:00 /u01/app/oracle/crsdata/node2/csswd/oracsswd.bin
root     21192 12134 10 19:47 pts/1    00:00:00 /u01/app/19.9.0.0/grid/bin/crsctl.bin start crs -wait -tgip
root     21206     1  4 19:47 ?        00:00:00 /u01/app/19.9.0.0/grid/bin/ohasd.bin reboot CRS_AUX_DATA=CRS_AUXD_TGIP=yes;_ORA_BLOCKING_STACK_LOCALE=AMERICAN_AMERICA.AL32UTF8
oracle   21318 10010  0 19:47 pts/2    00:00:00 grep --color=auto d.bin

[oracle@node2 ~]$ ps -ef | grep d.bin
root     20528     1  0 19:47 ?        00:00:00 /u01/app/oracle/crsdata/node2/csswd/oracsswd.bin
root     21192 12134  3 19:47 pts/1    00:00:00 /u01/app/19.9.0.0/grid/bin/crsctl.bin start crs -wait -tgip
root     21206     1 16 19:47 ?        00:00:00 /u01/app/19.9.0.0/grid/bin/ohasd.bin reboot CRS_AUX_DATA=CRS_AUXD_TGIP=yes;_ORA_BLOCKING_STACK_LOCALE=AMERICAN_AMERICA.AL32UTF8
oracle   21447 10010  0 19:47 pts/2    00:00:00 grep --color=auto d.bin

[oracle@node2 ~]$ ps -ef | grep pmon
oracle    6238     1  0 11:08 ?        00:00:02 ora_pmon_ORCL2
oracle   22971     1  0 19:48 ?        00:00:00 asm_pmon_+ASM2
oracle   24169 10010  0 19:48 pts/2    00:00:00 grep --color=auto pmon
[oracle@node2 ~]$

 

CRS status after patching:

 

[oracle@node1 crsconfig]$ opatch lspatches
  31780966;TOMCAT RELEASE UPDATE 19.0.0.0.0 (31780966)
  31772784;OCW RELEASE UPDATE 19.9.0.0.0 (31772784)
  31773437;ACFS RELEASE UPDATE 19.9.0.0.0 (31773437)
  31771877;Database Release Update : 19.9.0.0.201020 (31771877)

OPatch succeeded.


[oracle@node1 crsconfig]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [3633918477].
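The per-node patch level can also be cross-checked with the standard crsctl patch queries (run from the new home; these complement the activeversion output above):

/u01/app/19.9.0.0/grid/bin/crsctl query crs softwarepatch
/u01/app/19.9.0.0/grid/bin/crsctl query crs releasepatch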
[oracle@node1 crsconfig]$ ps -ef | grep d.bin
oracle    1958  4877  0 20:15 pts/4    00:00:00 grep --color=auto d.bin
oracle   11147     1  0 19:44 ?        00:00:00 /u01/app/19.9.0.0/grid/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
root     31188     1  2 19:32 ?        00:00:59 /u01/app/19.9.0.0/grid/bin/ohasd.bin reboot CRS_AUX_DATA=CRS_AUXD_TGIP=yes;_ORA_BLOCKING_STACK_LOCALE=AMERICAN_AMERICA.AL32UTF8
oracle   31273     1  0 19:32 ?        00:00:14 /u01/app/19.9.0.0/grid/bin/oraagent.bin

 

REFERENCES