Automation - Applying a Database Release Update patch to Oracle 19c using Ansible



Intro: In this blog post, I will show how to use Ansible to automate Oracle Database patching. I applied the latest Oracle 19c RU patch, "Database Release Update 19.21.0.0.231017", to a 19c database and also rolled it back, all with Ansible.

Prerequisites:

1. An Ansible controller machine with Ansible installed.

2. A DB server machine with a 19c database installed (you can have many DB servers; I tested with one here).

Steps:

1. Download, clone, or fork the oracle_patching repository from my GitHub account.

It is based on the original repo from here.

2. Copy the zip file to the Ansible server, unzip it, and adjust it for your environment.

On my controller, I created an Ansible role, ru-apply, and a playbook file, patch.yml:


[ansadmin@ansible-controller roles]$ pwd
/etc/ansible/roles
[ansadmin@ansible-controller roles]$ ls -lrt
total 0
drwxrwxr-x. 8 ansadmin ansadmin 161 Oct 31 16:43 ru-apply
[ansadmin@ansible-controller roles]$ cd ru-apply/
[ansadmin@ansible-controller ru-apply]$ ll
total 16
-rw-rw-r--. 1 ansadmin ansadmin 1682 Apr 20  2022 CHANGELOG.md
drwxrwxr-x. 2 ansadmin ansadmin   22 Oct 31 16:45 defaults
drwxrwxr-x. 2 ansadmin ansadmin   31 Oct 30 21:03 files
drwxrwxr-x. 2 ansadmin ansadmin   22 Apr 20  2022 meta
-rw-rw-r--. 1 ansadmin ansadmin   46 Oct 31 16:43 patch.yml
-rw-rw-r--. 1 ansadmin ansadmin   71 Apr 20  2022 README.md
drwxrwxr-x. 2 ansadmin ansadmin 4096 Oct 31 16:49 tasks
drwxrwxr-x. 2 ansadmin ansadmin  116 Oct 30 21:05 templates
drwxrwxr-x. 2 ansadmin ansadmin   22 Oct 31 16:50 vars
[ansadmin@ansible-controller ru-apply]$ cat patch.yml
---
- hosts: all
  roles:
   - role: ru-apply
[ansadmin@ansible-controller ru-apply]$

3. Copy p6880880_190000_Linux-x86-64.zip (the OPatch upgrade file) and p35643107_190000_Linux-x86-64.zip (the RU patch file) to the target DB server.

DB server:


[oracle@db-server u01]$ ls -lrt

drwxrwxr-x. 4 oracle oinstall         40 Oct 27 14:41 app
-rwxrwxr-x. 1 oracle oinstall  127774864 Oct 27 17:07 p6880880_190000_Linux-x86-64.zip
-rw-rw-r--. 1 oracle oinstall 1815725977 Oct 30 14:45 p35643107_190000_Linux-x86-64.zip
[oracle@db-server u01]$
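I copied the zips manually, but this step could equally be automated from the controller; a minimal Ansible task sketch (the controller-side staging path /stage is an assumption, not from the role):

```yaml
# Sketch only: push both patch zips from the controller to the DB server's /u01.
- name: Copy patch zips to DB server
  copy:
    src: "/stage/{{ item }}"   # assumed staging directory on the controller
    dest: /u01/
    owner: oracle
    group: oinstall
    mode: "0644"
  loop:
    - p6880880_190000_Linux-x86-64.zip
    - p35643107_190000_Linux-x86-64.zip
```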

4. Run the tree command to see the .yml files that are executed to patch the Oracle home.


[ansadmin@ansible-controller roles]$ tree ru-apply
ru-apply
├── CHANGELOG.md
├── defaults
│   └── main.yml
├── files
│   └── check_patches.sql
├── meta
│   └── main.yml
├── patch.yml
├── README.md
├── tasks
│   ├── apply_quarterly_patches.yml
│   ├── conflict_check.yml
│   ├── extract_files.yml
│   ├── get_dbinstance_list.yml
│   ├── main.yml
│   ├── rollback_si.yml
│   ├── run_catbundle.yml
│   ├── RU_si.yml
│   └── update_opatch.yml
├── templates
│   ├── patch_list.j2
│   ├── quarterly_patch_list.j2
│   ├── quarterly_patch_list.j2.old
│   └── srvctl_state.j2
└── vars
    └── main.yml

5. Role variables:

vars/main.yml provides the patch details: the opatch and quarterly_patches dictionaries hold the OPatch and RU information for the 19.3.0.0 base release, like:

opatch:
  19.3.0.0:
    version: 12.2.0.1.40
    filename: p6880880_190000_Linux-x86-64.zip

quarterly_patches:
 RU:
  19.3.0.0:
   OCT2023:
    patchversion: 19.21.0.0.231017
    filename: p35643107_190000_Linux-x86-64.zip
    patchid: 35643107
    si_patchid: 35643107

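A debug task later in the run prints quarterly_patches[patch_type][oracle_version][patch_name].filename; in plain Python terms (a sketch mirroring the variables above, not code from the role), that chained lookup resolves like this:

```python
# Mirrors vars/main.yml from this post (sketch only).
quarterly_patches = {
    "RU": {
        "19.3.0.0": {
            "OCT2023": {
                "patchversion": "19.21.0.0.231017",
                "filename": "p35643107_190000_Linux-x86-64.zip",
                "patchid": 35643107,
                "si_patchid": 35643107,
            }
        }
    }
}

# These values come from defaults/main.yml and select one leaf of the dictionary.
patch_type = "RU"
oracle_version = "19.3.0.0"
patch_name = "OCT2023"

patch = quarterly_patches[patch_type][oracle_version][patch_name]
print(patch["filename"])  # p35643107_190000_Linux-x86-64.zip
```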

defaults/main.yml
patch_name: OCT2023 # Should match ru dictionary variable defined in vars/main.yml

shutdown_listener: true # set to false if patching an empty oracle home (ie new 12c install)
rollback_ru: false # set to true if rolling back current RU.

oracle_home: "/u01/app/oracle/product/19.0.0/dbhome_1"
oracle_base: /u01/app/oracle
tns_admin: "{{ oracle_home }}/network/admin"

oracle_stage_install: /u01
oracle_stage_base: /u01
oracle_install_type: si
oracle_version: 19.3.0.0
patch_type: RU
patch_directory: /u01


Required Inventory Variables
oracle_install_type: valid values are si or rac. Used to determine whether the host is a single instance database install or part of a RAC cluster.

oracle_stage_install: directory that contains all oracle install files

oracle_stage: directory to store logs and scripts used during playbook execution.
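These variables are typically set per host or group in the inventory. A hypothetical /etc/ansible/hosts entry (the group name and stage paths are examples; the host alias db_server matches the run output later):

```ini
[db_servers]
db_server

[db_servers:vars]
oracle_install_type=si
oracle_stage_install=/u01
oracle_stage=/u01/stage
```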

6. And here is the task list executed to apply the patch:



tasks/main.yml

---
#role: ru-apply
#file: main.yml

- debug: var=patch_type

  #- name: Run pre-check tasks
  #include: pre_checks.yml
  #tags: pre_checks

- name: Extract RU and One-off patches
  include: extract_files.yml
  when: not rollback_ru
  tags: extract_files

- name: Update opatch
  include: update_opatch.yml
  tags: update_opatch


- name: Apply Quarterly Patch
  include: "{{ patch_type }}_{{ oracle_install_type }}.yml"
  become_user: "{{ oracle_user }}"
  when: not rollback_ru
  tags: apply

- name: Rollback Quarterly Patch
  include: "rollback_{{ oracle_install_type }}.yml"
  become_user: "{{ oracle_user }}"
  when: rollback_ru
  tags: rollback

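As the run output below warns, the bare include used in this role is deprecated in ansible-core 2.14 and removed in 2.16. A task of the same shape using include_tasks would look roughly like this (sketch only; note that tags set on include_tasks apply to the include itself rather than to the included tasks unless passed through apply:):

```yaml
- name: Apply Quarterly Patch
  include_tasks: "{{ patch_type }}_{{ oracle_install_type }}.yml"
  when: not rollback_ru
```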
7. Now let's test it. First, confirm that the DB servers are reachable from the Ansible controller.

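Assuming the DB server is already in the inventory, a quick reachability check could look like this (the ping module needs only SSH access and Python on the target):

```shell
# Runs Ansible's ping module against every inventory host.
ansible all -m ping
```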

8. Before running the playbook to apply the patch, let's do a syntax check.

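The syntax check parses the playbook and its roles without contacting any host:

```shell
ansible-playbook patch.yml --syntax-check
```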

9. Now do a dry run before the real run.

[ansadmin@ansible-controller ru-apply]$ ansible-playbook patch.yml --check

10. And finally, run the playbook to apply the patch.


[ansadmin@ansible-controller ru-apply]$ ansible-playbook patch.yml
[DEPRECATION WARNING]: "include" is deprecated, use include_tasks/import_tasks instead. See https://docs.ansible.com/ansible-core/2.14/user_guide/playbooks_reuse_includes.html for details. This feature
will be removed in version 2.16. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [all] ***************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************************
[WARNING]: Platform linux on host db_server is using the discovered Python interpreter at /usr/bin/python3.6, but future installation of another Python interpreter could change the meaning of that
path. See https://docs.ansible.com/ansible-core/2.14/reference_appendices/interpreter_discovery.html for more information.
ok: [db_server]

TASK [ru-apply : debug] **************************************************************************************************************************************************************************************
ok: [db_server] => {
    "patch_type": "RU"
}

TASK [ru-apply : Create Patch directory] *********************************************************************************************************************************************************************
ok: [db_server]

TASK [ru-apply : debug] **************************************************************************************************************************************************************************************
ok: [db_server] => {
    "quarterly_patches[patch_type][oracle_version][patch_name].filename": "p35643107_190000_Linux-x86-64.zip"
}

TASK [ru-apply : Unzip Quarterly Patch] **********************************************************************************************************************************************************************
ok: [db_server]

TASK [ru-apply : Check opatch version in database home] ******************************************************************************************************************************************************
ok: [db_server]

TASK [ru-apply : debug] **************************************************************************************************************************************************************************************
ok: [db_server] => {
    "database_opatch_version": {
        "changed": false,
        "cmd": "/u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatch version",
        "delta": "0:00:00.503409",
        "end": "2023-11-01 01:50:26.895656",
        "failed": false,
        "msg": "",
        "rc": 0,
        "start": "2023-11-01 01:50:26.392247",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "OPatch Version: 12.2.0.1.40\n\nOPatch succeeded.",
        "stdout_lines": [
            "OPatch Version: 12.2.0.1.40",
            "",
            "OPatch succeeded."
        ]
    }
}

TASK [ru-apply : Update opatch in database home] *************************************************************************************************************************************************************
skipping: [db_server]

TASK [ru-apply : Apply Quarterly Patch] **********************************************************************************************************************************************************************
[DEPRECATION WARNING]: "include" is deprecated, use include_tasks/import_tasks/import_playbook instead. This feature will be removed in version 2.16. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
included: /etc/ansible/roles/ru-apply/tasks/RU_si.yml for db_server

TASK [ru-apply : RU SU | update opatch] **********************************************************************************************************************************************************************
ok: [db_server]

TASK [ru-apply : RU SU | Get list of patches] ****************************************************************************************************************************************************************
ok: [db_server]

TASK [ru-apply : RU SU | Template patch list] ****************************************************************************************************************************************************************
ok: [db_server]

TASK [ru-apply : RU SU | check for conflicts] ****************************************************************************************************************************************************************
changed: [db_server] => (item=CheckConflictAgainstOHWithDetail)
changed: [db_server] => (item=CheckSystemSpace)

TASK [ru-apply : debug] **************************************************************************************************************************************************************************************
ok: [db_server] => {
    "opatch_conflicts": {
        "changed": true,
        "msg": "All items completed",
        "results": [
            {
                "ansible_loop_var": "item",
                "changed": true,
                "cmd": "opatch prereq CheckConflictAgainstOHWithDetail -ph /u01/35643107",
                "delta": "0:00:15.536175",
                "end": "2023-11-01 01:50:49.856225",
                "failed": false,
                "invocation": {
                    "module_args": {
                        "_raw_params": "opatch prereq CheckConflictAgainstOHWithDetail -ph /u01/35643107",
                        "_uses_shell": true,
                        "argv": null,
                        "chdir": null,
                        "creates": null,
                        "executable": null,
                        "removes": null,
                        "stdin": null,
                        "stdin_add_newline": true,
                        "strip_empty_ends": true
                    }
                },
                "item": "CheckConflictAgainstOHWithDetail",
                "msg": "",
                "rc": 0,
                "start": "2023-11-01 01:50:34.320050",
                "stderr": "",
                "stderr_lines": [],
                "stdout": "Oracle Interim Patch Installer version 12.2.0.1.40\nCopyright (c) 2023, Oracle Corporation.  All rights reserved.\n\nPREREQ session\n\nOracle Home       : /u01/app/oracle/product/19.0.0/dbhome_1\nCentral Inventory : /u01/app/oraInventory\n   from           : /u01/app/oracle/product/19.0.0/dbhome_1/oraInst.loc\nOPatch version    : 12.2.0.1.40\nOUI version       : 12.2.0.7.0\nLog file location : /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2023-11-01_01-50-34AM_1.log\n\nInvoking prereq \"checkconflictagainstohwithdetail\"\n\nPrereq \"checkConflictAgainstOHWithDetail\" passed.\n\nOPatch succeeded.",
                "stdout_lines": [
                    "Oracle Interim Patch Installer version 12.2.0.1.40",
                    "Copyright (c) 2023, Oracle Corporation.  All rights reserved.",
                    "",
                    "PREREQ session",
                    "",
                    "Oracle Home       : /u01/app/oracle/product/19.0.0/dbhome_1",
                    "Central Inventory : /u01/app/oraInventory",
                    "   from           : /u01/app/oracle/product/19.0.0/dbhome_1/oraInst.loc",
                    "OPatch version    : 12.2.0.1.40",
                    "OUI version       : 12.2.0.7.0",
                    "Log file location : /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2023-11-01_01-50-34AM_1.log",
                    "",
                    "Invoking prereq \"checkconflictagainstohwithdetail\"",
                    "",
                    "Prereq \"checkConflictAgainstOHWithDetail\" passed.",
                    "",
                    "OPatch succeeded."
                ]
            },
            {
                "ansible_loop_var": "item",
                "changed": true,
                "cmd": "opatch prereq CheckSystemSpace -ph /u01/35643107",
                "delta": "0:00:07.632033",
                "end": "2023-11-01 01:50:58.486930",
                "failed": false,
                "invocation": {
                    "module_args": {
                        "_raw_params": "opatch prereq CheckSystemSpace -ph /u01/35643107",
                        "_uses_shell": true,
                        "argv": null,
                        "chdir": null,
                        "creates": null,
                        "executable": null,
                        "removes": null,
                        "stdin": null,
                        "stdin_add_newline": true,
                        "strip_empty_ends": true
                    }
                },
                "item": "CheckSystemSpace",
                "msg": "",
                "rc": 0,
                "start": "2023-11-01 01:50:50.854897",
                "stderr": "",
                "stderr_lines": [],
                "stdout": "Oracle Interim Patch Installer version 12.2.0.1.40\nCopyright (c) 2023, Oracle Corporation.  All rights reserved.\n\nPREREQ session\n\nOracle Home       : /u01/app/oracle/product/19.0.0/dbhome_1\nCentral Inventory : /u01/app/oraInventory\n   from           : /u01/app/oracle/product/19.0.0/dbhome_1/oraInst.loc\nOPatch version    : 12.2.0.1.40\nOUI version       : 12.2.0.7.0\nLog file location : /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2023-11-01_01-50-51AM_1.log\n\nInvoking prereq \"checksystemspace\"\n\nPrereq \"checkSystemSpace\" passed.\n\nOPatch succeeded.",
                "stdout_lines": [
                    "Oracle Interim Patch Installer version 12.2.0.1.40",
                    "Copyright (c) 2023, Oracle Corporation.  All rights reserved.",
                    "",
                    "PREREQ session",
                    "",
                    "Oracle Home       : /u01/app/oracle/product/19.0.0/dbhome_1",
                    "Central Inventory : /u01/app/oraInventory",
                    "   from           : /u01/app/oracle/product/19.0.0/dbhome_1/oraInst.loc",
                    "OPatch version    : 12.2.0.1.40",
                    "OUI version       : 12.2.0.7.0",
                    "Log file location : /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2023-11-01_01-50-51AM_1.log",
                    "",
                    "Invoking prereq \"checksystemspace\"",
                    "",
                    "Prereq \"checkSystemSpace\" passed.",
                    "",
                    "OPatch succeeded."
                ]
            }
        ],
        "skipped": false
    }
}

TASK [ru-apply : RU SU | get list of database names on host] *************************************************************************************************************************************************
changed: [db_server]

TASK [ru-apply : debug] **************************************************************************************************************************************************************************************
ok: [db_server] => {
    "dblist": {
        "changed": true,
        "cmd": "cat /etc/oratab|grep /u01/app/oracle/product/19.0.0/dbhome_1|egrep ':N|:Y'|grep -v \\*|cut -f1 -d':'",
        "delta": "0:00:00.014374",
        "end": "2023-11-01 01:50:59.534368",
        "failed": false,
        "msg": "",
        "rc": 0,
        "start": "2023-11-01 01:50:59.519994",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "cdb1",
        "stdout_lines": [
            "cdb1"
        ]
    }
}

TASK [ru-apply : RU SU | shutdown databases running in oracle_home] ******************************************************************************************************************************************
changed: [db_server] => (item=cdb1)

TASK [ru-apply : RU SU | shutdown listener in oracle_home] ***************************************************************************************************************************************************
changed: [db_server]

TASK [ru-apply : RU SU | opatch apply] ***********************************************************************************************************************************************************************
changed: [db_server]

TASK [ru-apply : debug] **************************************************************************************************************************************************************************************
ok: [db_server] => {
    "opatch_apply.stdout_lines": [
        "Oracle Interim Patch Installer version 12.2.0.1.40",
        "Copyright (c) 2023, Oracle Corporation.  All rights reserved.",
        "",
        "",
        "Oracle Home       : /u01/app/oracle/product/19.0.0/dbhome_1",
        "Central Inventory : /u01/app/oraInventory",
        "   from           : /u01/app/oracle/product/19.0.0/dbhome_1/oraInst.loc",
        "OPatch version    : 12.2.0.1.40",
        "OUI version       : 12.2.0.7.0",
        "Log file location : /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2023-11-01_01-51-05AM_1.log",
        "",
        "Verifying environment and performing prerequisite checks...",
        "OPatch continues with these patches:   35643107  ",
        "",
        "Do you want to proceed? [y|n]",
        "Y (auto-answered by -silent)",
        "User Responded with: Y",
        "All checks passed.",
        "",
        "Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.",
        "(Oracle Home = '/u01/app/oracle/product/19.0.0/dbhome_1')",
        "",
        "",
        "Is the local system ready for patching? [y|n]",
        "Y (auto-answered by -silent)",
        "User Responded with: Y",
        "Backing up files...",
        "Applying interim patch '35643107' to OH '/u01/app/oracle/product/19.0.0/dbhome_1'",
        "ApplySession: Optional component(s) [ oracle.network.gsm, 19.0.0.0.0 ] , [ oracle.pg4mq, 19.0.0.0.0 ] , [ oracle.rdbms.ic, 19.0.0.0.0 ] , [ oracle.rdbms.tg4db2, 19.0.0.0.0 ] , [ oracle.tfa, 19.0.0.0.0 ] , [ oracle.rdbms.tg4msql, 19.0.0.0.0 ] , [ oracle.ons.cclient, 19.0.0.0.0 ] , [ oracle.rdbms.tg4sybs, 19.0.0.0.0 ] , [ oracle.options.olap.api, 19.0.0.0.0 ] , [ oracle.network.cman, 19.0.0.0.0 ] , [ oracle.sdo.companion, 19.0.0.0.0 ] , [ oracle.xdk.companion, 19.0.0.0.0 ] , [ oracle.rdbms.tg4tera, 19.0.0.0.0 ] , [ oracle.net.cman, 19.0.0.0.0 ] , [ oracle.ons.eons.bwcompat, 19.0.0.0.0 ] , [ oracle.oid.client, 19.0.0.0.0 ] , [ oracle.rdbms.tg4ifmx, 19.0.0.0.0 ] , [ oracle.options.olap, 19.0.0.0.0 ] , [ oracle.jdk, 1.8.0.191.0 ]  not present in the Oracle Home or a higher version is found.",
        "",
        "Patching component oracle.rdbms.util, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms, 19.0.0.0.0...",
        "",
        "Patching component oracle.assistants.acf, 19.0.0.0.0...",
        "",
        "Patching component oracle.assistants.deconfig, 19.0.0.0.0...",
        "",
        "Patching component oracle.assistants.server, 19.0.0.0.0...",
        "",
        "Patching component oracle.blaslapack, 19.0.0.0.0...",
        "",
        "Patching component oracle.buildtools.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.ctx, 19.0.0.0.0...",
        "",
        "Patching component oracle.dbdev, 19.0.0.0.0...",
        "",
        "Patching component oracle.dbjava.ic, 19.0.0.0.0...",
        "",
        "Patching component oracle.dbjava.jdbc, 19.0.0.0.0...",
        "",
        "Patching component oracle.dbjava.ucp, 19.0.0.0.0...",
        "",
        "Patching component oracle.duma, 19.0.0.0.0...",
        "",
        "Patching component oracle.javavm.client, 19.0.0.0.0...",
        "",
        "Patching component oracle.ldap.owm, 19.0.0.0.0...",
        "",
        "Patching component oracle.ldap.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.ldap.security.osdt, 19.0.0.0.0...",
        "",
        "Patching component oracle.marvel, 19.0.0.0.0...",
        "",
        "Patching component oracle.network.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.odbc.ic, 19.0.0.0.0...",
        "",
        "Patching component oracle.ons, 19.0.0.0.0...",
        "",
        "Patching component oracle.ons.ic, 19.0.0.0.0...",
        "",
        "Patching component oracle.oracore.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.perlint, 5.28.1.0.0...",
        "",
        "Patching component oracle.precomp.common.core, 19.0.0.0.0...",
        "",
        "Patching component oracle.precomp.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.crs, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.dbscripts, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.deconfig, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.oci, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.rsf.ic, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.scheduler, 19.0.0.0.0...",
        "",
        "Patching component oracle.rhp.db, 19.0.0.0.0...",
        "",
        "Patching component oracle.sdo, 19.0.0.0.0...",
        "",
        "Patching component oracle.sdo.locator.jrf, 19.0.0.0.0...",
        "",
        "Patching component oracle.sqlplus, 19.0.0.0.0...",
        "",
        "Patching component oracle.sqlplus.ic, 19.0.0.0.0...",
        "",
        "Patching component oracle.wwg.plsql, 19.0.0.0.0...",
        "",
        "Patching component oracle.ldap.rsf.ic, 19.0.0.0.0...",
        "",
        "Patching component oracle.ldap.client, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.dv, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.install.common, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.hsodbc, 19.0.0.0.0...",
        "",
        "Patching component oracle.nlsrtl.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.xdk.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.odbc, 19.0.0.0.0...",
        "",
        "Patching component oracle.xdk.parser.java, 19.0.0.0.0...",
        "",
        "Patching component oracle.ctx.atg, 19.0.0.0.0...",
        "",
        "Patching component oracle.network.listener, 19.0.0.0.0...",
        "",
        "Patching component oracle.ctx.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.hs_common, 19.0.0.0.0...",
        "",
        "Patching component oracle.dbtoolslistener, 19.0.0.0.0...",
        "",
        "Patching component oracle.xdk, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.rman, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.drdaas, 19.0.0.0.0...",
        "",
        "Patching component oracle.install.deinstalltool, 19.0.0.0.0...",
        "",
        "Patching component oracle.ovm, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.install.plugins, 19.0.0.0.0...",
        "",
        "Patching component oracle.mgw.common, 19.0.0.0.0...",
        "",
        "Patching component oracle.xdk.xquery, 19.0.0.0.0...",
        "",
        "Patching component oracle.network.client, 19.0.0.0.0...",
        "",
        "Patching component oracle.ldap.ssl, 19.0.0.0.0...",
        "",
        "Patching component oracle.oraolap.api, 19.0.0.0.0...",
        "",
        "Patching component oracle.javavm.server, 19.0.0.0.0...",
        "",
        "Patching component oracle.sdo.locator, 19.0.0.0.0...",
        "",
        "Patching component oracle.oraolap, 19.0.0.0.0...",
        "",
        "Patching component oracle.oraolap.dbscripts, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.lbac, 19.0.0.0.0...",
        "",
        "Patching component oracle.precomp.common, 19.0.0.0.0...",
        "",
        "Patching component oracle.precomp.lang, 19.0.0.0.0...",
        "",
        "Patching component oracle.jdk, 1.8.0.201.0...",
        "Patch 35643107 successfully applied.",
        "N-Apply process is complete. No bug fixes are lost.",
        "Sub-set patch [29517242] has become inactive due to the application of a super-set patch [35643107].",
        "Please refer to Doc ID 2161861.1 for any possible further required actions.",
        "Log file location: /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2023-11-01_01-51-05AM_1.log",
        "",
        "OPatch succeeded."
    ]
}

TASK [ru-apply : RU SU | apply oneoff patches] ***************************************************************************************************************************************************************
skipping: [db_server]

TASK [ru-apply : RU SU | startup databases] ******************************************************************************************************************************************************************
changed: [db_server] => (item=cdb1)

TASK [ru-apply : RU SU | Load Modified SQL Files into the Database] ******************************************************************************************************************************************
included: /etc/ansible/roles/ru-apply/tasks/run_catbundle.yml for db_server

TASK [ru-apply : Load Modified SQL Files into the Database] **************************************************************************************************************************************************
changed: [db_server] => (item=cdb1)

TASK [ru-apply : RU SU | startup listener in oracle_home] ****************************************************************************************************************************************************
changed: [db_server]

TASK [ru-apply : Rollback Quarterly Patch] *******************************************************************************************************************************************************************
skipping: [db_server]

TASK [ru-apply : Opatch lsinventory] *************************************************************************************************************************************************************************
changed: [db_server] => (item=/u01/app/oracle/product/19.0.0/dbhome_1)
changed: [db_server] => (item=)

PLAY RECAP ***************************************************************************************************************************************************************************************************
db_server             : ok=24   changed=9    unreachable=0    failed=0    skipped=3    rescued=0    ignored=1

[ansadmin@ansible-controller ru-apply]$
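The "get list of database names on host" task above shells out to grep /etc/oratab for instances registered against the patched home. The same filtering logic, sketched in Python (the function name and sample lines are illustrative, not from the role):

```python
def databases_in_home(oratab_lines, oracle_home):
    """Return instance names from oratab entries that use the given home."""
    names = []
    for line in oratab_lines:
        line = line.strip()
        # Skip comments, blanks, and wildcard (*) entries, as the grep pipeline does.
        if not line or line.startswith("#") or "*" in line:
            continue
        sid, _, rest = line.partition(":")
        home, _, flag = rest.partition(":")
        if home == oracle_home and flag in ("Y", "N"):
            names.append(sid)
    return names

sample = [
    "# /etc/oratab",
    "cdb1:/u01/app/oracle/product/19.0.0/dbhome_1:N",
    "agent:/u01/app/agent:N",
]
print(databases_in_home(sample, "/u01/app/oracle/product/19.0.0/dbhome_1"))
# ['cdb1']
```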

11. Verify the applied patch using SQL:



SQL> set serverout on;
SQL> exec dbms_qopatch.get_sqlpatch_status;

Patch Id : 35643107
        Action : APPLY
        Action Time : 01-NOV-2023 02:29:07
        Description : Database Release Update : 19.21.0.0.231017 (35643107)
        Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/35643107/25405995/35643107_apply_CDB1_CDBRO
OT_2023Nov01_02_07_27.log
        Status : SUCCESS

PL/SQL procedure successfully completed.

SQL> 
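The patch can also be verified at the binary level; for example, listing the Oracle home's installed patches with OPatch (output will vary by environment, so no sample output is shown here):

```shell
# Lists patches recorded in the home's inventory; the RU should now appear.
$ORACLE_HOME/OPatch/opatch lspatches
```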

12. I tested the rollback the same way. For a rollback, we need to change the following line in the /etc/ansible/roles/ru-apply/defaults/main.yml file

from:

rollback_ru: false # set to true if rolling back current RU.

to:

rollback_ru: true # set to true if rolling back current RU.

When rollback_ru is set to true and we run the playbook again, it rolls back the same patch.
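Editing defaults/main.yml works, but since role defaults have the lowest precedence in Ansible's variable hierarchy, the same switch could also be flipped for a single run with extra vars instead:

```shell
# -e (extra vars) overrides the rollback_ru default for this run only.
ansible-playbook patch.yml -e rollback_ru=true
```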

See below:


[ansadmin@ansible-controller ru-apply]$ ansible-playbook patch.yml
[DEPRECATION WARNING]: "include" is deprecated, use include_tasks/import_tasks instead. See https://docs.ansible.com/ansible-core/2.14/user_guide/playbooks_reuse_includes.html for details. This feature
will be removed in version 2.16. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [all] ***************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************************
[WARNING]: Platform linux on host db_server is using the discovered Python interpreter at /usr/bin/python3.6, but future installation of another Python interpreter could change the meaning of that
path. See https://docs.ansible.com/ansible-core/2.14/reference_appendices/interpreter_discovery.html for more information.
ok: [db_server]

TASK [ru-apply : debug] **************************************************************************************************************************************************************************************
ok: [db_server] => {
    "patch_type": "RU"
}

TASK [ru-apply : Create Patch directory] *********************************************************************************************************************************************************************
skipping: [db_server]

TASK [ru-apply : debug] **************************************************************************************************************************************************************************************
skipping: [db_server]

TASK [ru-apply : Unzip Quarterly Patch] **********************************************************************************************************************************************************************
skipping: [db_server]

TASK [ru-apply : Check opatch version in database home] ******************************************************************************************************************************************************
ok: [db_server]

TASK [ru-apply : debug] **************************************************************************************************************************************************************************************
ok: [db_server] => {
    "database_opatch_version": {
        "changed": false,
        "cmd": "/u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatch version",
        "delta": "0:00:00.512442",
        "end": "2023-10-31 17:47:46.221175",
        "failed": false,
        "msg": "",
        "rc": 0,
        "start": "2023-10-31 17:47:45.708733",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "OPatch Version: 12.2.0.1.40\n\nOPatch succeeded.",
        "stdout_lines": [
            "OPatch Version: 12.2.0.1.40",
            "",
            "OPatch succeeded."
        ]
    }
}

TASK [ru-apply : Update opatch in database home] *************************************************************************************************************************************************************
skipping: [db_server]

TASK [ru-apply : Apply Quarterly Patch] **********************************************************************************************************************************************************************
skipping: [db_server]
[DEPRECATION WARNING]: "include" is deprecated, use include_tasks/import_tasks/import_playbook instead. This feature will be removed in version 2.16. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.

TASK [ru-apply : Rollback Quarterly Patch] *******************************************************************************************************************************************************************
included: /etc/ansible/roles/ru-apply/tasks/rollback_si.yml for db_server

TASK [ru-apply : SI ROLLBACK | get list of database names on host] *******************************************************************************************************************************************
changed: [db_server]

TASK [ru-apply : SI ROLLBACK | shutdown databases running in oracle_home] ************************************************************************************************************************************
changed: [db_server] => (item=cdb1)

TASK [ru-apply : SI ROLLBACK | shutdown listener in oracle_home] *********************************************************************************************************************************************
changed: [db_server]

TASK [ru-apply : SI ROLLBACK | Rollback PSU Specific One-off Patches] ****************************************************************************************************************************************
skipping: [db_server]

TASK [ru-apply : SI ROLLBACK | opatch rollback] **************************************************************************************************************************************************************
changed: [db_server]

TASK [ru-apply : debug] **************************************************************************************************************************************************************************************
ok: [db_server] => {
    "opatch_rollback.stdout_lines": [
        "Oracle Interim Patch Installer version 12.2.0.1.40",
        "Copyright (c) 2023, Oracle Corporation.  All rights reserved.",
        "",
        "",
        "Oracle Home       : /u01/app/oracle/product/19.0.0/dbhome_1",
        "Central Inventory : /u01/app/oraInventory",
        "   from           : /u01/app/oracle/product/19.0.0/dbhome_1/oraInst.loc",
        "OPatch version    : 12.2.0.1.40",
        "OUI version       : 12.2.0.7.0",
        "Log file location : /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2023-10-31_17-48-29PM_1.log",
        "",
        "",
        "Patches will be rolled back in the following order: ",
        "   35643107",
        "The following patch(es) will be rolled back: 35643107  ",
        "",
        "Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.",
        "(Oracle Home = '/u01/app/oracle/product/19.0.0/dbhome_1')",
        "",
        "",
        "Is the local system ready for patching? [y|n]",
        "Y (auto-answered by -silent)",
        "User Responded with: Y",
        "",
        "Rolling back patch 35643107...",
        "",
        "RollbackSession rolling back interim patch '35643107' from OH '/u01/app/oracle/product/19.0.0/dbhome_1'",
        "",
        "Patching component oracle.rdbms.util, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.rsf, 19.0.0.0.0...",
        "Deleting \"kubscrf.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"kubsbd.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"kubsppd.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"kubsbdcellcore.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"kpuadg.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"kubsorcpb.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"kubsutl.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"kubsorccore.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"kubscsvcore.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"kubsconv.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"kubscell.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"kubsbdcore.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"kubsdesccore.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"kubsbufio.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"kubslistcore.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libclient19.a\"",
        "Deleting \"qesxldsb.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libcommon19.a\"",
        "Deleting \"skfparity.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libcommon19.a\"",
        "Deleting \"kdzu_art.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libcommon19.a\"",
        "Deleting \"kuzroci.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libcommon19.a\"",
        "Deleting \"qesdsbc.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libcommon19.a\"",
        "Deleting \"kziamc.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libcommon19.a\"",
        "Deleting \"kgoms.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libgeneric19.a\"",
        "Deleting \"skgzfndd.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libgeneric19.a\"",
        "Deleting \"qsodamd.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libgeneric19.a\"",
        "Deleting \"skgrlib.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libgeneric19.a\"",
        "Deleting \"skgzepclib.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libgeneric19.a\"",
        "Deleting \"qsodalob.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libgeneric19.a\"",
        "Deleting \"skgrlib_ipcdat.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libgeneric19.a\"",
        "Deleting \"qcpi8.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libgeneric19.a\"",
        "Deleting \"skgzibr.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libgeneric19.a\"",
        "",
        "Patching component oracle.rdbms, 19.0.0.0.0...",
        "Deleting \"kge.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"kubsbufio.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"ktbdat.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"kubsd.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"qjsntrans.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"kswspf.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"hcsbtmpl.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"kztoken.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"kjspare.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"kpoxcd.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"kziam.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"kbclgdr.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"kcert.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"ksmpgaum.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"kpdbapx.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"kbcs.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"hcscalctpl.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"hcscalctoz.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"hcscalctsg.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "Deleting \"kqro.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libserver19.a\"",
        "",
        "Patching component oracle.assistants.acf, 19.0.0.0.0...",
        "",
        "Patching component oracle.assistants.deconfig, 19.0.0.0.0...",
        "",
        "Patching component oracle.assistants.server, 19.0.0.0.0...",
        "",
        "Patching component oracle.blaslapack, 19.0.0.0.0...",
        "",
        "Patching component oracle.buildtools.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.ctx, 19.0.0.0.0...",
        "",
        "Patching component oracle.dbdev, 19.0.0.0.0...",
        "",
        "Patching component oracle.dbjava.ic, 19.0.0.0.0...",
        "",
        "Patching component oracle.dbjava.jdbc, 19.0.0.0.0...",
        "",
        "Patching component oracle.dbjava.ucp, 19.0.0.0.0...",
        "",
        "Patching component oracle.duma, 19.0.0.0.0...",
        "",
        "Patching component oracle.javavm.client, 19.0.0.0.0...",
        "",
        "Patching component oracle.ldap.owm, 19.0.0.0.0...",
        "",
        "Patching component oracle.ldap.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.ldap.security.osdt, 19.0.0.0.0...",
        "",
        "Patching component oracle.marvel, 19.0.0.0.0...",
        "",
        "Patching component oracle.network.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.odbc.ic, 19.0.0.0.0...",
        "",
        "Patching component oracle.ons, 19.0.0.0.0...",
        "",
        "Patching component oracle.ons.ic, 19.0.0.0.0...",
        "",
        "Patching component oracle.oracore.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.perlint, 5.28.1.0.0...",
        "",
        "Patching component oracle.precomp.common.core, 19.0.0.0.0...",
        "",
        "Patching component oracle.precomp.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.crs, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.dbscripts, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.deconfig, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.oci, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.rsf.ic, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.scheduler, 19.0.0.0.0...",
        "",
        "Patching component oracle.rhp.db, 19.0.0.0.0...",
        "",
        "Patching component oracle.sdo, 19.0.0.0.0...",
        "",
        "Patching component oracle.sdo.locator.jrf, 19.0.0.0.0...",
        "",
        "Patching component oracle.sqlplus, 19.0.0.0.0...",
        "",
        "Patching component oracle.sqlplus.ic, 19.0.0.0.0...",
        "",
        "Patching component oracle.wwg.plsql, 19.0.0.0.0...",
        "",
        "Patching component oracle.ldap.rsf.ic, 19.0.0.0.0...",
        "",
        "Patching component oracle.ldap.client, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.dv, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.install.common, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.hsodbc, 19.0.0.0.0...",
        "",
        "Patching component oracle.nlsrtl.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.xdk.rsf, 19.0.0.0.0...",
        "Deleting \"jzntrans.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libxml19.a\"",
        "Deleting \"jznpathval.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libxml19.a\"",
        "",
        "Patching component oracle.odbc, 19.0.0.0.0...",
        "",
        "Patching component oracle.xdk.parser.java, 19.0.0.0.0...",
        "",
        "Patching component oracle.ctx.atg, 19.0.0.0.0...",
        "",
        "Patching component oracle.network.listener, 19.0.0.0.0...",
        "",
        "Patching component oracle.ctx.rsf, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.hs_common, 19.0.0.0.0...",
        "",
        "Patching component oracle.dbtoolslistener, 19.0.0.0.0...",
        "",
        "Patching component oracle.xdk, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.rman, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.drdaas, 19.0.0.0.0...",
        "",
        "Patching component oracle.install.deinstalltool, 19.0.0.0.0...",
        "",
        "Patching component oracle.ovm, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.install.plugins, 19.0.0.0.0...",
        "",
        "Patching component oracle.mgw.common, 19.0.0.0.0...",
        "",
        "Patching component oracle.xdk.xquery, 19.0.0.0.0...",
        "",
        "Patching component oracle.network.client, 19.0.0.0.0...",
        "",
        "Patching component oracle.ldap.ssl, 19.0.0.0.0...",
        "",
        "Patching component oracle.oraolap.api, 19.0.0.0.0...",
        "",
        "Patching component oracle.javavm.server, 19.0.0.0.0...",
        "",
        "Patching component oracle.sdo.locator, 19.0.0.0.0...",
        "Deleting \"mdgrt1ht.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libordsdo19.a\"",
        "Deleting \"mdgrimath.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libordsdo19.a\"",
        "Deleting \"mdgrspary.o\" from archive \"/u01/app/oracle/product/19.0.0/dbhome_1/lib/libordsdo19.a\"",
        "",
        "Patching component oracle.oraolap, 19.0.0.0.0...",
        "",
        "Patching component oracle.oraolap.dbscripts, 19.0.0.0.0...",
        "",
        "Patching component oracle.rdbms.lbac, 19.0.0.0.0...",
        "",
        "Patching component oracle.precomp.common, 19.0.0.0.0...",
        "",
        "Patching component oracle.precomp.lang, 19.0.0.0.0...",
        "",
        "Patching component oracle.jdk, 1.8.0.201.0...",
        "RollbackSession removing interim patch '35643107' from inventory",
        "Inactive sub-set patch [29517242] has become active due to the rolling back of a super-set patch [35643107].",
        "Please refer to Doc ID 2161861.1 for any possible further required actions.",
        "Log file location: /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2023-10-31_17-48-29PM_1.log",
        "",
        "OPatch succeeded."
    ]
}

TASK [ru-apply : SI ROLLBACK | startup databases] ************************************************************************************************************************************************************
changed: [db_server] => (item=cdb1)

TASK [ru-apply : SI ROLLBACK | startup pluggable databases] **************************************************************************************************************************************************
changed: [db_server] => (item=cdb1)

TASK [ru-apply : SI ROLLBACK | Run the datapatch utility] ****************************************************************************************************************************************************
included: /etc/ansible/roles/ru-apply/tasks/run_catbundle.yml for db_server

TASK [ru-apply : Load Modified SQL Files into the Database] **************************************************************************************************************************************************
changed: [db_server] => (item=cdb1)

TASK [ru-apply : SI ROLLBACK | startup listener in oracle_home] **********************************************************************************************************************************************
changed: [db_server]

TASK [ru-apply : Opatch lsinventory] *************************************************************************************************************************************************************************
changed: [db_server] => (item=/u01/app/oracle/product/19.0.0/dbhome_1)
changed: [db_server] => (item=)

PLAY RECAP ***************************************************************************************************************************************************************************************************
db_server             : ok=16   changed=9    unreachable=0    failed=0    skipped=6    rescued=0    ignored=0

[ansadmin@ansible-controller ru-apply]$
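In an automated pipeline it can be useful to gate any follow-up steps on the PLAY RECAP above. A minimal sketch, with the recap line from this run hardcoded for illustration (a real script would capture it from the `ansible-playbook` output):

```shell
# Parse the PLAY RECAP counters and only proceed when the run was clean.
recap="db_server : ok=16 changed=9 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0"

failed=$(echo "$recap" | grep -o 'failed=[0-9]*' | cut -d= -f2)
unreachable=$(echo "$recap" | grep -o 'unreachable=[0-9]*' | cut -d= -f2)

if [ "$failed" -eq 0 ] && [ "$unreachable" -eq 0 ]; then
  echo "patch run clean"
else
  echo "patch run had failures" >&2
fi
```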

Hope you liked the blog.

ODA pre-check report alert (Cluster Ready Services attribute STOP_DEPENDENCIES/START_DEPENDENCIES for database is incorrect)

I was planning to patch ODA from version 19.11 to 19.15 on an X8-2L.



I am not covering the whole ODA patching process here, only one issue I hit in the pre-patch report.

When I ran the pre-patch report, I got the error/alert below. This is part of the pre-check report.



__GI__
Validate GI metadata            Success   Successfully validated GI metadata
Validate supported GI versions  Success   Validated minimum supported versions.
Validate available space        Success   Validated free space under /u01
Is clusterware running          Success   Clusterware is running
Validate patching tag           Success   Validated patching tag: 19.15.0.0.0.
Is system provisioned           Success   Verified system is provisioned
Validate ASM in online          Success   ASM is online
Validate kernel log level       Success   Successfully validated the OS log
                                          level
Validate minimum agent version  Success   GI patching enabled in current
                                          DCSAGENT version
Validate Central Inventory      Success   oraInventory validation passed
Validate patching locks         Success   Validated patching locks
Validate clones location exist  Success   Validated clones location
Validate DB start dependencies  Alert     Cluster Ready Services attribute
                                          START_DEPENDENCIES for database
                                          [testoda] is incorrect and will be
                                          corrected as part of server patching.
Validate DB stop dependencies   Alert     Cluster Ready Services attribute
                                          STOP_DEPENDENCIES for database
                                          [testoda] is incorrect and will be
                                          corrected as part of server patching.
Evaluate GI patching            Success   Successfully validated GI patching
Validate command execution      Success   Validated command execution                                          
                                          

Steps to resolve:

1. First, I checked the dcs logs to see why it was alerting that the CRS attribute is incorrect for the testoda database.


										  run: cmd= '[/u01/app/19.11.0.0/grid/bin/crsctl,
 status,
 resource,
 -w,
 (TYPE = ora.database.type) AND (START_DEPENDENCIES coi 'orahome_sh') AND ((START_DEPENDENCIES nci 'odabase_n0') OR (START_DEPENDENCIES nci 'odabase_n1')),
 -v,
 -attr,
 NAME]'
2023-10-18 10:47:03,706 DEBUG [Thread-981] [] c.o.d.c.u.CommonsUtils: Output :
NAME=ora.testoda.db
2023-10-18 10:47:03,706 DEBUG [Thread-981] [] c.o.d.c.u.CommonsUtils: Output : EMPTY CONTENT
2023-10-18 10:47:03,712 DEBUG [Run patching pre-checks for component: GI : JobId=e13279b9-990a-4c45-b0f5-1a96740828e5] [] c.o.d.a.u.OraHomeStorageUtils: Processing string "NAME=ora.testoda.db"
2023-10-18 10:47:03,712 INFO [Run patching pre-checks for component: GI : JobId=e13279b9-990a-4c45-b0f5-1a96740828e5] [] c.o.d.a.u.OraHomeStorageUtils: START_DEPENDENCIES invalid for "testoda"

As we can see, the log checks three start dependencies (orahome_sh, odabase_n0, and odabase_n1).
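That dcsagent filter can be emulated for a single resource with grep ('coi' means contains-ignoring-case, 'nci' means does-not-contain). This is just a sketch against a shortened dependency string, not the actual dcs code:

```shell
# Flag the resource when START_DEPENDENCIES references orahome_sh but is
# missing either of the odabase_n0/odabase_n1 ACFS resources.
# The string below is a shortened example, not the full attribute value.
deps="hard(global:uniform:ora.RECO.dg, ora.data.orahome_sh.acfs) pullup(ora.data.orahome_sh.acfs)"

if echo "$deps" | grep -qi 'orahome_sh' && \
   { ! echo "$deps" | grep -qi 'odabase_n0' || ! echo "$deps" | grep -qi 'odabase_n1'; }; then
  echo "START_DEPENDENCIES invalid"
fi
```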

2. Next, I compared the CRS start and stop dependency attribute values of the database with the issue against those of another database without the issue.



Attribute of DB with Issue:

[grid@odanode ~]$ crsctl status resource ora.testoda.db -f | grep DEPENDENCIES=
START_DEPENDENCIES=hard(global:uniform:ora.RECO.dg, global:uniform:ora.DATA.dg, ora.data.orahome_sh.acfs) pullup(ora.data.orahome_sh.acfs, global:ora.DATA.dg, global:ora.RECO.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns)
STOP_DEPENDENCIES=hard(global:intermediate:ora.asm,global:shutdown:ora.RECO.dg,global:shutdown:ora.DATA.dg,ora.data.orahome_sh.acfs)
[grid@odanode ~]$

Attribute of DB without Issue:

[grid@odanode ~]$ crsctl status resource ora.devoda.db -f | grep DEPENDENCIES=
START_DEPENDENCIES=hard(global:uniform:ora.DATA.dg, ora.data.orahome_sh.acfs, ora.data.odabase_n0.acfs, ora.data.odabase_n1.acfs, global:uniform:ora.RECO.dg) pullup(ora.data.orahome_sh.acfs, ora.data.odabase_n0.acfs, ora.data.odabase_n1.acfs, global:ora.RECO.dg, global:ora.DATA.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns)
STOP_DEPENDENCIES=hard(global:intermediate:ora.asm,global:shutdown:ora.DATA.dg,ora.data.orahome_sh.acfs,intermediate:ora.data.odabase_n0.acfs,intermediate:ora.data.odabase_n1.acfs, global:shutdown:ora.RECO.dg)
[grid@odanode ~]$

3. I then compared the working and non-working attributes above and found the entries below missing from the non-working attributes, which matches the logs shown in step 1.

missing from stop_dependencies = intermediate:ora.data.odabase_n0.acfs,intermediate:ora.data.odabase_n1.acfs

missing from start_dependencies = hard(ora.data.odabase_n0.acfs, ora.data.odabase_n1.acfs) pullup(ora.data.odabase_n0.acfs, ora.data.odabase_n1.acfs)
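Rather than eyeballing the long strings, the hard() lists can be diffed with standard tools. A sketch using the two hard() clauses from above (trimmed to just their hard() parts):

```shell
# Split each comma-separated hard() list into sorted one-entry-per-line
# form, then show entries present in the good DB but missing from the bad.
bad="hard(global:uniform:ora.RECO.dg, global:uniform:ora.DATA.dg, ora.data.orahome_sh.acfs)"
good="hard(global:uniform:ora.DATA.dg, ora.data.orahome_sh.acfs, ora.data.odabase_n0.acfs, ora.data.odabase_n1.acfs, global:uniform:ora.RECO.dg)"

norm() { echo "$1" | sed 's/^hard(//; s/)$//' | tr ',' '\n' | sed 's/^ *//' | sort; }
norm "$bad"  > /tmp/bad_deps.txt
norm "$good" > /tmp/good_deps.txt

# Lines unique to the "good" list are the missing dependencies.
comm -13 /tmp/bad_deps.txt /tmp/good_deps.txt
```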

4. So I corrected the start and stop dependencies of the database flagged in the pre-patch report using the commands below. Basically, I copied the attributes from the correct DB.


[oracle@odanode ~] /u01/app/19.11.0.0/grid/bin/crsctl modify resource ora.testoda.db -attr "START_DEPENDENCIES='hard(global:uniform:ora.DATA.dg, ora.data.orahome_sh.acfs, ora.data.odabase_n0.acfs, ora.data.odabase_n1.acfs, global:uniform:ora.RECO.dg) pullup(ora.data.orahome_sh.acfs, ora.data.odabase_n0.acfs, ora.data.odabase_n1.acfs, global:ora.RECO.dg, global:ora.DATA.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns)'" -unsupported

[oracle@odanode ~] /u01/app/19.11.0.0/grid/bin/crsctl modify resource ora.testoda.db -attr "STOP_DEPENDENCIES='hard(global:intermediate:ora.asm,global:shutdown:ora.DATA.dg,ora.data.orahome_sh.acfs,intermediate:ora.data.odabase_n0.acfs,intermediate:ora.data.odabase_n1.acfs, global:shutdown:ora.RECO.dg)'" -unsupported

NOTE: Without the trailing "-unsupported" flag, the command fails with the error below.

CRS-4995:  The command 'Modify  resource' is invalid in crsctl. Use srvctl for this command.

5. After modifying the attributes, I re-ran the pre-patch report and the alert no longer appeared.

Hope you liked the blog!


How to re-install the correct certified version of AHF on ExaCC


I am writing this blog because I could not find documentation on how to reinstall the correct supported version of AHF on ExaCC.

Recently I upgraded AHF on ExaCC to the latest version, 23.8.0.0.0.


[root@exacc1db2 AHF23.8]# ahfctl statusahf

.--------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID    | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+--------+------+------------+----------------------+------------------+
| exacc1db2 | RUNNING       | 258898 | 5000 | 23.8.0.0.0 | 23800020230830021106 | COMPLETE         |
| exacc1db1 | RUNNING       | 282299 | 5000 | 23.8.0.0.0 | 23800020230830021106 | COMPLETE         |
'-----------+---------------+--------+------+------------+----------------------+------------------'

Later, I realized that, per the document below, the latest certified version on ExaCC is 23.5.2:

Autonomous Health Framework (AHF) - Including TFA and ORAchk/EXAchk (Doc ID 2550798.1) 
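The decision itself is a simple version comparison. A tiny sketch, with the installed version (from `ahfctl statusahf`) and the certified version (from the MOS note) hardcoded for this environment:

```shell
# Compare the installed AHF version against the certified one for ExaCC.
# Both strings are hardcoded here for illustration.
installed="23.8.0.0.0"   # from 'ahfctl statusahf'
certified="23.5.2.0.0"   # from Doc ID 2550798.1

if [ "$installed" != "$certified" ]; then
  echo "reinstall required: $installed -> $certified"
fi
```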


So I had to reinstall the approved version.

Here are the steps:

1. Uninstall the current version.



[root@exacc1db1 u02]# ahfctl uninstall -local -silent

AHF Uninstallation Log : /tmp/ahf_uninstall_377829_2023_10_16-11_35_18.log
Starting AHF Uninstall
NOTE : Uninstalling does not return all the space used by the AHF repository
AHF will be uninstalled on: exacc1db1

Stopping AHF service on local node exacc1db1...
Sleeping for 10 seconds...

Stopping TFA Support Tools...

Removed symlink /etc/systemd/system/multi-user.target.wants/oracle-tfa.service.
Removed symlink /etc/systemd/system/graphical.target.wants/oracle-tfa.service.

Stopping exachk scheduler ...
Removing exachk cache discovery....
Successfully completed exachk cache discovery removal.



Unable to send message to TFA



Removed exachk from inittab


Removing AHF setup on exacc1db1:
Removing /etc/rc.d/rc0.d/K17init.tfa
Removing /etc/rc.d/rc1.d/K17init.tfa
Removing /etc/rc.d/rc2.d/K17init.tfa
Removing /etc/rc.d/rc4.d/K17init.tfa
Removing /etc/rc.d/rc6.d/K17init.tfa
Removing /etc/init.d/init.tfa...
Removing /etc/systemd/system/oracle-tfa.service...
Removing /opt/oracle.ahf/rpms
Removing /opt/oracle.ahf/jre
Removing /opt/oracle.ahf/common
Removing /opt/oracle.ahf/bin
Removing /opt/oracle.ahf/python
Removing /opt/oracle.ahf/analyzer
Removing /opt/oracle.ahf/tfa
Removing /opt/oracle.ahf/chm
Removing /opt/oracle.ahf/ahf
Removing /opt/oracle.ahf/exachk
Removing /opt/oracle.ahf/ahfscope
Removing /opt/oracle.ahf/chadiag
Removing /opt/oracle.ahf/iwa
Removing /opt/oracle.ahf/cha
Removing /u02/oracle.ahf/data/exacc1db1
Removing /opt/oracle.ahf/install.properties
Removing update backup directories from /u02/oracle.ahf/data/work
Removing /sys/fs/cgroup/cpu/oratfagroup/

[root@exacc1db1 u02]# 

2. Change ownership of /u02 from oracle to root.


[root@exacc1db1 /]# chown root /u02

If we skip step 2, the installer fails with this error:

[ERROR] : AHF-00014: AHF Data Location /u02/oracle.ahf/data is not owned by root in directory hierarchy

3. Install supported version:

On ExaCC, the AHF binaries for the supported version are already staged; we just need to install them.

/var/opt/oracle/misc/ahf/ahf_setup ===>> This is the default AHF location from the cloud downloads.



[root@exacc1db1 /]# /var/opt/oracle/misc/ahf/ahf_setup -local -silent

AHF Installer for Platform Linux Architecture x86_64

AHF Installation Log : /tmp/ahf_install_235200_16158_2023_10_16-11_41_18.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 23.5.2 Build Date: 202307172132

AHF Location : /opt/oracle.ahf

AHF Data Directory : /u02/oracle.ahf/data

Extracting AHF to /opt/oracle.ahf

Configuring TFA Services

Discovering Nodes and Oracle Resources

Not generating certificates as GI discovered

Starting TFA Services
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.

.------------------------------------------------------------------------------.
| Host      | Status of TFA | PID   | Port | Version    | Build ID             |
+-----------+---------------+-------+------+------------+----------------------+
| exacc1db1 | RUNNING       | 32033 | 5000 | 23.5.2.0.0 | 23520020230717213218 |
'-----------+---------------+-------+------+------------+----------------------'

Running TFA Inventory...
Could not add action to TFA server

Adding default users to TFA Access list...

.-------------------------------------------------------.
|              Summary of AHF Configuration             |
+-----------------+-------------------------------------+
| Parameter       | Value                               |
+-----------------+-------------------------------------+
| AHF Location    | /opt/oracle.ahf                     |
| TFA Location    | /opt/oracle.ahf/tfa                 |
| Exachk Location | /opt/oracle.ahf/exachk              |
| Data Directory  | /u02/oracle.ahf/data                |
| Repository      | /u02/oracle.ahf/data/repository     |
| Diag Directory  | /u02/oracle.ahf/data/exacc1db1/diag |
'-----------------+-------------------------------------'

Starting Orachk Scheduler from AHF

Setting up AHF CLI and SDK

AHF binaries are available in /opt/oracle.ahf/bin

AHF is successfully installed

Moving /tmp/ahf_install_235200_16158_2023_10_16-11_41_18.log to /u02/oracle.ahf/data/exacc1db1/diag/ahf/

[root@exacc1db1 /]# 

4. Change ownership of /u02 back to oracle.


[root@exacc1db1 /]# chown oracle /u02

5. Repeat the above four steps on the second node of the cluster as well, then check the AHF status.



[root@exacc1db1 u02]# ahfctl statusahf -all

.--------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID    | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+--------+------+------------+----------------------+------------------+
| exacc1db1 | RUNNING       | 190368 | 5000 | 23.5.2.0.0 | 23520020230717213218 | COMPLETE         |
| exacc1db2 | RUNNING       | 146942 | 5000 | 23.5.2.0.0 | 23520020230717213218 | COMPLETE         |
'-----------+---------------+--------+------+------------+----------------------+------------------'

This looks good; we are now on a supported AHF version.

Note: As per the AHF note above, in the cloud the AHF upgrade is taken care of by the dbaas tools.

Migrate on-premise Oracle (standard edition) database on Windows to OCI DBCS

Recently, I migrated an on-premises Standard Edition Oracle database on the Windows platform to DBCS in OCI.

I am writing this blog to cover the issues and challenges I faced during this migration. I am not listing every command here, just the procedure.



Pre-requisites: a compute instance with all the Oracle binaries installed, and a DBCS instance in OCI.

First challenge: Which migration method to use?

I considered a few methods but finally used the manual DG (Data Guard) method.

ZDM (Zero Downtime Migration) = cannot be used because the source is on Windows.

DMS (OCI Database Migration) = cannot be used because the source is on Windows.

Expdp/Impdp = cannot be used because the source is Standard Edition, which supports neither encryption nor parallelism (it is hard to export a big database without these two features).

In the end I chose the old manual way of creating a physical standby. This method was not easy either and I faced many issues, so let's discuss them.

Second challenge: We cannot create a standby directly on DBCS, so I had to create a compute instance in OCI for the standby and later move the database to DBCS (shown later in this blog).

So now we have the on-premises Oracle Standard Edition DB on Windows and a compute instance in OCI. Make sure the compute instance is reachable from the Windows machine, and the source DB is reachable from the OCI compute instance.

Third challenge: Recovery Issue

I created the standby database using the steps in the following MOS note:

Step by Step Guide on Creating Physical Standby Using RMAN DUPLICATE...FROM ACTIVE DATABASE (Doc ID 1075908.1)

From the above Doc, when you run the step below:

f. On the primary system invoke the RMAN executable and connect to the primary and the auxiliary database ( i.e., the standby)

It will restore the database but fail during recovery with the error message below.


RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 09/25/2023 00:11:27
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of sql command on ORA_DISK_1 channel at 09/25/2023 00:11:27
RMAN-10038: database session for channel ORA_DISK_1 terminated unexpectedly

I used the MOS Doc below to resolve the above issue:

Step By Step Guide On How To Recreate Standby Control File When Datafiles Are On ASM And Using Oracle Managed Files (Doc ID 734862.1)

Specifically, I used these steps:


a. Create the Standby control file on primary database
b. Copy the control file backup to the standby system (compute Instance)
c. Shutdown standby DB and startup in nomount
d. Connect to RMAN, restore the standby control file, then catalog the datafiles and switch to the copies
rman target /
RMAN> restore standby controlfile from '/u01/STDBYCTL.BKP';
RMAN> alter database mount;
RMAN> catalog start with '/u01/app/oracle/oradata/dev1_stby/DEV1_STBY/';
RMAN> switch database to copy;
RMAN> exit

Now the standby is almost ready.

Fourth challenge: Log shipping

A Standard Edition database doesn't support advanced Data Guard features like switchover, automatic log shipping, etc., so we have to copy the archive logs from primary to standby manually.

On the standby side, logs are applied automatically as long as the MRP process is running, because the standby is Enterprise Edition.

I used the following steps:





1. Start managed recovery on standby
recover managed standby database disconnect from session nodelay;

2. Check mrp process
ps -ef | grep mrp

3. Now check whether the primary and standby are in sync at this point:

Target (standby):
SQL> select current_SCN from v$database;

CURRENT_SCN
-----------
    1750369

Source (primary):
SQL> select current_SCN from v$database;

CURRENT_SCN
-----------
    1895707

4. Now we need to find the archive logs between the above two SCNs and move them from the primary to the standby side manually (because the source DB is Standard Edition). Use the command below to find the archives.

RMAN> list archivelog scn between 1750369 and 1895707;

5. Manually copy all the archives from the above output to the standby side.

6. Register each archive log file on the standby database using the command below.

alter database register logfile 'logfile_name.ARC';

7. Once all archive logs are registered, the MRP process applies them on the standby.

8. Check again that both databases are in sync.

Once both databases are in sync, it's time for the cutover.
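The manual copy-and-register cycle in steps 4 to 7 can be scripted. Below is a minimal, hypothetical sketch: it assumes the RMAN `list archivelog` output has been saved one path per line, and that `standby-compute` and `/u01/archives` are placeholders for your standby host and staging directory. It only generates the scp commands and the register SQL; review the output before running anything.

```shell
#!/bin/sh
# Hypothetical helper: for each archive path on stdin, emit the scp command to
# copy it to the standby and the SQL to register it there afterwards.
# STANDBY_HOST and STANDBY_DIR are placeholders, not values from the blog.
STANDBY_HOST="standby-compute"
STANDBY_DIR="/u01/archives"

gen_transfer_cmds() {
  while read -r arch; do
    # Command to copy the archive file to the standby host
    echo "scp $arch oracle@${STANDBY_HOST}:${STANDBY_DIR}/"
    # SQL to register the copied archive on the standby database
    echo "alter database register logfile '${STANDBY_DIR}/$(basename "$arch")';"
  done
}

# usage: gen_transfer_cmds < /tmp/arch_list.txt
```

Feeding it the archive list produces one scp line and one register statement per file, which you can then run on the primary host and in SQL*Plus on the standby respectively.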

Fifth Challenge: Switchover

As the source is a Standard Edition database, we cannot do a switchover; we have to activate the standby database.

Steps I used:



1. Stop the primary DB if required, for consistency purposes.
2. Activate the standby database:

SQL> recover managed standby database cancel;
Media recovery complete.
SQL> alter database recover managed standby database finish;

Database altered.

SQL> alter database activate standby database;

Database altered.

SQL> select name,open_mode,database_role from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
DEV1  MOUNTED              PRIMARY

SQL> alter database open;

Database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 FSDEV                          MOUNTED
SQL> alter pluggable database FSDEV open;

Warning: PDB altered with errors.


Now the database is migrated to the OCI compute instance; we just need to do a few more post-migration steps.

3. Add tempfiles in the PDB.
4. Check the PDB plug-in violations to find out why the PDB opened in restricted mode.
SQL> select line,message,status from pdb_plug_in_violations where name='FSDEV' order by time,line;

      LINE MESSAGE                                                                                                                                                STATUS
---------- ------------------------------------------------------------------------------------------------------------------------------------------------------ ---------
         1 Database option OWM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                                                            PENDING
         1 No release updates are installed in the CDB but '19.8.0.0.0 Release_Update 2007241916' is installed in the PDB                                         PENDING
         
5. As the output above shows, the RU is missing in the CDB, so we have to apply datapatch.

[oracle@fsdev_compute OPatch]$ ./datapatch -verbose

6. Once datapatch has been applied successfully, stop/start the database; this time the PDB will open without restrictions.
7. Check for invalid objects and run the utlrp script.
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

At this point the migration to the compute instance is complete, but since we need to use the DBCS service in OCI, we have to move this PDB to DBCS.

Last challenge: Move the PDB from the compute instance to DBCS

I used the PDB cloning method over a DB link (you can use any other method as well). Now DBCS is the target, and the standby DB (which is now open) is the source.

Steps:




1. Create a TNS entry on the DBCS side to connect to the standby DB that was opened in the last step.
2. Create a user and grant privileges on the source (standby) side.

I will use the system user, so I am not creating a new user, but the grants below are still needed.

SQL> grant create session, create pluggable database to system container=all;

Grant succeeded.

3. Create a DB link on the DBCS side:

SQL> create database link fsdev_link connect to system identified by password using 'dev1_stby';

Database link created.

SQL> select * from dual@fsdev_link;

D
-
X

4. Create pluggable database FSDEV on DBCS

create pluggable database FSDEV from FSDEV@fsdev_link CREATE_FILE_DEST='+DATA';
alter pluggable database FSDEV open;

5. The PDB will open in restricted mode, so we have to check the PDB violations.

SQL> select line,message,status from pdb_plug_in_violations where name='FSDEV' order by time,line;

You will see many errors about tablespaces, because the tablespaces are not encrypted and on DBCS it is mandatory to use TDE/encryption; we can encrypt them later.

The full list of violations looked like this:

      LINE MESSAGE                                                                                                                                                STATUS
---------- ------------------------------------------------------------------------------------------------------------------------------------------------------ ---------
         1 CDB parameter processes mismatch: Previous 320 Current 400                                                                                             PENDING
         2 CDB parameter db_block_size mismatch: Previous 16384 Current 8192                                                                                      PENDING
         3 CDB parameter compatible mismatch: Previous '12.1.0.2' Current '19.0.0.0'                                                                              PENDING
         4 CDB parameter open_links_per_instance mismatch: Previous 10 Current 4                                                                                  PENDING
         5 CDB parameter max_pdbs mismatch: Previous 4 Current 4098                                                                                               PENDING
         1 Database option CONTEXT mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                                                        PENDING
         2 Database option DV mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                                                             PENDING
         3 Database option OLS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                                                            PENDING
         4 Database option ORDIM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                                                          PENDING
         5 Database option OWM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                                                            PENDING
         6 Database option SDO mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                                                            PENDING
         1 Interim patch 34786990/25032666 (OJVM RELEASE UPDATE: 19.18.0.0.230117 (34786990)): Installed in the CDB but not in the PDB                            PENDING
         1 Tablespace SYSTEM is not encrypted. Oracle Cloud mandates all tablespaces should be encrypted.                                                         PENDING

One issue that we need to address right away is:

1 Interim patch 34786990/25032666 (OJVM RELEASE UPDATE: 19.18.0.0.230117 (34786990)): Installed in the CDB but not in the PDB                            PENDING

6. As per the above error, we have to apply datapatch on the FSDEV PDB.

[oracle@dbcs-dev admin]$ /u01/app/oracle/product/19.0.0.0/dbhome_1/OPatch/datapatch -verbose -apply 34786990/25032666 -force -pdbs FSDEV

7. Now the patch violation is gone; restart the PDB and it will open in unrestricted mode.

8. Now check for INVALID objects and run the utlrp script.

10. Cloning is done. The last thing left is encryption.

Checking on the PDB, I saw the wallet is already there but has no master key (DBCS already created the wallet on the CDB side).

SQL> alter session set container=FSDEV;

Session altered.

SQL> SELECT KEY_ID, CREATION_TIME FROM V$ENCRYPTION_KEYS  ORDER BY 1;

no rows selected

SQL> select CON_ID,STATUS,WRL_PARAMETER WALLET_DIR,WALLET_TYPE from V$ENCRYPTION_WALLET;

    CON_ID STATUS                         WALLET_DIR                       WALLET_TYPE
---------- ------------------------------ -------------------------------- --------------------
         4 OPEN_NO_MASTER_KEY                                              AUTOLOGIN

SQL>

11. Create the key:

SQL> ADMINISTER KEY MANAGEMENT SET KEY FORCE KEYSTORE IDENTIFIED BY password WITH BACKUP;

keystore altered.

SQL> set linesize 200
SQL> col KEY_ID for a60
SQL> col KEYSTORE_TYPE for a20
SQL> select CON_ID,KEY_ID,KEYSTORE_TYPE from v$encryption_keys;

    CON_ID KEY_ID                                                       KEYSTORE_TYPE
---------- ------------------------------------------------------------ --------------------
         4 ATdvIGfuhr987AcMK7sAAAAAAAAAAAAAfvAAAAAAA                    SOFTWARE KEYSTORE

SQL> set linesize 200
SQL> col WALLET_DIR for a32
SQL> col STATUS for a21
SQL> select STATUS,WRL_PARAMETER WALLET_DIR,WALLET_TYPE from V$ENCRYPTION_WALLET;

STATUS                WALLET_DIR                       WALLET_TYPE
--------------------- -------------------------------- --------------------
OPEN                                                   AUTOLOGIN

12. Create a script to encrypt all tablespaces, then run it.

SQL> spool /u01/app/oracle/tablespace_encryption/encryption.sql
SQL> set trimspool on linesize 1000 pages 1000 feedback off head off
SQL> select 'alter tablespace '||tablespace_name||' encryption online encrypt;'
from dba_tablespaces
where encrypted = 'NO' and contents='PERMANENT'
order by tablespace_name;
SQL> spool off

SQL> set timing on
SQL> set echo on
SQL> @/u01/app/oracle/tablespace_encryption/encryption.sql

13. Check whether any datafile is left without encryption:

SQL> select name,encrypted from v$datafile_header where encrypted='NO';

Full migration completed!!

If any question, please send me an email manoj.kumar@eclipsys.ca

Thanks for reading!


How to monitor DBCS database FRA using custom metric in OCI

The purpose of this blog is to show you how to use a custom metric when OCI does not offer a default metric for your particular case, or when a default metric exists but you need the output to work your way :)

 
I am creating this custom metric to monitor FRA free space, but as I said, once you know the process you can use any query to build a custom metric for any other purpose.
 

Pre-requisite:

You should have an environment with Python and the oci and oracledb drivers installed. See my previous blog for this.
 
 

A. Preparation Steps:

A-1. Create OCI config file to connect to OCI.
A-2. Prepare Python script

 

B. Post custom Metric data to OCI:

B-1. Run Python script to post custom metric data to OCI 
B-2. Check data is available on OCI
B-3. Schedule this to post data periodically to OCI
B-4. Create an Alarm to get notification

A-1. Create OCI config file to connect to OCI.

 
Log in to the OCI console and add an API key to your user:
Go to Profile --> My profile --> API keys --> Add API key

Download the private key and public key files to the server you want to connect from, then click Add.
 
We will see a preview of the configuration file.
Click Copy and save the text in a configuration file.
For Linux: /home/<user_name>/.oci/config
 
Make sure to set the private key file path we downloaded above.
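For reference, the saved config file looks roughly like this (all values below are placeholders; use the preview copied from the console):

```
[DEFAULT]
user=ocid1.user.oc1..<your_user_ocid>
fingerprint=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
tenancy=ocid1.tenancy.oc1..<your_tenancy_ocid>
region=ca-toronto-1
key_file=/home/<user_name>/.oci/oci_api_key.pem
```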
 

A-2. Prepare Python script

 
#!/usr/bin/env python3

import datetime
import os

import oci
import oracledb
from pytz import timezone

oracledb.init_oracle_client()

# using default configuration file (~/.oci/config)
from oci.config import from_file
config = from_file()

# initialize service client with default config file
monitoring_client = oci.monitoring.MonitoringClient(config,service_endpoint="https://telemetry-ingestion.ca-toronto-1.oraclecloud.com")

db_user = os.environ.get('DBAAS_USER_NAME', 'SYSTEM')
db_password = os.environ.get('DBAAS_USER_PASSWORD', '*******************')
db_connect = os.environ.get('DBAAS_DEFAULT_CONNECT_DESCRIPTOR', "ip address:1521/orcl_pdb1")

# Connect to database
connection = oracledb.connect(user=db_user, password=db_password, dsn=db_connect)
cursor = connection.cursor()

# get the FRA free space data

cursor.execute("""SELECT (CASE WHEN PERCENT_USED> 100 THEN 0 ELSE (100-PERCENT_USED) END) PERCENT_FREE
  FROM (SELECT (SUM(PERCENT_SPACE_USED)-SUM(PERCENT_SPACE_RECLAIMABLE)) PERCENT_USED
  FROM V$FLASH_RECOVERY_AREA_USAGE)""")

result = cursor.fetchall()

res = float(result[0][0])
print(res)

cursor.close()
connection.close()

times_stamp = datetime.datetime.now(timezone('UTC'))

# post custom metric to oci monitoring
# replace the "compartment ocid" string with your compartment OCID

post_metric_data_response = monitoring_client.post_metric_data(
    post_metric_data_details=oci.monitoring.models.PostMetricDataDetails(
        metric_data=[
            oci.monitoring.models.MetricDataDetails(
                namespace="custom_metrics",
                compartment_id="compartment ocid",
                name="FRA_free_space",
                dimensions={'DB_name': 'orcl_pdb1'},
                datapoints=[
                    oci.monitoring.models.Datapoint(
                        timestamp=datetime.datetime.strftime(
                            times_stamp,"%Y-%m-%dT%H:%M:%S.%fZ"),
                      value=res)]
                )]
    )
)

# Get the data from response
print(post_metric_data_response.data)

In the script above, I first import the modules, load the configuration file to authenticate with OCI, and initialize the service client with the default config file. Replace the "service_endpoint" string with your own endpoint; it differs depending on your region.
After that, I connect to the database and execute the query to get the data.
Finally, I post the metric to the OCI monitoring service.

B-1. Run Python script to post custom metric data to OCI


You can see above that the FRA is 99.99% free, and failed_metrics_count is 0.
This means we successfully posted data to OCI monitoring service.

B-2. Check data is available on OCI

Go to OCI --> observability & management --> Metric Explorer


Provide the correct compartment, metric namespace, metric name, dimensions, etc. that you used in the Python script above, and click Update Chart.


Now scroll up, enable Show Graph, and you will see the values in OCI.

B-3. Schedule this to post data periodically to OCI

We can use crontab to schedule this python script to post data into OCI.
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
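For example, a hypothetical crontab entry (the script path /home/opc/fra_metric.py and Python binary are assumptions; adjust both for your server) that posts the FRA metric every five minutes:

```
*/5 * * * * /usr/bin/python3 /home/opc/fra_metric.py >> /tmp/fra_metric.log 2>&1
```

Redirecting stdout and stderr to a log file makes it easy to spot authentication or connectivity failures later.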

B-4. Create an Alarm to get notification




Click Save alarm; the alarm is now set up and we will be notified as per the threshold.

I hope you like the blog! Thanks for reading.

Analyze Invoices with Oracle Analytics and AI Document Understanding

OCI Document Understanding is an AI service that enables developers to extract text, tables, and other key data from document files through...