
Oracle Grid Infrastructure 19c does not configure the local-mode automaton by default. How to add it?


I had been installing Grid Infrastructure 18c for a while, then switched to 19c when it became GA.

At the beginning I was overly enthusiastic about the shorter installation time:

The GIMR is now optional: whether to install it is a choice of the customer, who might want to keep it or not depending on their practices.

Not having the GIMR by default means not having the local-mode automaton. This, too, is not a problem at all: the default configuration is good for most customers and works really well.

This new simplified configuration reduces some maintenance effort at the beginning, but personally I use the local-mode automaton a lot for out-of-place patching of Grid Infrastructure (read my blog posts to know why I really love it), so it is something that I definitely need in my clusters.

A choice that makes sense for Oracle and most customers

Oracle's vision for Grid Infrastructure is the central management of clusters, using the Oracle Domain Services Cluster. In this kind of deployment, the Management Repository, TFA, and many other services are centralized. All the clusters use those services remotely instead of having them configured locally. The local-mode automaton is no exception: the full, enterprise-grade version of Fleet Patching and Provisioning (FPP, formerly Rapid Home Provisioning, or RHP) allows much more than just out-of-place patching of Grid Infrastructure, so it makes perfect sense to avoid those local configurations everywhere if you use a Domain Cluster architecture. Read more here.

Again, as I have said many times in the past, out-of-place patching is the best approach in my opinion; but if you keep doing in-place patching, not having the local-mode automaton is not a problem at all, and the default behavior in 19c is a good thing for you.

I need the local-mode automaton on 19c: what do I need to do at install time?

If you have many clusters, you are (hopefully!) not installing them by hand with the graphical interface. In the responseFile for the 19c Grid Infrastructure installation, this is all you need to change compared to 18c:

$ diff grid_install_template_18.rsp grid_install_template_19.rsp
1c1
< oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v18.0.0
---
> oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
25a26
> oracle.install.crs.configureGIMR=true
27c28
< oracle.install.crs.config.storageOption=
---
> oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE

As you can see, Flex ASM is also not part of the game by default in 19c.

Once you specify in the responseFile that you want the GIMR, the local-mode automaton is installed as well by default.
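
For reference, the silent installation with such a responseFile is then just a matter of running the setup script with it (the paths below are placeholders, adapt them to your environment):

$ /u01/app/19.0.0/grid/gridSetup.sh -silent -responseFile /path/to/grid_install_template_19.rsp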

I installed GI 19c without the GIMR and the local-mode automaton. How can I add them to my new cluster?

First, recreate the empty MGMTDB CDB by hand:

$ dbca -silent -createDatabase -sid -MGMTDB -createAsContainerDatabase true \
 -templateName MGMTSeed_Database.dbc -gdbName _mgmtdb \
 -storageType ASM -diskGroupName +MGMT \
 -datafileJarLocation $OH/assistants/dbca/templates \
 -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck

Prepare for db operation
10% complete
Registering database with Oracle Grid Infrastructure
14% complete
Copying database files
43% complete
Creating and starting Oracle instance
45% complete
49% complete
54% complete
58% complete
62% complete
Completing Database Creation
66% complete
69% complete
71% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/_mgmtdb.
Database Information:
Global Database Name:_mgmtdb
System Identifier(SID):-MGMTDB
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb/_mgmtdb2.log" for further details.
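
Before moving on, you may want to double-check that the management database is registered in the cluster and running. Assuming the Grid Infrastructure environment is set, a quick check is:

$ srvctl status mgmtdb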

Then, configure the PDB for the cluster. Pay attention to the -local switch, which is not documented (or at least does not appear in the inline help):

$ mgmtca -local

After that, you might check that you have the PDB for your cluster inside the MGMTDB; a minimal way to do that is sketched below.
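
Assuming $CRS_HOME points to the Grid Infrastructure home and the management database runs on the local node, you can connect to the -MGMTDB instance and list its PDBs (the PDB name should match your cluster name):

(oracle)$ export ORACLE_HOME=$CRS_HOME
(oracle)$ export ORACLE_SID=-MGMTDB
(oracle)$ $ORACLE_HOME/bin/sqlplus / as sysdba
SQL> show pdbs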

Before creating the rhpserver (local-mode automaton resource), we need the volume and filesystem to make it work (read here for more information).

The volume:

ASMCMD> volcreate -G MGMT -s 1536M --column 8 --width 1024k --redundancy unprotected GHCHKPT

ASMCMD> volinfo --all
Diskgroup Name: MGMT

         Volume Name: GHCHKPT
         Volume Device: /dev/asm/ghchkpt-303
         State: ENABLED
         Size (MB): 1536
         Resize Unit (MB): 64
         Redundancy: UNPROT
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:

The filesystem:

(oracle)$ mkfs -t acfs /dev/asm/ghchkpt-303

(root)# $CRS_HOME/bin/srvctl add filesystem -d /dev/asm/ghchkpt-303 -m /opt/oracle/rhp_images/chkbase -u oracle -fstype ACFS
(root)# $CRS_HOME/bin/srvctl enable filesystem -volume ghchkpt -diskgroup MGMT
(root)# $CRS_HOME/bin/srvctl start filesystem -volume ghchkpt -diskgroup MGMT
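
At this point it is worth checking that the ACFS filesystem resource is actually online before going further; assuming the same volume and diskgroup names as above:

(root)# $CRS_HOME/bin/srvctl status filesystem -volume ghchkpt -diskgroup MGMT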

Finally, create the local-mode automaton resource:

(root)# $CRS_HOME/bin/srvctl add rhpserver -local -storage /opt/oracle/rhp_images

Again, note that there is a -local switch that is not documented. Specifying it creates the resource as a local-mode automaton and not as a full FPP Server (or RHP Server, damn, this name change drives me mad when I write blog posts about it 🙂 ).
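
Once the resource exists, a quick sanity check is to start it and verify its status with the usual srvctl commands (here run as root from the Grid Infrastructure home, as in the steps above):

(root)# $CRS_HOME/bin/srvctl start rhpserver
(root)# $CRS_HOME/bin/srvctl status rhpserver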

HTH

Ludovico

