
Friday, July 26, 2024

NTP

When enabling NTP (Network Time Protocol) on a RAC (Real Application Clusters) database running on Solaris 11, downtime is not strictly necessary, but a maintenance window is highly recommended to minimize potential issues. Here's why:


1. *Node synchronization*: NTP ensures all nodes in the RAC cluster have synchronized clocks. If you enable NTP without downtime, the nodes might not sync immediately, potentially causing issues with database operations.

2. *Clock adjustments*: When NTP is enabled, the system clock might be adjusted to match the reference clock. This adjustment can cause issues with database processes, especially if they're sensitive to time changes.

3. *RAC node evictions*: If the clock adjustments are significant, RAC nodes might be evicted from the cluster, leading to downtime and potential data inconsistencies.


To minimize risks, consider the following:


1. *Schedule downtime*: Plan a maintenance window to enable NTP, ensuring all nodes are updated and synchronized simultaneously.

2. *Use clock slewing*: On Solaris 11 the NTP service is managed through SMF (`svcadm`). Configure `ntpd` to slew the clock (the `-x` option) rather than step it, so corrections are applied gradually; large or backward time steps are what typically trigger node evictions.

3. *Monitor the cluster*: Closely monitor the RAC cluster during and after NTP enablement to quickly address any issues that arise.


In summary, while downtime is not mandatory, it's highly recommended to ensure a smooth transition when enabling NTP on a RAC database running on Solaris 11.
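
A rough sketch of the steps on Solaris 11 (assuming the reference servers have already been added to /etc/inet/ntp.conf; the service name below is the standard Solaris 11 FMRI):

svcadm enable svc:/network/ntp:default   (as root)
svcs ntp                                 (confirm the service is online)
ntpq -p                                  (verify peer reachability and offset)
crsctl check ctss                        (as the grid user)

Once NTP is active, Oracle's Cluster Time Synchronization Service (CTSS) should report that it is running in observer mode.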

RAC Health Checks


1) RAC Node Apps Health Checks :
[grid@rac19cdb01 ~]$ srvctl status nodeapps
VIP rac19cdb01-vip is enabled
VIP rac19cdb01-vip is running on node: rac19cdb01
VIP rac19cdb02-vip is enabled
VIP rac19cdb02-vip is running on node: rac19cdb02
Network is enabled
Network is running on node: rac19cdb01
Network is running on node: rac19cdb02
GSD is disabled
GSD is not running on node: rac19cdb01
GSD is not running on node: rac19cdb02
ONS is enabled
ONS daemon is running on node: rac19cdb01
ONS daemon is running on node: rac19cdb02
[grid@rac19cdb01 ~]$



2) ASM Status : 
[grid@rac19cdb01 ~]$ srvctl status asm
ASM is running on rac19cdb02,rac19cdb01
[grid@rac19cdb01 ~]$ srvctl status asm -n rac19cdb01
ASM is running on rac19cdb01
[grid@rac19cdb01 ~]$ srvctl status asm -n rac19cdb02
ASM is running on rac19cdb02
[grid@rac19cdb01 ~]$



3) Database Status:
[grid@rac19cdb01 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node rac19cdb01
Instance racdb2 is running on node rac19cdb02
[grid@rac19cdb01 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node rac19cdb01
[grid@rac19cdb01 ~]$ srvctl status instance -d racdb -i racdb2
Instance racdb2 is running on node rac19cdb02
[grid@rac19cdb01 ~]$


4) CRS Status:
[grid@rac19cdb01 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@rac19cdb01 ~]$


5) Cluster Status:
[grid@rac19cdb01 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@rac19cdb01 ~]$


6) RAC High Availability Services Status:
[grid@rac19cdb01 ~]$ crsctl check has
CRS-4638: Oracle High Availability Services is online
[grid@rac19cdb01 ~]$


7) Database Services Status : 
[grid@rac19cdb01 ~]$ srvctl status service -d racdb
Service racdb_service is running on instance(s) racdb1,racdb2
[grid@rac19cdb01 ~]$


8) Listener Status: 
[grid@rac19cdb01 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac19cdb02,rac19cdb01
[grid@rac19cdb01 ~]$


9) SCAN VIP Status : 
[grid@rac19cdb01 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac19cdb02
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node rac19cdb01
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node rac19cdb01
[grid@rac19cdb01 ~]$


10) Scan Listener Status : 
[grid@rac19cdb01 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node rac19cdb02
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node rac19cdb01
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node rac19cdb01
[grid@rac19cdb01 ~]$


11) Server Status : 
[grid@rac19cdb01 ~]$ srvctl status server -n rac19cdb01 -a
Server name: rac19cdb01
Server state: ONLINE
Server active pools: Generic ora.racdb ora.racdb_racdb_service
Server state details:
[grid@rac19cdb01 ~]$ srvctl status server -n rac19cdb02 -a
Server name: rac19cdb02
Server state: ONLINE
Server active pools: Generic ora.racdb ora.racdb_racdb_service
Server state details:
[grid@rac19cdb01 ~]$


12) CVU Status:
[grid@rac19cdb01 ~]$ srvctl status cvu
CVU is enabled and running on node rac19cdb01
[grid@rac19cdb01 ~]$



13) GNS Status : (as root user)
[root@rac19cdb01 ~]# /u01/app/oracle/product/19.0.0/grid/bin/srvctl status gns
PRCS-1065 : GNS is not configured.
[root@rac19cdb01 ~]#


14) Serverpool Details : 
[grid@rac19cdb01 ~]$ srvctl status srvpool
Server pool name: Free
Active servers count: 0
Server pool name: Generic
Active servers count: 2
[grid@rac19cdb01 ~]$


15) Cluster Interconnect Details : 
[grid@rac19cdb01 ~]$ oifcfg getif
eth0  192.168.1.0  global  public
eth1  192.168.50.0  global  cluster_interconnect


16) OCR Checks:
[grid@rac19cdb01 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3224
         Available space (kbytes) :     258896
         ID                       : 1449084471
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
[grid@rac19cdb01 ~]$
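
Note that the logical corruption check above was bypassed because ocrcheck was run as the grid user; running the same binary as root (using the Grid home path shown elsewhere in this post) includes that check:

[root@rac19cdb01 ~]# /u01/app/oracle/product/19.0.0/grid/bin/ocrcheck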



17) OCR Backups:
[grid@rac19cdb01 ~]$ ocrconfig -showbackup
rac19cdb01     2013/09/23 11:46:41    /u01/app/oracle/product/19.0.0/grid/cdata/orrcdbdv-clstr/backup00.ocr
rac19cdb01     2013/09/23 04:19:26    /u01/app/oracle/product/19.0.0/grid/cdata/orrcdbdv-clstr/backup01.ocr
rac19cdb01     2013/09/23 00:19:25    /u01/app/oracle/product/19.0.0/grid/cdata/orrcdbdv-clstr/backup02.ocr
rac19cdb01     2013/09/22 08:19:24    /u01/app/oracle/product/19.0.0/grid/cdata/orrcdbdv-clstr/day.ocr
rac19cdb01     2013/09/10 08:18:54    /u01/app/oracle/product/19.0.0/grid/cdata/orrcdbdv-clstr/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available
[grid@rac19cdb01 ~]$
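
PROT-25 above simply means no manual OCR backup exists yet. If one is wanted (for example, before cluster maintenance), it can be taken as root; the path below assumes the same Grid home:

[root@rac19cdb01 ~]# /u01/app/oracle/product/19.0.0/grid/bin/ocrconfig -manualbackup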




18) Voting Disk Status: 
[grid@rac19cdb01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   9bc0828570854fa5bff3221500a1fc63 (ORCL:CRSVOL1) [OCR_VOTE]
Located 1 voting disk(s).
[grid@rac19cdb01 ~]$



19) Node apps config details : 
[grid@rac19cdb01 ~]$ srvctl config nodeapps  -a -g -s
Network exists: 1/192.168.1.0/255.255.255.0/eth0, type static
VIP exists: /rac19cdb01-vip/192.168.1.73/192.168.1.0/255.255.255.0/eth0, hosting node rac19cdb01
VIP exists: /rac19cdb02-vip/192.168.1.74/192.168.1.0/255.255.255.0/eth0, hosting node rac19cdb02
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
[grid@rac19cdb01 ~]$



20) Diskgroups Status:
[grid@rac19cdb01 ~]$ crs_stat -t | grep -i dg
ora....DATA.dg ora....up.type ONLINE    ONLINE    rac19cdb01
ora.FLASH.dg   ora....up.type ONLINE    ONLINE    rac19cdb01
ora....VOTE.dg ora....up.type ONLINE    ONLINE    rac19cdb01
[grid@rac19cdb01 ~]$
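
crs_stat has been deprecated for a long time; an equivalent check with crsctl (same environment assumed) is either of:

[grid@rac19cdb01 ~]$ crsctl status resource -t -w "TYPE = ora.diskgroup.type"
[grid@rac19cdb01 ~]$ crsctl status resource -t | grep -i dg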

CRSCTL Commands

Oracle CRSCTL Commands List
The CRSCTL utility allows you to administer cluster resources. Here are a few quick commands to help you manage an Oracle RAC cluster!
Check Cluster Status 
Check Cluster Nodes 
Stop Grid Cluster 
Start Grid Cluster 



Check Cluster Status


Status of upper & lower stack


./crsctl check crs


Status of upper stack


./crsctl check cluster


Cluster status on all nodes


./crsctl check cluster -all


Cluster status on specific node


./crsctl check cluster -n rac2



Check Cluster Nodes


Check cluster resources in table format


./crsctl status resource -t


Check detailed status of the cluster servers (nodes)


./crsctl status server -f


Check cluster nodes


olsnodes -n

oraracn1        1

oraracn2        2



Stop Grid Cluster


Stop HAS on current node


./crsctl stop has


Stop HAS on a remote node: crsctl stop has has no node option and acts only on the local server; to bring down the stack on a remote node, use crsctl stop cluster -n <node> (shown below)


Stop entire cluster on all nodes


./crsctl stop cluster -all


Stop cluster (CRS stack) on remote node (OHAS itself keeps running)


./crsctl stop cluster -n rac2



Start Grid Cluster


Start HAS on current node


./crsctl start has 


Start HAS on a remote node: crsctl start has likewise acts only on the local server; to start the stack on a remote node (once OHAS is already running there), use crsctl start cluster -n <node> (shown below)


Start entire cluster on all nodes


./crsctl start cluster -all


Start cluster (CRS stack) on remote node


./crsctl start cluster -n rac2


ENABLE / DISABLE CLUSTER AUTO START


crsctl enable has

crsctl disable has
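

The current autostart setting can be verified with:


crsctl config has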