Tuesday, July 30, 2024

Procedure for migrating an Oracle Database from 12c to 19c on a new server using Data Pump:

Target Server Preparation:

1. Install Oracle Database 19c software on the target server.
2. Create a new Oracle home directory for the 19c database (e.g., /u01/app/oracle/product/19.0.0/dbhome_1).
3. Set environment variables (e.g., ORACLE_HOME, ORACLE_SID, PATH).
4. Create a new database instance (e.g., SID=NEWDB).
5. Configure the listener and tnsnames.ora files.
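
Step 3 above can be sketched as a minimal profile snippet; the paths and SID are the examples from the list, so substitute your own:

```shell
# Environment for the new 19c home -- paths and SID are the examples above
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export ORACLE_SID=NEWDB
export PATH=$ORACLE_HOME/bin:$PATH
```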

Data Pump Migration:

1. On the source server (12c), create a directory for the Data Pump export files (e.g., /u01/export) and a matching directory object.
2. Run the Data Pump export command to export the entire database:

expdp system/password@source_db full=y directory=EXP_DIR dumpfile=full_db_exp.dmp logfile=exp_log.log

3. Transfer the dump file to the target server (19c) and create a directory for it (e.g., /u01/import) with a matching directory object.
4. Run the Data Pump import command to import the entire database:

impdp system/password@new_db full=y directory=IMP_DIR dumpfile=full_db_exp.dmp logfile=imp_log.log

5. Monitor the import process and check the log file for any errors.

Faster Approach using Data Pump:

1. Use the PARALLEL option to speed up the export and import processes (pair it with a %U wildcard in DUMPFILE so each worker can write its own file).
2. Use the COMPRESSION option to reduce the size of the export files.
3. Use the EXCLUDE option to skip objects, such as statistics, that are faster to regenerate on the target.
4. Use the TABLE_EXISTS_ACTION option to handle existing tables in the target database.

Example:

expdp system/password@source_db full=y directory=EXP_DIR dumpfile=full_db_exp_%U.dmp logfile=exp_log.log parallel=4 compression=all exclude=statistics

impdp system/password@new_db full=y directory=IMP_DIR dumpfile=full_db_exp_%U.dmp logfile=imp_log.log parallel=4 table_exists_action=replace

Post-Migration Tasks:

1. Update the database statistics.
2. Rebuild indexes and recompile stored procedures.
3. Test the migrated database thoroughly.

Note: Ensure you have sufficient disk space and memory on both servers to accommodate the Data Pump export and import processes. Keep in mind that COMPRESSION=ALL requires the Advanced Compression option license. Additionally, consider Oracle GoldenGate or other replication tools for a near-zero-downtime migration.
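
The disk-space check in the note above can be scripted ahead of the export. A minimal bash sketch; the 2x safety margin and the default paths are assumptions, not Oracle guidance:

```shell
#!/bin/bash
# Pre-flight check: is there room in the dump directory for the export?

# free_gb DIR -> whole GiB free on the filesystem holding DIR
free_gb() {
  df -Pk "$1" | awk 'NR==2 { printf "%d\n", $4 / 1024 / 1024 }'
}

# need_gb EST_GB -> space to reserve; the 2x margin is an assumption
need_gb() {
  echo $(( $1 * 2 ))
}

EXPORT_DIR=${1:-/tmp}   # substitute the staging area, e.g. /u01/export
EST_DB_GB=${2:-100}     # rough size of the source database in GB

if [ "$(free_gb "$EXPORT_DIR")" -ge "$(need_gb "$EST_DB_GB")" ]; then
  echo "OK: $EXPORT_DIR has enough free space for an estimated ${EST_DB_GB} GB export"
else
  echo "WARN: $EXPORT_DIR has less than $(need_gb "$EST_DB_GB") GB free" >&2
fi
```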

Running DataPump Jobs Results In ORA-31634 ORA-31664


You run a DataPump job (export or import) and it fails with the following errors:

ORA-31634: job already exists
ORA-31664: unable to construct unique job name when defaulted



1. If you are running a single job, there are probably old job names still recorded in the database, and these are creating the conflict. Remove those jobs using the steps in the following note:

Note 336014.1 - How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS ?

2. If you are running many jobs (more than 99) at a time, the problem is that DataPump cannot build more than 99 default job names: the default job_name is system-generated in the form SYS_EXPORT_<mode>_NN, where NN expands to a 2-digit incrementing integer starting at 01, so the largest possible value for NN is 99.

From Oracle Database Utilities guide:

JOB_NAME
Default: system-generated name of the form SYS_EXPORT_<mode>_NN

Purpose
Used to identify the export job in subsequent actions, such as when the ATTACH parameter is used to attach to a job, or to identify the job using the DBA_DATAPUMP_JOBS or USER_DATAPUMP_JOBS views.

Syntax and Description
JOB_NAME=jobname_string

The jobname_string specifies a name of up to 30 bytes for this export job. The bytes must represent printable characters and spaces. If spaces are included, then the name must be enclosed in single quotation marks (for example, 'Thursday Export'). The job name is implicitly qualified by the schema of the user performing the export operation. The job name is used as the name of the master table, which controls the export job.

The default job name is system-generated in the form SYS_EXPORT_<mode>_NN, where NN expands to a 2-digit incrementing integer starting at 01. An example of a default name is 'SYS_EXPORT_TABLESPACE_02'.

Example
The following example shows an export operation that is assigned a job name of exp_job:

#> expdp hr DIRECTORY=data_dump_dir DUMPFILE=export_job.dmp JOB_NAME=export_job NOLOGFILE=YES


To avoid this issue, use the JOB_NAME DataPump parameter to give each job a unique name that will not conflict with any other job name in the database, instead of relying on the default.
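
One way to follow that advice is to generate a timestamped job name in the wrapper script. A sketch; the EXP_ prefix and the name format are my own convention, not an Oracle default:

```shell
# Build a unique Data Pump JOB_NAME; stays well under the 30-byte limit
unique_job_name() {
  # $1 = mode tag, e.g. FULL or SCHEMA
  printf 'EXP_%s_%s\n' "$1" "$(date +%Y%m%d_%H%M%S)"
}

JOB=$(unique_job_name FULL)
echo "Using job name: $JOB"
# then, for example:
#   expdp system@source_db full=y directory=EXP_DIR \
#         dumpfile=${JOB}.dmp logfile=${JOB}.log job_name=$JOB
```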

Java Update in Oracle



  1. Java Version in GoldenGate.

 

GoldenGate 19c comes with an "embedded" JDK version located at $OGG_HOME/jdk. We can upgrade to the latest JDK 1.8.x available for Oracle GoldenGate.

 

To upgrade $OGG_HOME/jdk to the latest JDK:

 

Download  Patch 36270061: Oracle SERVER JRE 8 Update 411

The download contains the file p36270061_180411_Linux-x86-64.zip, which in turn contains server-jre-8u411-linux-x64.tar.gz; inside that archive the JDK sits under jdk1.8.0_411/.

Back up the existing $OGG_HOME/jdk directory, then replace its contents with the files from jdk1.8.0_411.

 

 

 

Another Option :

Java version "1.8.0_411" is part of the latest OGG binaries, so it is recommended to upgrade to the latest OGG version:
Oracle GoldenGate 19.24.0.0.240716 Patch Set Availability ( Doc ID 3037155.1 )

 

 

 

Ref.

Can the Embedded Java Version Part Of OGG Home Be Upgraded ( Doc ID 2359395.1 )
How To Update JDK For Golden Gate $OGG_HOME/jdk? ( Doc ID 2803652.1 )

 

 

 

  2. Java Version in OEM Agent.

 

By default, the EM 13.4 Agent uses Java version "1.8.0_231" (on Linux / Solaris / Windows).

Note: before upgrading the JDK on agents, ensure that the OMS (OEM) JDK has already been updated.

 

Apply RU 19 (Patch 34692141) or a later version to the Agent home before upgrading the JDK on the agent.

 

 

For Linux

1. Download Patch 35009178 from JDK 8 tab of Supported Java SE Downloads on MOS (Doc ID 1439822.2)

File Name: p35009178_180371_Linux-x86-64.zip

2. Stop Agent

<Agent_Base>/agent_inst/bin/emctl stop agent

3. Go to <AGENT_ORACLE_HOME>/oracle_common/

Example: <Agent_Base>/agent_13.4.0.0.0/oracle_common/

4. Take backup of jdk directory.

$mv jdk jdk_bak

5. Copy p35009178_180371_Linux-x86-64.zip to $AGENT_ORACLE_HOME/oracle_common/

6. Unzip p35009178_180371_Linux-x86-64.zip and check the readme.txt for the instructions to install it.

$tar zxvf jdk-8u371-linux-x64.tar.gz

It will create jdk1.8.0_371 directory. Rename it to jdk.

7. Confirm the version of the JDK by checking:

<AGENT_ORACLE_HOME>/oracle_common/jdk/bin/java -version

8. Start Agent

<Agent_Base>/agent_inst/bin/emctl start agent
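
Steps 4-6 above can be scripted. A sketch of the backup/extract/rename sequence; the function name is mine, and in practice you would point it at the real <AGENT_ORACLE_HOME>/oracle_common directory:

```shell
#!/bin/bash
# swap_jdk COMMON_DIR TARBALL: back up the old jdk directory, extract the
# new JDK tarball, and rename the versioned directory to "jdk".
swap_jdk() {
  local common=$1 tarball=$2
  cd "$common" || return 1
  mv jdk jdk_bak                 # step 4: keep the old JDK as jdk_bak
  tar zxf "$tarball"             # step 6: creates e.g. jdk1.8.0_371
  mv jdk1.8.0_* jdk              # rename so tools keep finding "jdk"
}
```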

 

 

 

 

Ref.

EM 13.4: How to Use the Latest Certified JDK 8 Update with 13.4 Agents (Doc ID 2659208.1)

 

 

  3. Java Version in RDBMS Oracle Home.

 

JDK and PERL Patches for Oracle Database Home and Grid Home (Doc ID 2584628.1)


Saturday, July 27, 2024

Oracle Database Health Check

#Oracle Database Health Check 

ps -ef | grep pmon

ps -ef | grep tns

!sq

set lines 300;

SELECT DATABASE_ROLE, DB_UNIQUE_NAME, OPEN_MODE,PROTECTION_MODE, PROTECTION_LEVEL, SWITCHOVER_STATUS FROM V$DATABASE;

archive log list

select flashback_on from v$database;


df -h


cat /etc/*release*

uname -a

prtdiag -v


sar 1 5 

iostat -xnctz 5



SELECT name FROM v$controlfile;

set pagesize 40;
set lines 300;
col MEMBER format a60;
select GROUP#,STATUS,MEMBER from v$logfile;

SELECT name FROM v$controlfile;

SELECT value FROM v$parameter WHERE name='audit_file_dest';

select owner, object_type, count(*) as total from dba_objects where status != 'VALID' group by owner, object_type order by owner, object_type;

set lines 300;
set pagesize 120;
select start_time,OUTPUT_BYTES_PER_SEC ,ELAPSED_SECONDS, INPUT_TYPE, STATUS from v$rman_backup_job_details;

set colsep |
set linesize 100 pages 100 trimspool on numwidth 14 
col name format a25
col owner format a15 
col "Used (GB)" format a15
col "Free (GB)" format a15 
col "(Used) %" format a15 
col "Size (GB)" format a15 
SELECT d.status "Status", d.tablespace_name "Name", 
 TO_CHAR(NVL(a.bytes / 1024 / 1024 /1024, 0),'99,999,990.90') "Size (GB)", 
 TO_CHAR(NVL(a.bytes - NVL(f.bytes, 0), 0)/1024/1024 /1024,'99999999.99') "Used (GB)", 
 TO_CHAR(NVL(f.bytes / 1024 / 1024 /1024, 0),'99,999,990.90') "Free (GB)", 
 TO_CHAR(NVL((a.bytes - NVL(f.bytes, 0)) / a.bytes * 100, 0), '990.00') "(Used) %"
 FROM sys.dba_tablespaces d, 
 (select tablespace_name, sum(bytes) bytes from dba_data_files group by tablespace_name) a, 
 (select tablespace_name, sum(bytes) bytes from dba_free_space group by tablespace_name) f WHERE 
 d.tablespace_name = a.tablespace_name(+) AND d.tablespace_name = f.tablespace_name(+) AND NOT 
 (d.extent_management like 'LOCAL' AND d.contents like 'TEMPORARY') 
UNION ALL 
SELECT d.status 
 "Status", d.tablespace_name "Name", 
 TO_CHAR(NVL(a.bytes / 1024 / 1024 /1024, 0),'99,999,990.90') "Size (GB)", 
 TO_CHAR(NVL(t.bytes,0)/1024/1024 /1024,'99999999.99') "Used (GB)",
 TO_CHAR(NVL((a.bytes -NVL(t.bytes, 0)) / 1024 / 1024 /1024, 0),'99,999,990.90') "Free (GB)", 
 TO_CHAR(NVL(t.bytes / a.bytes * 100, 0), '990.00') "(Used) %" 
 FROM sys.dba_tablespaces d, 
 (select tablespace_name, sum(bytes) bytes from dba_temp_files group by tablespace_name) a, 
 (select tablespace_name, sum(bytes_cached) bytes from v$temp_extent_pool group by tablespace_name) t 
 WHERE d.tablespace_name = a.tablespace_name(+) AND d.tablespace_name = t.tablespace_name(+) AND 
 d.extent_management like 'LOCAL' AND d.contents like 'TEMPORARY'; 
 
 
 
#Standby database

ps -ef | grep pmon

ps -ef | grep tns

!sq

set lines 300;
 
SELECT DATABASE_ROLE, DB_UNIQUE_NAME, OPEN_MODE,PROTECTION_MODE, PROTECTION_LEVEL, SWITCHOVER_STATUS FROM V$DATABASE;

 
SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied",
(ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference" FROM (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG
WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#))
ARCH,(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN
(SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;




select instance_name, to_char(startup_time,'mm/dd/yyyy hh24:mi:ss') as startup_time from v$instance;

Friday, July 26, 2024

Oracle Database 19c Maintenance and Monitoring Standard Operating Procedure (SOP)




Purpose:

Ensure the reliability, performance, and security of the Oracle Database 19c environment.

Scope:

This SOP applies to all Oracle Database 19c instances.

Responsibilities:

- Database Administrators (DBAs)

- System Administrators

Maintenance Tasks:

- Daily:

    - Check database alert logs for errors.

    - Verify backup completion.

    - Monitor system resources (CPU, memory, disk space).

- Weekly:

    - Run database backups (RMAN).

    - Perform database maintenance tasks (e.g., rebuild indexes, update statistics).

    - Review database performance metrics.

- Monthly:

    - Perform operating system updates.

    - Review database security settings.

Monitoring Tasks:

- Database Performance:

    - Monitor CPU utilization.

    - Monitor disk I/O.

    - Monitor memory utilization.

- Database Logs:

    - Monitor alert logs.

    - Monitor trace files.

- Backup Monitoring:

    - Verify backup completion.

    - Verify backup integrity.

Log and Trace Purging:

- ADRCI Commands:

    - adrci exec="SHOW ALERT" to view alert logs.

    - adrci exec="SHOW TRACE" to view trace files.

    - adrci exec="PURGE ALERT" to purge alert logs.

    - adrci exec="PURGE TRACE" to purge trace files.

Backup Script:

```bash
#!/bin/bash
rman target=/ <<EOF
BACKUP DATABASE;
EOF
```

Schedule:

Daily, weekly, and monthly tasks should be scheduled using a scheduling tool (e.g., cron).
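
As an illustration, hypothetical crontab entries for the oracle user; the script names, paths, and times are placeholders, not part of any standard:

```shell
# m  h  dom mon dow  command                                   -- examples only
00 02  *   *   *     /home/oracle/scripts/check_alert_log.sh   # daily checks
00 03  *   *   0     /home/oracle/scripts/rman_weekly_backup.sh # weekly, Sunday
00 04  1   *   *     /home/oracle/scripts/monthly_review.sh    # monthly, 1st
```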

Documentation:

- Database documentation should include:

    - Database configuration.

    - Backup schedules.

    - Maintenance schedules.

    - Performance metrics.

How to clear flashback log in oracle database

Here are the steps to clear flashback logs in Oracle without causing downtime:

- Check the size of the recovery area with the command "show parameter db_recovery"

- Check how much space is used and free with the command "select space_used/(1024*1024*1024), space_limit/(1024*1024*1024) from v$recovery_file_dest"

- Check the parameter "log_archive_min_succeed_dest" with the command "show parameter log_archive_min_succeed_dest"

- Set "log_archive_dest_1" to a different location with the command "alter system set log_archive_dest_1='LOCATION=/home/oracle/temp_archivelog' scope=memory"

- Defer the archive destination that points to the recovery area (here dest_10) with the command "alter system set log_archive_dest_state_10=defer scope=memory"

- Switch logfiles with the command "alter system switch logfile"

- Set "db_recovery_file_dest_size" to match the actual usage with the command "alter system set db_recovery_file_dest_size=55G scope=memory"

- Check for errors, manually remove the archived log files from the OS, then run "RMAN> crosscheck archivelog all" and "RMAN> delete expired archivelog all"

- Put the archive destinations back with the commands "alter system set log_archive_dest_1='' scope=memory" and "alter system set log_archive_dest_10='LOCATION=USE_DB_RECOVERY_FILE_DEST' scope=memory"

- Make sure the fast recovery area has enough space with the command "alter system set db_recovery_file_dest_size=50G scope=memory"

Procedure to manage Oracle Physical Standby Database using DGMGRL for Oracle 19c



*Step 1: Create a Standby Database*


- Create a physical standby database using RMAN duplicate command.

- Ensure the standby database is in recovery mode.


*Step 2: Configure DGMGRL*


- Connect to the primary database using DGMGRL.

- Configure the standby database using the following commands:

```

DGMGRL> CREATE CONFIGURATION 'standby_config' AS PRIMARY DATABASE IS 'primary_db' CONNECT IDENTIFIER IS primary_db;

DGMGRL> ADD DATABASE 'standby_db' AS CONNECT IDENTIFIER IS standby_db MAINTAINED AS PHYSICAL;

```

*Step 3: Enable the Configuration*


- Enable the broker configuration using the following command:

```

DGMGRL> ENABLE CONFIGURATION;

```

*Step 4: Monitor Standby Database*


- Monitor the standby database using the following command:

```

DGMGRL> SHOW DATABASE 'standby_db';

```

*Step 5: Perform Switchover*


- Perform a switchover to the standby database using the following command:

```

DGMGRL> SWITCHOVER TO 'standby_db';

```

*Step 6: Perform Failover*


- Perform a failover to the standby database using the following command:

```

DGMGRL> FAILOVER TO 'standby_db';

```

*Step 7: Reinstate Primary Database*


- Reinstate the primary database using the following command:

```

DGMGRL> REINSTATE DATABASE 'primary_db';

```

*Step 8: Remove Standby Database*


- Remove the standby database using the following command:

```

DGMGRL> REMOVE DATABASE 'standby_db';

```

*Additional Commands*


- To check the configuration: `DGMGRL> SHOW CONFIGURATION;`

- To check the database status: `DGMGRL> SHOW DATABASE <db_name>;`

- To validate a database before switchover: `DGMGRL> VALIDATE DATABASE 'standby_db';`


Note: Ensure you have the necessary privileges and credentials to perform these operations.


Also, please note that this is a general procedure and you should consult the Oracle documentation and your organization's policies and procedures for specific instructions and guidelines on managing Oracle Physical Standby Database using DGMGRL.

Oracle “SYS” Credentials Update

 To change the Oracle sys password in a primary database and physical standby database, follow these steps:


*Primary Database:*


1. Connect to the primary database as sysdba:

```

sqlplus / as sysdba

```

2. Change the sys password:

```

ALTER USER SYS IDENTIFIED BY <new_password>;

```

*Physical Standby Database:*


The SYS password cannot be changed with ALTER USER on a physical standby (the database is mounted or open read-only). Instead, copy the password file from the primary to each standby host:

```

scp $ORACLE_HOME/dbs/orapw$ORACLE_SID standby_host:$ORACLE_HOME/dbs/

```

Note: In a multitenant environment, add the `CONTAINER=ALL` clause to the ALTER USER command on the primary so the password change applies to all containers (root and PDBs).


*Synchronize Passwords:*


Starting with Oracle Database 12.2, a password file change made on the primary is automatically propagated to physical standby databases through the redo stream, so no manual synchronization is needed while redo apply is running. On earlier releases, re-copy the password file to each standby whenever the SYS password changes.


*Verify Password Change:*


Verify that the sys password has been changed successfully by connecting to both databases with the new password:

```

sqlplus sys/<new_password>@<primary_db>

sqlplus sys/<new_password>@<physical_standby_db>

```

Remember to update any scripts, applications, or tools that use the sys password to connect to the databases.

Migrating Fusion Middleware from Solaris to Linux

 Migrating Fusion Middleware from Solaris to Linux involves several steps. Here's a general outline:


*Exporting the domain and configuration from Solaris:*


1. *Backup the domain*: Use the WebLogic Server Console or WLST commands to backup the entire domain, including all configurations, applications, and security settings.

2. *Export the domain configuration*: Use the `pack` command to export the domain configuration to a jar file.

    - `pack -domain=domain_name -template=domain_template.jar`

3. *Export individual components*: Export individual components like SOA Suite, Oracle Service Bus (OSB), or Oracle Enterprise Manager (EM) using their respective tools or WLST commands.

4. *Export database schemas*: Export the database schemas used by Fusion Middleware components using the database vendor's tools (e.g., Oracle Data Pump).


*Migrating to Linux:*


1. *Install Linux*: Install the chosen Linux distribution on the target server.

2. *Install Fusion Middleware*: Install the same version of Fusion Middleware on Linux as was on Solaris.

3. *Create a new domain*: Create a new domain on Linux using the WebLogic Server Configuration Wizard.

4. *Import the domain configuration*: Use the `unpack` command to import the domain configuration from the jar file.

    - `unpack -domain=domain_name -template=domain_template.jar`

5. *Configure individual components*: Configure individual components like SOA Suite, OSB, or EM using their respective tools or WLST commands.

6. *Update database connections*: Update the database connections and credentials in the Fusion Middleware configuration.

7. *Test and validate*: Perform thorough testing to ensure all components and applications are functioning correctly.


Additional tips:


- Use the same version of Fusion Middleware on Linux as was on Solaris to ensure compatibility.

- Ensure the Linux environment is properly configured and patched.

- Use Oracle's official documentation and support resources for guidance specific to your Fusion Middleware version and environment.

- Consider engaging Oracle consulting services or a certified partner for complex migrations.


The supported way to move a domain configuration is the pack/unpack pair (under $ORACLE_HOME/oracle_common/common/bin):


- `pack`: Exports a domain (or a managed-server template) to a template jar file.

- `unpack`: Recreates the domain from a template jar file on the target host.


For scripted, offline domain edits, WLST offers commands such as `readDomain` and `updateDomain`.


Remember to consult Oracle's official documentation and support resources for more detailed instructions and guidance specific to your Fusion Middleware version and environment.

Migrating Oracle E-Business Suite (EBS) from Solaris to Linux

 Migrating Oracle E-Business Suite (EBS) from Solaris to Linux involves several steps. Here's a general outline:


*Pre-Migration Steps:*


1. *Assess and plan*:

    - Identify the EBS version, modules, and customizations.

    - Determine the Linux distribution and version to use.

    - Plan the migration approach, timeline, and resources.

2. *Backup and archive*:

    - Backup the entire EBS environment, including databases, files, and configurations.

    - Archive the backups for future reference.


*Migration Steps:*


1. *Install Linux*:

    - Install the chosen Linux distribution on the target server.

2. *Install Oracle Database*:

    - Install the same version of Oracle Database on Linux as was on Solaris.

3. *Install EBS software*:

    - Install the same version of EBS on Linux as was on Solaris.

4. *Configure EBS*:

    - Configure EBS using the Rapid Install wizard or manual configuration.

5. *Migrate databases*:

    - Migrate the EBS databases from Solaris to Linux using Oracle Data Pump or RMAN.

6. *Migrate file system*:

    - Migrate the EBS file system from Solaris to Linux using tools like rsync or scp.

7. *Update configuration files*:

    - Update the EBS configuration files to reflect the new Linux environment.

8. *Test and validate*:

    - Perform thorough testing to ensure EBS is functioning correctly.


*Post-Migration Steps:*


1. *Cleanup and optimization*:

    - Remove unnecessary files and configurations.

    - Optimize the EBS configuration for the Linux environment.

2. *Final testing and validation*:

    - Perform final testing to ensure EBS is functioning as expected.


*Step-by-Step Procedure:*


*Step 1: Prepare the Linux Environment*


- Install the chosen Linux distribution on the target server.

- Install required packages and updates.


*Step 2: Install Oracle Database*


- Download the Oracle Database installation media.

- Install Oracle Database using the graphical or silent installation method.


*Step 3: Install EBS Software*


- Download the EBS installation media.

- Install EBS using the Rapid Install wizard or manual configuration.


*Step 4: Configure EBS*


- Configure EBS using the Rapid Install wizard or manual configuration.

- Update the EBS configuration files to reflect the new Linux environment.


*Step 5: Migrate Databases*


- Use Oracle Data Pump or RMAN to migrate the EBS databases from Solaris to Linux.


*Step 6: Migrate File System*


- Use tools like rsync or scp to migrate the EBS file system from Solaris to Linux.


*Step 7: Update Configuration Files*


- Update the EBS configuration files to reflect the new Linux environment.


*Step 8: Test and Validate*


- Perform thorough testing to ensure EBS is functioning correctly.


*Step 9: Cleanup and Optimization*


- Remove unnecessary files and configurations.

- Optimize the EBS configuration for the Linux environment.


*Step 10: Final Testing and Validation*


- Perform final testing to ensure EBS is functioning as expected.


Remember to consult Oracle's official documentation and support resources for more detailed instructions and guidance specific to your EBS version and environment.

NTP

 When enabling NTP (Network Time Protocol) on a RAC (Real Application Clusters) database running on Solaris 11, downtime is not strictly necessary, but it's highly recommended to minimize potential issues. Here's why:


1. *Node synchronization*: NTP ensures all nodes in the RAC cluster have synchronized clocks. If you enable NTP without downtime, the nodes might not sync immediately, potentially causing issues with database operations.

2. *Clock adjustments*: When NTP is enabled, the system clock might be adjusted to match the reference clock. This adjustment can cause issues with database processes, especially if they're sensitive to time changes.

3. *RAC node evictions*: If the clock adjustments are significant, RAC nodes might be evicted from the cluster, leading to downtime and potential data inconsistencies.


To minimize risks, consider the following:


1. *Schedule downtime*: Plan a maintenance window to enable NTP, ensuring all nodes are updated and synchronized simultaneously.

2. *Use clock slewing*: Configure NTP to slew (gradually adjust) the system clock rather than step it, so large instantaneous time changes that could destabilize the cluster are avoided.

3. *Monitor the cluster*: Closely monitor the RAC cluster during and after NTP enablement to quickly address any issues that arise.


In summary, while downtime is not mandatory, it's highly recommended to ensure a smooth transition when enabling NTP on a RAC database running on Solaris 11.
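
On Solaris 11, NTP is managed through SMF rather than by editing ntpd flags directly. A sketch of enabling the service with clock slewing (which avoids the large clock steps discussed above); the property name should be verified against your Solaris release:

```shell
# Configure ntp to slew (gradually adjust) rather than step the clock,
# then enable the service; run as root on each RAC node.
svccfg -s svc:/network/ntp:default setprop config/slew_always = true
svcadm refresh svc:/network/ntp:default
svcadm enable svc:/network/ntp:default
```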

Purge ADR Contents

#!/bin/bash
# Purge ADR contents (adr_purge.sh)
# 00 05 * * 0 adr_purge.sh
# Add the above line with `crontab -e` to the oracle user's cron
 
ALERT_RET="129600" # 90 Days
INCIDENT_RET="43200" # 30 Days
TRACE_RET="43200" # 30 Days
CDUMP_RET="43200" # 30 Days
HM_RET="43200" # 30 Days
 
echo "INFO: adrci purge started at `date`"
adrci exec="show homes"|grep -v : | while read file_line
do
echo "INFO: adrci purging diagnostic destination " $file_line
echo "INFO: purging ALERT older than 90 days"
adrci exec="set homepath $file_line;purge -age $ALERT_RET -type ALERT"
echo "INFO: purging INCIDENT older than 30 days"
adrci exec="set homepath $file_line;purge -age $INCIDENT_RET -type INCIDENT"
echo "INFO: purging TRACE older than 30 days"
adrci exec="set homepath $file_line;purge -age $TRACE_RET -type TRACE"
echo "INFO: purging CDUMP older than 30 days"
adrci exec="set homepath $file_line;purge -age $CDUMP_RET -type CDUMP"
echo "INFO: purging HM older than 30 days"
adrci exec="set homepath $file_line;purge -age $HM_RET -type HM"
echo ""
echo ""
done
echo
echo "INFO: adrci purge finished at `date`"

RAC Health Checks


1) RAC Node Apps Health Checks :
[grid@rac19cdb01 ~]$ srvctl status nodeapps
VIP rac19cdb01-vip is enabled
VIP rac19cdb01-vip is running on node: rac19cdb01
VIP rac19cdb02-vip is enabled
VIP rac19cdb02-vip is running on node: rac19cdb02
Network is enabled
Network is running on node: rac19cdb01
Network is running on node: rac19cdb02
GSD is disabled
GSD is not running on node: rac19cdb01
GSD is not running on node: rac19cdb02
ONS is enabled
ONS daemon is running on node: rac19cdb01
ONS daemon is running on node: rac19cdb02
[grid@rac19cdb01 ~]$



2) ASM Status : 
[grid@rac19cdb01 ~]$ srvctl status asm
ASM is running on rac19cdb02,rac19cdb01
[grid@rac19cdb01 ~]$ srvctl status asm -n rac19cdb01
ASM is running on rac19cdb01
[grid@rac19cdb01 ~]$ srvctl status asm -n rac19cdb02
ASM is running on rac19cdb02
[grid@rac19cdb01 ~]$



3) Database Status:
[grid@rac19cdb01 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node rac19cdb01
Instance racdb2 is running on node rac19cdb02
[grid@rac19cdb01 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node rac19cdb01
[grid@rac19cdb01 ~]$ srvctl status instance -d racdb -i racdb2
Instance racdb2 is running on node rac19cdb02
[grid@rac19cdb01 ~]$


4) CRS Status:
[grid@rac19cdb01 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@rac19cdb01 ~]$


5) Cluster Status:
[grid@rac19cdb01 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@rac19cdb01 ~]$


6) RAC High Availability Services Status:
[grid@rac19cdb01 ~]$ crsctl check has
CRS-4638: Oracle High Availability Services is online
[grid@rac19cdb01 ~]$


7) Database Services Status : 
[grid@rac19cdb01 ~]$ srvctl status service -d racdb
Service racdb_service is running on instance(s) racdb1,racdb2
[grid@rac19cdb01 ~]$


8) Listener Status: 
[grid@rac19cdb01 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac19cdb02,rac19cdb01
[grid@rac19cdb01 ~]$


9) SCAN VIP Status : 
[grid@rac19cdb01 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac19cdb02
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node rac19cdb01
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node rac19cdb01
[grid@rac19cdb01 ~]$


10) Scan Listener Status : 
[grid@rac19cdb01 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node rac19cdb02
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node rac19cdb01
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node rac19cdb01
[grid@rac19cdb01 ~]$


11) Server Status : 
[grid@rac19cdb01 ~]$ srvctl status server -n rac19cdb01 -a
Server name: rac19cdb01
Server state: ONLINE
Server active pools: Generic ora.racdb ora.racdb_racdb_service
Server state details:
[grid@rac19cdb01 ~]$ srvctl status server -n rac19cdb02 -a
Server name: rac19cdb02
Server state: ONLINE
Server active pools: Generic ora.racdb ora.racdb_racdb_service
Server state details:
[grid@rac19cdb01 ~]$


12) CVU Status: 
[grid@rac19cdb01 ~]$ srvctl status cvu
CVU is enabled and running on node rac19cdb01
[grid@rac19cdb01 ~]$



13) GNS Status :  ( As root user)
[root@rac19cdb01 ~]#/u01/app/oracle/product/19.0.0/grid/bin/srvctl status gns
PRCS-1065 : GNS is not configured.
[root@rac19cdb01 ~]#


14) Serverpool Details : 
[grid@rac19cdb01 ~]$ srvctl status srvpool
Server pool name: Free
Active servers count: 0
Server pool name: Generic
Active servers count: 2
[grid@rac19cdb01 ~]$


15) Cluster Interconnect Details : 
[grid@rac19cdb01 ~]$ oifcfg getif
eth0  192.168.1.0  global  public
eth1  192.168.50.0  global  cluster_interconnect


16) OCR Checks:
[grid@rac19cdb01 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3224
         Available space (kbytes) :     258896
         ID                       : 1449084471
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
[grid@rac19cdb01 ~]$



17) OCR Backups:
[grid@rac19cdb01 ~]$ ocrconfig -showbackup
rac19cdb01     2013/09/23 11:46:41    /u01/app/oracle/product/19.0.0/grid/cdata/orrcdbdv-clstr/backup00.ocr
rac19cdb01     2013/09/23 04:19:26    /u01/app/oracle/product/19.0.0/grid/cdata/orrcdbdv-clstr/backup01.ocr
rac19cdb01     2013/09/23 00:19:25    /u01/app/oracle/product/19.0.0/grid/cdata/orrcdbdv-clstr/backup02.ocr
rac19cdb01     2013/09/22 08:19:24    /u01/app/oracle/product/19.0.0/grid/cdata/orrcdbdv-clstr/day.ocr
rac19cdb01     2013/09/10 08:18:54    /u01/app/oracle/product/19.0.0/grid/cdata/orrcdbdv-clstr/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available
[grid@rac19cdb01 ~]$




18) Voting Disk Status: 
[grid@rac19cdb01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   9bc0828570854fa5bff3221500a1fc63 (ORCL:CRSVOL1) [OCR_VOTE]
Located 1 voting disk(s).
[grid@rac19cdb01 ~]$



19) Node apps config details : 
[grid@rac19cdb01 ~]$ srvctl config nodeapps  -a -g -s
Network exists: 1/192.168.1.0/255.255.255.0/eth0, type static
VIP exists: /rac19cdb01-vip/192.168.1.73/192.168.1.0/255.255.255.0/eth0, hosting node rac19cdb01
VIP exists: /rac19cdb02-vip/192.168.1.74/192.168.1.0/255.255.255.0/eth0, hosting node rac19cdb02
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
[grid@rac19cdb01 ~]$



20) Diskgroups Status:
[grid@rac19cdb01 ~]$ crs_stat -t | grep -i dg
ora....DATA.dg ora....up.type ONLINE    ONLINE    rac19cdb01
ora.FLASH.dg   ora....up.type ONLINE    ONLINE    rac19cdb01
ora....VOTE.dg ora....up.type ONLINE    ONLINE    rac19cdb01
[grid@rac19cdb01 ~]$

CRSCTL Commands

Oracle CRSCTL Commands List
The CRSCTL utility allows you to administer cluster resources. Here are a few quick commands to help you administer an Oracle RAC cluster!
Check Cluster Status 
Check Cluster Nodes 
Stop Grid Cluster 
Start Grid Cluster 



Check Cluster Status


Status of upper & lower stack


./crsctl check crs


Status of upper stack


./crsctl check cluster


Cluster status on all nodes


./crsctl check cluster -all


Cluster status on specific node


./crsctl check cluster -n rac2



Check Cluster Nodes


Check cluster services in table format


./crsctl status resource -t


Checking status of clusterware nodes / services


./crsctl status server -f


Check cluster nodes


olsnodes -n

oraracn1        1

oraracn2        2



Stop Grid Cluster


Stop HAS on current node


./crsctl stop has


Stop HAS on remote node


./crsctl stop has -n rac2


Stop entire cluster on all nodes


./crsctl stop cluster -all


Stop cluster (CRS + HAS) on remote node


./crsctl stop cluster -n rac2



Start Grid Cluster


Start HAS on current node


./crsctl start has 


Start HAS on remote node


./crsctl start has -n rac2


Start entire cluster on all nodes


./crsctl start cluster -all


Start cluster (CRS + HAS) on remote node


./crsctl start cluster -n rac2


ENABLE / DISABLE CLUSTER AUTO START


crsctl enable has

crsctl disable has
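The per-node checks above can be scripted by looping over the node list that olsnodes returns. A minimal sketch, using a saved copy of the sample `olsnodes -n` output shown earlier (on a live cluster you would substitute `"$(olsnodes -n)"` and actually run the crsctl command instead of echoing it):

```shell
#!/bin/bash
# Sketch: build a per-node "crsctl check cluster -n <node>" command for
# every node reported by olsnodes. olsnodes_output holds the sample
# output from above; replace it with "$(olsnodes -n)" on a real cluster.
olsnodes_output="oraracn1        1
oraracn2        2"

echo "$olsnodes_output" | awk '{print $1}' | while read -r node; do
    echo "./crsctl check cluster -n $node"
done
```

The first awk column is the node name; the second (the node number) is dropped.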

Thursday, July 25, 2024

Back up and recover Oracle Database on an Azure Linux VM by using Azure Files

Back up and recover Oracle Database on an Azure Linux VM by using Azure Files - Azure Virtual Machines | Microsoft Learn

RMAN Backup Script

## Mohamed Fowjil

## RMAN Database backup


#!/bin/bash
# Configuration
ORACLE_HOME=/u01/app/oracle/product/19c/db_1
ORACLE_SID=DBSPRD
RMAN_LOG_DIR=/home/oracle/logs
RMAN_BACKUP_BASE_DIR=/BACKUP/DBSPRD/
##EMAIL_RECIPIENT="fowjil@abc.ae"
BACKUP_TYPE=$1
DATE_FORMAT=$(date +%Y%m%d)
CURRENT_BACKUP_DIR="$RMAN_BACKUP_BASE_DIR/$DATE_FORMAT"
##LOG_FILE="$RMAN_LOG_DIR/rman_backup_${DATE_FORMAT}.log"
LOG_FILE="$RMAN_LOG_DIR/rman_backup_${BACKUP_TYPE}_${DATE_FORMAT}.log"
PARALLELISM=6
MAXPIECESIZE=25G

# Export Oracle environment variables
export ORACLE_HOME ORACLE_SID
# Validate backup type
if [[ "$BACKUP_TYPE" != "full" && "$BACKUP_TYPE" != "INC0" && "$BACKUP_TYPE" != "INC1" && "$BACKUP_TYPE" != "archivelog" ]]; then
    echo "Invalid backup type specified. Use 'full', 'INC0', 'INC1', or 'archivelog'."
    exit 1
fi
# Create necessary directories if they don't exist
mkdir -p $RMAN_LOG_DIR $CURRENT_BACKUP_DIR
# Function to send email
send_email() {
    SUBJECT=$1
    BODY=$2
    echo -e "$BODY" | mailx -s "$SUBJECT" $EMAIL_RECIPIENT
 }
# Function to perform RMAN backup based on type
perform_rman_backup() {
    case $BACKUP_TYPE in
        full)
            BACKUP_CMD="BACKUP DATABASE TAG 'FULL_BACKUP'; BACKUP ARCHIVELOG ALL FORMAT '${CURRENT_BACKUP_DIR}/ARC_%d_%Y%M%D_%U.bkp' DELETE ALL INPUT;"
            ;;
        INC0)
            BACKUP_CMD="BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'INC0_BACKUP'; BACKUP ARCHIVELOG ALL FORMAT '${CURRENT_BACKUP_DIR}/ARC_%d_%Y%M%D_%U.bkp' DELETE ALL INPUT;"
            ;;
        INC1)
            BACKUP_CMD="BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INC1_BACKUP'; BACKUP ARCHIVELOG ALL FORMAT '${CURRENT_BACKUP_DIR}/ARC_%d_%Y%M%D_%U.bkp' DELETE ALL INPUT;"
            ;;
        archivelog)
            BACKUP_CMD="BACKUP ARCHIVELOG ALL FORMAT '${CURRENT_BACKUP_DIR}/ARC_%d_%Y%M%D_%U.bkp' DELETE ALL INPUT;"
            ;;
    esac
    $ORACLE_HOME/bin/rman target / <<EOF > $LOG_FILE
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE MAXSETSIZE TO UNLIMITED;
CONFIGURE DEVICE TYPE DISK PARALLELISM $PARALLELISM;
RUN {
    ALLOCATE CHANNEL C1 TYPE DISK FORMAT '${CURRENT_BACKUP_DIR}/%d_%Y%M%D_%U.bkp' MAXPIECESIZE ${MAXPIECESIZE};
    ALLOCATE CHANNEL C2 TYPE DISK FORMAT '${CURRENT_BACKUP_DIR}/%d_%Y%M%D_%U.bkp' MAXPIECESIZE ${MAXPIECESIZE};
    ALLOCATE CHANNEL C3 TYPE DISK FORMAT '${CURRENT_BACKUP_DIR}/%d_%Y%M%D_%U.bkp' MAXPIECESIZE ${MAXPIECESIZE};
    ALLOCATE CHANNEL C4 TYPE DISK FORMAT '${CURRENT_BACKUP_DIR}/%d_%Y%M%D_%U.bkp' MAXPIECESIZE ${MAXPIECESIZE};
    ALLOCATE CHANNEL C5 TYPE DISK FORMAT '${CURRENT_BACKUP_DIR}/%d_%Y%M%D_%U.bkp' MAXPIECESIZE ${MAXPIECESIZE};
    ALLOCATE CHANNEL C6 TYPE DISK FORMAT '${CURRENT_BACKUP_DIR}/%d_%Y%M%D_%U.bkp' MAXPIECESIZE ${MAXPIECESIZE};
        CROSSCHECK BACKUP;
        CROSSCHECK ARCHIVELOG ALL;
        SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
        $BACKUP_CMD
        BACKUP FORMAT '${CURRENT_BACKUP_DIR}/ctrlfile_%d_%Y%M%D_%U.bkp' CURRENT CONTROLFILE;
        BACKUP SPFILE FORMAT '${CURRENT_BACKUP_DIR}/spfile_%d_%Y%M%D_%U.bkp' ;
        REPORT OBSOLETE;
        DELETE NOPROMPT OBSOLETE;
    RELEASE CHANNEL C1 ;
    RELEASE CHANNEL C2 ;
    RELEASE CHANNEL C3 ;
    RELEASE CHANNEL C4 ;
    RELEASE CHANNEL C5 ;
    RELEASE CHANNEL C6 ;
}
EOF
}
# Perform the backup
perform_rman_backup
# Function to delete old backups
cleanup_old_empty_dir() {
    find $RMAN_BACKUP_BASE_DIR -maxdepth 1 -type d -empty -delete;
}
# Check RMAN backup status
if grep -qE "ORA-|RMAN-" "$LOG_FILE"; then
##    send_email "RMAN Backup Failed" "RMAN $BACKUP_TYPE backup failed. Check the log file for details: $LOG_FILE"
    exit 1
else
 ##   send_email "RMAN Backup Successful" "RMAN $BACKUP_TYPE backup completed successfully. Check the log file for details: $LOG_FILE"
        # Delete old backups empty directories if current backup is successful
    cleanup_old_empty_dir
fi

-------------------------------------------------------------------------------------------------------------

Usage:
sh rman_backup.sh full
sh rman_backup.sh archivelog
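The script decides success or failure by grepping the RMAN log for error prefixes. A standalone sketch of that post-run check (the log contents below are hypothetical); scanning for both ORA- and RMAN- prefixes catches RMAN client errors as well as database errors:

```shell
#!/bin/bash
# Sketch of the backup script's post-run check: scan an RMAN log for
# ORA- / RMAN- error prefixes and report failure if any are present.
check_rman_log() {
    local log_file=$1
    if grep -qE 'ORA-|RMAN-' "$log_file"; then
        echo "FAILED"
        return 1
    fi
    echo "OK"
}

# Demo against a temporary log with hypothetical contents
tmp_log=$(mktemp)
echo "Starting backup at 2024-07-25 02:00" > "$tmp_log"
check_rman_log "$tmp_log"                               # prints OK
echo "RMAN-06059: expected archived log not found" >> "$tmp_log"
check_rman_log "$tmp_log"                               # prints FAILED
rm -f "$tmp_log"
```

Note that RMAN warnings also carry the RMAN- prefix, so this stricter pattern may flag runs that completed with warnings only.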




User DDL

set termout off
set linesize 19000
set pages 50000
set feedback off
set trim on
set echo off
set serveroutput on
set long 99999999
set longchunksize 20000 pagesize 0
column Extracted_DDL for a1000
EXEC DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'PRETTY',TRUE);
EXEC DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',TRUE);
select (case
when ((select count(*)
from dba_users
where username = '$dbuser' and profile <> 'DEFAULT') > 0)
then chr(10)||' -- Note: Profile'||(select dbms_metadata.get_ddl('PROFILE', u.profile) AS ddl from dba_users u where u.username = '$dbuser')
else to_clob (chr(10)||' -- Note: Default profile, no need to create!')
end ) from dual
UNION ALL
select (case
when ((select count(*)
from dba_users
where username = '$dbuser') > 0)
then ' -- Note: Create user statement'||dbms_metadata.get_ddl ('USER', '$dbuser')
else to_clob (chr(10)||' -- Note: User not found!')
end ) Extracted_DDL from dual
UNION ALL
select (case
when ((select count(*)
from dba_ts_quotas
where username = '$dbuser') > 0)
then ' -- Note: TBS quota'||dbms_metadata.get_granted_ddl( 'TABLESPACE_QUOTA', '$dbuser')
else to_clob (chr(10)||' -- Note: No TS Quotas found!')
end ) from dual
UNION ALL
select (case
when ((select count(*)
from dba_role_privs
where grantee = '$dbuser') > 0)
then ' -- Note: Roles'||dbms_metadata.get_granted_ddl ('ROLE_GRANT', '$dbuser')
else to_clob (chr(10)||' -- Note: No granted Roles found!')
end ) from dual
UNION ALL
select (case
when ((select count(*)
from V\$PWFILE_USERS
where username = '$dbuser' and SYSDBA='TRUE') > 0)
then ' -- Note: sysdba'||chr(10)||to_clob (' GRANT SYSDBA TO '||'"'||'$dbuser'||'"'||';')
else to_clob (chr(10)||' -- Note: No sysdba administrative Privilege found!')
end ) from dual
UNION ALL
select (case
when ((select count(*)
from dba_sys_privs
where grantee = '$dbuser') > 0)
then ' -- Note: System Privileges'||dbms_metadata.get_granted_ddl ('SYSTEM_GRANT', '$dbuser')
else to_clob (chr(10)||' -- Note: No System Privileges found!')
end ) from dual
UNION ALL
select (case
when ((select count(*)
from dba_tab_privs
where grantee = '$dbuser') > 0)
then ' -- Note: Object Privileges'||dbms_metadata.get_granted_ddl ('OBJECT_GRANT', '$dbuser')
else to_clob (chr(10)||' -- Note: No Object Privileges found!')
end ) from dual;
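The '$dbuser' tokens above are placeholders, which suggests the script is meant to be templated before it is run. A minimal sketch of one way to fill them in with sed (the here-string stands in for the full script file, and the username is hypothetical):

```shell
#!/bin/bash
# Sketch: substitute a concrete username for every $dbuser placeholder.
# The here-string stands in for the full user-DDL script above.
dbuser="SCOTT"
sed 's/\$dbuser/'"$dbuser"'/g' <<< "where username = '\$dbuser') > 0)"
```

Against a saved copy of the script this would look something like `sed 's/\$dbuser/SCOTT/g' user_ddl.sql | sqlplus -s / as sysdba` (file name hypothetical).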

Gather Stats Script

-- -----------------------------------------------------------------------------------
-- File Name    : gather_stats_script.sql
-- Author       : Mohamed Fowjil
-- Description  : Gathers statistics for the database using DBMS_STATS procedures.
-- Last Modified: 2024-07-01
-- -----------------------------------------------------------------------------------
SET SERVEROUTPUT ON
-- Define variables for timing
VARIABLE start_time VARCHAR2(100);
VARIABLE end_time VARCHAR2(100);
-- Get current timestamp as start time
BEGIN
   SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') INTO :start_time FROM DUAL;
   DBMS_OUTPUT.PUT_LINE('Start Time: ' || :start_time);
END;
/
-- Spool start time to separate log file
SPOOL gather_stats_log.txt
SELECT 'Start Time: ' || :start_time FROM DUAL;
SPOOL OFF
-- Set global preferences for DBMS_STATS
EXEC DBMS_STATS.SET_GLOBAL_PREFS('DEGREE', 8);
EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'AUTOMATIC');
-- Gather dictionary stats
BEGIN
   DBMS_STATS.GATHER_DICTIONARY_STATS();
   DBMS_OUTPUT.PUT_LINE('Gathered dictionary stats.');
EXCEPTION
   WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE('Error gathering dictionary stats: ' || SQLERRM);
END;
/
-- Gather fixed objects stats
BEGIN
   DBMS_STATS.GATHER_FIXED_OBJECTS_STATS();
   DBMS_OUTPUT.PUT_LINE('Gathered fixed objects stats.');
EXCEPTION
   WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE('Error gathering fixed objects stats: ' || SQLERRM);
END;
/
-- Gather database stats
DECLARE
   l_options VARCHAR2(200);
BEGIN
   l_options := 'GATHER STALE';
   DBMS_STATS.GATHER_DATABASE_STATS(
      options => l_options,
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
      method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY',
      granularity => 'ALL',
      cascade => TRUE,
      degree => 8,
      no_invalidate => DBMS_STATS.AUTO_INVALIDATE
   );
   DBMS_OUTPUT.PUT_LINE('Gathered database stats.');
EXCEPTION
   WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE('Error gathering database stats: ' || SQLERRM);
END;
/
-- Get current timestamp as end time
BEGIN
   SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') INTO :end_time FROM DUAL;
   DBMS_OUTPUT.PUT_LINE('End Time: ' || :end_time);
END;
/

--SET PAGESIZE 14
SET FEEDBACK ON
SET VERIFY ON
SET TIMING ON
SET HEADING ON

SRVCTL Commands

Usage: srvctl {-version | -version -fullversion | -fullversion}
Usage: srvctl add database -db <db_unique_name> -oraclehome <oracle_home> [-domain <domain_name>] [-spfile <spfile>] [-pwfile <password_file_path>] [-role {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY | FAR_SYNC}] [-startoption <start_options>] [-stopoption <stop_options>] [-dbname <db_name>] [-instance <inst_name>] [-policy {AUTOMATIC | MANUAL | NORESTART | USERONLY}] [-diskgroup "<diskgroup_list>"] [-verbose]
Usage: srvctl config database [-db <db_unique_name> [-all] | -home] [-verbose]
Usage: srvctl start database -db <db_unique_name> [-startoption <start_options>]
Usage: srvctl stop database -db <db_unique_name> [-stopoption <stop_options>] [-drain_timeout <timeout>] [-force]
Usage: srvctl status database {-db <db_unique_name> [-sid] [-home] | -thisversion | -thishome} [-force] [-verbose]
Usage: srvctl enable database -db <db_unique_name>
Usage: srvctl disable database -db <db_unique_name>
Usage: srvctl modify database -db <db_unique_name> [-dbname <db_name>] [-instance <inst_name>] [-oraclehome <oracle_home>] [-user <oracle_user>] [-domain <domain>] [-spfile <spfile>] [-pwfile <password_file_path>] [-role {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-startoption <start_options>] [-stopoption <stop_options>] [-policy {AUTOMATIC | MANUAL | NORESTART | USERONLY}] [-diskgroup "<diskgroup_list>"|-nodiskgroup] [-force]
Usage: srvctl remove database -db <db_unique_name> [-force] [-noprompt] [-verbose]
Usage: srvctl update database -db <db_unique_name> -startoption <start_options>
Usage: srvctl getenv database -db <db_unique_name> [-envs "<name>[,...]"]
Usage: srvctl setenv database -db <db_unique_name> {-envs "<name>=<val>[,...]" | -env "<name>=<val>"}
Usage: srvctl unsetenv database -db <db_unique_name> -envs "<name>[,...]"
Usage: srvctl upgrade database -db <db_unique_name> -oraclehome <oracle_home>
Usage: srvctl downgrade database -db <db_unique_name> -oraclehome <oracle_home> -targetversion <to_version>
Usage: srvctl add service -db <db_unique_name> -service "<service_name_list>"
       [-role [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-policy {AUTOMATIC | MANUAL}]
       [-notification {TRUE | FALSE}] [-clbgoal {SHORT | LONG}] [-rlbgoal {NONE | SERVICE_TIME | THROUGHPUT}]
       [-failovertype {NONE | SESSION | SELECT | TRANSACTION | AUTO}] [-failovermethod {NONE | BASIC}][-failoverretry <failover_retries>] [-failoverdelay <failover_delay>] [-failover_restore {NONE | LEVEL1}]
       [-edition <edition>] [-pdb <pluggable_database>] [-global <TRUE | FALSE>] [-maxlag <max_lag_time>] [-sql_translation_profile <sql_translation_profile>]
       [-commit_outcome {TRUE | FALSE}] [-retention <retention>] [replay_init_time <replay_initiation_time>] [-tablefamilyid <table_family_id>] [-drain_timeout <timeout>]
       [-stopoption <stop_option>] [-session_state {STATIC | DYNAMIC}] [-force]
Usage: srvctl config service -db <db_unique_name> [-service <service_name>] [-verbose]
Usage: srvctl start service -db <db_unique_name> [-service  "<service_name_list>" | -pdb <pluggable_database>] [-startoption <start_options>] [-global_override] [-role] [-verbose]
Usage: srvctl stop service -db <db_unique_name> [-service "<service_name_list>" | -pdb <pluggable_database>] [-drain_timeout <timeout>] [-stopoption <stop_option>] [-global_override] [-wait <wait_option>] [-force] [-verbose]
Usage: srvctl status service -db <db_unique_name> [-service  "<service_name_list>" | -pdb <pluggable_database>] [-force] [-verbose]
Usage: srvctl enable service -db <db_unique_name> -service  "<service_name_list>" [-global_override]
Usage: srvctl disable service -db <db_unique_name> -service  "<service_name_list>" [-global_override]
Usage: srvctl modify service -db <db_unique_name> -service <service_name>
       [-role [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-policy {AUTOMATIC | MANUAL}]
       [-notification {TRUE | FALSE}] [-clbgoal {SHORT | LONG}] [-rlbgoal {NONE | SERVICE_TIME | THROUGHPUT}]
       [-failovertype {NONE | SESSION | SELECT | TRANSACTION | AUTO}] [-failovermethod {NONE | BASIC}] [-failoverretry <integer>] [-failoverdelay <integer>] [-failover_restore {NONE | LEVEL1}]
       [-edition <edition>] [-pdb <pluggable_database>] [-sql_translation_profile <sql_translation_profile>] [-commit_outcome {TRUE | FALSE}]
       [-retention <retention>] [replay_init_time <replay_initiation_time>] [-tablefamilyid <table_family_id>] [-drain_timeout <timeout>] [-stopoption <stop_option>]
       [-session_state {STATIC | DYNAMIC}] [-global_override]
Usage: srvctl remove service -db <db_unique_name> -service <service_name> [-global_override] [-force]
Usage: srvctl add asm [-listener <lsnr_name>] [-spfile <spfile>] [-pwfile <password_file_path>] [-diskstring <asm_diskstring>]
Usage: srvctl config asm [-detail]
Usage: srvctl start asm [-startoption <start_options>]
Usage: srvctl stop asm [-stopoption <stop_options>] [-force]
Usage: srvctl status asm [-all] [-verbose]
Usage: srvctl enable asm
Usage: srvctl disable asm
Usage: srvctl modify asm [-listener <lsnr_name>] [-spfile <spfile>] [-pwfile <password_file_path>] [-diskstring <asm_diskstring>]
Usage: srvctl remove asm [-force]
Usage: srvctl getenv asm [-envs "<name>[,...]"]
Usage: srvctl setenv asm {-envs "<name>=<val>[,...]" | -env "<name>=<value>"}
Usage: srvctl unsetenv asm -envs "<name>[,...]"
Usage: srvctl start diskgroup -diskgroup <dg_name>
Usage: srvctl stop diskgroup -diskgroup <dg_name> [-force]
Usage: srvctl status diskgroup -diskgroup <dg_name> [-all] [-verbose]
Usage: srvctl enable diskgroup -diskgroup <dg_name>
Usage: srvctl disable diskgroup -diskgroup <dg_name>
Usage: srvctl remove diskgroup -diskgroup <dg_name> [-force]
Usage: srvctl add listener [-listener <lsnr_name>] [-skip] [-endpoints "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>][/SDP:<port>][/EXADIRECT:<port>]"] [-oraclehome <path>]
Usage: srvctl config listener [-listener <lsnr_name>]
Usage: srvctl start listener [-listener <lsnr_name>]
Usage: srvctl stop listener [-listener <lsnr_name>] [-force]
Usage: srvctl enable listener [-listener <lsnr_name>]
Usage: srvctl disable listener [-listener <lsnr_name>]
Usage: srvctl modify listener [-listener <lsnr_name>] [-oraclehome <path>] [-endpoints "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>][/SDP:<port>][/EXADIRECT:<port>]"]
Usage: srvctl remove listener [-listener <lsnr_name> | -all] [-force]
Usage: srvctl getenv listener [-listener <lsnr_name>] [-envs "<name>[,...]"]
Usage: srvctl setenv listener [-listener <lsnr_name>] -envs "<name>=<val>[,...]" | -env "<name>=<value>"
Usage: srvctl unsetenv listener [-listener <lsnr_name>] -envs "<name>[,...]"
Usage: srvctl start home -oraclehome <oracle_home> -statefile <state_file>
Usage: srvctl stop home -oraclehome <oracle_home> -statefile <state_file> [-stopoption <stop_options>] [-force]
Usage: srvctl status home -oraclehome <oracle_home> -statefile <state_file>
Usage: srvctl add ons [-emport <em_port>] [-onslocalport <ons_local_port>]  [-onsremoteport <ons_remote_port>] [-remoteservers <host>[:<port>][,<host>[:<port>]...]] [-verbose]
Usage: srvctl remove ons [-force] [-verbose]
Usage: srvctl enable ons [-verbose]
Usage: srvctl disable ons [-verbose]
Usage: srvctl modify ons [-emport <em_port>] [-onslocalport <ons_local_port>]  [-onsremoteport <ons_remote_port>] [-remoteservers <host>[:<port>][,<host>[:<port>]...]] [-verbose]
Usage: srvctl config ons
Usage: srvctl status ons [-verbose]
Usage: srvctl start ons [-verbose]
Usage: srvctl stop ons [-verbose]
Usage: srvctl add oraclehome -name <home_name> -path <path>
Usage: srvctl config oraclehome [-name <home_name>]
Usage: srvctl start oraclehome -name <home_name>
Usage: srvctl stop oraclehome -name <home_name> [-force]
Usage: srvctl enable oraclehome -name <home_name>
Usage: srvctl disable oraclehome -name <home_name>
Usage: srvctl status oraclehome -name <home_name>
Usage: srvctl modify oraclehome -name <home_name> -path <path>
Usage: srvctl remove oraclehome -name <home_name> [-force]

Datafile Resize for Specific Tablespace

DECLARE
  v_tablespace_name VARCHAR2(30) := 'YOUR_TABLESPACE_NAME'; -- Change to your tablespace name
  v_new_size        VARCHAR2(20) := '1024M'; -- New size for each datafile
BEGIN
  FOR rec IN (SELECT file_name
              FROM dba_data_files
              WHERE tablespace_name = v_tablespace_name) 
  LOOP
    EXECUTE IMMEDIATE 'ALTER DATABASE DATAFILE ''' || rec.file_name || ''' RESIZE ' || v_new_size;
  END LOOP;
END;
/
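Because the PL/SQL block resizes every datafile immediately, it can be safer to generate the statements first and review them before execution. A sketch of the same loop in shell (the datafile names and size are placeholders; on a real system the list would come from dba_data_files):

```shell
#!/bin/bash
# Sketch: emit ALTER DATABASE DATAFILE ... RESIZE statements for review
# instead of executing them directly. The datafile list is hypothetical.
new_size="1024M"
datafiles="/u01/oradata/PRD/users01.dbf
/u01/oradata/PRD/users02.dbf"

while read -r f; do
    echo "ALTER DATABASE DATAFILE '$f' RESIZE $new_size;"
done <<< "$datafiles"
```

The generated statements can then be spooled to a file, inspected, and run in SQL*Plus. Note that a RESIZE to a value smaller than the highest allocated block will fail with ORA-03297.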

DGMGRL Commands

The following commands are available:

@              Execute DGMGRL script file
!              Host operating system command
/              Repeat the last command
--             Comment to be ignored by DGMGRL
add            Adds a member to the broker configuration
connect        Connects to an Oracle database instance
convert        Converts a database from one type to another
create         Creates a broker configuration
disable        Disables a configuration, a member, or fast-start failover
edit           Edits a configuration or a member
enable         Enables a configuration, a member, or fast-start failover
exit           Exits the program
export         Export Data Guard Broker configuration to a file.
failover       Changes a standby database to be the primary database
help           Displays description and syntax for a command
host           Host operating system command
import         Import Data Guard Broker configuration from a file.
migrate        Migrate a pluggable database from one configuration to another.
quit           Exits the program
reinstate      Changes a database marked for reinstatement into a viable standby
rem            Comment to be ignored by DGMGRL
remove         Removes a configuration or a member
set            Set a DGMGRL CLI property to a specified value
show           Displays information about a configuration or a member
shutdown       Shuts down a currently running Oracle database instance
spool          store input and output of DGMGRL CLI in a file
sql            Executes a SQL statement
start          Starts the fast-start failover observer
startup        Starts an Oracle database instance
stop           Stops the fast-start failover observer
switchover     Switches roles between a primary and standby database
validate       Performs an exhaustive set of validations for a member

Use "help <command>" to see syntax for individual commands

---------------------------------------------------------------------------------------

DGMGRL> help show

Displays information about a configuration or a member

Syntax:

  SHOW ALL;

  SHOW CONFIGURATION [LAG] [VERBOSE];

  SHOW CONFIGURATION WHEN PRIMARY IS <database name>;

  SHOW { RECOVERY_APPLIANCE | DATABASE | FAR_SYNC } [VERBOSE]
    <object name> [<property name>];

  SHOW { RECOVERY_APPLIANCE | DATABASE | FAR_SYNC | MEMBER }
    <object name> PARAMETER [<initialization parameter name>];

  SHOW INSTANCE [VERBOSE] <instance name> [<property name>]
    [ON { DATABASE | FAR_SYNC } <object name>];

  SHOW FAST_START FAILOVER;

  SHOW OBSERVER;

  SHOW OBSERVERS [FOR <configuration group name>];

  SHOW OBSERVERCONFIGFILE;

------------------------------------------------------------------------
DGMGRL> help edit

Edits a configuration or a member

Syntax:

  EDIT CONFIGURATION SET PROTECTION MODE [AS]
    { MaxProtection | MaxAvailability | MaxPerformance };

  EDIT CONFIGURATION SET PROPERTY <property name> = <value>;

  EDIT CONFIGURATION RESET PROPERTY <property name>;

  EDIT CONFIGURATION RENAME TO <new configuration name>;

  EDIT { RECOVERY_APPLIANCE | DATABASE | FAR_SYNC }
    <object name> SET PROPERTY <property name> = <value>;

  EDIT { RECOVERY_APPLIANCE | DATABASE | FAR_SYNC }
    <object name> RESET PROPERTY <property name>;

  EDIT { RECOVERY_APPLIANCE | DATABASE | FAR_SYNC }
    <object name> RENAME TO <new object name>;

  EDIT DATABASE <database name> SET STATE = <state>
    [WITH APPLY INSTANCE = <instance name>];

  EDIT INSTANCE <instance name> [ON { DATABASE | FAR_SYNC } <object name>]
    SET AUTO PFILE [ = {<initialization file path> | OFF} ];

  EDIT INSTANCE <instance name> [ON { DATABASE | FAR_SYNC } <object name>]
    SET PROPERTY <property name> = <value>;

  EDIT INSTANCE <instance name> [ON { DATABASE | FAR_SYNC } <object name>]
    RESET PROPERTY <property name>;

  EDIT INSTANCE * ON { DATABASE | FAR_SYNC } <object name>
    SET PROPERTY <property name> = <value>;

  EDIT INSTANCE * ON { DATABASE | FAR_SYNC } <object name>
    RESET PROPERTY <property name>;

  EDIT { RECOVERY_APPLIANCE | DATABASE | FAR_SYNC | MEMBER }
    <object name> SET PARAMETER <parameter name> = <value>
                                [ <initialization parameter options> ];

  EDIT { RECOVERY_APPLIANCE | DATABASE | FAR_SYNC | MEMBER }
    <object name> RESET PARAMETER <parameter name>;

--------------------------------------------------------------
DGMGRL> help validate

Performs an exhaustive set of validations for a member

Syntax:

  VALIDATE DATABASE [VERBOSE] <database name>;

  VALIDATE DATABASE [VERBOSE] <database name> DATAFILE <datafile number>
    OUTPUT=<file name>;

  VALIDATE DATABASE [VERBOSE] <database name> SPFILE;

  VALIDATE FAR_SYNC [VERBOSE] <far_sync name>
    [WHEN PRIMARY IS <database name>];

  VALIDATE NETWORK CONFIGURATION FOR { ALL | <member name> };

  VALIDATE STATIC CONNECT IDENTIFIER FOR { ALL | <database name> };

  VALIDATE DGConnectIdentifier <connect identifier>;

  VALIDATE FAST_START FAILOVER;



------------------------------------------------

DGMGRL> help switchover

Switches roles between a primary and standby database

Syntax:

  SWITCHOVER TO <standby database name> [WAIT [<timeout in seconds>]];


----------------------------------------------------

DGMGRL> help export

Export Data Guard Broker configuration to a file.

Syntax:

  EXPORT CONFIGURATION [TO <file_name>];