Tuesday, July 30, 2024

Running DataPump Jobs Results In ORA-31634 ORA-31664


Running a DataPump job (export or import) fails with the following errors:

ORA-31634: job already exists
ORA-31664: unable to construct unique job name when defaulted



1. If you are running only a single job, then old job names from earlier runs probably still exist in the database and are creating the conflict. You can remove these orphaned jobs using the steps in the following note (a hedged query sketch follows the reference):

Note 336014.1 - How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS ?
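
As a minimal sketch of the first step in that note, the following lists the Data Pump jobs recorded in DBA_DATAPUMP_JOBS so that orphaned entries (typically in a NOT RUNNING state with no attached sessions) can be identified before their master tables are cleaned up:

```bash
# Hedged sketch: list Data Pump jobs so orphaned entries can be spotted.
# Recycle-bin objects (BIN$...) are excluded, as in Note 336014.1.
sqlplus -s / as sysdba <<'EOF'
SET LINESIZE 200 PAGESIZE 100
COL owner_name FORMAT a20
COL job_name   FORMAT a30
COL state      FORMAT a15
SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM   dba_datapump_jobs
WHERE  job_name NOT LIKE 'BIN$%'
ORDER  BY owner_name, job_name;
EXIT;
EOF
```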

2. If you are running many jobs (more than 99) at a time, the problem is that DataPump cannot build more than 99 job names when the default job_name is used: the default name is system-generated in the form SYS_EXPORT_<mode>_NN, where NN expands to a 2-digit incrementing integer starting at 01, so the largest possible value for NN is 99.

From Oracle Database Utilities guide:

JOB_NAME
Default: system-generated name of the form SYS_EXPORT_<mode>_NN

Purpose
Used to identify the export job in subsequent actions, such as when the ATTACH parameter is used to attach to a job, or to identify the job using the DBA_DATAPUMP_JOBS or USER_DATAPUMP_JOBS views.

Syntax and Description
JOB_NAME=jobname_string

The jobname_string specifies a name of up to 30 bytes for this export job. The bytes must represent printable characters and spaces. If spaces are included, then the name must be enclosed in single quotation marks (for example, 'Thursday Export'). The job name is implicitly qualified by the schema of the user performing the export operation. The job name is used as the name of the master table, which controls the export job.

The default job name is system-generated in the form SYS_EXPORT_<mode>_NN, where NN expands to a 2-digit incrementing integer starting at 01. An example of a default name is 'SYS_EXPORT_TABLESPACE_02'.

Example
The following example shows an export operation that is assigned a job name of exp_job:

#> expdp hr DIRECTORY=data_dump_dir DUMPFILE=export_job.dmp JOB_NAME=export_job NOLOGFILE=YES


To avoid this issue, use the JOB_NAME DataPump parameter to assign unique job names that cannot conflict with any other job names in the database, instead of relying on the default job_name.
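
As a hedged example, a timestamp can be appended to build a unique name (the directory, dump file, and schema below are illustrative):

```bash
# Hypothetical example: derive a unique Data Pump job name from a timestamp
# so repeated or concurrent runs never collide on the default SYS_EXPORT_<mode>_NN names.
JOB_NAME="EXP_HR_$(date +%Y%m%d_%H%M%S)"
expdp hr DIRECTORY=data_dump_dir DUMPFILE=${JOB_NAME}.dmp LOGFILE=${JOB_NAME}.log JOB_NAME=${JOB_NAME}
```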

Java Update in Oracle



  1. Java Version in GoldenGate.

 

GoldenGate 19c comes with an "embedded" JDK version located at $OGG_HOME/jdk. We can upgrade to the latest JDK 1.8.x available for Oracle GoldenGate.
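
To confirm the version currently embedded in the installation (a minimal check, assuming the standard $OGG_HOME layout):

$OGG_HOME/jdk/bin/java -version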

 

To upgrade $OGG_HOME/jdk to the latest JDK 8 update:

 

Download Patch 36270061: Oracle SERVER JRE 8 Update 411.

The download contains the file p36270061_180411_Linux-x86-64.zip, which in turn contains server-jre-8u411-linux-x64.tar.gz; extracting that archive yields the jdk1.8.0_411 directory.

Replace the existing contents of $OGG_HOME/jdk with the files from jdk1.8.0_411.
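
A minimal sketch of the swap, assuming all GoldenGate processes (Manager, Extracts, Replicats) are stopped first and that the staging path below is adjusted to your environment:

```bash
# Hypothetical paths; stop all OGG processes before replacing the embedded JDK.
cd $OGG_HOME
mv jdk jdk_bak                                     # keep the old embedded JDK as a fallback
tar zxvf /stage/server-jre-8u411-linux-x64.tar.gz -C /stage
cp -r /stage/jdk1.8.0_411 $OGG_HOME/jdk            # new JDK takes the old one's place
$OGG_HOME/jdk/bin/java -version                    # confirm 1.8.0_411
```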

 

 

 

Another option:

Java version 1.8.0_411 is part of the latest OGG binaries, so it is also recommended to upgrade to the latest OGG release:
Oracle GoldenGate 19.24.0.0.240716 Patch Set Availability ( Doc ID 3037155.1 )

 

 

 

Ref.

Can the Embedded Java Version Part Of OGG Home Be Upgraded ( Doc ID 2359395.1 )
How To Update JDK For Golden Gate $OGG_HOME/jdk? ( Doc ID 2803652.1 )

 

 

 

  2. Java Version in OEM agent.

 

By default, the EM 13.4 Agent uses Java version 1.8.0_231 (on Linux, Solaris, and Windows).

Please note that before upgrading the JDK on agents, ensure that the OMS (OEM) JDK has already been updated.

 

Apply RU 19 (Patch 34692141) or a later release update to the agent home before upgrading the JDK on the agent.

 

 

For Linux

1. Download Patch 35009178 from JDK 8 tab of Supported Java SE Downloads on MOS (Doc ID 1439822.2)

File Name: p35009178_180371_Linux-x86-64.zip

2. Stop Agent

<Agent_Base>/agent_inst/bin/emctl stop agent

3. Go to <AGENT_ORACLE_HOME>/oracle_common/

Example: <Agent_Base>/agent_13.4.0.0.0/oracle_common/

4. Take backup of jdk directory.

$mv jdk jdk_bak

5. Copy p35009178_180371_Linux-x86-64.zip to $AGENT_ORACLE_HOME/oracle_common/

6. Unzip p35009178_180371_Linux-x86-64.zip and check the readme.txt for the instructions to install it.

$tar zxvf jdk-8u371-linux-x64.tar.gz

It will create jdk1.8.0_371 directory. Rename it to jdk.

7. Confirm the version of the JDK by checking:

<AGENT_ORACLE_HOME>/oracle_common/jdk/bin/java -version

8. Start Agent

<Agent_Base>/agent_inst/bin/emctl start agent
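
The steps above can be strung together roughly as follows (a sketch only; AGENT_BASE and the staging location are placeholders to adapt to your environment):

```bash
# Hypothetical consolidation of steps 2-8; AGENT_BASE and /stage are placeholders.
AGENT_BASE=/u01/app/oracle/agent
$AGENT_BASE/agent_inst/bin/emctl stop agent
cd $AGENT_BASE/agent_13.4.0.0.0/oracle_common
mv jdk jdk_bak                                     # step 4: back up the existing JDK
unzip /stage/p35009178_180371_Linux-x86-64.zip -d /stage
tar zxvf /stage/jdk-8u371-linux-x64.tar.gz         # step 6: extract the new JDK
mv jdk1.8.0_371 jdk                                # rename to the expected directory name
./jdk/bin/java -version                            # step 7: confirm 1.8.0_371
$AGENT_BASE/agent_inst/bin/emctl start agent
```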

 

 

 

 

Ref.

EM 13.4: How to Use the Latest Certified JDK 8 Update with 13.4 Agents (Doc ID 2659208.1)

 

 

  3. Java Version in RDBMS Oracle Home.

 

JDK and PERL Patches for Oracle Database Home and Grid Home (Doc ID 2584628.1)


Saturday, July 27, 2024

Oracle Database Health Check

#Oracle Database Health Check 

ps -ef | grep pmon

ps -ef | grep tns

!sq

set lines 300;

SELECT DATABASE_ROLE, DB_UNIQUE_NAME, OPEN_MODE,PROTECTION_MODE, PROTECTION_LEVEL, SWITCHOVER_STATUS FROM V$DATABASE;

archive log list

select flashback_on from v$database;


df -h


cat /etc/*release*

uname -a

prtdiag -v


sar 1 5 

iostat -xnctz 5



SELECT name FROM v$controlfile;

set pagesize 40;
set lines 300;
col MEMBER format a60;
select GROUP#,STATUS,MEMBER from v$logfile;


SELECT value FROM v$parameter WHERE name='audit_file_dest';

Select owner, object_type, count(*) as Total From dba_objects Where status!= 'VALID' group by owner,object_type Order by owner, object_type;

set lines 300;
set pagesize 120;
select start_time,OUTPUT_BYTES_PER_SEC ,ELAPSED_SECONDS, INPUT_TYPE, STATUS from v$rman_backup_job_details;

set colsep |
set linesize 100 pages 100 trimspool on numwidth 14 
col name format a25
col owner format a15 
col "Used (GB)" format a15
col "Free (GB)" format a15 
col "(Used) %" format a15 
col "Size (M)" format a15 
SELECT d.status "Status", d.tablespace_name "Name", 
 TO_CHAR(NVL(a.bytes / 1024 / 1024 /1024, 0),'99,999,990.90') "Size (GB)", 
 TO_CHAR(NVL(a.bytes - NVL(f.bytes, 0), 0)/1024/1024 /1024,'99999999.99') "Used (GB)", 
 TO_CHAR(NVL(f.bytes / 1024 / 1024 /1024, 0),'99,999,990.90') "Free (GB)", 
 TO_CHAR(NVL((a.bytes - NVL(f.bytes, 0)) / a.bytes * 100, 0), '990.00') "(Used) %"
 FROM sys.dba_tablespaces d, 
 (select tablespace_name, sum(bytes) bytes from dba_data_files group by tablespace_name) a, 
 (select tablespace_name, sum(bytes) bytes from dba_free_space group by tablespace_name) f WHERE 
 d.tablespace_name = a.tablespace_name(+) AND d.tablespace_name = f.tablespace_name(+) AND NOT 
 (d.extent_management like 'LOCAL' AND d.contents like 'TEMPORARY') 
UNION ALL 
SELECT d.status 
 "Status", d.tablespace_name "Name", 
 TO_CHAR(NVL(a.bytes / 1024 / 1024 /1024, 0),'99,999,990.90') "Size (GB)", 
 TO_CHAR(NVL(t.bytes,0)/1024/1024 /1024,'99999999.99') "Used (GB)",
 TO_CHAR(NVL((a.bytes -NVL(t.bytes, 0)) / 1024 / 1024 /1024, 0),'99,999,990.90') "Free (GB)", 
 TO_CHAR(NVL(t.bytes / a.bytes * 100, 0), '990.00') "(Used) %" 
 FROM sys.dba_tablespaces d, 
 (select tablespace_name, sum(bytes) bytes from dba_temp_files group by tablespace_name) a, 
 (select tablespace_name, sum(bytes_cached) bytes from v$temp_extent_pool group by tablespace_name) t 
 WHERE d.tablespace_name = a.tablespace_name(+) AND d.tablespace_name = t.tablespace_name(+) AND 
 d.extent_management like 'LOCAL' AND d.contents like 'TEMPORARY'; 
 
 
 
#Standby database

ps -ef | grep pmon

ps -ef | grep tns

!sq

set lines 300;
 
SELECT DATABASE_ROLE, DB_UNIQUE_NAME, OPEN_MODE,PROTECTION_MODE, PROTECTION_LEVEL, SWITCHOVER_STATUS FROM V$DATABASE;

 
SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied",
(ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference" FROM (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG
WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#))
ARCH,(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN
(SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;




select instance_name, to_char(startup_time,'mm/dd/yyyy hh24:mi:ss') as startup_time from v$instance;

Friday, July 26, 2024

Oracle Database 19c Maintenance and Monitoring Standard Operating Procedure (SOP)


Oracle Database 19c Maintenance and Monitoring Standard Operating Procedure


Purpose:

Ensure the reliability, performance, and security of the Oracle Database 19c environment.

Scope:

This SOP applies to all Oracle Database 19c instances.

Responsibilities:

- Database Administrators (DBAs)

- System Administrators

Maintenance Tasks:

- Daily:

    - Check database alert logs for errors.

    - Verify backup completion.

    - Monitor system resources (CPU, memory, disk space).

- Weekly:

    - Run database backups (RMAN).

    - Perform database maintenance tasks (e.g., rebuild indexes, update statistics); see the sketch after the Maintenance Tasks list.

    - Review database performance metrics.

- Monthly:

    - Perform operating system updates.

    - Review database security settings.
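
As a hedged example of the statistics task above, a weekly refresh for a single application schema might look like this (the schema name is a placeholder):

```bash
# Hypothetical weekly statistics refresh; 'HR' stands in for the application schema.
sqlplus -s / as sysdba <<'EOF'
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR', cascade => TRUE);
EXIT;
EOF
```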

Monitoring Tasks:

- Database Performance:

    - Monitor CPU utilization.

    - Monitor disk I/O.

    - Monitor memory utilization.

- Database Logs:

    - Monitor alert logs.

    - Monitor trace files.

- Backup Monitoring:

    - Verify backup completion.

    - Verify backup integrity.

Log and Trace Purging:

- ADRCI Commands:

    - adrci exec="SHOW ALERT" to view alert logs.

    - adrci exec="SHOW TRACE" to view trace files.

    - adrci exec="PURGE ALERT" to purge alert logs.

    - adrci exec="PURGE TRACE" to purge trace files.

Backup Script:

```bash
#!/bin/bash

rman target=/ <<EOF

BACKUP DATABASE;

EOF

```

Schedule:

Daily, weekly, and monthly tasks should be scheduled using a scheduling tool (e.g., cron).
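
For instance, the daily backup script above might be scheduled with crontab entries such as the following (paths and times are illustrative):

```bash
# Hypothetical crontab entries; script and log paths are placeholders.
# Daily full backup at 01:00, weekly maintenance at 03:00 on Sundays.
0 1 * * * /home/oracle/scripts/rman_backup.sh >> /home/oracle/logs/rman_backup.log 2>&1
0 3 * * 0 /home/oracle/scripts/weekly_maintenance.sh >> /home/oracle/logs/weekly_maintenance.log 2>&1
```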

Documentation:

- Database documentation should include:

    - Database configuration.

    - Backup schedules.

    - Maintenance schedules.

    - Performance metrics.

How to Clear Flashback Logs in an Oracle Database

Here are the steps to clear flashback logs in Oracle without causing downtime; a consolidated sketch follows the list:

- Check the size of the recovery area with the command "show parameter db_recovery"

- Check how much space is used and free with the command "select space_used/(1024*1024*1024), space_limit/(1024*1024*1024) from v$recovery_file_dest"

- Check the parameter "log_archive_min_succeed_dest" with the command "show parameter log_archive_min_succeed_dest"

- Set "log_archive_dest_1" to a different location with the command "alter system set log_archive_dest_1='LOCATION=/home/oracle/temp_archivelog' scope=memory"

- Defer the default archive destination that points to the recovery area (USE_DB_RECOVERY_FILE_DEST) with the command "alter system set log_archive_dest_state_10=defer scope=memory"

- Switch logfiles with the command "alter system switch logfile"

- Make "db_recovery_file_size" the same as the actual usage with the command "alter system set db_recovery_file_dest_size=55G scope=memory"

- Check for errors, manually remove archived log files from the OS if needed, and then run "RMAN> crosscheck archivelog all" and "RMAN> delete expired archivelog all"

- Restore the archive destinations with the commands "alter system set log_archive_dest_1='' scope=memory" and "alter system set log_archive_dest_10='LOCATION=USE_DB_RECOVERY_FILE_DEST' scope=memory"

- Make sure "flash_recovery_area" has enough space with the command "alter system set db_recovery_file_dest_size=50G scope=spemory"