Sunday, February 15, 2026

Workflow Services Not Starting in Oracle E-Business Suite R12 – BES Control Queue Fix

Workflow Services Not Running or Starting in Oracle E-Business Suite R12 (Target=1, Actual=0) – Complete Fix

In Oracle E-Business Suite R12, Oracle Workflow Services may sometimes fail to start from the frontend (System Administrator responsibility) or from the Concurrent > Manager > Administer form. In this scenario, Workflow service components remain stuck with:

  • Target = 1
  • Actual = 0

This post provides a clean, production-friendly fix to bring the Workflow services back online, along with the MOS-based advanced recovery for the common Service Component Container error.


Affected Workflow Components (Service Managers)

  • Workflow Agent Listener Service – WFALSNRSVC
  • Workflow Mailer Service – WFMLRSVC
  • Workflow Document Web Services Service – WFWSSVC

How to Confirm the Service Short Names (SQL)

You can confirm the concurrent queue short names using the following queries:

Workflow Agent Listener Service

SELECT concurrent_queue_name
FROM apps.fnd_concurrent_queues_tl
WHERE user_concurrent_queue_name = 'Workflow Agent Listener Service'
  AND language = USERENV('LANG');

Workflow Mailer Service

SELECT concurrent_queue_name
FROM apps.fnd_concurrent_queues_tl
WHERE user_concurrent_queue_name = 'Workflow Mailer Service'
  AND language = USERENV('LANG');

Workflow Document Web Services Service

SELECT concurrent_queue_name
FROM apps.fnd_concurrent_queues_tl
WHERE user_concurrent_queue_name = 'Workflow Document Web Services Service'
  AND language = USERENV('LANG');

Solution (Primary Fix) – Reset Workflow Service Manager Definitions

⚠️ Important: Run the updates below carefully, preferably during a controlled change window. Take a backup/snapshot before making changes in production. Execute them as the APPS user.

Step 1 – Set process values to zero

UPDATE fnd_concurrent_queues
   SET running_processes = 0,
       max_processes     = 0
 WHERE concurrent_queue_name IN ('WFWSSVC','WFALSNRSVC','WFMLRSVC');

Step 2 – Reset invalid control codes (if applicable)

UPDATE fnd_concurrent_queues
   SET control_code = NULL
 WHERE concurrent_queue_name IN ('WFWSSVC','WFALSNRSVC','WFMLRSVC')
   AND control_code NOT IN ('E','R','X')
   AND control_code IS NOT NULL;

Step 3 – Clear target node (remove node binding)

UPDATE fnd_concurrent_queues
   SET target_node = NULL
 WHERE concurrent_queue_name IN ('WFWSSVC','WFALSNRSVC','WFMLRSVC');

Step 4 – Commit

COMMIT;

What Happens Next?

After the above reset, wait a few minutes. The Internal Concurrent Manager (ICM) typically brings the services up automatically.
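
While you wait, a quick check like the following can confirm that the ICM itself is healthy, since it is the manager responsible for (re)starting the service managers (FNDICM is the standard queue name for the Internal Concurrent Manager):

SELECT concurrent_queue_name,
       max_processes,
       running_processes
  FROM fnd_concurrent_queues
 WHERE concurrent_queue_name = 'FNDICM';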

Verification – Confirm Workflow Service Managers Are Up

SELECT concurrent_queue_name,
       control_code,
       running_processes,
       max_processes
  FROM fnd_concurrent_queues
 WHERE concurrent_queue_name IN ('WFALSNRSVC','WFMLRSVC','WFWSSVC');

Expected state:

  • CONTROL_CODE should be NULL once the manager is running normally
  • RUNNING_PROCESSES should be 1
  • MAX_PROCESSES should be 1 (or as configured)

Common Error Seen (When Services Still Do Not Start)

In some cases, the services still fail with the following error:

ERROR:[SVC-GSM-WFALSNRSVC-9700 : oracle.apps.fnd.cp.gsc.SvcComponentContainer.startBusinessEventListener()]:
BES system could not establish connection to the control queue after 180 seconds

oracle.apps.fnd.cp.gsc.SvcComponentContainerException:
Could not start Service Component Container

This points to an issue with the Business Event System (BES) control queue / container configuration.


Advanced Fix (MOS) – Rebuild Workflow Control Queue / Container

🚨 MOS-Based Fix: Use this only when the primary fix does not resolve the issue and the error shows SvcComponentContainerException.

Refer to Oracle Support Document:

Starting Workflow Services Fails With Error oracle.apps.fnd.cp.gsc.SvcComponentContainerException Could not start Service Component Container
Doc ID 1663093.1

As per the document, run the following script to rebuild the Workflow control queue/container configuration:

Run as APPS user:

sqlplus apps/xxxxxxx @$FND_TOP/patch/115/sql/wfctqrec.sql APPLSYS xxxx

Replace:

  • xxxxxxx → APPS password
  • xxxx → APPLSYS password

Important Notes

  • Ensure Workflow Services are stopped before running the script.
  • Take a database backup / snapshot before executing in Production.
  • After script execution, bounce Concurrent Managers (or at least ICM); a minimal sketch follows this list.
  • Wait a few minutes and recheck the service status.
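
A minimal bounce sketch using the standard AD control script (paths and placeholders assume a typical R12.2 layout; substitute your APPS password):

$ADMIN_SCRIPTS_HOME/adcmctl.sh stop apps/<apps_password>
# Wait for all managers to exit, then:
$ADMIN_SCRIPTS_HOME/adcmctl.sh start apps/<apps_password>
$ADMIN_SCRIPTS_HOME/adcmctl.sh status apps/<apps_password>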

Post-Fix Verification

SELECT concurrent_queue_name,
       control_code,
       running_processes,
       max_processes
  FROM fnd_concurrent_queues
 WHERE concurrent_queue_name IN ('WFALSNRSVC','WFMLRSVC','WFWSSVC');

Once the Workflow services are healthy, you should see the managers running and the frontend should reflect:

  • Target = 1
  • Actual = 1

Root Cause (Why This Happens)

  • Corrupt / stale Service Component Container configuration
  • Invalid node binding (TARGET_NODE) after cloning or node changes
  • Improper shutdown causing inconsistent queue state
  • BES control queue connection timeouts

Conclusion

Workflow services stuck at Target=1 / Actual=0 can be fixed safely by resetting the service manager definitions in FND_CONCURRENT_QUEUES. If the environment throws the Service Component Container exception, the MOS script (wfctqrec.sql) provides the advanced recovery to rebuild the container/control queue configuration.

— Punit Kumar
Oracle EBS Techno Functional Consultant


Tags

Oracle EBS, R12, Workflow Mailer, WFALSNRSVC, WFMLRSVC, WFWSSVC, Concurrent Manager, ICM, BES, Service Component Container, wfctqrec.sql, Doc ID 1663093.1

Saturday, February 14, 2026

Project CloudBridge – Day 3: Designing & Deploying Amazon RDS PostgreSQL (OLTP Foundation)

Project CloudBridge – Day 3: Building the OLTP Foundation with Amazon RDS PostgreSQL

Series Context: After understanding why enterprises separate OLTP and Analytics (Day 1) and designing a real-time analytics architecture (Day 2), today we move into hands-on infrastructure setup.

Day 3 is where architecture thinking transforms into real implementation.


🎯 Day 3 Objective

Design and deploy a production-grade Amazon RDS PostgreSQL instance that will act as the OLTP source database for our real-time analytics pipeline.

This database will:

  • Act as the enterprise transactional system
  • Serve as the source for AWS DMS
  • Feed analytics data into Amazon Redshift (later days)

🧱 What We Are Building Today

Today we focus on the OLTP layer of our architecture:

  1. Understand RDS PostgreSQL architecture components
  2. Design production-ready configuration
  3. Create and configure the RDS instance
  4. Validate connectivity and basic database operations

📐 RDS PostgreSQL Architecture Overview

Before creating the database, we must understand key architectural components:

  • DB Instance vs Database
  • Multi-AZ vs Single-AZ deployment
  • Storage types (gp3, io1)
  • Subnet Groups and VPC isolation
  • Security Groups and access control
  • Parameter Groups
  • Backup and maintenance configuration

This is not just “Create Database”. This is infrastructure design.


⚙ Production Design Decisions

We will make enterprise-style configuration choices:

  • Instance class selection (compute vs memory optimized)
  • Storage sizing and autoscaling
  • Backup retention period
  • Monitoring configuration
  • Encryption using AWS KMS
  • Public vs Private accessibility

The goal is to think like a Cloud Architect — not just a console user.


🛠 Hands-On Tasks

By the end of today, we will:

  • Launch Amazon RDS PostgreSQL
  • Configure VPC and Subnet Group
  • Attach Security Group
  • Enable automated backups
  • Create initial database
  • Create application user
  • Test connectivity using psql (a sketch follows this list)
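
A minimal connectivity check from a client host (the endpoint, database, and user names are placeholders):

psql "host=<rds-endpoint> port=5432 dbname=cloudbridge user=app_user sslmode=require" -c "SELECT version();"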

🔍 Validation Step

After deployment, we will verify:

SELECT version();

This confirms:

  • Database is operational
  • Network configuration is correct
  • Security rules are properly defined

🧠 Why Day 3 Matters

Without a properly designed OLTP source database:

  • AWS DMS cannot replicate data
  • Change Data Capture (CDC) will not work
  • Analytics pipeline will fail

Day 3 builds the foundation for the entire CloudBridge architecture.


📌 End of Day 3 Expected Outcome

  • One production-ready RDS PostgreSQL instance
  • Verified connectivity
  • Clear understanding of AWS database architecture
  • Ready to configure AWS DMS on Day 4

Next: Day 4 – Configuring AWS Database Migration Service (DMS) for Continuous Data Replication

Friday, February 13, 2026

Project CloudBridge – Day 1 & 2 Summary (Architect Consolidation)

Project CloudBridge – Brief Summary (Day 1 & Day 2)

This short recap consolidates the core learning from Day 1 and Day 2, so we can move into Day 3 (hands-on build) with a clear production mindset.


✅ Day 1 – Why Enterprises Separate OLTP and Analytics

Day 1 established the foundational enterprise principle: Transactional systems (OLTP) and analytical systems (Analytics/OLAP) are built for different workload patterns.

OLTP                              Analytics
Small, frequent transactions      Large scans & aggregations
Low latency (milliseconds)        High-throughput reporting
Predictable response time         Long-running queries

Enterprise Rule: Never run heavy analytics workloads on the production OLTP database. It creates CPU/I/O contention, lock waits, and SLA impact.

The enterprise solution pattern introduced: separate OLTP and Analytics and connect them using a replication layer.


✅ Day 2 – Enterprise Real-Time Architecture Design (Deep Mode)

Day 2 moved from principle to production design thinking. We designed a real-time architecture using: RDS PostgreSQL (OLTP), AWS DMS (Full Load + CDC), and Amazon Redshift (Analytics).

Customer Applications
        ↓
Amazon RDS PostgreSQL (OLTP)
        ↓
AWS DMS (Full Load + CDC)
        ↓
Amazon Redshift (Analytics Warehouse)
        ↓
BI / Dashboards

Key technical decisions and learnings:

  • RDS (OLTP): design for low latency, high availability (Multi-AZ), and predictable storage performance.
  • CDC (Change Data Capture): DMS reads PostgreSQL WAL to capture INSERT/UPDATE/DELETE with minimal OLTP impact.
  • WAL/Slot Risk: if DMS lags or stops, WAL can accumulate (replication slot retention), creating storage pressure.
  • DMS Sizing Matters: under-sized replication instances lead to replication lag and stale analytics.
  • Redshift (Analytics): purpose-built warehouse for heavy aggregations and large scans; scales independently from OLTP.
Day 2 Outcome: We shifted from “tool learning” to architect thinking — workload isolation, CDC design, sizing awareness, and failure scenario planning.

🔜 Day 3 – What Comes Next (Hands-on Build)

Day 3 is execution with a production mindset:

  • Create RDS PostgreSQL with proper sizing and storage
  • Configure parameter group for CDC (WAL/logical replication); see the sketch after this list
  • Plan networking (subnets, security groups) and access model
  • Prepare DMS replication instance sizing and monitoring approach
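
For the parameter-group item above, a sketch of the RDS-side switch that enables logical replication for DMS (the group name is an assumption; the parameter is static, so it takes effect after a reboot):

aws rds modify-db-parameter-group \
  --db-parameter-group-name cloudbridge-pg \
  --parameters "ParameterName=rds.logical_replication,ParameterValue=1,ApplyMethod=pending-reboot"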

Tags: #AWS #CloudBridge #RDS #PostgreSQL #DMS #CDC #Redshift #RealTimeAnalytics #DataEngineering

Project CloudBridge – Day 2: Enterprise Real-Time Analytics Architecture on AWS


Day 2 of Project CloudBridge focuses on designing an enterprise-grade real-time analytics architecture using AWS database services. Before building anything hands-on, we must think like architects.


1️⃣ The Enterprise Problem

Organizations need near real-time dashboards, reporting, and analytics — but running heavy queries directly on the production OLTP database can slow down customer-facing applications.

Rule: Never mix heavy analytics workload with OLTP transactions in production.

2️⃣ Master Architecture – End-to-End Design

This pattern cleanly separates:

  • OLTP Layer – Amazon RDS PostgreSQL
  • Replication Layer – AWS DMS (Full Load + CDC)
  • Analytics Layer – Amazon Redshift
  • BI Layer – QuickSight / Tableau / Power BI

Customer Applications
        ↓
Amazon RDS PostgreSQL (OLTP)
        ↓
AWS DMS (Full Load + CDC)
        ↓
Amazon Redshift (Analytics Warehouse)
        ↓
BI / Reporting / Dashboards

3️⃣ Amazon RDS PostgreSQL – OLTP Layer

Purpose:

  • Handle live transactions (INSERT / UPDATE / DELETE)
  • Maintain fast response time for customer-facing apps
  • Use Multi-AZ for high availability (production best practice)
  • Protect OLTP from reporting workload by isolating analytics

4️⃣ CDC Flow – PostgreSQL WAL to DMS

CDC (Change Data Capture) reads PostgreSQL transaction logs (WAL) and captures changes such as:

  • INSERT
  • UPDATE
  • DELETE

This enables near real-time synchronization to the analytics platform without repeatedly running full reloads.
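
On the source database, the CDC prerequisites can be sanity-checked with standard PostgreSQL commands (a sketch; DMS creates and manages its own replication slot once the task starts):

SHOW wal_level;   -- must return 'logical' for CDC

SELECT slot_name, plugin, active, restart_lsn
  FROM pg_replication_slots;   -- replication slots held by consumers such as DMS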


5️⃣ AWS DMS – Full Load + Continuous Replication

DMS typically runs in two phases:

  • Full Load – One-time initial data copy
  • CDC – Ongoing continuous replication of changes
Operational Note: If DMS stops, OLTP continues normally. Analytics becomes stale until replication resumes.

6️⃣ Amazon Redshift – Analytics Warehouse

Redshift is designed for fast analytics at scale:

  • Columnar storage for efficient scanning
  • Massively Parallel Processing (MPP)
  • Fast aggregation queries across large datasets
  • Easy BI integration (QuickSight / Tableau / Power BI)

7️⃣ Why This Architecture Is Enterprise-Grade

Layer                  Responsibility
RDS PostgreSQL         Transactions (OLTP)
AWS DMS                Replication (Full Load + CDC)
Amazon Redshift        Analytics & Reporting

🔐 Production Considerations (What Architects Always Add)

  • Enable Multi-AZ and automated backups for RDS
  • Ensure PostgreSQL WAL settings support CDC requirements
  • Monitor DMS replication lag and task health (see the sketch after this list)
  • Use least-privilege IAM roles for DMS access
  • Secure connectivity with VPC Security Groups and encryption in transit
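
For the replication-lag point above, a minimal monitoring sketch (task status via the DMS API; CDC latency itself surfaces as the CloudWatch metrics CDCLatencySource and CDCLatencyTarget):

aws dms describe-replication-tasks \
  --query "ReplicationTasks[].[ReplicationTaskIdentifier,Status]" \
  --output table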

8️⃣ Day 3 Preview

Next, we go hands-on:

  • Create RDS PostgreSQL (production-minded configuration)
  • Enable WAL settings needed for CDC
  • Create DMS replication instance
  • Configure source & target endpoints
  • Start Full Load + CDC and validate data flow

Tags: #AWS #CloudBridge #RDS #PostgreSQL #DMS #CDC #Redshift #RealTimeAnalytics #DataEngineering


📘 Project CloudBridge – Series Navigation


Thursday, February 12, 2026

Project CloudBridge – Day 1: Why Enterprises Separate OLTP and Analytics Using RDS PostgreSQL, AWS DMS, and Amazon Redshift

Project CloudBridge: Enterprise Real-Time Data Integration on AWS

A Hands-On Learning Series Designing Enterprise-Grade Real-Time Data Pipelines Using AWS Database Services

Series Overview: This is a structured hands-on learning and implementation series focused on designing enterprise real-time data integration pipelines using AWS database technologies including Amazon RDS PostgreSQL, AWS Database Migration Service (DMS), and Amazon Redshift.

Series Goal: To build industry-level architecture knowledge, hands-on cloud database integration skills, and real-world production design understanding.


Project CloudBridge – Series Roadmap

  • Day 1 – Why Enterprises Separate OLTP and Analytics
  • Day 2 – Industry Case Study: Real-Time Analytics Pipeline
  • Day 3 – Configuring RDS PostgreSQL for CDC
  • Day 4 – Amazon Redshift Fundamentals
  • Day 5 – AWS DMS Deep Dive
  • Day 6 – End-to-End Pipeline Implementation
  • Day 7 – Data Validation Strategies
  • Day 8 – Performance Optimization
  • Day 9 – Security and Compliance
  • Day 10 – Production Runbook and Lessons Learned

Day 1: Why Enterprises Separate OLTP and Analytics — Building Real-Time Data Pipelines Using RDS PostgreSQL, AWS DMS, and Redshift

Description: In this Day 1 article of Project CloudBridge, we’ll understand why modern enterprises separate transactional (OLTP) workloads from analytics (OLAP) workloads—and how Amazon RDS PostgreSQL, AWS DMS, and Amazon Redshift work together to deliver near real-time reporting without impacting production performance.


Introduction

In modern enterprise environments, databases are no longer used only for storing application data. They also power dashboards, compliance reporting, fraud detection, and analytics platforms.

One of the biggest architectural mistakes organizations make is running heavy reporting workloads directly on production transactional databases. It may work in early stages, but over time it leads to performance degradation, user complaints, and scalability challenges.

Enterprises solve this by separating transactional workloads from analytics workloads. In this article, we will explore why this separation is critical and how AWS services like Amazon RDS PostgreSQL, AWS Database Migration Service (DMS), and Amazon Redshift enable this architecture.


1) OLTP vs OLAP — The Core Concept

Enterprise data platforms typically support two very different workload types:

OLTP (Online Transaction Processing)

OLTP systems handle day-to-day business transactions such as claims, payments, orders, billing, and user activity.

  • High number of small, frequent transactions
  • Fast response time is critical
  • Mostly INSERT/UPDATE operations
  • Strong consistency and concurrency

OLAP (Online Analytical Processing)

OLAP systems support reporting, dashboards, trends, and decision-making analytics.

  • Large data scans and aggregations
  • Complex joins
  • Historical trend analysis
  • High concurrency for business users and BI tools

2) Why Running Analytics on Production Databases is Risky

Running heavy reporting directly on production OLTP databases creates real operational risks:

  • Performance impact: Analytics queries can scan large tables and consume CPU, memory, and I/O needed for production transactions.
  • Lock contention: Long-running queries can create contention that slows business-critical operations.
  • Scalability limits: OLTP databases are optimized for transactions, not large-scale analytics processing.
  • Availability risk: Reporting spikes can contribute to slowdowns and outages during peak business hours.

3) The Enterprise Pattern: Workload Separation

To solve this, enterprises adopt a proven pattern:

  • OLTP database remains dedicated to the application workload.
  • Analytics warehouse handles reporting and insights at scale.
  • Replication/CDC pipeline keeps analytics data updated with minimal impact on production.

4) Where AWS Services Fit In

Amazon RDS PostgreSQL (OLTP)

Amazon RDS PostgreSQL is a strong OLTP platform because it offers managed operations, backups, and high-availability options. It is ideal for application transactions; however, it is not the best place for heavy analytics.

Amazon Redshift (OLAP)

Amazon Redshift is a cloud-native data warehouse designed for analytics workloads. With columnar storage and massively parallel processing (MPP), it is well-suited for complex queries at scale.

AWS Database Migration Service (DMS) — The Bridge

AWS DMS helps keep analytics systems updated by enabling:

  • Full Load: Move historical data initially
  • CDC (Change Data Capture): Continuously replicate ongoing changes
  • Near real-time analytics: Keep Redshift updated without overloading production
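
As a concrete sketch of the bridge (the ARNs and the table-mapping file are placeholders; source/target endpoints and a replication instance must already exist):

aws dms create-replication-task \
  --replication-task-identifier oltp-to-redshift \
  --source-endpoint-arn <source-endpoint-arn> \
  --target-endpoint-arn <target-endpoint-arn> \
  --replication-instance-arn <replication-instance-arn> \
  --migration-type full-load-and-cdc \
  --table-mappings file://table-mappings.json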

5) High-Level Architecture

Application Users
       |
       v
Amazon RDS PostgreSQL (OLTP)
       |
       v
AWS DMS (Full Load + CDC)
       |
       v
Amazon Redshift (Analytics / OLAP)
       |
       v
BI Dashboards / Reporting

6) Business Benefits

  • Better production performance by removing reporting load from OLTP
  • Near real-time dashboards powered by CDC replication
  • Scalable analytics without impacting application users
  • Improved compliance reporting and audit readiness

7) Real-World Adoption

This pattern is widely used across industries such as:

  • Healthcare (claims analytics, fraud detection)
  • Finance (risk analytics, compliance reporting)
  • Retail (customer behavior analytics, demand forecasting)
  • Telecom (billing analytics, usage reporting)

What’s Next (Day 2 Preview)

In Day 2, I will share an industry-level case study showing how an enterprise implements a real-time analytics pipeline using RDS PostgreSQL → AWS DMS → Redshift, including key design decisions and common challenges.


Project CloudBridge – Daily Enterprise Learning Series

Follow this series to learn how modern enterprises design scalable real-time data pipelines using AWS database technologies.

If you are working on cloud data modernization or AWS database integration, feel free to share your experiences or questions in the comments.


📘 Project CloudBridge – Series Navigation


Oracle Critical Patch Update (CPU) January 2026 — What Every DBA Must Know

 


Oracle has officially released the January 2026 Critical Patch Update (CPU), marking the first quarterly security release of the year. As database and Oracle E-Business Suite administrators, quarterly CPUs are not just routine maintenance — they are critical security milestones that protect enterprise environments from emerging cyber threats.

In this article, I will break down the January 2026 CPU from a DBA perspective, explain why it matters, and share practical guidance on how organizations should approach patching.


📌 What is Oracle Critical Patch Update (CPU)?

Oracle releases security patches quarterly in January, April, July, and October. These updates contain fixes for vulnerabilities across Oracle products including:

  • Oracle Database
  • Oracle E-Business Suite
  • Fusion Middleware
  • Java
  • MySQL
  • Enterprise Manager
  • Cloud Services and many more

These updates not only fix Oracle-specific vulnerabilities but also address third-party component risks embedded within Oracle products.


🚨 January 2026 CPU — Key Highlights

  • 337 Security Fixes Released
  • 158 Unique CVEs Addressed
  • Several vulnerabilities rated Critical and High Severity
  • Multiple vulnerabilities exploitable remotely without authentication

This clearly indicates the increasing complexity of enterprise security and the importance of maintaining regular patching cycles.


🔍 Why This CPU is Important for DBAs

From my experience working with large Oracle EBS and Database environments, one of the biggest risks organizations face is delayed patching. Attackers actively target known vulnerabilities soon after patch announcements.

The January 2026 CPU includes fixes for vulnerabilities such as:

  • Remote Code Execution Risks
  • Server Side Request Forgery (SSRF)
  • Privilege Escalation Vulnerabilities
  • Data Exposure Risks

Many of these vulnerabilities can be exploited without requiring database login credentials, which significantly increases the security risk.


🏢 Impact on Oracle E-Business Suite Environments

For Oracle EBS environments, CPUs usually involve:

  • Database Release Updates (DB RU)
  • OJVM Patch Updates
  • Technology Stack Updates
  • Middleware Security Fixes

DBAs managing EBS must carefully validate patch compatibility with application tiers, especially in environments running Online Patching.


🧪 Recommended DBA Patching Strategy

Step 1: Environment Assessment

  • Identify database versions
  • Check applied RU and OJVM levels
  • Review Oracle Support Patch Availability Documents (PAD)

Step 2: Pre-Patch Validation

  • Validate OPatch version
  • Verify database backups
  • Confirm standby / DR synchronization
  • Check application downtime window

Step 3: Patch Testing

  • Apply patch in lower environments first
  • Validate application functionality
  • Monitor database performance

Step 4: Production Deployment

  • Follow documented SOP
  • Apply RU + OJVM carefully
  • Run datapatch validation (see the sketch below)
  • Perform post patch health checks
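
A minimal post-deployment verification sketch (run from the patched Oracle home):

$ORACLE_HOME/OPatch/opatch lspatches

sqlplus / as sysdba <<EOF
select patch_id, patch_type, status,
       to_char(action_time,'DD-MON-YYYY HH24:MI') applied_on
from dba_registry_sqlpatch
order by action_time desc;
EOF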

⚠️ Common Risks if CPU is Ignored

  • Data breaches
  • System compromise
  • Compliance violations
  • Production outages
  • Potential ransomware attacks

Security patching is no longer optional — it is a core responsibility for DBAs and infrastructure teams.


📊 My Personal Recommendation

Based on industry trends and enterprise patching experience:

  • Always align CPU patching with quarterly maintenance cycles
  • Maintain detailed patch runbooks
  • Keep DR environment ready for fallback
  • Automate patch verification wherever possible

🔐 Final Thoughts

The January 2026 CPU highlights Oracle’s continued focus on strengthening enterprise security. With hundreds of vulnerabilities addressed, organizations must treat this update as a top operational priority.

For DBAs, CPUs are more than patching exercises — they represent proactive security defense and business continuity assurance.

Regular patching ensures not only compliance but also protects business-critical data and applications.


📅 Oracle CPU Release Cycle Reminder

  • January
  • April
  • July
  • October

✍️ About the Author

Punit is an Oracle E-Business Suite and Database Specialist with 20+ years of experience managing enterprise-scale Oracle environments, cloud migrations, performance tuning, and security patching strategies.


If you found this article useful, stay tuned for my upcoming detailed runbook on applying Oracle 19c RU and OJVM patches for EBS environments.

Reference: January 2026 Critical Patch Update.

Tuesday, February 10, 2026

Oracle Data Guard DR Activation: Opening a Standby Database in READ WRITE Mode


Opening a DR Database in READ WRITE Mode in Oracle

Complete Step-by-Step Guide with Warnings & DR Flow Diagram

In an Oracle Disaster Recovery (DR) setup using Data Guard, the standby database is designed to protect the business from outages—not to be casually opened in READ WRITE mode.

Opening a DR database in READ WRITE mode is a critical operation that effectively promotes the standby to a new primary. This blog explains when, why, and how to do it correctly, with required checks, warnings, and a clear DR activation flow.


Understanding the DR Standby Database

  • Primary database runs in READ WRITE
  • DR database runs as PHYSICAL STANDBY
  • Redo is shipped and applied continuously
  • Standby remains in MOUNT or READ ONLY mode

Note: A physical standby cannot be opened READ WRITE unless it is activated.

When Should DR Be Opened in READ WRITE Mode?

  • Actual primary database outage
  • Declared Disaster Recovery event
  • Approved failover / cutover
  • Business-approved DR drill (with rebuild planned)

DR Activation Flow Diagram (Primary → Standby → New Primary)

PRIMARY DATABASE (Production)
        |
        | Redo Transport
        v
PHYSICAL STANDBY (DR)
(MOUNT / READ ONLY)
        |
        | ALTER DATABASE ACTIVATE STANDBY DATABASE
        v
NEW PRIMARY DATABASE
(READ WRITE)

Step 1: Confirm You Are on the DR Database

Verify database role before making any changes:

SELECT name, open_mode, database_role FROM v$database;

Expected output:

DATABASE_ROLE = PHYSICAL STANDBY
OPEN_MODE     = MOUNTED or READ ONLY

Step 2: Verify DR Is Fully in Sync (BEFORE Activation)

Synchronization checks must be done before activation.

Check redo apply status

SELECT process, status FROM v$managed_standby;

Ensure MRP0 is applying redo.

Check transport and apply lag

SELECT name, value, unit
FROM v$dataguard_stats
WHERE name IN ('transport lag','apply lag');

Ideal result:

transport lag = 0 seconds
apply lag     = 0 seconds

Step 3: Stop Managed Recovery

Redo apply must be stopped explicitly:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

🚨 Critical Warning: Point-of-No-Return Command

⚠️ EXTREME CAUTION REQUIRED

ALTER DATABASE ACTIVATE STANDBY DATABASE;

This command permanently converts the standby database into a PRIMARY. After execution:

  • Data Guard is irreversibly broken
  • Redo shipping cannot resume
  • Synchronization checks no longer apply
  • Restore points cannot revert the role
  • Flashback cannot recreate a standby
  • A full standby rebuild is mandatory

There is NO rollback for this command. Execute only with business approval and confirmed DR conditions.

Step 4: Activate the Standby Database

ALTER DATABASE ACTIVATE STANDBY DATABASE;

Step 5: Restart the Database

SHUTDOWN IMMEDIATE;
STARTUP;

Step 6: Open the Database in READ WRITE Mode

ALTER DATABASE OPEN;

Verify final status:

SELECT name, open_mode, database_role FROM v$database;

Expected:

DATABASE_ROLE = PRIMARY
OPEN_MODE     = READ WRITE

Important Note: No Sync or Restore Point Checks After Activation

After activation, the database is no longer a standby. There is no primary–standby relationship, so synchronization queries are not meaningful. Restore points cannot restore Data Guard and Flashback cannot revert the role. To restore DR protection, you must rebuild a new standby database.
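
One common rebuild approach is RMAN active duplication, sketched below (this assumes Oracle Net connectivity between the sites, a prepared standby parameter file, and the auxiliary instance started NOMOUNT; the full procedure is environment-specific, and the service names are placeholders):

rman target sys@<new_primary> auxiliary sys@<new_standby>

RMAN> DUPLICATE TARGET DATABASE
        FOR STANDBY
        FROM ACTIVE DATABASE
        DORECOVER
        NOFILENAMECHECK;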

Post-Activation Validation Checks

SELECT instance_name, status FROM v$instance;
SELECT switchover_status FROM v$database;
SHOW PARAMETER db_unique_name;

Key Takeaways

  • Physical standby cannot open READ WRITE without activation
  • ACTIVATE STANDBY DATABASE is a one-way command
  • Complete all sync checks before activation
  • After DR usage, rebuild standby to restore DR protection

Final Thoughts

Opening a DR database in READ WRITE mode is not just a technical task—it’s a business decision. Treat DR activation as a controlled emergency, not a routine command.

Oracle EBS Database Patching on Linux x86-64: Applying CPU (DB RU) + OJVM (Oct 2025)

Oracle EBS Database Patching on Linux x86-64: Applying CPU (DB RU) + OJVM (Oct 2025) Like a Pro

If you manage an Oracle E-Business Suite (EBS) environment, one thing is non-negotiable: quarterly security patching. Oracle releases Critical Patch Updates (CPU) every quarter, and for the Database tier these typically arrive as Release Updates (RU). Along with the RU, Oracle also delivers OJVM (Oracle JavaVM) patches which address Java-related vulnerabilities and database JVM fixes.

In this post, I’m documenting a practical, DBA-friendly approach to applying Oct 2025 CPU patches for Oracle Database 19c on Linux x86-64 for an EBS system—keeping it simple, safe, and production-ready.


1) What we are patching and why

CPU / DB RU (Release Update)

The Database RU is Oracle’s quarterly cumulative patch that includes:

  • Security fixes (CVE fixes)
  • Critical defect fixes
  • Stability improvements

OJVM Patch

OJVM patches fix vulnerabilities and issues related to the Java VM inside the database. In many environments, DB RU and OJVM are applied together as part of the quarterly patch cycle.


2) Key patches for Oct 2025 (Oracle Database 19c)

From Oracle’s Patch Availability Document (Oct 2025 CPU PAD – DB-only), the main patch paths for 19c are:

Option A: Combo Patch (DB RU + OJVM together)

  • Combo RU + OJVM 19.29.0.0.251021 – Patch 38273545

This is often the easiest path because RU and OJVM are packaged together.

Option B: Apply DB RU and OJVM separately

  • DB RU 19.29.0.0.251021 – Patch 38291812
  • OJVM RU 19.29.0.0.251021 – Patch 38194382

This approach is common when DB RU is already applied and OJVM is pending, your change window is split, or you have sequencing constraints.

Optional but commonly required in many enterprises

  • OPatch requirement: OPatch 12.2.0.1.48+
  • JDK patch (if mandated): JDK8u471 Patch 38245243

3) My patching mindset for EBS systems

EBS is not just a database; it’s an application ecosystem. Even if we are patching only the Database home, the discipline remains the same:

  • Do conflict checks (avoid surprises)
  • Have a rollback plan
  • Take a reliable backup
  • Run datapatch cleanly
  • Validate in DBA_REGISTRY_SQLPATCH
  • Do a basic EBS smoke test (login + concurrent manager, if applicable)

4) Pre-checks before you touch anything

Check current DB & patch level

select * from v$version;

select patch_id, patch_type, action, status,
       to_char(action_time,'DD-MON-YYYY HH24:MI') action_time
from dba_registry_sqlpatch
order by action_time desc;

Check OPatch version

$ORACLE_HOME/OPatch/opatch version

Run conflict check (recommended)

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /path/to/unzipped_patch

Backup (minimum expectation)

  • RMAN backup / snapshot / cold backup as per your org standards (a minimal sketch follows this list)
  • Backup inventory metadata if required
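
For example, a minimal RMAN backup sketch (adapt to your standards; a snapshot or cold backup may be preferred in your environment):

rman target / <<EOF
backup database plus archivelog;
backup current controlfile;
EOF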

5) Patching Steps (DB RU + OJVM) — the clean runbook

Below is the cleanest standard process for most EBS environments.

Method 1: Combo Patch (DB RU + OJVM together)

Patch: 38273545

Step 1 — Stage the patch

mkdir -p /u01/stage/CPU_OCT2025_19c
cd /u01/stage/CPU_OCT2025_19c
unzip p38273545_190000_Linux-x86-64.zip

Step 2 — Shutdown the database

sqlplus / as sysdba <<EOF
shutdown immediate;
exit;
EOF

Step 3 — Apply patch using OPatch

cd /u01/stage/CPU_OCT2025_19c/<patch_dir>
$ORACLE_HOME/OPatch/opatch apply

Step 4 — Startup database

sqlplus / as sysdba <<EOF
startup;
exit;
EOF

Step 5 — Run datapatch

cd $ORACLE_HOME/OPatch
./datapatch -verbose

Step 6 — Validate success

select patch_id, patch_type, action, status,
       to_char(action_time,'DD-MON-YYYY HH24:MI') action_time
from dba_registry_sqlpatch
order by action_time desc;

Method 2: Apply DB RU and OJVM separately

Use this method if you are applying:

  • DB RU: 38291812
  • OJVM RU: 38194382

Phase A — Apply DB RU

  1. Shutdown DB
  2. OPatch apply DB RU
  3. Startup DB
  4. Run datapatch

Phase B — Apply OJVM RU

  1. Shutdown DB
  2. OPatch apply OJVM
  3. Startup DB
  4. Run datapatch again

Why datapatch twice?

Because each patch updates SQL components differently, and Oracle expects SQL patch registration after each change.


6) Post-patching validation checklist (what I always verify)

OPatch inventory

$ORACLE_HOME/OPatch/opatch lsinventory | egrep "38273545|38291812|38194382|19\.29"

SQL patch registry

select patch_id, patch_type, action, status,
       to_char(action_time,'DD-MON-YYYY HH24:MI') action_time
from dba_registry_sqlpatch
order by action_time desc;

Component status

select comp_id, comp_name, version, status
from dba_registry;

Optional but useful checks

  • Invalid objects count (before vs after; query below)
  • Listener status
  • Basic EBS login smoke test (if apps involved)
  • Concurrent manager sanity (if your change plan includes it)
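
For the invalid-objects comparison, capture a baseline count before patching and re-run the same query afterwards:

select count(*) invalid_count
from dba_objects
where status = 'INVALID';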

7) Common issues and quick DBA fixes

OPatch too old

Symptom: Patch fails at start
Fix: Upgrade OPatch to the required version first.

Conflict with one-off patches

Symptom: Conflict detected
Fix: Use MOS Conflict Checker and request merged patch if needed.

datapatch errors

Symptom: SQL patch not registered / failures
Fix: Review datapatch logs, ensure DB/PDB state is correct, and rerun datapatch after fixing root cause.


Conclusion

Applying CPU + OJVM patches on an EBS database is not hard—but it must be disciplined. For Oct 2025 on 19c Linux x86-64, you can either:

  • Go with the Combo patch for simplicity, or
  • Apply DB RU then OJVM separately for flexible scheduling

Either way, the real success factor is: conflict check + clean datapatch + solid validation.




Saturday, February 7, 2026

Where Can I Find EBS 12.2.15 Documentation?



Oracle E-Business Suite 12.2.15 is the latest release update pack (RUP) for E-Business Suite 12.2. It can be applied online — you do not need to take your EBS environment down to apply this update. Our online E-Business Suite Documentation Web Library always contains the latest versions of all of our guides, including our Installation Guides, Upgrade Guides, and Readme Notes.

The EBS 12.2.15 release update pack (RUP) is delivered on My Oracle Support as Patch 37182900. Instructions for downloading and applying this latest RUP on top of the EBS 12.2 codeline can be found here:

EBS 12.2.15

Key Highlights of EBS 12.2.15

1️⃣ Introduction of “What’s New” Home Experience

One of the most visible improvements in this release is the introduction of a centralized “What’s New” documentation hub.

This new framework helps both technical and functional users easily understand enhancements introduced in each release. The documentation is organized by product families and includes:

  • Detailed feature descriptions

  • Screenshots demonstrating new capabilities

  • Configuration and setup instructions

  • Practical usage recommendations

Previously, organizations had to rely heavily on Release Content Documents (RCDs) and Transfer of Information (TOI) presentations. The new approach significantly simplifies feature discovery and improves adoption.


2️⃣ Fully Cumulative Update

EBS 12.2.15 is a cumulative release, which means:

  • It includes all fixes and improvements from previous 12.2 updates

  • It bundles previously released one-off patches

  • It reduces patching complexity for customers catching up on maintenance

For organizations running older 12.2 releases, this significantly reduces the number of patches required to reach the latest supported level.


3️⃣ Online Patching Support

One of the biggest strengths of the EBS 12.2 architecture remains intact.

The 12.2.15 RUP can be applied using:

👉 Online Patching (ADOP)

This allows patching while the production system remains available to users, minimizing downtime and business disruption. This capability continues to be a major differentiator of EBS 12.2 compared to earlier versions.
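
At a high level, an online patching cycle with ADOP looks like the sketch below (illustrative only; always follow the 12.2.15 readme for the exact steps and prerequisites):

adop phase=prepare
adop phase=apply patches=37182900
adop phase=finalize
adop phase=cutover
adop phase=cleanup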


🔄 Upgrade Path to EBS 12.2.15

A common question customers ask is whether they need intermediate upgrades.

The answer is simple:

✔ Any existing EBS 12.2.x environment can directly apply the 12.2.15 RUP.

There is no need to apply intermediate release updates unless required for compatibility or internal testing requirements.


 

Oracle EBS Monitoring Analyzer – A Proactive Health Check Tool Every Apps DBA Should Use

Maintaining the health and stability of an Oracle E-Business Suite (EBS) environment requires continuous monitoring, proactive troubleshooting, and periodic validation of system configurations. In large enterprise environments where EBS supports critical business operations such as Financials, Supply Chain, HRMS, and Manufacturing, even minor configuration deviations can lead to performance degradation or functional failures.

During my recent EBS administration and support activities, I revisited one of the most powerful and underrated diagnostic utilities provided by Oracle Support — the Oracle EBS Monitoring Analyzer.

In this article, I will explain what the Monitoring Analyzer is, why it is essential for Apps DBAs and functional teams, and how to install, execute, and interpret its results effectively.


What is Oracle EBS Monitoring Analyzer?

The Monitoring Analyzer is a diagnostic health-check utility developed by Oracle Support. It is designed to analyze Oracle EBS environments and provide actionable insights into configuration settings, known issues, and best practice recommendations.

The analyzer works as a self-service script that:

  • Reviews EBS configuration parameters

  • Identifies known product and setup issues

  • Provides corrective action recommendations

  • Suggests best practice improvements

  • Helps Oracle Support Engineers during SR troubleshooting

One of the most important characteristics of this tool is that it is completely non-intrusive: it performs no data modification (no inserts, updates, or deletes) and only reads and reports configuration data. This makes it safe to run even in production environments.

Why Monitoring Analyzer is Important

In most real-world EBS environments, system issues are often caused by configuration drift, incomplete setups, or overlooked best practices rather than software defects.

The Monitoring Analyzer helps organizations move from reactive troubleshooting to proactive maintenance.

Key Advantages

  •  Early detection of configuration issues
  •  Improved environment stability
  • Faster root cause analysis
  • Simplified Oracle SR diagnostics
  • Preventive maintenance capability
  • Performance optimization recommendations

Target Audience

The Monitoring Analyzer is beneficial across both technical and functional teams.

Apps DBAs and System Administrators

  • Execute analyzer scripts

  • Validate environment configuration

  • Review performance and stability warnings

Functional Consultants and Business Analysts

  • Review module specific recommendations

  • Identify functional setup gaps

  • Validate business process configuration


Key Benefits of Monitoring Analyzer

The analyzer provides:

✔ Instant health-check reports
✔ Detailed HTML output for easy review
✔ Known issue identification
✔ Best practice guidance
✔ Oracle Support data collection assistance


Downloading the Latest Monitoring Analyzer

Oracle continuously updates analyzer scripts to incorporate newly identified issues and validation checks. Therefore, always ensure you are using the latest available version.

Example package:

mon_analyzer_200.18.zip

 Installing and Running Monitoring Analyzer

Monitoring Analyzer can be executed using two different approaches:

1️⃣ Running as a Concurrent Program
2️⃣ Running via SQL*Plus

Both methods are widely used depending on administrative requirements.


Method 1 – Running Monitoring Analyzer as Concurrent Request

This is the preferred approach when functional teams need access without requiring database credentials.


Step 1: Install Analyzer Package

Login as APPS user and execute:

sqlplus apps/<password>
SQL> @mon_analyzer.sql

This step creates the analyzer package in the EBS database.

This step must be repeated whenever a new analyzer version is downloaded.


Step 2: Register Concurrent Program

Upload the concurrent program definition using FNDLOAD utility.

FNDLOAD apps/<password> 0 Y UPLOAD \
  $FND_TOP/patch/115/import/afcpprog.lct \
  MONAZ.ldt CUSTOM_MODE=FORCE

This creates the concurrent program called Monitoring Analyzer.


Step 3: Assign Program to Responsibility

Navigate to:

System Administrator → Security → Responsibility → Define

Identify the request group associated with the responsibility and add:

Monitoring Analyzer

Save the configuration.


Step 4: Execute Analyzer

Navigate to:

Processes and Reports → Submit Request

Submit request:

Monitoring Analyzer

Ensure language setting is:

American English

Step 5: Review Output

Once the request completes:

  • Click View Output

  • Save the output locally as:

Web Page, HTML only

Conclusion

Oracle EBS Monitoring Analyzer is an extremely valuable diagnostic utility that enables proactive monitoring and preventive maintenance of Oracle E-Business Suite.

Reference

Oracle Support Documentation

Monitoring Analyzer – MOS Doc ID 2886645.1.