Saturday, May 9, 2026

MariaDB DBA Playbook: Architecture, Performance Tuning & Automation Scripts

📘 DBA Reference Guide · May 2026


A field-tested MariaDB playbook for Oracle, MySQL, and cloud DBAs covering architecture internals, essential commands, performance tuning, security hardening, backup strategy, and automation scripts.

Written by Punit Kumar — Oracle EBS DBA · Cloud DBA · AWS · AI Learner

⏱ 22 min read · MariaDB 10.11 / 11.x · Tags: DBA, MariaDB, Performance, Security
🎯 Why This Guide?

This guide is written from a real DBA's point of view. It is not just a command list — it is a practical field reference for understanding MariaDB architecture, operating production databases, tuning performance, securing access, and automating daily DBA tasks.

💡 DBA Perspective: If you are coming from an Oracle DBA, MySQL DBA, or cloud database background, MariaDB becomes easier when you understand three areas clearly: storage engines, InnoDB behavior, and replication/backup strategy.

⚖️ Oracle DBA View: Oracle vs MariaDB

For Oracle DBAs, MariaDB may look simple at first, but it has its own architecture, memory model, optimizer behavior, replication style, and storage engine flexibility. The table below helps map familiar Oracle concepts to MariaDB concepts.

Area | Oracle | MariaDB | DBA Note
Primary storage | Tablespaces / datafiles | InnoDB tablespaces / .ibd files | MariaDB often uses file-per-table with InnoDB.
Redo | Online redo logs | InnoDB redo log | Critical for crash recovery and write performance.
Undo / MVCC | Undo tablespace | Undo logs in InnoDB | Needed for consistent reads and rollback.
Optimizer | CBO with stats | Cost-based optimizer with table/index stats | EXPLAIN, ANALYZE, and index design are key.
Flashback style | Flashback Query / FDA | System-versioned tables | Useful for audit/history use cases.
HA / replication | Data Guard / GoldenGate / RAC | Binary log replication / GTID / Galera | Design depends on RPO/RTO and workload.
Backup | RMAN | mariabackup / mysqldump | Always test restore, not only backup success.
⚠️ Real DBA Insight Do not treat MariaDB as “just MySQL commands.” Production stability depends on the right engine, right indexing strategy, tested backup, secure grants, and well-designed replication.

🏛 MariaDB Architecture

MariaDB is a community-driven fork of MySQL offering a fully pluggable storage engine architecture, enhanced replication (Galera, GTID), and features like temporal tables, virtual columns, and JSON functions — all production-ready for OLTP and analytical workloads.

[Architecture diagram, summarized top to bottom]

  • Client Connection Layer: TCP/IP · Unix socket · named pipe · SSL/TLS · MaxScale proxy. Clients: applications, MySQL CLI, JDBC/ODBC, replication, MaxScale, DBeaver.
  • SQL Interface & Connection Management: thread cache · connection pool · authentication · privilege checks · character sets.
  • Query Processing Engine: parser (lexer → AST, syntax validation) · cost-based optimizer (plan and index selection) · query cache (result-set cache, MariaDB 10.1+) · execution engine (row operations, filesort / temp tables).
  • Pluggable Storage Engine Layer: InnoDB (ACID, MVCC, row locking) · Aria (crash-safe MyISAM replacement, temp tables) · MyRocks (LSM-tree / RocksDB, high write I/O) · MEMORY (in-RAM tables, no persistence) · ColumnStore (columnar OLAP, analytical queries) · Spider (sharding) · BLACKHOLE (replication relay) · SEQUENCE (number generation) · CONNECT (external data) · TokuDB (fractal tree) · CSV (plain CSV files).
  • Memory & Cache Layer: InnoDB buffer pool · key cache · table cache · sort buffer · join buffer.
  • Logging & Replication: binary log · redo log · undo log · GTID · Galera wsrep · semi-sync.
  • Physical Storage: .ibd files · .frm · .aria · .MYD / .MYI · binary logs · OS filesystem · raw disk, on SSD · NVMe · SAN · EBS (AWS) · Persistent Disk (GCP).

Storage Engine Comparison

InnoDB

Default engine. ACID-compliant, MVCC row-level locking, foreign keys, crash recovery. Best for OLTP.

Aria

Crash-safe replacement for MyISAM. Used internally for temp tables. Good for read-heavy workloads.

MyRocks

RocksDB-based LSM-tree engine. Dramatically reduces write amplification. Ideal for high-write workloads.

ColumnStore

Columnar storage for OLAP. Parallel query execution on billions of rows. Great for data warehousing.

MEMORY

All data in RAM. No persistence. Ideal for lookup tables and temporary caches.

Spider

Built-in sharding and horizontal partitioning. Federated access across multiple MariaDB nodes.

Essential DBA Commands

Connection & Server Info

Connection & Status (SQL)
-- Connect from command line
mariadb -u root -p -h 127.0.0.1 -P 3306

-- Server version and status
SELECT VERSION();
SHOW STATUS;
SHOW GLOBAL STATUS;
SHOW GLOBAL VARIABLES;
SHOW VARIABLES LIKE '%innodb%';
SHOW VARIABLES LIKE 'max_connections';

-- Current connections
SHOW FULL PROCESSLIST;
SELECT * FROM information_schema.PROCESSLIST
WHERE COMMAND != 'Sleep'
ORDER BY TIME DESC;

-- Kill a specific connection
KILL 12345;
KILL QUERY 12345;  -- kill only the query, keep connection

Database & Schema Management

DDL Operations (SQL)
-- Database operations
CREATE DATABASE myapp CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
SHOW DATABASES;
USE myapp;
DROP DATABASE IF EXISTS myapp;

-- Table operations
SHOW TABLES;
SHOW TABLE STATUS;
DESCRIBE employees;
SHOW CREATE TABLE employees\G
SHOW FULL COLUMNS FROM employees;

-- Table size info
SELECT
  table_name,
  ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb,
  table_rows,
  engine
FROM information_schema.TABLES
WHERE table_schema = 'myapp'
ORDER BY (data_length + index_length) DESC;

-- Online DDL (MariaDB 10.3+)
ALTER TABLE employees
  ADD COLUMN dept_id INT,
  ADD INDEX idx_dept (dept_id),
  ALGORITHM=INPLACE, LOCK=NONE;

User & Privilege Management

Security & Grants (SQL)
-- Create users
CREATE USER 'appuser'@'192.168.1.%' IDENTIFIED BY 'StrongP@ss2024!';
CREATE USER 'readonly'@'%' IDENTIFIED BY 'ReadOnly@123';

-- Grant privileges
GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO 'appuser'@'192.168.1.%';
GRANT SELECT ON myapp.* TO 'readonly'@'%';
CREATE USER 'repl'@'10.0.0.%' IDENTIFIED BY 'ReplPass!';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.%';
FLUSH PRIVILEGES;

-- Show users and grants
SELECT user, host, password_expired, account_locked
FROM mysql.user ORDER BY user;
SHOW GRANTS FOR 'appuser'@'192.168.1.%';

-- Revoke and drop
REVOKE ALL PRIVILEGES ON myapp.* FROM 'appuser'@'192.168.1.%';
DROP USER 'appuser'@'192.168.1.%';

-- Password policy (MariaDB 10.4+)
SET GLOBAL strict_password_validation = 1;
ALTER USER 'appuser'@'%' PASSWORD EXPIRE INTERVAL 90 DAY;

Replication Commands

Replication Management (SQL)
-- Check replication status (replica side)
SHOW REPLICA STATUS\G        -- MariaDB 10.5+
SHOW SLAVE STATUS\G          -- older syntax

-- Setup replica
CHANGE MASTER TO
  MASTER_HOST = '10.0.0.1',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'ReplPass!',
  MASTER_USE_GTID = slave_pos;
START REPLICA;

-- Primary side binlog info
SHOW MASTER STATUS\G
SHOW BINARY LOGS;
SHOW BINLOG EVENTS IN 'mariadb-bin.000010' LIMIT 20;

-- GTID-based operations
SELECT @@gtid_current_pos;
SELECT @@gtid_binlog_pos;

-- Skip one errant transaction
STOP REPLICA;
SET GLOBAL sql_slave_skip_counter = 1;
START REPLICA;

🔥 Performance Tuning

EXPLAIN & Query Analysis

Query Analysis (SQL)
-- Basic EXPLAIN
EXPLAIN SELECT * FROM orders WHERE customer_id = 1001;

-- Extended EXPLAIN (shows filtered %)
EXPLAIN EXTENDED SELECT * FROM orders WHERE customer_id = 1001;

-- JSON format (detailed optimizer info)
EXPLAIN FORMAT=JSON
SELECT o.id, c.name, SUM(oi.amount)
FROM orders o
JOIN customers c ON o.customer_id = c.id
JOIN order_items oi ON oi.order_id = o.id
WHERE o.created_at > '2025-01-01'
GROUP BY o.id;

-- Analyze (executes + returns real row counts)
ANALYZE SELECT * FROM orders WHERE status = 'pending';

-- Slow query log check
SHOW VARIABLES LIKE 'slow_query%';
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- seconds

-- Performance schema: top slow queries
SELECT
  DIGEST_TEXT,
  COUNT_STAR,
  ROUND(AVG_TIMER_WAIT/1000000000000, 3) AS avg_sec,
  ROUND(SUM_TIMER_WAIT/1000000000000, 3) AS total_sec
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;

Index Management

Index Operations (SQL)
-- Show indexes
SHOW INDEX FROM orders;
SELECT * FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA = 'myapp' AND TABLE_NAME = 'orders';

-- Create indexes
CREATE INDEX idx_orders_status_date ON orders (status, created_at);
CREATE UNIQUE INDEX idx_email ON customers (email);
CREATE FULLTEXT INDEX idx_ft_desc ON products (description);

-- Ignored indexes (test impact without dropping; MariaDB 10.6+)
ALTER TABLE orders ALTER INDEX idx_status IGNORED;
ALTER TABLE orders ALTER INDEX idx_status NOT IGNORED;

-- Find unused indexes (no seeks/scans)
SELECT object_schema, object_name, index_name
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE index_name IS NOT NULL
  AND count_star = 0
  AND object_schema NOT IN ('mysql', 'information_schema')
ORDER BY object_schema, object_name;

-- Update table statistics
ANALYZE TABLE orders;
OPTIMIZE TABLE orders;        -- reclaims space, rebuilds indexes
-- From the shell: mysqlcheck -u root -p --all-databases --analyze

InnoDB Buffer Pool & Memory

InnoDB Tuning (SQL)
-- Buffer pool hit ratio (should be > 99%)
SELECT
  ROUND(
    (1 - (
      (SELECT variable_value FROM information_schema.GLOBAL_STATUS WHERE variable_name = 'Innodb_buffer_pool_reads') /
      (SELECT variable_value FROM information_schema.GLOBAL_STATUS WHERE variable_name = 'Innodb_buffer_pool_read_requests')
    )) * 100, 2
  ) AS buffer_pool_hit_ratio;

-- InnoDB status
SHOW ENGINE INNODB STATUS\G

-- Check buffer pool usage
SELECT
  pool_id,
  pool_size,
  free_buffers,
  database_pages,
  ROUND(database_pages / pool_size * 100, 1) AS pct_used
FROM information_schema.INNODB_BUFFER_POOL_STATS;

-- Memory usage overview
SELECT * FROM sys.memory_global_total;
SELECT event_name, current_alloc, high_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 10;

Key Configuration Parameters

Parameter | Recommended Value | Description
innodb_buffer_pool_size | 70–80% of RAM | Most critical setting. Caches data and index pages.
innodb_buffer_pool_instances | 8–16 | Reduces contention on multi-core systems. Removed in MariaDB 10.5+, which uses a single buffer pool.
innodb_log_file_size | 1–4 GB | Redo log size. Larger = fewer checkpoints, faster writes.
innodb_flush_log_at_trx_commit | 1 (ACID) / 2 (perf) | 1 = full durability. 2 = 1s data loss risk, faster.
innodb_io_capacity | 200–2000 (SSD: 2000+) | InnoDB background I/O operations per second.
max_connections | 150–500 | Max simultaneous connections. Use connection pooling.
query_cache_type | 0 (OFF) | Query cache is deprecated; use ProxySQL/application cache.
tmp_table_size | 64M–256M | In-memory temp table size before disk spill.
thread_pool_size | CPU cores | MariaDB thread pool for high-concurrency workloads.
binlog_format | ROW | Most reliable for replication. MIXED for storage savings.
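The buffer-pool sizing guideline above (70–80% of RAM) is easy to sanity-check with a small shell helper. This is a minimal sketch; the function name `bp_size_gb` and the fixed 75% ratio are illustrative assumptions, not an official formula:

```shell
#!/bin/sh
# Suggest an innodb_buffer_pool_size of ~75% of total RAM, in whole GB.
# Illustrative helper only; always validate against the actual workload.
bp_size_gb() {
  total_gb=$1
  bp=$(( total_gb * 75 / 100 ))   # integer math: floor(75% of RAM)
  if [ "$bp" -lt 1 ]; then bp=1; fi
  echo "${bp}G"
}

bp_size_gb 16   # -> 12G
```

For a 16 GB server this suggests 12G, leaving headroom for per-connection buffers and the OS page cache.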
my.cnf — Production Template (CONFIG)
# /etc/mysql/mariadb.conf.d/99-custom.cnf
[mysqld]
# ── Engine ──────────────────────────────────────────
default_storage_engine     = InnoDB
innodb_buffer_pool_size    = 12G      # 75% of 16G RAM
# innodb_buffer_pool_instances = 8   # removed in MariaDB 10.5+ (single buffer pool)
innodb_log_file_size       = 2G
innodb_flush_log_at_trx_commit = 1
innodb_flush_method        = O_DIRECT
innodb_io_capacity         = 2000     # NVMe SSD
innodb_io_capacity_max     = 4000
innodb_read_io_threads     = 8
innodb_write_io_threads    = 8

# ── Connections ────────────────────────────────────
max_connections            = 300
thread_pool_size           = 16
thread_cache_size          = 64
wait_timeout               = 300
interactive_timeout        = 300

# ── Memory ─────────────────────────────────────────
tmp_table_size             = 128M
max_heap_table_size        = 128M
sort_buffer_size           = 4M
join_buffer_size           = 4M
read_buffer_size           = 2M
read_rnd_buffer_size       = 4M

# ── Logging ────────────────────────────────────────
slow_query_log             = 1
slow_query_log_file        = /var/log/mysql/slow.log
long_query_time            = 1
log_queries_not_using_indexes = 1
general_log                = 0       # enable only for debugging

# ── Replication ────────────────────────────────────
server_id                  = 1
log_bin                    = /var/log/mysql/mariadb-bin
binlog_format              = ROW
expire_logs_days           = 7
gtid_strict_mode           = 1
sync_binlog                = 1

# ── Character Set ──────────────────────────────────
character_set_server       = utf8mb4
collation_server           = utf8mb4_unicode_ci

💾 Backup & Recovery

mariabackup (Physical Backup)

mariabackup — Full & Incremental (BASH)
# Full backup
mariabackup --backup \
  --user=root \
  --password='yourpassword' \
  --target-dir=/backup/full/$(date +%Y%m%d)

# Prepare (apply redo logs)
mariabackup --prepare \
  --target-dir=/backup/full/20260509

# Incremental backup (after full)
mariabackup --backup \
  --user=root --password='yourpassword' \
  --target-dir=/backup/incr/$(date +%Y%m%d_%H) \
  --incremental-basedir=/backup/full/20260509

# Prepare incremental
mariabackup --prepare \
  --target-dir=/backup/full/20260509 \
  --incremental-dir=/backup/incr/20260509_02

# Restore
systemctl stop mariadb
rm -rf /var/lib/mysql/*
mariabackup --copy-back --target-dir=/backup/full/20260509
chown -R mysql:mysql /var/lib/mysql
systemctl start mariadb

mysqldump (Logical Backup)

Logical Backup (BASH)
# Full dump with GTID info
mysqldump -u root -p \
  --all-databases \
  --single-transaction \
  --flush-logs \
  --master-data=2 \
  --routines \
  --triggers \
  --events \
  --hex-blob \
  | gzip > /backup/full_$(date +%Y%m%d).sql.gz

# Single database
mysqldump -u root -p --single-transaction myapp \
  | gzip > /backup/myapp_$(date +%Y%m%d).sql.gz

# Restore
zcat /backup/myapp_20260509.sql.gz | mysql -u root -p myapp

# Dump only structure
mysqldump -u root -p --no-data myapp > myapp_schema.sql

# Dump only data (no create statements)
mysqldump -u root -p --no-create-info myapp > myapp_data.sql
💡 DBA Tip: Always use --single-transaction with InnoDB for a consistent snapshot without locking. Never use --lock-all-tables in production — it blocks all writes.
⚠️ Critical — Test Your Restores: A backup you've never restored is not a backup. Schedule monthly restore drills to a separate environment and verify row counts and application functionality.

🔐 Security Hardening

Security Audit Queries

Security Checks (SQL)
-- Users with no password
SELECT user, host FROM mysql.user
WHERE (authentication_string = '' OR authentication_string IS NULL)
  AND plugin NOT IN ('unix_socket', 'gssapi');

-- Users with global privileges (risky)
SELECT user, host, Super_priv, Grant_priv, File_priv, Process_priv
FROM mysql.user
WHERE Super_priv = 'Y' OR Grant_priv = 'Y';

-- Anonymous users
SELECT user, host FROM mysql.user WHERE user = '';
DELETE FROM mysql.user WHERE user = '';
FLUSH PRIVILEGES;

-- Wildcard host access
SELECT user, host FROM mysql.user WHERE host = '%';

-- Audit plugin status
SHOW PLUGINS;
SELECT * FROM information_schema.PLUGINS
WHERE PLUGIN_NAME LIKE '%audit%';

-- Enable MariaDB Audit Plugin
INSTALL SONAME 'server_audit';
SET GLOBAL server_audit_logging = 'ON';
SET GLOBAL server_audit_events = 'QUERY,CONNECT';
SET GLOBAL server_audit_file_path = '/var/log/mysql/audit.log';

SSL Configuration

SSL/TLS Setup (BASH)
# Generate self-signed CA (production: use real CA)
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3650 -key ca-key.pem > ca-cert.pem

# Server certificate
openssl req -newkey rsa:2048 -days 3650 \
  -nodes -keyout server-key.pem > server-req.pem
openssl x509 -req -in server-req.pem -days 3650 \
  -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial > server-cert.pem

# Add to my.cnf [mysqld]
# ssl-ca   = /etc/mysql/ssl/ca-cert.pem
# ssl-cert = /etc/mysql/ssl/server-cert.pem
# ssl-key  = /etc/mysql/ssl/server-key.pem
# require_secure_transport = ON

🛠 DBA Automation Scripts

Health Check Script

mariadb_health_check.sh (BASH)
#!/bin/bash
# MariaDB Health Check Script
# Usage: ./mariadb_health_check.sh

DB_USER="root"
DB_PASS="yourpassword"
DB_HOST="127.0.0.1"
LOG_FILE="/var/log/mariadb_health_$(date +%Y%m%d).log"
ALERT_EMAIL="dba@yourcompany.com"

MYSQL="mysql -u${DB_USER} -p${DB_PASS} -h${DB_HOST} -e"
ISSUES=0

log() { echo "[$(date '+%F %T')] $1" | tee -a $LOG_FILE; }

log "=== MariaDB Health Check Start ==="

# 1. Uptime
UPTIME=$($MYSQL "SHOW GLOBAL STATUS LIKE 'Uptime';" | awk 'NR==2{print $2}')
log "Uptime: ${UPTIME}s ($(( UPTIME / 86400 )) days)"

# 2. Connections (warn if > 80%)
MAX_CONN=$($MYSQL "SHOW VARIABLES LIKE 'max_connections';" | awk 'NR==2{print $2}')
CURR_CONN=$($MYSQL "SHOW GLOBAL STATUS LIKE 'Threads_connected';" | awk 'NR==2{print $2}')
PCT=$(( CURR_CONN * 100 / MAX_CONN ))
if [ $PCT -gt 80 ]; then
  log "WARN: Connections at ${PCT}% (${CURR_CONN}/${MAX_CONN})"
  ISSUES=$(( ISSUES + 1 ))
else
  log "OK: Connections ${CURR_CONN}/${MAX_CONN} (${PCT}%)"
fi

# 3. Buffer pool hit ratio
READS=$($MYSQL "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';" | awk 'NR==2{print $2}')
REQS=$($MYSQL "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';" | awk 'NR==2{print $2}')
HIT_RATIO=$(echo "scale=2; (1 - $READS / $REQS) * 100" | bc)
log "Buffer pool hit ratio: ${HIT_RATIO}%"

# 4. Replication lag
LAG=$($MYSQL "SHOW REPLICA STATUS\G" 2>/dev/null | grep "Seconds_Behind_Master:" | awk '{print $2}')
if [[ -n "$LAG" && "$LAG" != "NULL" && "$LAG" -gt 30 ]]; then
  log "WARN: Replication lag = ${LAG}s"
  ISSUES=$(( ISSUES + 1 ))
else
  log "OK: Replication lag = ${LAG:-N/A}s"
fi

# 5. Long running queries (> 60s)
LONG_QUERIES=$($MYSQL "SELECT COUNT(*) FROM information_schema.PROCESSLIST WHERE TIME > 60 AND COMMAND != 'Sleep';" | awk 'NR==2{print $1}')
if [ $LONG_QUERIES -gt 0 ]; then
  log "WARN: ${LONG_QUERIES} long-running queries detected"
  ISSUES=$(( ISSUES + 1 ))
fi

# 6. Summary
if [ $ISSUES -gt 0 ]; then
  log "ALERT: ${ISSUES} issue(s) found. Sending email..."
  mail -s "MariaDB Health Alert - $(hostname)" $ALERT_EMAIL < $LOG_FILE
else
  log "All checks passed."
fi
log "=== Health Check End ==="

Automated Backup Script

mariadb_backup.sh (BASH)
#!/bin/bash
# MariaDB Full + Incremental Backup with Retention

BACKUP_ROOT="/backup/mariadb"
FULL_DIR="${BACKUP_ROOT}/full"
INCR_DIR="${BACKUP_ROOT}/incr"
LOG="/var/log/mariadb_backup.log"
RETENTION_DAYS=7
DB_USER="root"
DB_PASS="yourpassword"
TODAY=$(date +%Y%m%d)
DOW=$(date +%u)  # 1=Mon, 7=Sun

mkdir -p $FULL_DIR $INCR_DIR

log() { echo "[$(date '+%F %T')] $*" | tee -a $LOG; }

if [ "$DOW" -eq 7 ]; then
  # Sunday: Full backup
  log "Starting FULL backup..."
  mariabackup --backup \
    --user=$DB_USER --password="${DB_PASS}" \
    --target-dir=${FULL_DIR}/$TODAY 2>>$LOG
  mariabackup --prepare --target-dir=${FULL_DIR}/$TODAY 2>>$LOG
  # Store the latest full dir path
  echo ${FULL_DIR}/$TODAY > /tmp/last_full_backup.txt
  log "Full backup complete."
else
  # Other days: incremental
  LAST_FULL=$(cat /tmp/last_full_backup.txt 2>/dev/null)
  if [ -z "$LAST_FULL" ]; then
    log "ERROR: no previous full backup recorded; run a full backup first."
    exit 1
  fi
  log "Starting INCREMENTAL backup based on ${LAST_FULL}..."
  mariabackup --backup \
    --user=$DB_USER --password="${DB_PASS}" \
    --target-dir=${INCR_DIR}/$TODAY \
    --incremental-basedir=$LAST_FULL 2>>$LOG
  log "Incremental backup complete."
fi

# Cleanup old backups
log "Cleaning up backups older than ${RETENTION_DAYS} days..."
find $FULL_DIR -maxdepth 1 -type d -mtime +$RETENTION_DAYS -exec rm -rf {} \;
find $INCR_DIR -maxdepth 1 -type d -mtime +$RETENTION_DAYS -exec rm -rf {} \;

log "Backup job finished. Size: $(du -sh ${BACKUP_ROOT} | cut -f1)"

Top Tables & Bloat Finder

Table Bloat & Statistics (SQL)
-- Top 20 largest tables across all databases
SELECT
  table_schema AS 'Database',
  table_name   AS 'Table',
  engine,
  FORMAT(table_rows, 0)                                   AS `rows`,
  ROUND(data_length  / 1024 / 1024, 2)                    AS data_mb,
  ROUND(index_length / 1024 / 1024, 2)                    AS index_mb,
  ROUND((data_length + index_length) / 1024 / 1024, 2)    AS total_mb,
  ROUND(data_free   / 1024 / 1024, 2)                    AS free_mb,
  ROUND(data_free / (data_length + 1) * 100, 1)           AS bloat_pct
FROM information_schema.TABLES
WHERE table_schema NOT IN ('information_schema', 'mysql',
                            'performance_schema', 'sys')
  AND table_type = 'BASE TABLE'
ORDER BY (data_length + index_length) DESC
LIMIT 20;

-- Tables needing OPTIMIZE (>20% bloat)
SELECT
  CONCAT(table_schema, '.', table_name) AS full_name,
  ROUND(data_free / 1024 / 1024, 1) AS wasted_mb
FROM information_schema.TABLES
WHERE data_free / (data_length + 1) > 0.2
  AND data_free > 100 * 1024 * 1024
  AND engine = 'InnoDB'
ORDER BY data_free DESC;

Galera Cluster Status

Galera / Cluster Monitoring (SQL)
-- Cluster membership and sync state
SHOW STATUS LIKE 'wsrep_%';

-- Quick health summary
SELECT
  variable_name,
  variable_value
FROM information_schema.GLOBAL_STATUS
WHERE variable_name IN (
  'wsrep_cluster_size',
  'wsrep_cluster_status',
  'wsrep_connected',
  'wsrep_local_state_comment',
  'wsrep_ready',
  'wsrep_flow_control_paused'
);

-- Flow control check (pause > 0.1 = bottleneck)
SHOW STATUS LIKE 'wsrep_flow_control_paused';
⚡ Performance Reminder: wsrep_flow_control_paused consistently above 0.1 indicates a lagging node is slowing the entire cluster. Check for long-running transactions, high write volume, or undersized nodes.

Useful Monitoring Queries (Daily DBA)

Daily Monitoring Dashboard (SQL)
-- Active transactions with lock waits
SELECT
  r.trx_id waiting_id,
  r.trx_mysql_thread_id waiting_thread,
  r.trx_query waiting_query,
  b.trx_id blocking_id,
  b.trx_mysql_thread_id blocking_thread,
  b.trx_query blocking_query
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id
JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id;

-- Open transactions (running > 60s)
SELECT
  trx_id, trx_started,
  TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS age_sec,
  trx_query, trx_rows_modified
FROM information_schema.INNODB_TRX
WHERE TIMESTAMPDIFF(SECOND, trx_started, NOW()) > 60
ORDER BY trx_started;

-- Deadlock info
SHOW ENGINE INNODB STATUS\G

-- Per-database size summary
SELECT
  table_schema,
  COUNT(*) AS tables,
  ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 3) AS total_gb
FROM information_schema.TABLES
WHERE table_schema NOT IN ('information_schema', 'mysql',
                            'performance_schema', 'sys')
GROUP BY table_schema
ORDER BY total_gb DESC;

-- sys schema: top wait events
SELECT * FROM sys.waits_global_by_latency LIMIT 10;

-- sys schema: queries causing full table scans
SELECT query, exec_count, total_latency, no_index_used_count
FROM sys.statements_with_full_table_scans
ORDER BY total_latency DESC
LIMIT 10;

MariaDB-Specific Features

System Versioned (Temporal) Tables

Temporal Tables (SQL)
-- Create a versioned table (audit history built-in)
CREATE TABLE employee_salary (
  id       INT PRIMARY KEY,
  emp_id   INT,
  salary   DECIMAL(10,2),
  dept     VARCHAR(50)
) WITH SYSTEM VERSIONING;

-- Query historical data (as of a point in time)
SELECT * FROM employee_salary
FOR SYSTEM_TIME AS OF '2025-01-01 00:00:00'
WHERE emp_id = 42;

-- Query a time range
SELECT * FROM employee_salary
FOR SYSTEM_TIME BETWEEN '2024-01-01' AND '2025-01-01'
WHERE emp_id = 42;

-- Full history
SELECT * FROM employee_salary FOR SYSTEM_TIME ALL
WHERE emp_id = 42
ORDER BY row_start;

-- Dynamic columns (JSON-like, MariaDB native)
CREATE TABLE inventory (
  id   INT PRIMARY KEY,
  name VARCHAR(100),
  attrs BLOB
);
INSERT INTO inventory VALUES (1, 'Widget',
  COLUMN_CREATE('color', 'blue', 'weight_kg', 1.5));
SELECT COLUMN_GET(attrs, 'color' AS CHAR) FROM inventory;
💡 Oracle DBA Note: System-versioned tables are MariaDB's equivalent of Oracle Flashback Data Archive (FDA/Total Recall). Unlike Oracle, no extra licensing is required — it's built into every MariaDB installation.

🏁 Final Thoughts

MariaDB is not only a MySQL alternative. It is a serious database platform for OLTP, analytics, cloud workloads, and enterprise applications when it is configured and operated correctly.

As a DBA, always focus on four pillars:

Performance

Use the right indexes, review execution plans, size InnoDB memory correctly, and monitor slow queries.

Stability

Control connections, manage logs, tune I/O, and avoid risky production changes without testing.

Security

Use least privilege, remove anonymous users, avoid wildcard access, enforce SSL/TLS, and enable auditing.

Automation

Automate health checks, backups, replication checks, and daily monitoring to reduce manual errors.

💬 Connect With Me: If you found this guide useful, connect with me and follow my DBA learning journey for more Oracle, MariaDB, AWS, AI, and real-world database engineering content.

Friday, May 8, 2026

Understanding the Oracle Optimizer Glossary – A DBA’s Quick Reference Guide


As Oracle DBAs, we regularly work with execution plans, SQL tuning, statistics, cardinality estimates, histograms, adaptive plans, and optimizer hints. Oracle’s Optimizer is a very powerful component, but its terminology can sometimes become confusing for DBAs, developers, and performance engineers.

Oracle has published a very useful Oracle Optimizer Glossary, which acts as a quick reference for understanding important optimizer-related terms and concepts.


What is the Oracle Optimizer?

The Oracle Optimizer is the decision-making engine that determines how a SQL statement should be executed. Whenever a SQL query is submitted, the optimizer evaluates multiple possible execution paths and chooses the most efficient execution plan.

The optimizer considers factors such as:

  • Table and index statistics
  • Data distribution
  • Available indexes
  • Join methods
  • Partitioning
  • System resources
  • Estimated cost

In simple words, the optimizer tries to answer this question:

What is the best and most efficient way to execute this SQL statement?

Why This Glossary is Important for DBAs

For Oracle DBAs, understanding optimizer terminology is very important during performance troubleshooting. Many SQL performance issues are directly related to optimizer decisions.

This glossary is useful during:

  • SQL tuning
  • AWR and ASH analysis
  • Execution plan review
  • Database upgrades
  • Release Update patching
  • Oracle EBS performance troubleshooting
  • SQL regression analysis
  • Statistics refresh validation

Key Optimizer Concepts Every DBA Should Know

1. Cardinality

Cardinality means the estimated number of rows returned by a SQL operation. If the optimizer estimates the wrong number of rows, it may choose a poor execution plan.

Wrong cardinality estimates can cause:

  • Wrong join order
  • Unnecessary full table scans
  • Wrong index usage
  • Slow SQL performance

SELECT *
FROM orders
WHERE customer_id = 100;

If statistics are stale or incorrect, Oracle may estimate the wrong number of rows for this query.

2. Cost-Based Optimizer

Oracle uses the Cost-Based Optimizer, also known as CBO. The optimizer compares different execution plans and chooses the plan with the lowest estimated cost.

The cost is calculated based on statistics, CPU usage, I/O usage, selectivity, and other internal factors.

3. Histograms

Histograms help Oracle understand data skew. If a column has uneven data distribution, histograms can help the optimizer estimate rows more accurately.

Histograms are useful when:

  • Data is highly skewed
  • Some values appear much more frequently than others
  • SQL performance changes based on bind values

4. Execution Plan

An execution plan shows the steps Oracle will follow to execute a SQL statement.

Operation | Meaning
TABLE ACCESS FULL | Oracle reads the complete table
INDEX RANGE SCAN | Oracle scans a range of values from an index
HASH JOIN | Oracle joins tables using a hash-based method
NESTED LOOPS | Oracle joins rows using loop-based processing
SORT ORDER BY | Oracle sorts the result set

5. SQL Plan Management

SQL Plan Management helps control and stabilize SQL execution plans. It is very useful when you want to prevent performance regression after patching, upgrades, or statistics changes.

SQL Plan Management can help during:

  • Database upgrades
  • Release Update patching
  • Oracle EBS CPU patching
  • Statistics refresh
  • Execution plan instability

6. Adaptive Query Optimization

Adaptive Query Optimization allows Oracle to adjust execution strategies based on runtime information. This helps Oracle improve execution when the original estimates are not accurate.

Examples include:

  • Adaptive plans
  • Statistics feedback
  • Dynamic re-optimization
  • Adaptive cursor sharing

Why This Matters in Real DBA Life

In real production environments, SQL performance problems are often caused by optimizer-related issues. A query may run fine today and suddenly become slow after:

  • Statistics gathering
  • Patch application
  • Data growth
  • Database upgrade
  • Change in bind values
  • Index changes

That is why DBAs must understand optimizer concepts clearly.


Oracle EBS DBA Perspective

For Oracle E-Business Suite DBAs, optimizer knowledge is extremely important. EBS environments have large schemas, complex SQL, concurrent programs, custom reports, and heavy batch processing.

Optimizer knowledge helps in:

  • Troubleshooting long-running concurrent requests
  • Analyzing custom SQL performance
  • Reviewing execution plans
  • Managing EBS statistics
  • Validating performance after ADOP cycles
  • Checking SQL behavior after CPU/RU patching

My DBA Takeaway

The Oracle Optimizer is only as good as the information it receives. If statistics are missing, stale, or misleading, the optimizer may choose a bad execution plan.

As DBAs, we should regularly focus on:

  • Accurate object statistics
  • Proper histogram management
  • Execution plan comparison
  • SQL Plan Baselines where required
  • AWR and ASH review
  • Testing SQL performance after patching and upgrades

Final Thoughts

The Oracle Optimizer Glossary is a very useful learning reference for Oracle DBAs, SQL developers, performance engineers, and EBS administrators.

Understanding optimizer terminology helps us troubleshoot SQL performance issues more confidently and communicate better during production incidents, tuning discussions, and upgrade planning.

For any DBA working on Oracle 19c, 23ai, Oracle AI Database 26ai, or Oracle E-Business Suite, this glossary is worth bookmarking.


Sunday, May 3, 2026

CPU vs OCPU vs ECPU: The Simple Oracle Cloud Sizing Guide Every DBA Should Know

CPU, OCPU and ECPU Explained in Simple Words


If you are moving Oracle Database from on-premises to Oracle Cloud Infrastructure, this is one concept you must understand clearly.

During Oracle Database cloud migration, many teams get confused with three words: CPU, OCPU and ECPU.

On-premises servers normally talk in terms of CPU cores. Oracle Cloud Infrastructure uses OCPU for many database services and ECPU for Autonomous Database. If we do not understand the difference, we may oversize the cloud database, increase cost, or undersize it and create performance issues.

1. The Golden Rule to Remember

1 Physical CPU Core = 1 OCPU = 2 vCPUs = 2 ECPUs

This is the most important line to remember. In simple words:

  • CPU Core is the on-premises language.
  • OCPU is the normal OCI database compute language.
  • ECPU is mostly used with Autonomous Database.
  • 1 OCPU = 2 ECPUs.

Example: 10 physical cores = 10 OCPUs = 20 ECPUs
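The golden rule is mechanical enough to script. The sketch below simply encodes the article's mapping (1 core = 1 OCPU = 2 ECPUs); the function names are made up for illustration:

```shell
#!/bin/sh
# Encode the rule: 1 physical core = 1 OCPU = 2 vCPUs = 2 ECPUs.
cores_to_ocpus() { echo "$1"; }             # 1 core -> 1 OCPU
cores_to_ecpus() { echo $(( $1 * 2 )); }    # 1 core -> 2 ECPUs

cores_to_ocpus 10   # -> 10
cores_to_ecpus 10   # -> 20
```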

2. CPU vs OCPU vs ECPU

Term | Where Used | Simple Meaning | Easy Memory
CPU | On-premises server | Physical processor core in your server | Traditional server world
OCPU | OCI Base DB, ExaCS, ExaCC | Oracle Cloud Processing Unit | 1 OCPU = 1 physical core
ECPU | Autonomous Database | Elastic CPU | 1 ECPU = half an OCPU

3. Do Not Copy On-Prem CPU Count Blindly

Common mistake: “We have 10 CPU cores on-premises, so let us provision 10 OCPUs in OCI.”

This approach can be costly. Your on-premises server may have 10 cores, but it may not be using all 10 cores all the time. In many production databases, CPU usage may be around 50% to 70% during normal workload, with higher peaks during batch jobs, month-end processing, or reporting.

The better approach is to check real workload data from AWR and ASH reports. Size based on actual peak CPU usage, not just the number of cores installed on the physical server.

4. Simple Sizing Formula

Use this easy formula:

OCPUs Needed = (Peak CPU Usage × Physical Cores) × (1 + 15% to 20% Safety Buffer)

Example

Suppose your on-premises database server has:

  • 10 physical CPU cores
  • Peak CPU usage around 70%
  • You want to add 20% safety buffer
0.70 × 10 = 7 OCPUs
Add 20% buffer: 7 × 1.20 = 8.4
Round up: 9 OCPUs

So instead of blindly provisioning 10 OCPUs, you may start with around 9 OCPUs, validate workload performance, and then adjust if needed.
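The worked example above can be scripted so the same formula is reusable for any server. A minimal sketch in shell, staying in integer arithmetic by scaling percentages; the input values are the example's, not universal:

```shell
# Sizing sketch: OCPUs = ceil( (peak_pct/100) * cores * (1 + buffer_pct/100) )
cores=10          # physical cores (example value)
peak_pct=70       # peak CPU usage from AWR, in percent
buffer_pct=20     # safety buffer, in percent

# Scale by 10000 so the whole calculation stays in integers
scaled=$(( cores * peak_pct * (100 + buffer_pct) ))   # 10 * 70 * 120 = 84000
ocpus=$(( (scaled + 9999) / 10000 ))                  # ceiling division -> 9
ecpus=$(( ocpus * 2 ))                                # ECPU count for Autonomous DB
echo "Start with ${ocpus} OCPUs (${ecpus} ECPUs for ADB)"
# prints: Start with 9 OCPUs (18 ECPUs for ADB)
```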

5. What About ECPU?

If you are moving to Autonomous Database, Oracle uses ECPU. Since 1 OCPU = 2 ECPUs, you simply double the OCPU number.

9 OCPUs = 18 ECPUs

Autonomous Database also supports auto-scaling. This means it can automatically add more ECPUs during workload spikes and scale down when the load reduces.

For ADB: Start with the right base ECPU count, keep auto-scaling enabled, monitor usage, and then right-size.

6. Which OCI Database Service Uses Which Unit?

OCI Database Service | Compute Unit | Best For
Base Database VM | OCPU | Lift-and-shift migrations, full DBA control, predictable workloads
Exadata Cloud Service | OCPU | Large, mission-critical, high-performance databases
Exadata Cloud@Customer | OCPU | Customers who need Exadata performance but must keep data in their data center
Autonomous Database | ECPU | Managed database, auto-scaling, less administration, variable workloads

7. Why More OCPUs or ECPUs Improve Performance

Think of CPU like workers in a kitchen. More workers can handle more orders in parallel. Similarly, more OCPUs or ECPUs allow Oracle Database to process more work at the same time.

  • Faster queries: More CPU can help large queries run faster using parallel processing.
  • More concurrent users: More sessions can be served without waiting for CPU.
  • Shorter batch window: Nightly jobs, reports, and month-end jobs can finish faster.
  • Better workload handling: More compute gives better room during peak business hours.

8. Exadata Point to Remember

Exadata is different from normal servers because of features like Smart Scan. Smart Scan can offload some work to Exadata storage cells, which means the database server CPU does not have to do all the heavy lifting.

On ExaCS or ExaCC, you may sometimes need fewer OCPUs than a normal server because Exadata architecture can process Oracle workloads more efficiently.

9. Important Checks Before Final Sizing

CPU is only one part of database sizing. Before finalizing your OCI configuration, check the complete picture.

  1. Collect 90 days of AWR/ASH data to understand real peak usage.
  2. Check RAM requirement because CPU alone cannot solve memory pressure.
  3. Check storage IOPS and throughput because many databases are I/O-bound, not CPU-bound.
  4. Understand batch windows such as month-end, payroll, reporting, and large data loads.
  5. Validate Oracle licensing because OCPU count can directly impact license cost.
  6. Add 15% to 20% buffer for growth and unexpected peaks.
  7. Test with real workload before production cutover.

10. Quick Cheat Sheet

On-Premises Situation | OCI Sizing Thought Process | Recommended Check
10 physical cores | Start by mapping to 10 OCPUs or 20 ECPUs | Then reduce or adjust based on AWR peak usage
70% peak CPU usage | 10 × 70% = 7 OCPUs | Add 15% to 20% buffer
Steady workload | OCPU-based service may be suitable | Base DB VM, ExaCS, or ExaCC
Variable or bursty workload | ECPU-based ADB may be suitable | Enable auto-scaling and monitor
Mission-critical workload | Consider Exadata options | Evaluate ExaCS or ExaCC

11. Key Takeaways

  • 1 Physical Core = 1 OCPU = 2 ECPUs = 2 vCPUs.
  • CPU is on-premises language.
  • OCPU is used for OCI Base DB, ExaCS, and ExaCC.
  • ECPU is used for Autonomous Database.
  • Do not copy on-prem CPU count blindly.
  • Use AWR/ASH reports to check actual peak CPU usage.
  • Always add 15% to 20% buffer.
  • Check CPU, memory, storage, IOPS, licensing, and real workload testing.
  • Right-sizing can save cost and avoid performance problems.
Easy memory line: CPU is on-prem, OCPU is OCI, ECPU is Autonomous DB. 1 OCPU = 2 ECPUs. Always size from AWR peak usage plus 20% buffer.

Conclusion

CPU, OCPU and ECPU look confusing at first, but the concept becomes simple once we remember the conversion rule. The biggest mistake during cloud migration is copying on-premises CPU count directly into OCI sizing.

The right approach is to collect AWR/ASH data, understand real peak usage, add a practical safety buffer, validate licensing, and test with real workload before production cutover.

For DBAs and cloud architects, this understanding is very important because it directly impacts performance, Oracle licensing, and monthly cloud cost.

Wednesday, April 29, 2026

Oracle Database 19.31 Release Update – What DBAs Need to Know


Oracle Database 19.31 Release Update – What's Inside and What DBAs Need to Know

RU 19.31 (19.31.0.0.260421) is the April 2026 quarterly patch for Oracle Database 19c. It bundles the April 2026 CPU security fixes, cumulative bug fixes, and OJVM updates. Here is everything you need before your maintenance window.

⚠ Availability Notice RU 19.31 was delayed from its original April 15 release date. The revised Linux ETA was April 28–May 1, 2026. Other platforms (Windows, AIX, Solaris) target May 12, 2026. The OJVM patch follows 2 days after the RU. Monitor MOS KB106822 (Note 888.1) for the current confirmed availability date before starting downloads.

1. What Is RU 19.31?

Oracle Database Release Update 19.31.0.0.260421 is the quarterly cumulative patch for Oracle Database 19c, released in the April 2026 CPU cycle. Like all RUs, it is cumulative — applying 19.31 automatically includes every fix from 19.1 through 19.30.

It covers the Oracle Database RDBMS home. For Grid Infrastructure environments (RAC, Restart), a separate GI RU must be applied to the Grid home. OJVM is also a separate patch applied on top of the RU. Plan your maintenance window to account for all three components if relevant to your stack.

🔒 RDBMS RU

19.31.0.0.260421 — Database Oracle Home. Core engine, optimizer, SQL, PL/SQL, security fixes.

⚡ OJVM RU

Separate patch. Apply after RU. Required if Java VM is installed and in use in your database.

🔌 GI RU

Grid Infrastructure home. Apply separately for RAC and Oracle Restart environments.

2. April 2026 CPU Security Fixes Bundled in 19.31

The April 2026 CPU contains 481 new security patches across 28 Oracle product families. For Oracle Database Server specifically, there are 8 new security patches, of which 4 are remotely exploitable without authentication. These are the key CVEs relevant to Oracle Database 19c environments:

CVE | Component | CVSS | Remote / No Auth | Attack Vector
CVE-2026-33870 | Clusterware / Micronaut | 7.5 | Yes | Network / HTTP
CVE-2026-35229 | Java VM (OJVM) | 7.5 | Yes | Oracle Net
CVE-2026-31790 | RDBMS / OpenSSL | 7.2 | Yes | Confidentiality, Integrity, Availability
CVE-2026-26007 | RDBMS / Python component | 6.5 | Yes | Network
CVE-2026-21999 | XML Database (XDB) | 5.3 | No | HTTPS / XDB
CVE-2026-34312 | RDBMS Row Access Method | 2.4 | No | Local / Privileges required
🚫 DBA Action Required CVE-2026-33870 (Clusterware/Micronaut) and CVE-2026-35229 (OJVM) are both CVSS 7.5 and remotely exploitable without authentication. If you run RAC or have OJVM installed, treat these as priority. Do not skip the OJVM RU.

3. Key Fix Areas in RU 19.31

Beyond the CPU security layer, Oracle RUs include cumulative bug fixes across all major database components. Based on the patch cycle and known areas, RU 19.31 covers fixes in:

Optimizer & SQL

Plan stability, adaptive plan regressions, cardinality estimates, hints handling.

Data Pump (expdp/impdp)

Proactive recommended patches included. Review KB107134 for your environment.

RAC & Clusterware

Micronaut component CVE fixes. Instance eviction, interconnect, and GRD improvements.

Data Guard

Redo apply stability, broker fixes, switchover/failover edge cases.

PL/SQL & Java VM

OJVM security fix (separate patch). PL/SQL compiler edge case corrections.

OpenSSL / Crypto

CVE-2026-31790 — OpenSSL fix for confidentiality and integrity exposure in RDBMS home.

XML Database (XDB)

CVE-2026-21999 fix. Relevant if HTTPS / XDB is active in your environment.

FIPS 140-3 Readiness

19.31 is the last RU before 19.32 removes 3DES in FIPS-compliant mode. Review now.

⚠ FIPS 140-3 Warning Oracle is transitioning from FIPS 140-2 to FIPS 140-3. Support for 3DES in FIPS-compliant databases is scheduled for removal in RU 19.32. If your environment uses 3DES with FIPS mode, plan your migration to AES before the next quarterly cycle.

4. Release Availability Timeline

April 15, 2026 — Original expected release date. Patch not posted. No official reason given.
April 28, 2026 — Revised Linux ETA in KB106822. Further slip reported in community.
May 1, 2026 — Further revised ETA (community-reported). Monitor KB106822 for confirmation.
April 30 / May 1, 2026 — OJVM RU 19.31 (2 days after RU Linux release, projected).
May 12, 2026 — Windows, AIX, Solaris, HP-UX, and other platform ports.

5. Pre-Patch Preparation Checklist

Use this time before the patch is available to complete all pre-work so your maintenance window is clean.

  • Validate OPatch version

    OPatch must be 12.2.0.1.41 or higher for 19.31. Download the latest from MOS before patching begins.

  • Capture current inventory

    Run opatch lsinventory and opatch lspatches for ORACLE_HOME and GRID_HOME. Store as baseline.

  • Check OJVM presence

    Query DBA_REGISTRY for JAVAVM. If present — even if unused — apply OJVM RU. CVE-2026-35229 applies.

  • Run conflict check

    Use opatch prereq CheckConflictAgainstOHWithDetail against the staged patch directory before applying.

  • Perform full backup

    RMAN backup of database and archivelogs. Backup Oracle Home and Grid Home. Create guaranteed restore point if applicable.

  • Validate Data Guard status

    Confirm standby is in sync. Check lag, apply lag, and DG broker status before maintenance window.

  • Test on lower environment first

    Apply and validate on DEV / SIT / UAT before production. Run full regression on critical workloads.
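The OPatch minimum-version check in the first checklist item is easy to automate. A sketch using `sort -V` for dotted-version comparison; the `current` value here is a placeholder, and in real use you would parse it from `$ORACLE_HOME/OPatch/opatch version`:

```shell
# Check an OPatch version string against the 19.31 minimum (12.2.0.1.41).
# current >= required exactly when the required version sorts first (or equal).
required="12.2.0.1.41"
current="12.2.0.1.37"   # placeholder; parse from: $ORACLE_HOME/OPatch/opatch version
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n 1)" = "$required" ]; then
  status="OK"
else
  status="UPGRADE OPATCH"
fi
echo "OPatch ${current}: ${status}"
# prints: OPatch 12.2.0.1.37: UPGRADE OPATCH
```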

6. Verification SQL — Run Before and After Patching

Check Current Version and Components

SQL – Pre-patch inventory
-- Database version
SELECT banner_full FROM v$version;

-- Installed components and status
SELECT comp_id, comp_name, version, status
FROM   dba_registry
ORDER BY comp_id;

-- Check OJVM specifically
SELECT comp_id, comp_name, version, status
FROM   dba_registry
WHERE  comp_id = 'JAVAVM';

-- Patch history
SELECT patch_id, patch_type, action, status, action_time, description
FROM   dba_registry_sqlpatch
ORDER BY action_time DESC;

OPatch Commands — Oracle Home

Shell – OPatch inventory and conflict check
# Verify OPatch version (must be 12.2.0.1.41+)
$ORACLE_HOME/OPatch/opatch version

# List currently applied patches
$ORACLE_HOME/OPatch/opatch lspatches

# Conflict check before applying (run from patch directory)
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail \
  -phBaseDir /path/to/19.31/patch

Apply RU 19.31 (Out-of-Place / In-Place)

Shell – Apply RU and run datapatch
# Shutdown database before applying
sqlplus / as sysdba <<EOF
  shutdown immediate;
EOF

# Apply the RU patch
cd /path/to/patch/19.31
$ORACLE_HOME/OPatch/opatch apply

# Start database and run datapatch
sqlplus / as sysdba <<EOF
  startup;
EOF

$ORACLE_HOME/OPatch/datapatch -verbose

Post-Patch Validation

SQL – Post-patch checks
-- Confirm RU applied successfully
SELECT patch_id, patch_type, action, status, action_time
FROM   dba_registry_sqlpatch
ORDER BY action_time DESC
FETCH FIRST 5 ROWS ONLY;

-- Check for invalid objects
SELECT owner, object_type, COUNT(*)
FROM   dba_objects
WHERE  status = 'INVALID'
GROUP BY owner, object_type
ORDER BY owner, object_type;

-- Recompile invalids if needed
@?/rdbms/admin/utlrp.sql

-- Confirm registry status
SELECT comp_id, comp_name, version, status
FROM   dba_registry
ORDER BY comp_id;
✅ For EBS R12.2 Environments After database patching, validate login page, Concurrent Managers, Workflow Mailer, and OAF pages. Run ETCC (checkDBpatch.sh) to confirm codelevel compliance. Validate ADOP readiness before any subsequent application patching cycles.

7. Where to Get the Patch and Reference Notes

  • KB106822 — Primary Note for Database Quarterly Release Updates (888.1) — availability dates and patch numbers
  • CPU58 — Oracle Critical Patch Update April 2026 for Oracle Database Products
  • CPU56 — Executive Summary and Analysis (April 2026 CPU)
  • KB107134 — Data Pump Recommended Proactive Patches for 19.10 and Above
  • MOS Note 2200506.1 — Patch Set Update and Critical Patch Update February 2026 Availability Document
  • Oracle Critical Patch Update Advisory – April 2026
  • Mike Dietrich's Upgrade Blog — community tracking for RU delays and known issues

8. My DBA View

RU 19.31 is not a patch you can skip. With four remotely exploitable CVEs in the database home, including a CVSS 7.5 in Clusterware/Micronaut and another 7.5 in OJVM, this is a security-first patching cycle. "Our databases sit on an internal network" is not a valid reason to delay.

Use the delay period productively. Every DBA who waits for the patch file before starting preparation will end up rushing the maintenance window. Pre-work now — OPatch upgrade, inventory capture, conflict check staging, backup planning, DG validation, and lower environment scheduling — means a clean, fast production window when the patch finally posts.

For EBS R12.2 on 19c, coordinate with your Apps DBA team early. CPU patching in EBS environments requires aligning database patching with Fusion Middleware patches, ETCC compliance, and ADOP readiness. Do not treat the database layer in isolation.

Prepare early. Test thoroughly. Patch safely. Validate completely.