Saturday, February 21, 2026

PostgreSQL DBA Scripts from GitHub

Project POSTGRES

Exploring the postgres_dba GitHub Repository (PostgreSQL DBA Scripts Guide)

As part of my structured learning journey into PostgreSQL administration, this post focuses on practical PostgreSQL DBA skills: monitoring, performance diagnostics, and production best practices.


Learning Objectives

  • Understand how to use Git for DBA script management
  • Clone and explore a PostgreSQL DBA repository
  • Identify useful monitoring and diagnostic scripts
  • Apply scripts safely in real environments

Repository Details

Repository Name: postgres_dba
Author: Nikolay Samokhvalov
Official GitHub Link:
https://github.com/NikolayS/postgres_dba


How to Clone the Repository

cd ~/Documents
git clone https://github.com/NikolayS/postgres_dba.git
cd postgres_dba
ls -l

The git clone command downloads the complete repository, including full version history.


Repository Structure (High-Level View)

After cloning, review the file structure:

find . -type f | sort

The repository contains multiple SQL scripts focused on:

  • Active session monitoring
  • Long-running query detection
  • Index usage analysis
  • Vacuum and maintenance monitoring
  • Replication diagnostics

Key DBA Scripts to Start With

1️⃣ Active Session Monitoring

Helps identify currently running queries and wait events.
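A minimal version of this kind of check, using only the standard pg_stat_activity view (the repository's scripts add richer formatting and filtering on top of queries like this):

```sql
-- Non-idle sessions with their wait events, longest-running first
SELECT pid,
       usename,
       state,
       wait_event_type,
       wait_event,
       now() - query_start AS runtime,
       left(query, 60)     AS query_snippet
FROM   pg_stat_activity
WHERE  state <> 'idle'
  AND  pid <> pg_backend_pid()   -- exclude this monitoring session itself
ORDER  BY runtime DESC NULLS LAST;
```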

2️⃣ Long Running Queries

Detects queries running longer than expected.
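The underlying idea can be sketched against pg_stat_activity directly (the 5-minute threshold here is an arbitrary example; tune it to your workload):

```sql
-- Active queries that have been running longer than 5 minutes
SELECT pid,
       usename,
       now() - query_start AS duration,
       left(query, 80)     AS query_snippet
FROM   pg_stat_activity
WHERE  state = 'active'
  AND  now() - query_start > interval '5 minutes'
ORDER  BY duration DESC;
```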

3️⃣ Index Usage Statistics

Highlights unused indexes and tables that rely on sequential scans instead of index access.
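A basic check of this kind uses the cumulative scan counters in pg_stat_user_tables:

```sql
-- Tables where sequential scans outnumber index scans
SELECT relname,
       seq_scan,
       idx_scan,
       n_live_tup
FROM   pg_stat_user_tables
WHERE  seq_scan > COALESCE(idx_scan, 0)   -- idx_scan is NULL for tables with no indexes
ORDER  BY seq_scan DESC
LIMIT  20;
```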

4️⃣ Vacuum / Autovacuum Monitoring

Validates maintenance effectiveness and detects bloat risk.
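The same statistics view also exposes dead-tuple counts and vacuum timestamps, which is the raw material for this kind of check:

```sql
-- Dead-tuple counts and last (auto)vacuum times, worst tables first
SELECT relname,
       n_dead_tup,
       n_live_tup,
       last_vacuum,
       last_autovacuum
FROM   pg_stat_user_tables
ORDER  BY n_dead_tup DESC
LIMIT  20;
```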

5️⃣ Replication Status

Monitors replication lag and WAL shipping status.
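On PostgreSQL 10 or later, a minimal lag check on the primary looks like this:

```sql
-- Per-standby replication lag in bytes (run on the primary)
SELECT client_addr,
       state,
       sync_state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM   pg_stat_replication;
```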


Oracle DBA Mapping

  • Oracle v$session ↔ PostgreSQL pg_stat_activity
  • Oracle ASH ↔ PostgreSQL extensions / Performance Insights
  • Oracle wait events ↔ wait_event_type / wait_event
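Where an Oracle DBA would group v$session rows by event, the PostgreSQL equivalent groups pg_stat_activity by the two wait columns:

```sql
-- Current wait-event profile across all sessions
SELECT wait_event_type,
       wait_event,
       count(*) AS sessions
FROM   pg_stat_activity
WHERE  wait_event IS NOT NULL
GROUP  BY wait_event_type, wait_event
ORDER  BY sessions DESC;
```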

Best Practices

  • Always review scripts before running in production
  • Test heavy queries in non-production first
  • Capture output for trend analysis
  • Integrate useful scripts into your DBA toolkit

Conclusion

Instead of copying full repositories, the correct approach is: Clone → Analyze → Curate → Learn → Apply.

Project POSTGRES will continue with deeper script analysis and performance diagnostics.


References & Credits

All original scripts belong to the author: Nikolay Samokhvalov.

Official Repository: https://github.com/NikolayS/postgres_dba

This article is written for educational and knowledge-sharing purposes.

PostgreSQL Active Session History (pg_ash) – Complete DBA Guide

As Oracle DBAs, we are very familiar with Active Session History (ASH) for performance diagnostics. PostgreSQL does not ship a built-in ASH facility like Oracle's — but thanks to the open-source project pg_ash, we can implement similar functionality in PostgreSQL environments.


What is pg_ash?

pg_ash is a PostgreSQL extension that collects and stores Active Session History samples, similar to Oracle ASH. It helps DBAs analyze wait events, long-running queries, blocking sessions, and overall workload behavior.

  • Tracks active sessions over time
  • Captures wait events
  • Stores query execution samples
  • Helps in performance troubleshooting
  • Provides historical workload visibility

Installation Steps

# Clone repository
git clone https://github.com/NikolayS/pg_ash.git

cd pg_ash

# Build and install
make
make install

Then connect to PostgreSQL and enable the extension:

CREATE EXTENSION pg_ash;

Sample Queries for Monitoring

1️⃣ View ASH Samples:

SELECT * FROM pg_ash;

2️⃣ Top Queries by Activity:

SELECT queryid, count(*) AS samples
FROM pg_ash
GROUP BY queryid
ORDER BY samples DESC;

3️⃣ Analyze Wait Events:

SELECT wait_event_type, wait_event, count(*)
FROM pg_ash
GROUP BY wait_event_type, wait_event
ORDER BY count(*) DESC;

Why pg_ash is Important for DBAs

  • Provides Oracle-like ASH visibility in PostgreSQL
  • Helps identify performance bottlenecks
  • Useful for production troubleshooting
  • Great learning tool for PostgreSQL internals
  • Bridges monitoring gap for hybrid Oracle + PostgreSQL DBAs

DBA Perspective

For DBAs transitioning from Oracle to PostgreSQL (especially in cloud environments like AWS RDS), pg_ash provides a familiar monitoring model. It helps us analyze workload patterns in a structured way, similar to Oracle ASH/AWR concepts.

This makes it a powerful tool for performance diagnostics in modern PostgreSQL deployments.


Conclusion

Open-source tools like pg_ash demonstrate the strength of the PostgreSQL ecosystem. Understanding ASH-style monitoring is essential for DBAs working in hybrid Oracle and PostgreSQL environments.

Stay tuned for more PostgreSQL performance deep dives as part of my learning journey.


References & Credits

This open-source extension is developed by Nikolay Samokhvalov.

Official GitHub Repository:
https://github.com/NikolayS/pg_ash

All credits for the original project belong to the author. This article is written for learning and DBA knowledge-sharing purposes.

Friday, February 20, 2026

Oracle Database Scripts – Full Script Listing

Oracle Database Scripts Repository

This repository contains a large collection of Oracle DBA utility scripts covering performance tuning, monitoring, RMAN, RAC, ASM, Data Guard, AWR, ASH, and general administration.

GitHub Repository:
https://github.com/fatdba/Oracle-Database-Scripts

Download Full Repository:
Click Code → Download ZIP from GitHub.


Complete Script Listing (Generated from Downloaded ZIP)

./++
./+imp_scr_oracle_daily
./+plusenv.sql
./+simplecool.sql
./activesessions_ash.sql
./Admin_General/+login.sql
./Admin_General/+plusenv.sql
./Admin_General/+simplecool.sql
./Admin_General/archive_new.sql
./Admin_General/archive.sql
./Admin_General/Bigtable_fortest.sql
./Admin_General/blocking_local.sql.com
./Admin_General/columns_usage.sql
./Admin_General/create_schedule_job.sql
./Admin_General/current_redologfile_usage.sql
./Admin_General/datafiles_fora_tablespace.sql
./Admin_General/dba_schedulerjobs.sql
./Admin_General/dbgrowth_permonth.sql
./Admin_General/dblink_checker_withJob.sql
./Admin_General/dblink_details.sql
./Admin_General/dbms_jobs_broken.sql
./Admin_General/dbserverdetails.sql
./Admin_General/dbtime.sql
./Admin_General/dbupgdiag.sql
./Admin_General/ddl_schema.sql
./Admin_General/ddl-objects.sql
./Admin_General/Fat200objects.sql
./Admin_General/file_block.sql
./Admin_General/file_border.sql
./Admin_General/find_objects_1.sql
./Admin_General/find_objects_2.sql
./Admin_General/find_objects.sql
./Admin_General/find_x$_basetables.sql
./Admin_General/flashback_main.sql
./Admin_General/get_dependent_ddl.sql
./Admin_General/hidden_params.sql
./Admin_General/hugepages.sh
./Admin_General/iostat_summary.sql
./Admin_General/jobs.sql
./Admin_General/killer.sql
./Admin_General/latest_tab_stats.sql
./Admin_General/logmining.sql
./Admin_General/materializedview.sql
./Admin_General/object_blocks.sql
./Admin_General/object_changes.sql
./Admin_General/object_dependencies.sql
./Admin_General/object_details.sql
./Admin_General/object_growth_trend.sql
./Admin_General/object_username.sql
./Admin_General/objectid.sql
./Admin_General/objects_changed.sql
./Admin_General/objects_matching_pattern.sql
./Admin_General/oem_informationpublishereport.sql
./Admin_General/oracle_license_info.sql
./Admin_General/Oracle_Test1.sql
./Admin_General/params.sql
./Admin_General/partitions_drop_future.sql
./Admin_General/partitions_expimp_stats.sql
./Admin_General/partitions_rowcount_each.sql
./Admin_General/partitions_synopsys_incremental.sql
./Admin_General/privileges_copy.sql
./Admin_General/redostats.sql
./Admin_General/resize-datafiles.sql
./Admin_General/running_jobs.sql
./Admin_General/scheduler_shellscript_dbmsscheduler.sql
./Admin_General/securefile_testdata.sql
./Admin_General/segments_inside_datafile.sql
./Admin_General/sequences.sql
./Admin_General/sessions_active.sql
./Admin_General/sessions_main.sql
./Admin_General/sessionusage_upanddownstats.sql
./Admin_General/setnlsdateformat.sql
./Admin_General/shrink_datafiles.sql
./Admin_General/sid_from_ospid.sql
./Admin_General/smon_rollback_monitoring.sql
./Admin_General/states.sql
./Admin_General/tab_ind_detailsschema.sql
./Admin_General/table_set_preference.sql
./Admin_General/tablespace_boundary.sql
./Admin_General/tablespace_free-sum.sql
./Admin_General/tablespaceinfo.sql
./Admin_General/tmp_stats.sql
./Admin_General/top_ts_segments.sql
./Admin_General/top_workarea.sql
./Admin_General/top10.sql
./Admin_General/trace_all_session_ofauser.sql
./Admin_General/tracking_database_usage.sql
./Admin_General/undo_another.sql
./Admin_General/undo_somemore.sql
./Admin_General/undo_stats.sql
./Admin_General/undomain.sql
./Admin_General/unregister_mview_snap.sql
./Admin_General/unused_space.sql
./Admin_General/what_has_changed.sql
./all_stalestats.sql
./archive_new.sql
./archive.sql
./ash_cachebufferchain.sql
./ash_event_trend.sql
./ash_last_x_mins.sql
./ash_last_x_secs.sql
./ash_latch.sql
./ash_object_enq_contention.sql
./ash_pio_events.sql
./ash_sid.sql
./ash_sqlid_mins.sql
./ash_top_events.sql
./ash_toppga_2mb.sql
./asm_diskdrop.sql
./ASM/+login.sql
./ASM/+plusenv.sql
./ASM/asm_diskdrop.sql
./ASM/asmdiskgroup.sql
./ASM/asmfilevolinfo.sql
./ASM/asmspaceused.sql
./asmdiskgroup.sql
./asmfilevolinfo.sql
./asmspaceused.sql
./autostatsgatherdet.sql
./awr_manualsnap.sql
./awr_modify_topnsql.sql
./awretention.sql
./BackupAndRecovery/+login.sql
./BackupAndRecovery/+plusenv.sql
./BackupAndRecovery/rman_bkpdetails.sql
./BackupAndRecovery/rman_datafile_backup.sql
./BackupAndRecovery/rman_everything.sql
./BackupAndRecovery/rman_fulldb_backupscript.sql
./BackupAndRecovery/rman_incremental_backup.sql
./BackupAndRecovery/rman_main.sql
./BackupAndRecovery/rman_monitoring.sql
./BackupAndRecovery/rman_tablespace_backup.sql
./BackupAndRecovery/rman1.sql
./BackupAndRecovery/rmanbest.sql
./Bigtable_fortest.sql
./bindvariables_fromawr.sql
./blocking_local.sql
./check_dataguard_config.sql
./check_modify_awrretention.sql
./coe_xfr_sql_profile.sql
./columns_usage.sql
./cpu_stats.sql
./cpubusytime_db.sql
./cpuusage.sql
./create_schedule_job.sql
./create_sql_patch.sql
./current_redologfile_usage.sql
./cursor_notsharing_reason.sql
./datampump_monitoring.sql
./datapump_10046trace_process.sql
./datapump_cleanup_orphanedjobs.sql
./datapump_samplescript.sh
./DataPump/data_pump_monitor_core.sql
./DataPump/datampump_monitoring.sql
./DataPump/datapump_10046trace_process.sql
./DataPump/datapump_cleanup_orphanedjobs.sql
./DataPump/datapump_samplescript.sh
./DataPump/partitions_expimp_stats.sql
./db_hang.sql
./dba_hist_snapshot.sql
./dba_hist_sqlbind.sql
./dba_schedulerjobs.sql
./dbgrowth_permonth.sql
./dblink_details.sql
./dbms_jobs_broken.sql
./dbserverdetails.sql
./dbtime.sql
./dbupgdiag.sql
./ddl_schema.sql
./ddl-objects.sql
./disable_resourcemanager.sql
./dop_tablesindexes.sql
./drop_phv_baseline.sql
./drop_sql_baseline.sql
./drop_sql_profile.sql
./drop_sqlplan_baselines.sql
./dynamic_sampling_objects.sql
./enq_tx_indexcontention_split.sql
./exadata_cellperf.sql
./Exadata/+login.sql
./Exadata/+plusenv.sql
./Exadata/exadata_cellperf.sql
./failover_stats.sql
./Fat200objects.sql
./file_block.sql
./file_border.sql
./find_objects_1.sql
./find_objects_2.sql
./find_objects.sql
./find_sql_2.sql
./find_sql_pio_1.sql
./find_sql_pio.sql
./find_sql.sql
./find_x$_basetables.sql
./fklocking.sql
./flashback_main.sql
./flushsql_cursor.sql
./FULL_FILE_LIST.txt
./gather_stats.sql
./generate_set_tablestats.sql
./get_dependent_ddl.sql
./get_optimizer_trace.sql
./getbinds-sqlid.sql
./goldengate_healthcheck.sql
./GoldenGate/+login.sql
./GoldenGate/+plusenv.sql
./GoldenGate/goldengate_healthcheck.sql
./hidden_params.sql
./hidden.sql
./high_version_count.sql
./hints.sql
./hot_librarycache_objects.sql
./hourly_segment_stats.sql
./hugepages.sh
./idx_2.sql
./idx_3.sql
./idx_fragmentation.sql
./idx_main.sql
./idx_split_2.sql
./idx_split_session.sql
./idx_split.sql
./idx_usage.sql
./indexlargerthantable.sql
./iointensive_sql_mem.sql
./iostat_summary.sql
./iousage_tempfiles.sql
./latchdetails.sql
./librarycache_locks.sql
./lockingmother.sql
./logfilesync_lfsdiag.sql
./mismatch.sql
./mutex_sleep.sql
./mypoormanscript_fancy.sql
./mypoormanscript.sh
./mypoormanscript.sql
./mysid.sql
./Nonindexedfkconstraints.sql
./object_dependencies.sql
./objects_waiting_miss_sleep.sql
./OFE_12.1.0.2_in_12.2.0.1.sql
./oracle_license_info.sql
./Oracle_Test1.sql
./outlinedata_for_spmsqlbaselines.sql
./p_ash_blockers.sql
./p_ash_events.sql
./parentchild_rel.sql
./partitions_drop_future.sql
./partitions_expimp_stats.sql
./partitions_rowcount_each.sql
./partitions_synopsys_incremental.sql
./pdhc_v1.2.sql
./PerformanceTuning_Mgmt/+login.sql
./PerformanceTuning_Mgmt/+plusenv.sql
./PerformanceTuning_Mgmt/active_sql_workarea.sql
./PerformanceTuning_Mgmt/activesessions_ash.sql
./PerformanceTuning_Mgmt/all_stalestats.sql
./PerformanceTuning_Mgmt/archive_new.sql
./PerformanceTuning_Mgmt/ash_cachebufferchain.sql
./PerformanceTuning_Mgmt/ash_event_trend.sql
./PerformanceTuning_Mgmt/ash_last_x_mins.sql
./PerformanceTuning_Mgmt/ash_last_x_secs.sql
./PerformanceTuning_Mgmt/ash_latch.sql
./PerformanceTuning_Mgmt/ash_object_enq_contention.sql
./PerformanceTuning_Mgmt/ash_pio_events.sql
./PerformanceTuning_Mgmt/ash_sid.sql
./PerformanceTuning_Mgmt/ash_sqlid_mins.sql
./PerformanceTuning_Mgmt/ash_top_events.sql
./PerformanceTuning_Mgmt/ash_toppga_2mb.sql
./PerformanceTuning_Mgmt/auto_stats_collection.sql
./PerformanceTuning_Mgmt/autostatsgatherdet.sql
./PerformanceTuning_Mgmt/awr_manualsnap.sql
./PerformanceTuning_Mgmt/awr_modify_topnsql.sql
./PerformanceTuning_Mgmt/awretention.sql
./PerformanceTuning_Mgmt/bindvariables_fromawr.sql
./PerformanceTuning_Mgmt/check_modify_awrretention.sql
./PerformanceTuning_Mgmt/coe_xfr_sql_profile.sql
./PerformanceTuning_Mgmt/columns_usage.sql
./PerformanceTuning_Mgmt/cpu_stats.sql
./PerformanceTuning_Mgmt/cpubusytime_db.sql
./PerformanceTuning_Mgmt/cpuusage.sql
./PerformanceTuning_Mgmt/create_sql_patch.sql
./PerformanceTuning_Mgmt/cursor_notsharing_reason.sql
./PerformanceTuning_Mgmt/db_hang.sql
./PerformanceTuning_Mgmt/dba_hist_snapshot.sql
./PerformanceTuning_Mgmt/dba_hist_sqlbind.sql
./PerformanceTuning_Mgmt/dbgrowth_permonth.sql
./PerformanceTuning_Mgmt/dbtime.sql
./PerformanceTuning_Mgmt/disable_resourcemanager.sql
./PerformanceTuning_Mgmt/dop_tablesindexes.sql
./PerformanceTuning_Mgmt/drop_phv_baseline.sql
./PerformanceTuning_Mgmt/drop_sql_baseline.sql
./PerformanceTuning_Mgmt/drop_sql_profile.sql
./PerformanceTuning_Mgmt/drop_sqlplan_baselines.sql
./PerformanceTuning_Mgmt/dynamic_sampling_objects.sql
./PerformanceTuning_Mgmt/enq_tx_indexcontention_split.sql
./PerformanceTuning_Mgmt/find_sql_2.sql
./PerformanceTuning_Mgmt/find_sql_pio_1.sql
./PerformanceTuning_Mgmt/find_sql_pio.sql
./PerformanceTuning_Mgmt/find_sql.sql
./PerformanceTuning_Mgmt/fklocking.sql
./PerformanceTuning_Mgmt/flushsql_cursor.sql
./PerformanceTuning_Mgmt/gather_stats.sql
./PerformanceTuning_Mgmt/generate_set_tablestats.sql
./PerformanceTuning_Mgmt/get_optimizer_trace.sql
./PerformanceTuning_Mgmt/getbinds-sqlid.sql
./PerformanceTuning_Mgmt/high_version_count.sql
./PerformanceTuning_Mgmt/hints.sql
./PerformanceTuning_Mgmt/hot_librarycache_objects.sql
./PerformanceTuning_Mgmt/hourly_segment_stats.sql
./PerformanceTuning_Mgmt/idx_2.sql
./PerformanceTuning_Mgmt/idx_3.sql
./PerformanceTuning_Mgmt/idx_fragmentation.sql
./PerformanceTuning_Mgmt/idx_main.sql
./PerformanceTuning_Mgmt/idx_split_2.sql
./PerformanceTuning_Mgmt/idx_split_session.sql
./PerformanceTuning_Mgmt/idx_split.sql
./PerformanceTuning_Mgmt/idx_usage.sql
./PerformanceTuning_Mgmt/indexlargerthantable.sql
./PerformanceTuning_Mgmt/iointensive_sql_mem.sql
./PerformanceTuning_Mgmt/iousage_tempfiles.sql
./PerformanceTuning_Mgmt/latch_waits.sql
./PerformanceTuning_Mgmt/latchdetails.sql
./PerformanceTuning_Mgmt/latchholder.sql
./PerformanceTuning_Mgmt/libcache_lock_bysid.sql
./PerformanceTuning_Mgmt/libcache_lock_waits.sql
./PerformanceTuning_Mgmt/libcache_pin_waits.sql
./PerformanceTuning_Mgmt/libcache_pins_bysid.sql
./PerformanceTuning_Mgmt/Librarycache_lock_holders.sql
./PerformanceTuning_Mgmt/librarycache_locks.sql
./PerformanceTuning_Mgmt/locked_objects_summary.sql
./PerformanceTuning_Mgmt/locked_objects.sql
./PerformanceTuning_Mgmt/lockedstats.sql
./PerformanceTuning_Mgmt/lockingmother.sql
./PerformanceTuning_Mgmt/logfilesync_lfsdiag.sql
./PerformanceTuning_Mgmt/mismatch.sql
./PerformanceTuning_Mgmt/module_pio.sql
./PerformanceTuning_Mgmt/mutex_sleep.sql
./PerformanceTuning_Mgmt/mutex.sql
./PerformanceTuning_Mgmt/mysid.sql
./PerformanceTuning_Mgmt/Nonindexedfkconstraints.sql
./PerformanceTuning_Mgmt/objects_waiting_miss_sleep.sql
./PerformanceTuning_Mgmt/OFE_12.1.0.2_in_12.2.0.1.sql
./PerformanceTuning_Mgmt/outlinedata_for_spmsqlbaselines.sql
./PerformanceTuning_Mgmt/p_ash_blockers.sql
./PerformanceTuning_Mgmt/p_ash_events.sql
./PerformanceTuning_Mgmt/parentchild_rel.sql
./PerformanceTuning_Mgmt/pga_size_estim.sql
./PerformanceTuning_Mgmt/pgaintensivesql.sql
./PerformanceTuning_Mgmt/pretty_metadata.sql
./PerformanceTuning_Mgmt/procwatcher_prw.sql
./PerformanceTuning_Mgmt/px_query_details.sql
./PerformanceTuning_Mgmt/px_query_monitoring.sql
./PerformanceTuning_Mgmt/PXHCDR.sql
./PerformanceTuning_Mgmt/rat_approx_replayprogress.sql
./PerformanceTuning_Mgmt/rat_blocking_sessions.sql
./PerformanceTuning_Mgmt/rat_replay_divergence.sql
./PerformanceTuning_Mgmt/rat_waitevents.sql
./PerformanceTuning_Mgmt/redundantindex.sql
./PerformanceTuning_Mgmt/segments_stats_daily.sql
./PerformanceTuning_Mgmt/sequenceexhaustion80.sql
./PerformanceTuning_Mgmt/session-wait-by-sid.sql
./PerformanceTuning_Mgmt/sessions_librarycachelock.sql
./PerformanceTuning_Mgmt/sga-details.sql
./PerformanceTuning_Mgmt/share.sql
./PerformanceTuning_Mgmt/shared_sqlcursor_reason.sql
./PerformanceTuning_Mgmt/sharedpool_advisory.sql
./PerformanceTuning_Mgmt/sharedpool_lru.sql
./PerformanceTuning_Mgmt/sharedpool_pinned.sql
./PerformanceTuning_Mgmt/sharedpool_stats.sql
./PerformanceTuning_Mgmt/sharedpool_statssubpool.sql
./PerformanceTuning_Mgmt/sharedpool_subpool.sql
./PerformanceTuning_Mgmt/showlock.sql
./PerformanceTuning_Mgmt/snap-session-waits.sql
./PerformanceTuning_Mgmt/sp-reserved.sql
./PerformanceTuning_Mgmt/sparse_Indexes.sql
./PerformanceTuning_Mgmt/sql_changed_elapsedtime.sql
./PerformanceTuning_Mgmt/sql_from_procedures.sql
./PerformanceTuning_Mgmt/sql_plan_baselines.sql
./PerformanceTuning_Mgmt/sql_planflip_new.sql
./PerformanceTuning_Mgmt/sqlflip_newbetter.sql
./PerformanceTuning_Mgmt/sqlflip1.sql
./PerformanceTuning_Mgmt/sqlflip2.sql
./PerformanceTuning_Mgmt/sqlhistory.sql
./PerformanceTuning_Mgmt/sqlid_info.sql
./PerformanceTuning_Mgmt/sqlid_main.sql
./PerformanceTuning_Mgmt/sqlid_phv.sql
./PerformanceTuning_Mgmt/sqlid_topworkarea.sql
./PerformanceTuning_Mgmt/sqloffload_iosaved.sql
./PerformanceTuning_Mgmt/sqlprofile_hints.sql
./PerformanceTuning_Mgmt/sqls_doing_fulltablescan.sql
./PerformanceTuning_Mgmt/sqlstats_history_sqlid.sql
./PerformanceTuning_Mgmt/sqlstats_main.sql
./PerformanceTuning_Mgmt/STA_SQLID_Cursor.sql
./PerformanceTuning_Mgmt/stale_stats.sql
./PerformanceTuning_Mgmt/stats_delete.sql
./PerformanceTuning_Mgmt/stats_get_preference.sql
./PerformanceTuning_Mgmt/stats_incremental.sql
./PerformanceTuning_Mgmt/stats_lock.sql
./PerformanceTuning_Mgmt/table_ind_pkfkcons_DDL.sql
./PerformanceTuning_Mgmt/table_stats_details.sql
./PerformanceTuning_Mgmt/tempintensivesql.sql
./PerformanceTuning_Mgmt/test.sql
./PerformanceTuning_Mgmt/top_sql_cpu.sql
./PerformanceTuning_Mgmt/top_sqls.sql
./PerformanceTuning_Mgmt/top10activesessions.sql
./PerformanceTuning_Mgmt/trace-onoff.sql
./PerformanceTuning_Mgmt/tracemy_sqlid.sql
./PerformanceTuning_Mgmt/treedump.sql
./PerformanceTuning_Mgmt/wait_Chains.sql
./PerformanceTuning_Mgmt/waiteventcontributors.sql
./PerformanceTuning_Mgmt/waits_lfs.sql
./PerformanceTuning_Mgmt/waits_sum.sql
./PerformanceTuning_Mgmt/waitstats.sql
./PerformanceTuning_Mgmt/whattypeof_histogram.sql
./PerformanceTuning_Mgmt/workarea.sql
./PerformanceTuning_Mgmt/xplan_adv_sqlid.sql
./PerformanceTuning_Mgmt/xplan_awr_phash.sql
./PerformanceTuning_Mgmt/xplan_awr.sql
./PerformanceTuning_Mgmt/xplan_objectname.sql
./PerformanceTuning_Mgmt/xplan_phash.sql
./PerformanceTuning_Mgmt/xplan_sid.sql
./PerformanceTuning_Mgmt/xplan_sqlhash.sql
./PerformanceTuning_Mgmt/xplan_sqlid_incursor.sql
./pgaintensivesql.sql
./privileges_copy.sql
./procwatcher_prw.sql
./px_query_details.sql
./px_query_monitoring.sql
./PXHCDR.sql
./rac_clock_sync.sql
./rac_diag.sql
./rac_longops.sql
./rac_main.sql
./RAC/+login.sql
./RAC/+plusenv.sql
./RAC/rac_clock_sync.sql
./RAC/rac_diag.sql
./RAC/rac_longops.sql
./RAC/rac_main.sql
./RAC/racdbstatus.sql
./racdbstatus.sql
./rat_approx_replayprogress.sql
./rat_blocking_sessions.sql
./rat_replay_divergence.sql
./rat_waitevents.sql
./redostats.sql
./redundantindex.sql
./rman_bkpdetails.sql
./rman_datafile_backup.sql
./rman_fulldb_backupscript.sql
./rman_incremental_backup.sql
./rman_main.sql
./rman_monitoring.sql
./rman_tablespace_backup.sql
./rman1.sql
./rowlocking.sql
./scheduler_shellscript_dbmsscheduler.sql
./segments_inside_datafile.sql
./segments_stats_daily.sql
./sequenceexhaustion80.sql
./sessions_active.sql
./sessions_librarycachelock.sql
./sessions_main.sql
./sessionusage_upanddownstats.sql
./sga-details.sql
./sharedpool_advisory.sql
./showlock.sql
./sid_from_ospid.sql
./smon_rollback_monitoring.sql
./sparse_Indexes.sql
./sql_changed_elapsedtime.sql
./sql_from_procedures.sql
./sql_planflip_new.sql
./sqlflip_newbetter.sql
./sqlflip1.sql
./sqlflip2.sql
./sqlhistory.sql
./sqlid_info.sql
./sqlid_main.sql
./sqlid_phv.sql
./sqloffload_iosaved.sql
./sqls_doing_fulltablescan.sql
./sqlstats_history_sqlid.sql
./STA_SQLID_Cursor.sql
./standby_core_collection.sql
./standby_monitorlag_rac.sql
./standby_single.sql
./Standby/+login.sql
./Standby/+plusenv.sql
./Standby/check_dataguard_config.sql
./Standby/DGDiagnostic.sh
./Standby/failover_stats.sql
./Standby/quick_gap_check.sql
./Standby/srdc_DGlogicalStby_diag.sql
./Standby/srdc_DGPhyStby_diag.sql
./Standby/srdc_DGPrimary_diag.sql
./Standby/standby_core_collection.sql
./Standby/standby_monitor.sql
./Standby/standby_monitorlag_rac.sql
./Standby/standby_single.sql
./Standby/standbygap.sql
./standbygap.sql
./stats_delete.sql
./stats_get_preference.sql
./stats_incremental.sql
./stats_lock.sql
./table_ind_pkfkcons_DDL.sql
./tablespaceinfo.sql
./tempintensivesql.sql
./Tools_And_Monitoring/+login.sql
./Tools_And_Monitoring/+plusenv.sql
./Tools_And_Monitoring/ashtop.sql
./Tools_And_Monitoring/dashtop.sql
./Tools_And_Monitoring/mypoormanscript_fancy.sql
./Tools_And_Monitoring/mypoormanscript.sh
./Tools_And_Monitoring/mypoormanscript.sql
./Tools_And_Monitoring/pdhc_v1.2.sql
./Tools_And_Monitoring/snapper.sql
./top_sqls.sql
./top10activesessions.sql
./trace_all_session_ofauser.sql
./tracemy_sqlid.sql
./tracking_database_usage.sql
./undo_somemore.sql
./undomain.sql
./wait_Chains.sql
./waiteventcontributors.sql
./what_has_changed.sql
./whattypeof_histogram.sql


Note: This list was generated from the downloaded repository package. All credits belong to the original author of the repository.

Wednesday, February 18, 2026

ORA-00600 [kqlhdlod-bad-base-objn] in 19c PDB – Dictionary Corruption Repair Using DBMS_DICTIONARY_CHECK


Environment: Oracle 19c Multitenant Database
PDB: FNDEV30
Error Type: ORA-00600 (Internal Error)
Component: Data Dictionary / DBMS_PDB


🚨 Problem Description

The following error was observed in the alert log:

ORA-00600: internal error code, arguments: [kqlhdlod-bad-base-objn], [1403], [849005], [2081137], [88], [5]
ORA-06512: at "SYS.DBMS_PDB", line 85

Trace file location:

/u01/app/oracle/diag/rdbms/fnutcdb/FNUTCDB/trace/FNUTCDB_j000_2280695.trc

Oracle generated incident files and recommended packaging via ADRCI or Support Workbench.


🔎 Understanding the Error

The error ORA-00600 [kqlhdlod-bad-base-objn] indicates a problem related to:

  • Dictionary metadata inconsistency
  • Invalid base object number reference
  • Possible orphaned or corrupted dictionary entry
  • Failure inside DBMS_PDB package

This is an internal error and typically requires Oracle Support involvement.


🛠 Oracle Recommended Solution

Oracle Support advised running the dictionary repair procedure using:

Step 1 – Connect to the affected PDB

SQL> ALTER SESSION SET CONTAINER=FNDEV30;

Step 2 – Run Dictionary Repair

SQL> SET SERVEROUTPUT ON SIZE UNLIMITED;
SQL> EXECUTE DBMS_DICTIONARY_CHECK.FULL(repair=>TRUE);

This command scans and repairs dictionary inconsistencies.


📋 Step 3 – Validate After Repair

SQL> SPOOL dictionary_check_after_repair.log
SQL> SET SERVEROUTPUT ON SIZE UNLIMITED;
SQL> EXECUTE DBMS_DICTIONARY_CHECK.FULL;
SQL> SPOOL OFF;

Review the spool log for remaining issues.


⚠️ Important Notes

WARNING:

• Always take a valid backup before running dictionary repair.
• Run repair only under Oracle Support guidance.
• Test in lower environment if possible.
• Never manually modify dictionary tables.

📌 Best Practices

  • Monitor alert log regularly
  • Package incidents using ADRCI before repair
  • Validate dictionary health during PDB clone/upgrade operations
  • Keep quarterly patches up to date

🎯 Conclusion

ORA-00600 errors involving dictionary corruption can look critical, but Oracle provides a supported repair mechanism using DBMS_DICTIONARY_CHECK.

Always involve Oracle Support, perform backups, and validate thoroughly after repair.

Stay proactive. Monitor. Validate. Repair safely.


Author: Punit – Oracle EBS & Database Specialist

Monday, February 16, 2026

Project PerformanceShield – Restoring Optimal Execution Plan Using SQL Plan Management (SPM) from AWR

Author: Punit Kumar
Category: Oracle Performance Tuning | SQL Plan Management | AWR | SPM
Database Version: Oracle 19c


⚠️ Production Advisory
Before enforcing SQL Plan Baselines in production, validate the SQL in a lower environment (if possible) and confirm:
  • Bind variable behavior / peeking sensitivity
  • Statistics freshness (table/index/system stats)
  • Adaptive plan behavior / SQL patches already applied
  • That the historical plan truly matches the same SQL text and environment

📌 Problem Statement

A SQL query (SQL_ID: xxxxxxxxxxxxxx) started performing poorly due to a plan change.

📊 Performance Observation

Plan Type                  Plan Hash Value   Avg Elapsed Time (sec)   Last Snapshot
Current Plan (Last Hour)   3405316313        78.261                   2026-02-11 16:00:00
Best Plan (History)        1234567890        0.0                      2026-02-05 15:00:43

Justification:
DEFAULT: We have identified a better plan in plan history.
A potentially better execution plan was found. Estimated benefit 100.0%.


🎯 Objective

Load the best historical plan from AWR into SQL Plan Management (SPM) and enable it as an accepted SQL Plan Baseline to restore stable performance.


🧠 Solution Overview

  1. Create a SQL Tuning Set (STS)
  2. Capture the desired plan from AWR into STS
  3. Load the plan from STS into SPM (SQL Plan Baseline)
  4. Enable the baseline
  5. Drop STS (cleanup)

🛠️ Step-by-Step Implementation

Step 1 – Create SQL Tuning Set

EXEC DBMS_SQLTUNE.CREATE_SQLSET('sts_xxxxxxxxxxxxxx_1234567890');

Step 2 – Load Plan from AWR into SQL Tuning Set and then into SPM

SET SERVEROUTPUT ON

DECLARE
  baseline_cursor DBMS_SQLTUNE.SQLSET_CURSOR;
  x PLS_INTEGER;
BEGIN
  OPEN baseline_cursor FOR
  SELECT VALUE(p)
  FROM TABLE (
    DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
      245,   -- begin AWR snapshot ID
      412,   -- end AWR snapshot ID
      'sql_id='||CHR(39)||'xxxxxxxxxxxxxx'||CHR(39)||
      ' and plan_hash_value=1234567890',
      NULL,NULL,NULL,NULL,NULL,NULL,'ALL'
    )
  ) p;

  DBMS_SQLTUNE.LOAD_SQLSET(
      'sts_xxxxxxxxxxxxxx_1234567890',
      baseline_cursor
  );

  x := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
        sqlset_name => 'sts_xxxxxxxxxxxxxx_1234567890',
        fixed       => 'NO',
        enabled     => 'YES'
       );

  DBMS_OUTPUT.PUT_LINE(TO_CHAR(x) || ' plan baselines loaded');
END;
/

Step 3 – Drop Temporary SQL Tuning Set (Cleanup)

EXEC DBMS_SQLTUNE.DROP_SQLSET('sts_xxxxxxxxxxxxxx_1234567890');

🔍 Validation

Confirm that the baseline is created and enabled:

SELECT sql_handle,
       plan_name,
       origin,
       enabled,
       accepted,
       fixed
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%querystring%';

🔄 Backout Plan (If Required)

1️⃣ Identify SQL Handle and Plan Name

SELECT sql_handle,
       plan_name,
       origin,
       enabled,
       accepted,
       fixed
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%querystring%';

2️⃣ Drop the SQL Plan Baseline

SET SERVEROUTPUT ON
DECLARE
  rc NUMBER;
BEGIN
  rc := DBMS_SPM.DROP_SQL_PLAN_BASELINE(
          sql_handle => '(sql_handle)',
          plan_name  => '(plan_name)'
        );
  DBMS_OUTPUT.PUT_LINE(rc);
END;
/

📌 Plan Restoration Flow

AWR (Historical Best Plan)
        ↓
SQL Tuning Set (STS)
        ↓
SQL Plan Management (SPM)
        ↓
Accepted SQL Plan Baseline
        ↓
Stable Performance

✅ Why Use SPM Instead of Adding Hints?

  • No SQL code change required
  • Controlled and supported method to enforce stable plans
  • Can be enabled/disabled quickly
  • Rollback is clean and safe
  • Helps prevent future regression

📚 Technical References & Best Practice Notes

  • Loading SQL Plans from AWR into SQL Plan Management (SPM)
    Oracle provides a supported approach to capture historical execution plans from AWR and load them as SQL Plan Baselines using STS.
    Reference: MOS Doc ID 789888.1

  • SQL Tuning Health Check Script (SQLHC)
    SQLHC helps analyze execution plans, statistics, optimizer environment, and common misconfigurations impacting SQL performance.
    Reference: MOS Doc ID 1366133.1

  • Oracle Database Performance Tuning Guide
    Refer to the official documentation section on SQL Plan Management (SPM) for concepts and lifecycle management.

  • AWR Best Practices
    Always confirm that the historical plan represents the same SQL (and similar bind patterns) before enforcing it.

🏁 Conclusion

By loading the best historical plan (Plan HV: 1234567890) from AWR into SPM, we can restore stable performance and prevent regression. This approach is production-friendly because it avoids code changes and includes a clean backout option.


Author: Punit Kumar
Oracle EBS | Oracle 19c | Performance Tuning | SPM | AWR

Sunday, February 15, 2026

Workflow Services Not Starting in Oracle E-Business Suite R12 – BES Control Queue Fix

Workflow Services Not Running or Starting in Oracle E-Business Suite R12 (Target=1, Actual=0) – Complete Fix

In Oracle E-Business Suite R12, Oracle Workflow Services may sometimes fail to start from the frontend (System Administrator responsibility) or from the Concurrent > Manager > Administer form. In this scenario, Workflow service components remain stuck with:

  • Target = 1
  • Actual = 0

This post provides a clean, production-friendly fix to bring the Workflow services back online, along with the MOS-based advanced recovery for the common Service Component Container error.


Affected Workflow Components (Service Managers)

  • Workflow Agent Listener Service (WFALSNRSVC)
  • Workflow Mailer Service (WFMLRSVC)
  • Workflow Document Web Services Service (WFWSSVC)

How to Confirm the Service Short Names (SQL)

You can confirm the concurrent queue short names using the following queries:

Workflow Agent Listener Service

SELECT concurrent_queue_name
FROM apps.fnd_concurrent_queues_tl
WHERE user_concurrent_queue_name = 'Workflow Agent Listener Service';

Workflow Mailer Service

SELECT concurrent_queue_name
FROM apps.fnd_concurrent_queues_tl
WHERE user_concurrent_queue_name = 'Workflow Mailer Service';

Workflow Document Web Services Service

SELECT concurrent_queue_name
FROM apps.fnd_concurrent_queues_tl
WHERE user_concurrent_queue_name = 'Workflow Document Web Services Service';
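The three lookups above can equally be done in one pass:

```sql
-- All three Workflow service queue short names in a single query
SELECT user_concurrent_queue_name,
       concurrent_queue_name
FROM   apps.fnd_concurrent_queues_tl
WHERE  user_concurrent_queue_name IN
       ('Workflow Agent Listener Service',
        'Workflow Mailer Service',
        'Workflow Document Web Services Service');
```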

Solution (Primary Fix) – Reset Workflow Service Manager Definitions

⚠️ Important: Run the updates below carefully, preferably during a controlled maintenance window. Take a backup / snapshot before making changes in Production. Execute as APPS.

Step 1 – Set process values to zero

UPDATE fnd_concurrent_queues
   SET running_processes = 0,
       max_processes     = 0
 WHERE concurrent_queue_name IN ('WFWSSVC','WFALSNRSVC','WFMLRSVC');

Step 2 – Reset invalid control codes (if applicable)

UPDATE fnd_concurrent_queues
   SET control_code = NULL
 WHERE concurrent_queue_name IN ('WFWSSVC','WFALSNRSVC','WFMLRSVC')
   AND control_code NOT IN ('E','R','X')
   AND control_code IS NOT NULL;

Step 3 – Clear target node (remove node binding)

UPDATE fnd_concurrent_queues
   SET target_node = NULL
 WHERE concurrent_queue_name IN ('WFWSSVC','WFALSNRSVC','WFMLRSVC');

Step 4 – Commit

COMMIT;

What Happens Next?

After the above reset, wait a few minutes. The Internal Concurrent Manager (ICM) typically brings the services up automatically.

Verification – Confirm Workflow Service Managers Are Up

SELECT concurrent_queue_name,
       control_code,
       running_processes,
       max_processes
  FROM fnd_concurrent_queues
 WHERE concurrent_queue_name IN ('WFALSNRSVC','WFMLRSVC','WFWSSVC');

Expected state:

  • CONTROL_CODE should typically be C
  • RUNNING_PROCESSES should be 1
  • MAX_PROCESSES should be 1 (or as configured)

Common Error Seen (When Services Still Do Not Start)

In some cases, the services still fail with the following error:

ERROR:[SVC-GSM-WFALSNRSVC-9700 : oracle.apps.fnd.cp.gsc.SvcComponentContainer.startBusinessEventListener()]:
BES system could not establish connection to the control queue after 180 seconds

oracle.apps.fnd.cp.gsc.SvcComponentContainerException:
Could not start Service Component Container

This points to an issue with the Business Event System (BES) control queue / container configuration.


Advanced Fix (MOS) – Rebuild Workflow Control Queue / Container

🚨 MOS-Based Fix: Use this only when the primary fix does not resolve the issue and the error shows SvcComponentContainerException.

Refer to Oracle Support Document:

Starting Workflow Services Fails With Error oracle.apps.fnd.cp.gsc.SvcComponentContainerException Could not start Service Component Container
Doc ID 1663093.1

As per the document, run the following script to rebuild the Workflow control queue/container configuration:

Run as APPS user:

sqlplus apps/xxxxxxx @$FND_TOP/patch/115/sql/wfctqrec.sql APPLSYS xxxx

Replace:

  • xxxxxxx → APPS password
  • xxxx → APPLSYS password

Important Notes

  • Ensure Workflow Services are stopped before running the script.
  • Take a database backup / snapshot before executing in Production.
  • After script execution, bounce Concurrent Managers (or at least ICM).
  • Wait a few minutes and recheck the service status.

Post-Fix Verification

SELECT concurrent_queue_name,
       control_code,
       running_processes,
       max_processes
  FROM fnd_concurrent_queues
 WHERE concurrent_queue_name IN ('WFALSNRSVC','WFMLRSVC','WFWSSVC');

Once the Workflow services are healthy, you should see the managers running and the frontend should reflect:

  • Target = 1
  • Actual = 1

Root Cause (Why This Happens)

  • Corrupt / stale Service Component Container configuration
  • Invalid node binding (TARGET_NODE) after cloning or node changes
  • Improper shutdown causing inconsistent queue state
  • BES control queue connection timeouts

Conclusion

Workflow services stuck at Target=1 / Actual=0 can be fixed safely by resetting the service manager definitions in FND_CONCURRENT_QUEUES. If the environment throws the Service Component Container exception, the MOS script (wfctqrec.sql) provides the advanced recovery to rebuild the container/control queue configuration.

— Punit Kumar
Oracle EBS Techno Functional Consultant


Tags

Oracle EBS, R12, Workflow Mailer, WFALSNRSVC, WFMLRSVC, WFWSSVC, Concurrent Manager, ICM, BES, Service Component Container, wfctqrec.sql, Doc ID 1663093.1

Saturday, February 14, 2026

Project CloudBridge – Day 3: Designing & Deploying Amazon RDS PostgreSQL (OLTP Foundation)

Project CloudBridge – Day 3: Building the OLTP Foundation with Amazon RDS PostgreSQL

Series Context: After understanding why enterprises separate OLTP and Analytics (Day 1) and designing a real-time analytics architecture (Day 2), today we move into hands-on infrastructure setup.

Day 3 is where architecture thinking transforms into real implementation.


🎯 Day 3 Objective

Design and deploy a production-grade Amazon RDS PostgreSQL instance that will act as the OLTP source database for our real-time analytics pipeline.

This database will:

  • Act as the enterprise transactional system
  • Serve as the source for AWS DMS
  • Feed analytics data into Amazon Redshift (later days)

🧱 What We Are Building Today

Today we focus on the OLTP layer of our architecture:

  1. Understand RDS PostgreSQL architecture components
  2. Design production-ready configuration
  3. Create and configure the RDS instance
  4. Validate connectivity and basic database operations

📐 RDS PostgreSQL Architecture Overview

Before creating the database, we must understand key architectural components:

  • DB Instance vs Database
  • Multi-AZ vs Single-AZ deployment
  • Storage types (gp3, io1)
  • Subnet Groups and VPC isolation
  • Security Groups and access control
  • Parameter Groups
  • Backup and maintenance configuration

This is not just “Create Database”. This is infrastructure design.


⚙ Production Design Decisions

We will make enterprise-style configuration choices:

  • Instance class selection (compute vs memory optimized)
  • Storage sizing and autoscaling
  • Backup retention period
  • Monitoring configuration
  • Encryption using AWS KMS
  • Public vs Private accessibility

The goal is to think like a Cloud Architect — not just a console user.


🛠 Hands-On Tasks

By the end of today, we will:

  • Launch Amazon RDS PostgreSQL
  • Configure VPC and Subnet Group
  • Attach Security Group
  • Enable automated backups
  • Create initial database
  • Create application user
  • Test connectivity using psql
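As a sketch of the launch step, the instance can also be created with the AWS CLI. The identifier, password, subnet group, and security group values below are placeholders, not values defined in this series; adjust them to your environment:

```shell
# Sketch only: replace identifiers, password, and network IDs with your own.
aws rds create-db-instance \
  --db-instance-identifier cloudbridge-oltp \
  --engine postgres \
  --db-instance-class db.m6g.large \
  --allocated-storage 100 \
  --storage-type gp3 \
  --master-username dbadmin \
  --master-user-password 'ChangeMe-Strong-Password' \
  --multi-az \
  --backup-retention-period 7 \
  --storage-encrypted \
  --no-publicly-accessible \
  --db-subnet-group-name cloudbridge-subnet-group \
  --vpc-security-group-ids sg-0123456789abcdef0
```

The flags mirror the design decisions above: Multi-AZ, gp3 storage, KMS encryption at rest, automated backups, and private-only accessibility.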

🔍 Validation Step

After deployment, we will verify:

SELECT version();

This confirms:

  • Database is operational
  • Network configuration is correct
  • Security rules are properly defined
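A minimal connectivity check from a client host might look like the following; the endpoint, database, and user names are hypothetical, so substitute the values from your RDS console:

```shell
# Hypothetical endpoint and credentials; substitute your RDS values.
psql "host=cloudbridge-oltp.abc123xyz.us-east-1.rds.amazonaws.com \
      port=5432 dbname=appdb user=app_admin sslmode=require" \
     -c "SELECT version();"
```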

🧠 Why Day 3 Matters

Without a properly designed OLTP source database:

  • AWS DMS cannot replicate data
  • Change Data Capture (CDC) will not work
  • Analytics pipeline will fail

Day 3 builds the foundation for the entire CloudBridge architecture.


📌 End of Day 3 Expected Outcome

  • One production-ready RDS PostgreSQL instance
  • Verified connectivity
  • Clear understanding of AWS database architecture
  • Ready to configure AWS DMS on Day 4

Next: Day 4 – Configuring AWS Database Migration Service (DMS) for Continuous Data Replication

Friday, February 13, 2026

Project CloudBridge – Day 1 & 2 Summary (Architect Consolidation)

Project CloudBridge – Brief Summary (Day 1 & Day 2)

This short recap consolidates the core learning from Day 1 and Day 2, so we can move into Day 3 (hands-on build) with a clear production mindset.


✅ Day 1 – Why Enterprises Separate OLTP and Analytics

Day 1 established the foundational enterprise principle: Transactional systems (OLTP) and analytical systems (Analytics/OLAP) are built for different workload patterns.

OLTP                         | Analytics
-----------------------------|---------------------------
Small, frequent transactions | Large scans & aggregations
Low latency (milliseconds)   | High throughput reporting
Predictable response time    | Long-running queries
Enterprise Rule: Never run heavy analytics workloads on the production OLTP database. Doing so creates CPU/I/O contention, lock waits, and SLA impact.

The enterprise solution pattern introduced: separate OLTP and Analytics and connect them using a replication layer.


✅ Day 2 – Enterprise Real-Time Architecture Design (Deep Mode)

Day 2 moved from principle to production design thinking. We designed a real-time architecture using: RDS PostgreSQL (OLTP), AWS DMS (Full Load + CDC), and Amazon Redshift (Analytics).

Customer Applications
        ↓
Amazon RDS PostgreSQL (OLTP)
        ↓
AWS DMS (Full Load + CDC)
        ↓
Amazon Redshift (Analytics Warehouse)
        ↓
BI / Dashboards

Key technical decisions and learnings:

  • RDS (OLTP): design for low latency, high availability (Multi-AZ), and predictable storage performance.
  • CDC (Change Data Capture): DMS reads PostgreSQL WAL to capture INSERT/UPDATE/DELETE with minimal OLTP impact.
  • WAL/Slot Risk: if DMS lags or stops, WAL can accumulate (replication slot retention), creating storage pressure.
  • DMS Sizing Matters: under-sized replication instances lead to replication lag and stale analytics.
  • Redshift (Analytics): purpose-built warehouse for heavy aggregations and large scans; scales independently from OLTP.
Day 2 Outcome: We shifted from “tool learning” to architect thinking — workload isolation, CDC design, sizing awareness, and failure scenario planning.

🔜 Day 3 – What Comes Next (Hands-on Build)

Day 3 is execution with a production mindset:

  • Create RDS PostgreSQL with proper sizing and storage
  • Configure parameter group for CDC (WAL/logical replication)
  • Plan networking (subnets, security groups) and access model
  • Prepare DMS replication instance sizing and monitoring approach
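For the CDC step, DMS requires logical replication on the source. On RDS PostgreSQL this is configured through a custom DB parameter group rather than postgresql.conf; a typical starting point (the numeric values are illustrative, not prescriptive) is:

```
rds.logical_replication = 1    # static parameter; requires an instance reboot
max_replication_slots   = 10   # at least one slot per DMS CDC task
max_wal_senders         = 10
wal_sender_timeout      = 0    # commonly recommended for DMS CDC tasks
```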

Tags: #AWS #CloudBridge #RDS #PostgreSQL #DMS #CDC #Redshift #RealTimeAnalytics #DataEngineering

Project CloudBridge – Day 2: Enterprise Real-Time Analytics Architecture on AWS

Project CloudBridge – Day 2: Enterprise Real-Time Analytics Architecture on AWS

Day 2 of Project CloudBridge focuses on designing an enterprise-grade real-time analytics architecture using AWS database services. Before building anything hands-on, we must think like architects.


1️⃣ The Enterprise Problem

Organizations need near real-time dashboards, reporting, and analytics — but running heavy queries directly on the production OLTP database can slow down customer-facing applications.

Rule: Never mix heavy analytics workloads with OLTP transactions in production.

2️⃣ Master Architecture – End-to-End Design

This pattern cleanly separates:

  • OLTP Layer – Amazon RDS PostgreSQL
  • Replication Layer – AWS DMS (Full Load + CDC)
  • Analytics Layer – Amazon Redshift
  • BI Layer – QuickSight / Tableau / Power BI
Customer Applications
        ↓
Amazon RDS PostgreSQL (OLTP)
        ↓
AWS DMS (Full Load + CDC)
        ↓
Amazon Redshift (Analytics Warehouse)
        ↓
BI / Reporting / Dashboards

3️⃣ Amazon RDS PostgreSQL – OLTP Layer

Purpose:

  • Handle live transactions (INSERT / UPDATE / DELETE)
  • Maintain fast response time for customer-facing apps
  • Use Multi-AZ for high availability (production best practice)
  • Protect OLTP from reporting workload by isolating analytics

4️⃣ CDC Flow – PostgreSQL WAL to DMS

CDC (Change Data Capture) reads PostgreSQL transaction logs (WAL) and captures changes such as:

  • INSERT
  • UPDATE
  • DELETE

This enables near real-time synchronization to the analytics platform without repeatedly running full reloads.
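The WAL/slot risk that comes with this design can be watched directly on the source. Assuming DMS has created a logical replication slot, this query (standard PostgreSQL catalog views and functions) shows how much WAL each slot is forcing the server to retain:

```sql
-- Retained WAL per replication slot; a growing value means the
-- consumer (e.g., a DMS task) is lagging or stopped.
SELECT slot_name,
       active,
       pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
       ) AS retained_wal
  FROM pg_replication_slots;
```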


5️⃣ AWS DMS – Full Load + Continuous Replication

DMS typically runs in two phases:

  • Full Load – One-time initial data copy
  • CDC – Ongoing continuous replication of changes
Operational Note: If DMS stops, OLTP continues normally. Analytics becomes stale until replication resumes.

6️⃣ Amazon Redshift – Analytics Warehouse

Redshift is designed for fast analytics at scale:

  • Columnar storage for efficient scanning
  • Massively Parallel Processing (MPP)
  • Fast aggregation queries across large datasets
  • Easy BI integration (QuickSight / Tableau / Power BI)
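To make the contrast with OLTP concrete, a typical Redshift analytics query scans and aggregates a large fact table. The sales.orders table and its columns here are hypothetical, used only to illustrate the scan-and-aggregate pattern that columnar storage and MPP execution are built for:

```sql
-- Hypothetical fact table; monthly revenue by region over the last year.
SELECT DATE_TRUNC('month', order_date) AS order_month,
       region,
       COUNT(*)         AS orders,
       SUM(order_total) AS revenue
  FROM sales.orders
 WHERE order_date >= DATEADD(year, -1, CURRENT_DATE)
 GROUP BY 1, 2
 ORDER BY 1, 2;
```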

7️⃣ Why This Architecture Is Enterprise-Grade

Layer           | Responsibility
----------------|------------------------------
RDS PostgreSQL  | Transactions (OLTP)
AWS DMS         | Replication (Full Load + CDC)
Amazon Redshift | Analytics & Reporting

🔐 Production Considerations (What Architects Always Add)

  • Enable Multi-AZ and automated backups for RDS
  • Ensure PostgreSQL WAL settings support CDC requirements
  • Monitor DMS replication lag and task health
  • Use least-privilege IAM roles for DMS access
  • Secure connectivity with VPC Security Groups and encryption in transit

8️⃣ Day 3 Preview

Next, we go hands-on:

  • Create RDS PostgreSQL (production-minded configuration)
  • Enable WAL settings needed for CDC
  • Create DMS replication instance
  • Configure source & target endpoints
  • Start Full Load + CDC and validate data flow

Tags: #AWS #CloudBridge #RDS #PostgreSQL #DMS #CDC #Redshift #RealTimeAnalytics #DataEngineering


📘 Project CloudBridge – Series Navigation


Thursday, February 12, 2026

Project CloudBridge – Day 1: Why Enterprises Separate OLTP and Analytics Using RDS PostgreSQL, AWS DMS, and Amazon Redshift

Project CloudBridge: Enterprise Real-Time Data Integration on AWS

A Hands-On Learning Series Designing Enterprise-Grade Real-Time Data Pipelines Using AWS Database Services

Series Overview: This is a structured hands-on learning and implementation series focused on designing enterprise real-time data integration pipelines using AWS database technologies including Amazon RDS PostgreSQL, AWS Database Migration Service (DMS), and Amazon Redshift.

Series Goal: To build industry-level architecture knowledge, hands-on cloud database integration skills, and real-world production design understanding.


Project CloudBridge – Series Roadmap

  • Day 1 – Why Enterprises Separate OLTP and Analytics
  • Day 2 – Industry Case Study: Real-Time Analytics Pipeline
  • Day 3 – Configuring RDS PostgreSQL for CDC
  • Day 4 – Amazon Redshift Fundamentals
  • Day 5 – AWS DMS Deep Dive
  • Day 6 – End-to-End Pipeline Implementation
  • Day 7 – Data Validation Strategies
  • Day 8 – Performance Optimization
  • Day 9 – Security and Compliance
  • Day 10 – Production Runbook and Lessons Learned

Day 1: Why Enterprises Separate OLTP and Analytics — Building Real-Time Data Pipelines Using RDS PostgreSQL, AWS DMS, and Redshift

Description: In this Day 1 article of Project CloudBridge, we’ll understand why modern enterprises separate transactional (OLTP) workloads from analytics (OLAP) workloads—and how Amazon RDS PostgreSQL, AWS DMS, and Amazon Redshift work together to deliver near real-time reporting without impacting production performance.


Introduction

In modern enterprise environments, databases are no longer used only for storing application data. They also power dashboards, compliance reporting, fraud detection, and analytics platforms.

One of the biggest architectural mistakes organizations make is running heavy reporting workloads directly on production transactional databases. It may work in early stages, but over time it leads to performance degradation, user complaints, and scalability challenges.

Enterprises solve this by separating transactional workloads from analytics workloads. In this article, we will explore why this separation is critical and how AWS services like Amazon RDS PostgreSQL, AWS Database Migration Service (DMS), and Amazon Redshift enable this architecture.


1) OLTP vs OLAP — The Core Concept

Enterprise data platforms typically support two very different workload types:

OLTP (Online Transaction Processing)

OLTP systems handle day-to-day business transactions such as claims, payments, orders, billing, and user activity.

  • High number of small, frequent transactions
  • Fast response time is critical
  • Mostly INSERT/UPDATE operations
  • Strong consistency and concurrency

OLAP (Online Analytical Processing)

OLAP systems support reporting, dashboards, trends, and decision-making analytics.

  • Large data scans and aggregations
  • Complex joins
  • Historical trend analysis
  • High concurrency for business users and BI tools

2) Why Running Analytics on Production Databases is Risky

Running heavy reporting directly on production OLTP databases creates real operational risks:

  • Performance impact: Analytics queries can scan large tables and consume CPU, memory, and I/O needed for production transactions.
  • Lock contention: Long-running queries can create contention that slows business-critical operations.
  • Scalability limits: OLTP databases are optimized for transactions, not large-scale analytics processing.
  • Availability risk: Reporting spikes can contribute to slowdowns and outages during peak business hours.

3) The Enterprise Pattern: Workload Separation

To solve this, enterprises adopt a proven pattern:

  • OLTP database remains dedicated to the application workload.
  • Analytics warehouse handles reporting and insights at scale.
  • Replication/CDC pipeline keeps analytics data updated with minimal impact on production.

4) Where AWS Services Fit In

Amazon RDS PostgreSQL (OLTP)

Amazon RDS PostgreSQL is a strong OLTP platform because it offers managed operations, backups, and high availability options. It is well suited to application transactions; however, it is not the best place for heavy analytics.

Amazon Redshift (OLAP)

Amazon Redshift is a cloud-native data warehouse designed for analytics workloads. With columnar storage and massively parallel processing (MPP), it is well-suited for complex queries at scale.

AWS Database Migration Service (DMS) — The Bridge

AWS DMS helps keep analytics systems updated by enabling:

  • Full Load: Move historical data initially
  • CDC (Change Data Capture): Continuously replicate ongoing changes
  • Near real-time analytics: Keep Redshift updated without overloading production

5) High-Level Architecture

Application Users
       |
       v
Amazon RDS PostgreSQL (OLTP)
       |
       v
AWS DMS (Full Load + CDC)
       |
       v
Amazon Redshift (Analytics / OLAP)
       |
       v
BI Dashboards / Reporting

6) Business Benefits

  • Better production performance by removing reporting load from OLTP
  • Near real-time dashboards powered by CDC replication
  • Scalable analytics without impacting application users
  • Improved compliance reporting and audit readiness

7) Real-World Adoption

This pattern is widely used across industries such as:

  • Healthcare (claims analytics, fraud detection)
  • Finance (risk analytics, compliance reporting)
  • Retail (customer behavior analytics, demand forecasting)
  • Telecom (billing analytics, usage reporting)

What’s Next (Day 2 Preview)

In Day 2, I will share an industry-level case study showing how an enterprise implements a real-time analytics pipeline using RDS PostgreSQL → AWS DMS → Redshift, including key design decisions and common challenges.


Project CloudBridge – Daily Enterprise Learning Series

Follow this series to learn how modern enterprises design scalable real-time data pipelines using AWS database technologies.

If you are working on cloud data modernization or AWS database integration, feel free to share your experiences or questions in the comments.

