Sunday, November 14, 2021

Oracle E-Business Suite Performance Analysis


The purpose of this article is to describe how to resolve performance issues on Oracle E-Business Suite and its database, covering:
  • The typical process flow for resolving performance issues on Oracle E-Business Suite.

  • How to determine which diagnostic tools and data to use.

  • The methods available for gathering diagnostics.

  • The common mistakes to avoid.


Solution Flow Diagram

The following diagram is a very simplified view of the process.

There are many other paths and actions that can be taken depending on the information received and the analyst's level of experience. There are also other diagnostics that can be obtained. However, this diagram covers the most likely paths.




Where to Start

The usual place to start is the SQL Trace and TKPROF. However, this is not always the best starting point.

If the performance issue is not (yet) linked to a specific module or is system wide then an AWR report would be the best starting point.

If the specific SQLs with the performance issues are known, the SQL is still in memory (the cursor is still loaded in the cursor cache), and the "statistics_level" initialization parameter was set to ALL (or the initialization parameter "_rowsource_execution_statistics" was set to TRUE) when the SQLs were executed, then the Display Cursor or SQL Monitor report can be a good place to start.

Solution Process Overview

SQL Trace / TKPROF

Around 90% of performance issues on Oracle E-Business Suite or the database are SQL-related.

So, if the module/program is known, then start with the SQL Trace (Level 8 – with waits) and TKPROF.

The timed_statistics initialization parameter must be set to TRUE, so that the SQL Trace contains CPU and elapsed times in the database calls and the row source statistics (STAT lines).

In most circumstances this will show the actual runtime execution plan with row source statistics, so it should be possible to identify any inefficient execution plan steps from the TKPROF. However, it does not show which predicates are applied on each execution plan line.

Row source statistics are the actual (not estimated) statistics (elapsed time, physical reads/writes, consistent reads) for each execution plan line.

It will also identify any time consuming wait events.

For more information on obtaining and interpreting SQL Trace and TKPROF see My Oracle Support document "Oracle E-Business SQL Trace and TKPROF Guide (Document 1674024.1)".
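
As an illustration, a minimal sketch of capturing a level 8 (with waits) SQL Trace for another database session and formatting it with TKPROF is shown below. The SID, serial# and file names are placeholders; on Oracle E-Business Suite the trace is more usually enabled through the relevant profile options or concurrent program trace options, as described in the document above.

    -- Ensure timings are collected (usually already TRUE)
    ALTER SYSTEM SET timed_statistics = TRUE;

    -- Enable a level 8 (with wait events) trace for a specific session
    -- (the SID and serial# values are placeholders)
    BEGIN
      DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 1234,
                                        serial_num => 56789,
                                        waits      => TRUE,
                                        binds      => FALSE);
    END;
    /

    -- ... reproduce the slow module/program, then stop the trace ...
    BEGIN
      DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 1234, serial_num => 56789);
    END;
    /

    -- Format the raw trace file with TKPROF, sorting by elapsed time (file names are placeholders)
    $ tkprof MYSID_ora_12345.trc MYSID_ora_12345.txt sort=prsela,exeela,fchela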


AWR (Automatic Workload Repository)

The AWR report should normally be obtained on the first pass as well. This should be for the period (between snapshots) that the module / program was running.

This can help identify expensive SQLs, time consuming wait events, system resource issues or DB configuration issues.

It does not report execution plans (runtime or explain plan).

Even if the module/program or SQLs with the performance issue are known:

  • It could be caused or amplified by general performance issues, configuration or system capacity (I/O, memory, processing etc).

  • It could be due to interaction or contention with other sessions/processes.

  • The AWR may have supplementary information that can help diagnose the issue.

In some cases, Active Session History (ASH) reports can be used to give more detailed information on the SQLs, wait events, objects and row sources on which the performance issue occurs.

For more information on obtaining and interpreting AWR reports see My Oracle Support document "Performance Diagnosis with Automatic Workload Repository (AWR) (Document 1674086.1)".
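
For example, the standard scripts shipped with the database can be run from SQL*Plus to generate AWR and ASH reports (a sketch; each script prompts for the report format, snapshot range or time window, and output file name):

    -- AWR report for the snapshot interval covering the problem period
    @?/rdbms/admin/awrrpt.sql

    -- ASH report for a shorter, more specific time window
    @?/rdbms/admin/ashrpt.sql

    -- Optionally take a manual snapshot immediately before and after the run,
    -- so that the AWR interval brackets the problem as closely as possible
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;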


Display Cursor / SQL Monitor Report

If specific SQLs with the potential performance issues are known, the SQL is still in memory (the cursor is still loaded in the cursor cache), and "statistics_level" was set to ALL (or "_rowsource_execution_statistics" was set to TRUE) when the SQL was executed, then the Display Cursor or SQL Monitor report can be a good place to start.

These reports can be produced whilst a SQL is still executing.

To produce these reports it will normally be necessary to know the sql_id of SQLs with potential performance issues. 

The Display Cursor report provides the actual runtime execution plan and row source statistics along with the predicates applied on each execution plan line. 

Row source statistics are the actual (not estimated) statistics (starts, number of rows, elapsed time, physical reads, buffer gets, memory used (in PGA)) for each execution plan line.

The Display Cursor report will only provide actual row source statistics if the "statistics_level" initialization parameter is set to ALL ( or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed.
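
A minimal sketch of producing a Display Cursor report for a known sql_id (the sql_id value is a placeholder; the 'ALLSTATS LAST' format reports the row source statistics and predicates for the last execution):

    -- Ensure row source statistics are collected before the SQL runs
    -- (alternatively, add the /*+ gather_plan_statistics */ hint to the SQL)
    ALTER SESSION SET statistics_level = ALL;

    -- ... run the SQL or module being investigated ...

    -- Report the actual runtime execution plan, row source statistics and predicates
    SET LINESIZE 200 PAGESIZE 0
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(sql_id          => 'abcd123xyz456',
                                           cursor_child_no => NULL,
                                           format          => 'ALLSTATS LAST'));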

The SQL Monitor report also provides the actual runtime execution plan. It shows row source statistics (executions, rows, memory usage, I/O Requests, CPU Activity, Wait Activity) for each runtime execution plan line, but there are differences with those provided on the Display Cursor report.

It will show executions, rows and memory usage and I/O requests even if the "statistics_level" initialization parameter is not set to ALL ( or "_rowsource_execution_statistics" is not set to TRUE) at the time the SQL is executed.

However, it will only show Active Period, CPU Activity and Wait Activity for execution plan lines if the "statistics_level" initialization parameter is set to ALL ( or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed. So these parameter values are preferred for SQL Monitor reports.

It also gives a good view of how parallel SQL/DML performs across stages of the plan and parallel child (secondary) processes.

Display Cursor report is preferred if the SQL/DML is non-parallel.

SQL Monitor report is preferred if the SQL/DML is parallel.
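
A minimal sketch of producing a SQL Monitor report for a known sql_id (the sql_id value is a placeholder; type 'ACTIVE' produces an HTML report that can be opened in a browser, 'TEXT' a plain-text one):

    SET LONG 1000000 LONGCHUNKSIZE 1000000 PAGESIZE 0 LINESIZE 1000 TRIMSPOOL ON
    SPOOL sql_monitor_report.html

    SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(sql_id       => 'abcd123xyz456',
                                           type         => 'ACTIVE',
                                           report_level => 'ALL')
    FROM   dual;

    SPOOL OFF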

The advantage of these diagnostics is that they can be obtained quickly and after the event. There is often no need for another iteration to obtain the SQL Trace / TKPROF.


Most of the time spent in the application tier

High levels of "SQL*Net message from client" can indicate a large amount of time spent in the client application or on the application tier. This will not just be a high total time, but high average wait time as well.

Normally the "SQL*Net message from client" waits will average around 1ms or less. However, it really depends on the hardware/network configuration and bandwidth between tiers.

Note that:

  • Very large "SQL*Net message from client" waits are usually due to waiting for user entry on the client, particularly at the start and end of the trace.

  • A high number of short "SQL*Net message from client" waits usually indicates a high number of SQL executions (which should closely match the execution count of the SQLs) and does not indicate a performance issue on the application tier.

  • A consistently high level of "SQL*Net message from client" wait times might also indicate network bandwidth or latency issues.
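
As a rough check, the average wait time for this event for a session that is still connected can be queried from V$SESSION_EVENT (the SID value is a placeholder):

    SELECT event,
           total_waits,
           ROUND(time_waited_micro / total_waits / 1000, 2) AS avg_wait_ms
    FROM   v$session_event
    WHERE  sid   = 1234
    AND    event = 'SQL*Net message from client';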


Operating System Statistics

On Linux/UNIX environments there are a number of individual operating system commands that can be used to monitor overall resource usage, or the resource usage of individual processes (e.g. "sar", "ps", "top", "ifconfig", "mpstat", "iostat", "netstat", "vmstat", etc.).

SQL Tuning

There are many ways to fix an inefficient execution plan, including:

  • Creating a SQL Profile to replicate a previous efficient execution plan

  • Applying recommendations from the ADDM report

  • Using SQL Tuning Advisor (if execution plan is very complex)

  • Removing a throwaway issue (where the number of rows processed on one or more steps of the execution plan is unusually high, normally because filters/predicates are applied too late and unnecessary rows are included in intermediate result sets). This is often done by changing/specifying the join order or join method, adding more selective indexes, or expanding OR conditions.

  • Creating a histogram for columns that have non-uniform distributions

  • Creating custom indexes to give more selective access

  • Creating a SQL Profile to apply hints

  • Controlling the use of SQL Plan Directives (SPD) (12c only)

  • Using the optimizer_features_enable CBO initialization parameter to revert to CBO functionality from a previous database version (if execution plan has regressed after a database upgrade)
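
As an illustration of two of the approaches listed above (a sketch only; the schema, table, column and version values are placeholders, and on Oracle E-Business Suite statistics are normally gathered with FND_STATS or the Gather Statistics concurrent programs rather than DBMS_STATS directly):

    -- Create a histogram on a column with a non-uniform (skewed) data distribution
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname    => 'XX_OWNER',        -- placeholder schema
                                    tabname    => 'XX_ORDER_LINES',  -- placeholder table
                                    method_opt => 'FOR COLUMNS status_code SIZE 254',
                                    cascade    => TRUE);
    END;
    /

    -- Revert the optimizer to the behaviour of a previous database version for one session,
    -- to test whether the plan regressed after a database upgrade
    ALTER SESSION SET optimizer_features_enable = '11.2.0.4';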

In some cases the execution plan could be the most efficient, but there are still high levels of buffer gets (logical reads) or disk reads (physical reads) for certain execution plan lines.

In this case the time taken to read rows for specific objects (tables / indexes) could be reduced by:

  • Rebuilding fragmented objects

  • Caching frequently used objects in the buffer cache (for example, by assigning them to the KEEP buffer pool)
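
For example (a sketch; the object names are placeholders, and on Oracle E-Business Suite any such changes should follow the standard application and DBA practices):

    -- Rebuild a fragmented index and reorganise a fragmented table
    ALTER INDEX xx_order_lines_n1 REBUILD ONLINE;
    ALTER TABLE xx_order_lines MOVE;
    -- Note: indexes on a moved table become unusable and must be rebuilt afterwards

    -- Keep a small, frequently accessed table in the KEEP buffer pool
    ALTER TABLE xx_lookup_codes STORAGE (BUFFER_POOL KEEP);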


Performance Tuning

The issue could be a system or application module wide performance issue caused by:

  • System or application configuration

  • Unnecessary historic data in the application tables that requires purging.

  • Incorrect database initialization parameters (including CBO parameters), or incorrect SGA or PGA sizing; for example, increasing the SGA or PGA may reduce I/O (see the advisory query after this list).

  • Inaccurate or incomplete CBO statistics

  • New database features (e.g. Adaptive Query Optimization)
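
For the SGA/PGA item above, the advisory views can indicate whether increasing the SGA or PGA target is likely to reduce I/O, as sketched below (querying these views requires appropriate privileges):

    -- Estimated physical reads for different SGA sizes
    SELECT sga_size, sga_size_factor, estd_physical_reads
    FROM   v$sga_target_advice
    ORDER  BY sga_size;

    -- Estimated PGA cache hit percentage for different PGA targets
    SELECT pga_target_for_estimate, pga_target_factor, estd_pga_cache_hit_percentage
    FROM   v$pga_target_advice
    ORDER  BY pga_target_for_estimate;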

How to tune the problem SQL

  • Correct missing, incomplete or out-of-date CBO statistics (see the example after this list).

  • Controlling the use of SQL Plan Directives (SPD) (12c only)

  • Using the optimizer_features_enable CBO initialization parameter to revert to CBO functionality from a previous database version (if execution plan has regressed after a database upgrade)

  • Rebuilding fragmented objects

  • Caching frequently used objects in the buffer cache (for example, by assigning them to the KEEP buffer pool)
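
For the first item, a sketch of re-gathering CBO statistics for a single table is shown below (the owner and table names are placeholders, and this assumes the standard FND_STATS.GATHER_TABLE_STATS interface; on Oracle E-Business Suite, FND_STATS or the Gather Table/Schema Statistics concurrent programs should be used rather than calling DBMS_STATS directly):

    -- Re-gather statistics for one table using the E-Business Suite FND_STATS wrapper
    BEGIN
      FND_STATS.GATHER_TABLE_STATS(ownname => 'AR',                   -- placeholder owner
                                   tabname => 'RA_CUSTOMER_TRX_ALL'); -- placeholder table
    END;
    /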



==================
DOC ID: 1672174.1 (My Oracle Support)
==================
