Monday, February 19, 2018

CPU intensive long running Concurrent Program

long running Concurrent program



Find the processes that are consuming high CPU on the DB node.


oracledb=>prstat

   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 26477 oracledb  2407M 2369M cpu2     0    0   2:59:40  13% oracle/39
  1958 oracledb  2393M 2355M cpu6     0    0   1:54:27  12% oracle/1
  2124 oracledb  2393M 2355M cpu2     0    0   1:53:35  12% oracle/1
  2202 oracledb  2393M 2355M cpu5     0    0   1:52:07  12% oracle/1
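
Note: prstat is Solaris-specific. On a Linux DB node (not covered in the original output above), a rough equivalent is to sort ps output by CPU usage:

oracledb=>ps -eo pid,user,pcpu,etime,comm --sort=-pcpu | head -10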



 Identify the client PROCESS (the Apps node PID) from the DB.


SQL>select s.sid, s.serial#, s.process, s.program, s.module 
    from v$session s, v$process p
    where s.paddr =p.addr and p.spid in ('26477','2124','1958','2202')
;


       SID    SERIAL# PROCESS      PROGRAM                                          MODULE
---------- ---------- ------------ ------------------------------------------------ --------------------
        93      35457 17991
       

On the APPLICATION Server, grep for the PROCESS:



apps=>ps -ef|grep 17991


apps 27982 25127  0 14:14:24 pts/2    0:00 grep 17991
apps 17991  9806  0 10:40:02 ?        0:02 ar60run P_CONC_REQUEST_ID=13318110 P_CORP='055' P_COMPANY='004' P_AS_OF_DATE='1


Get the Concurrent Request details



select /*+ CHOOSE*/
      'Node Name..............................: ' || q.node_name                          || chr(10) ||
      'Req id.................................: ' || Request_Id                           || chr(10) ||
      'Requestor..............................: ' || User_Name                            || chr(10) ||
      'Manager................................: ' || Q.User_Concurrent_Queue_Name         || chr(10) ||
      'Status code............................: ' || decode(status_code,'A', 'Waiting','B',
      'Resuming','D', 'Cancelled','E', 'Error', 'G', 'Warning', 'H',
      'On Hold', 'R', 'Normal','S', 'Suspended', 'T', 'Terminating',
                      'W', 'Paused', 'X','Terminated', status_code)                                       || chr(10) ||
      'Phase code.............................: '  || decode(phase_code, 'C', 'Completed',
      'I', 'Inactive', 'P', 'Pending', 'R', 'Running', phase_code)                        || chr(10) ||
      'Priority...............................: ' || Fcr.priority                         || chr(10) ||
      'Program................................: ' || Fcp.User_Concurrent_Program_Name     || chr(10) ||
      'Time so far ...........................: ' || trunc((sysdate-Fcr.actual_start_date)*24*60,2)||' min'|| chr(10) ||
      'Avg execution time in  30 days.........: ' ||  trunc(AVG_TIME,2)  ||' min'          || chr(10) ||
      'Max execution time in 30 days..........: ' ||  trunc(MAX_TIME,2)  ||' min'         || chr(10) ||
      'Fastest execution Time in 30 days......: ' ||trunc( MIN_TIME,2) || ' min'          || chr(10) ||
      'Number of executions in last 30 days...: ' ||occurance                             || chr(10) ||
      'ClientPID..............................: ' || Fcr.OS_PROCESS_ID                    || chr(10) ||
      'ServerPID..............................: ' || Fcr.ORACLE_PROCESS_ID     || chr(10) ||
      'Arguments passed to the program .......: ' ||Fcr.argument_text
       from apps.Fnd_Concurrent_Requests Fcr,
    apps.Fnd_Concurrent_Programs_vl Fcp,
    apps.Fnd_Oracle_Userid O,
    apps.Fnd_Concurrent_Processes P,
    apps.Fnd_Concurrent_Queues_vl Q,
    apps.Fnd_User,(select
    concurrent_program_id
    ,count(concurrent_program_id) occurance
    ,min(actual_completion_date-actual_start_date)*24*60 MIN_TIME
    ,max(actual_completion_date-actual_start_date)*24*60 MAX_TIME
    , avg(actual_completion_date-actual_start_date)*24*60 AVG_TIME
    from apps.fnd_concurrent_requests
    where status_code='C' and phase_code='C'
    and trunc(actual_start_date)>trunc(sysdate-30)
    group by concurrent_program_id
    having max(actual_completion_date-actual_start_date)*24*60 >5) CREQ
    where Controlling_Manager = Concurrent_Process_ID
    and ( P.Concurrent_Queue_ID=Q.Concurrent_Queue_ID AND P.Queue_Application_ID = Q.Application_ID )
    and O.Oracle_Id = Fcr.Oracle_Id and Request_Id in ('&REQID')
    and( Fcr.Program_Application_Id=Fcp.Application_Id
    and  Fcr.Concurrent_Program_Id=Fcp.Concurrent_Program_Id)
    and Requested_By = User_Id
    and Phase_Code = 'R' and status_code in ('R','T')
    --adding joins with new
    and Fcr.Concurrent_Program_Id=CREQ.Concurrent_Program_Id(+)
    Order By   Q.User_Concurrent_Queue_Name,q.node_name, Actual_Start_Date,Request_Id;



Enter value for reqid: 13318110
old  40:     and O.Oracle_Id = Fcr.Oracle_Id and Request_Id in ('&REQID')
new  40:     and O.Oracle_Id = Fcr.Oracle_Id and Request_Id in ('13318110')

Node Name..............................: server02
Req id.................................: 13318110
Requestor..............................: USER10
Manager................................: Standard Manager
Status code............................: Normal
Phase code.............................: Running
Priority...............................: 50
Program................................: Some Long Running Program
Time so far ...........................: 218.63 min
Avg execution time in  30 days.........:  min
Max execution time in 30 days..........:  min
Fastest execution Time in 30 days......:  min
Number of executions in last 30 days...:
ClientPID..............................:
ServerPID..............................:
Arguments passed to the program .......: , , , , 055, 004, , , , , 10-JUL-2014, , Merchant Number, , 71, Y, N

GATHER_STATS_JOB job log history




Reference taken from -

History of the last two runs of GATHER_STATS_JOB:


col job_name FOR a30
SET lines 150
SELECT * FROM
     (SELECT log_date,job_name,status,actual_start_date,run_duration
     FROM DBA_SCHEDULER_JOB_RUN_DETAILS
     WHERE job_name='GATHER_STATS_JOB'
     ORDER BY log_id DESC)
     WHERE rownum<=2;
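
Note: GATHER_STATS_JOB exists only in 10g; from 11g onwards the equivalent work runs as the 'auto optimizer stats collection' autotask. A similar check against the 11g autotask history (an addition, not from the original reference) is:

SET lines 150
col client_name FOR a40
SELECT client_name, job_status, job_start_time, job_duration
     FROM DBA_AUTOTASK_JOB_HISTORY
     WHERE client_name='auto optimizer stats collection'
     ORDER BY job_start_time DESC;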

Another Script

SET lines 150
col OPERATION FOR a30
col TARGET FOR a5
col START_TIME FOR a40
col END_TIME FOR a40
SELECT * FROM dba_optstat_operations ORDER BY start_time DESC;
To see when statistics on a table were changed, use the DBA_TAB_STATS_HISTORY view
(the job that gathered them is unfortunately not recorded):

 SET lines 180
 SELECT owner, table_name, stats_update_time
     FROM dba_tab_stats_history
     WHERE owner='&OWNER'
     AND table_name='&TABLE_NAME'
     ORDER BY owner, table_name, stats_update_time DESC;


With the above query I can easily see when statistics have changed.
I can’t tell whether they were gathered manually or by GATHER_STATS_JOB,
but the time of the gathering gives some clues about who or what did it.
To get the number of objects per day whose statistics changed
(including both GATHER_STATS_JOB and manual gathering), you can use:

SET lines 150
 SET pages 200
 SELECT TO_CHAR(stats_update_time,'yyyy-mm-dd') AS stats_update_time, COUNT(*)
     FROM dba_tab_stats_history
     GROUP BY TO_CHAR(stats_update_time,'yyyy-mm-dd')
     ORDER BY 1 DESC;

Saturday, February 17, 2018

Oracle flashback logs are not deleting


Oracle flashback logs are not deleting


SQL> select name,open_mode,flashback_on from v$database;

We had an issue where the flash recovery area usage reached 100%,
so we decided to remove the flashback logs. The steps below remove them through the database rather than at the OS level.

To see the current db_recovery_file_dest_size and the recovery area location:

SQL> show parameter db_recovery

To see the space used and the space limit (in GB):

SQL>select space_used/(1024*1024*1024),space_limit/(1024*1024*1024) from v$recovery_file_dest;
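
The query above returns GB; for an actual percentage used (a small variation, not in the original post):

SQL>select round(space_used/space_limit*100,2) pct_used from v$recovery_file_dest;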

SQL> select * from v$restore_point;

no rows selected

SQL> select sum(FLASHBACK_SIZE)/1024/1024/1024 from V$FLASHBACK_DATABASE_LOG;

To see which file types are consuming the most space in the recovery area:

SQL> select * from v$flash_recovery_area_usage;

To delete the flashback logs, reduce db_recovery_file_dest_size:

alter system set db_recovery_file_dest_size=100G;

In another session, if you tail the alert log you will see messages like the following.

Deleted Oracle managed file +FLASH/prod/FLASHBACK/log_1.3351.896398699
Deleted Oracle managed file +FLASH/prod/FLASHBACK/log_4.8287.896533879
Deleted Oracle managed file +FLASH/prod/FLASHBACK/log_2.9436.896398703
Deleted Oracle managed file +FLASH/prod/FLASHBACK/log_5.7796.897271709
Deleted Oracle managed file +FLASH/prod/FLASHBACK/log_6.7771.897273413


++ If needed, we can also increase db_recovery_file_dest_size to whatever size the retention cycle requires.
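
For example (200G is an illustrative value only; this assumes the instance uses an spfile):

SQL> alter system set db_recovery_file_dest_size=200G scope=both;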

SELECT scn, database_incarnation#, guarantee_flashback_database, storage_size, time, name FROM gv$restore_point;

Check whether you have any guaranteed restore points. If so, cross-check and drop any that are no longer needed, since flashback logs protected by a guaranteed restore point are never deleted automatically.
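
To drop a guaranteed restore point (CLEAN_DB here is just a placeholder; use the NAME returned by the query above):

SQL> drop restore point CLEAN_DB;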



( Reference from  http://oracletechdba.blogspot.com)

Wednesday, February 14, 2018

Output Post Processor Down : Actual 0 Target 1

Output Post Processor Down : Actual 0 Target 1



Sometimes the Output Post Processor does not start up, and the Actual and Target counts show different values when we query the Output Post Processor.
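
To see the Actual vs Target counts from the back end as well, you can query FND_CONCURRENT_QUEUES (a sketch, assuming the seeded OPP queue name FNDCPOPP):

SQL> select concurrent_queue_name, max_processes target, running_processes actual
     from apps.fnd_concurrent_queues
     where concurrent_queue_name = 'FNDCPOPP';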

The log files show no error message. In this case, apply the following possible solution to start the OPP.


1. Shut down the Internal Concurrent Manager using adcmctl.sh stop apps/****
2. Make sure there are no FNDLIBR processes running:
            $ ps -ef | grep FNDLIBR   OR   ps -ef | grep applprod | grep FNDLIBR
3. If any FNDLIBR process remains, kill it: $ kill -9 <pid>
4. Run the cmclean.sql script as documented in Note 134007.1
5. Restart the Internal Concurrent Manager using adcmctl.sh start apps/*****


Now check the current status of the Output Post Processor. If it is up, nothing more needs to be done and the issue is resolved.


But in some cases it will not come up after cmclean and the status remains the same as before. In that case, simply kill the Output Post Processor process and restart it as below.



Solution:

1.  System Administrator > Concurrent > Manager > Administer
2.  Query "Output Post Processor" -> Processes button
           Get the system (OS) PID of the "Output Post Processor"
3.  ps -ef | grep [system id]
4.  kill -9 [system id] on Unix
5.  System Administrator > Concurrent > Manager > Administer
6.  Query "Output Post Processor" -> Restart button


Now check the status again; the issue should be resolved.

Monday, February 12, 2018

Restore point in Oracle


How to quickly restore to a clean database using Oracle’s restore point


Applies to:
Oracle database – 11gR2
Problem:
----------------------------------------------------------------------------------------------------------
Often while conducting benchmarking tests, it is required to load a clean database before the start of a new run. One way to ensure a clean database is to recreate the entire database before each test run, but depending on the size of it, this approach may be very time consuming or inefficient.
Solution:
----------------------------------------------------------------------------------------------------------
This article describes how to use Oracle’s flashback feature to quickly restore a database to a state that existed just before running the workload. More specifically, this article describes steps on how to use the ‘guaranteed restore points’.
Restore point:
Restore point is nothing but a name associated with a timestamp or an SCN of the database. One can create either a normal restore point or a guaranteed restore point. The difference between the two is that guaranteed restore point allows you to flashback to the restore point regardless of the DB_FLASHBACK_RETENTION_TARGET initialization parameter i.e. it is always available (assuming you have enough space in the flash recovery area).
NOTE: In this article Flashback logging was not turned ON.
Guaranteed Restore point:
Prerequisites: Creating a guaranteed restore point requires the following prerequisites:
  • The user must have the SYSDBA system privileges
  • Must have created a flash recovery area
  • The database must be in ARCHIVELOG mode
Create a guaranteed restore point:
After you have created or migrated a fresh database, first thing to do is to create a guaranteed restore point so you can flashback to it each time before you start a new workload. The steps are as under:
1.      $> su – oracle
2.      $> sqlplus / as sysdba;
3.      Find out if ARCHIVELOG is enabled
SQL> select log_mode from v$database;
If step 3 shows that ARCHIVELOG is not enabled then continue else skip to step 8 below.
4.      SQL> shutdown immediate;
5.      SQL> startup mount;
6.      SQL> alter database archivelog;
7.      SQL> alter database open;
8.      SQL> create restore point CLEAN_DB guarantee flashback database;
where CLEAN_DB is the name given to the guaranteed restore point.
Viewing the guaranteed restore point
SQL> select * from v$restore_point;
Verify the information about the newly created restore point. Also, note down the SCN# for reference and we will refer to it as “reference SCN#”
Flashback to the guaranteed restore point
Now, in order to restore your database to the guaranteed restore point, follow the steps below:
1.      $> su – oracle
2.      $> sqlplus / as sysdba;
3.      SQL> select current_scn from v$database;
4.      SQL> shutdown immediate;
5.      SQL> startup mount;
6.      SQL> select * from v$restore_point;
7.      SQL> flashback database to restore point CLEAN_DB;
8.      SQL> alter database open resetlogs;
9.      SQL> select current_scn from v$database;
Compare the SCN# from step 9 above to the reference SCN#.
NOTE: The SCN# from step 9 above may not necessarily be the exact SCN# as the reference SCN# but it will be close enough. 

If it is a RAC database, use srvctl to stop and start the database.
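
A minimal sketch, assuming an 11gR2 RAC database registered in Clusterware as PROD (substitute your own database name); the flashback itself is still performed from a single mounted instance as in steps 5-8 above:

$> srvctl stop database -d PROD
   (run startup mount, flashback database to restore point, and alter database open resetlogs on one instance)
$> srvctl start database -d PROD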


Wednesday, February 7, 2018

ORACLE RAC CRSCTL COMMAND

ORACLE RAC ASM CRSCTL COMMAND
------------------------------------------------------

CRSCTL STATUS COMMAND script with awk
---------------------------------------

crsctl status res |grep -v "^$"|awk -F "=" 'BEGIN {print " "} {printf("%s",NR%4 ? $2"|" : $2"\n")}'|sed -e 's/ *, /,/g' -e 's/, /,/g'|awk -F "|" 'BEGIN { printf "%-40s%-35s%-20s%-50s\n","Resource Name","Resource Type","Target ","State" }{ split ($3,trg,","); split ($4,st,",")}{for (i in trg) {printf "%-40s%-35s%-20s%-50s\n",$1,$2,trg[i],st[i]}}'
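
If you only need a readable status listing, 11gR2 and later Clusterware also provide a built-in tabular format:

crsctl stat res -t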

Tuesday, February 6, 2018

PostgreSQL- LOGS LOCATION


PostgreSQL- LOGS LOCATION



PostgreSQL users can select any of several different ways to handle database logs, or even choose a combination. But especially for new users, or those getting used to an unfamiliar system, just finding the logs can be difficult.

Log Entries
-------------

postgres=# show log_destination ;
 log_destination
-----------------
 stderr
(1 row)
The log_destination setting tells PostgreSQL where log entries should go.
In most cases it will be one of four values, though it can also be a comma-separated list of any of those four values.
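
For example, to send entries to both stderr and syslog you could edit postgresql.conf, or on PostgreSQL 9.4 and later use ALTER SYSTEM; log_destination only needs a configuration reload, not a restart:

postgres=# ALTER SYSTEM SET log_destination = 'stderr,syslog';
postgres=# SELECT pg_reload_conf();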

SYSLOG
Syslog is a complex beast, and if your logs are going here, you’ll want more than this blog post to help you.
Different systems have different syslog daemons, those daemons have different capabilities and require different configurations,
and we simply can’t cover them all here. Your syslog may be configured to send PostgreSQL logs anywhere on the system,
or even to an external server. For your purposes, though, you’ll need to know what “ident” and “facility” you’re using.
These values tag each syslog message coming from PostgreSQL, and allow the syslog daemon to sort out where the message should go.
 You can find them like this:


postgres=# show syslog_facility ;
 syslog_facility
-----------------
 local0
(1 row)

postgres=# show syslog_ident ;
 syslog_ident
--------------
 postgres
(1 row)

STDERR
This is probably the most common log destination (it’s the default, after all) and can get fairly complicated in itself.
Selecting “stderr” instructs PostgreSQL to send log data to the “stderr” (short for “standard error”)
 output pipe most operating systems give every new process by default.
 The difficulty is that PostgreSQL or the applications that launch it can then redirect this pipe to all kinds of different places.
If you start PostgreSQL manually with no particular redirection in place, log entries will be written to your terminal:
[~]$ pg_ctl -D $PGDATA start
server starting
[~]$ LOG:  database system was shut down at 2014-11-05 12:48:40 MST
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
LOG:  statement: select syntax error;
ERROR:  column "syntax" does not exist at character 8
STATEMENT:  select syntax error;

In this output you’ll see the entries from me starting the database, connecting to it from some other terminal, and issuing the obviously erroneous command “select syntax error”. But there are several ways to redirect this elsewhere. The easiest is with pg_ctl’s -l option, which essentially redirects stderr to a file, in which case the startup looks like this:

[ ~]$ pg_ctl -l logfile -D $PGDATA start

server starting

Another possibility is the logging collector, which captures log output and writes it to files. In this system, logging_collector is turned on, which means we have to find out where it’s collecting logs. First, check log_directory. In the example below it is the default relative path “pg_log”, which is relative to the PostgreSQL data directory. Log files are named according to a pattern in log_filename. Each of these settings is shown below:

postgres=# show logging_collector ;
 logging_collector
-------------------
 on
(1 row)

postgres=# show log_directory ;
 log_directory
---------------
 pg_log
(1 row)

postgres=# show data_directory ;
   data_directory
---------------------
 /var/lib/pgsql/data
(1 row)

postgres=# show log_filename ;
   log_filename
-------------------
 postgresql-%a.log
(1 row)
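
As a shortcut (not covered in the original text), PostgreSQL 10 and later can report the file the logging collector is currently writing to:

postgres=# SELECT pg_current_logfile();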

Check size of tables and objects in PostgreSQL database

 To get an overview of how much space each database takes, call:
SELECT
    pg_database.datname,
    pg_size_pretty(pg_database_size(pg_database.datname)) AS size
    FROM pg_database;
To get more details, call:
SELECT
   relname as "Table",
   pg_size_pretty(pg_total_relation_size(relid)) As "Size",
   pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid)) as "External Size"
   FROM pg_catalog.pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC;
It will show 3 columns:
  • Table – The name of the table
  • Size – The total size that this table takes
  • External Size – The size that related objects of this table like indices take
If you want to dig deeper and see the size of all objects (of at least 64 kB):
SELECT
   relname AS objectname,
   relkind AS objecttype,
   reltuples AS "#entries", pg_size_pretty(relpages::bigint*8*1024) AS size
   FROM pg_class
   WHERE relpages >= 8
   ORDER BY relpages DESC;
This will show 4 columns:
  • objectname – The name of the object
  • objecttype – r for the table, i for an index, t for toast data, ...
  • #entries – The number of entries in the object (e.g. rows)
  • size – The size of the object