Monday, May 29, 2017

ASM Concepts

ASM Concepts Quick Overview (Doc ID 1086199.1)

  • ASM exists to manage file storage for the RDBMS
    • ASM does NOT perform I/O on behalf of the RDBMS
    • I/O is performed by the RDBMS processes as it does with other storage types
    • Thus, ASM is not an intermediary for I/O (would be a bottleneck)
    • I/O can occur synchronously or asynchronously depending on the value of the DISK_ASYNCH_IO parameter
    • Disks are RAW devices to ASM
    • Files that can be stored in ASM: typical database data files, control files, redologs, archivelogs, flashback logs, spfiles,
      RMAN backups and incremental tracking bitmaps, datapump dumpsets.
    • In 11gR2, ASM has been extended to allow storing any kind of file using Oracle ACFS capability (it appears as another filesystem to clients). Note that database files are not supported within ACFS

    • ASM Basics
      • The smallest unit of storage written to disk is called an "allocation unit" (AU) and is usually 1MB (4MB recommended for Exadata)
      • Very simply, ASM is organized around storing files
      • Files are divided into pieces called "extents"
      • Extent sizes are typically equal to 1 AU, except in 11.1 and 11.2, which use variable extent sizes of 1, 8, or 64 AUs and 1, 4, or 16 AUs respectively
      • File extent locations are maintained by ASM using file extent maps.
      • ASM maintains file metadata in headers on the disks rather than in a data dictionary
      • The file extent maps are cached in the RDBMS shared pool; these are consulted when an RDBMS process does I/O
      • ASM is very crash resilient since it uses instance / crash recovery similar to a normal RDBMS (similar to using undo and redo logging)
    • Storage is organized into "diskgroups" (DGs)
      • A DG has a name like "DATA", which is visible to the RDBMS as a path beginning with "+DATA"; when tablespaces are created, they refer to a DG for storage, such as "+DATA/.../..."
      • Beneath a diskgroup are one or more failure groups (FGs)
      • FGs are defined over a set of "disks"
      • "Disks" can be based on raw physical volumes, a disk partition, a LUN presenting a disk array, or even an LVM or NAS device
      • FGs should have disks defined that have a common failure component, otherwise ASM redundancy will not be effective
    • High availability
      • ASM can perform mirroring to recover from device failures
      • You have a choice of EXTERNAL, NORMAL, or HIGH redundancy mirroring (a sample query for checking a diskgroup's redundancy and AU size appears after this list):
        EXTERNAL means the underlying physical disk array does the mirroring
        NORMAL means ASM creates one additional copy of each extent for redundancy
        HIGH means ASM creates two additional copies of each extent for redundancy
      • Mirroring is implemented via "failure groups" and extent partnering; ASM can tolerate the complete loss of all disks in a failure group when NORMAL or HIGH redundancy is implemented
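
A minimal query sketch (assuming SQL*Plus connected to the ASM instance) for checking each diskgroup's state, redundancy type, and AU size:

select name, state, type redundancy, allocation_unit_size/1024/1024 au_mb, total_mb, free_mb
from v$asm_diskgroup;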

  • FG mirroring implementation
    • Mirroring is not implemented like RAID 1 arrays (where a disk is partnered with another disk)
    • Mirroring occurs at the file extent level and these extents are distributed among several disks known as "partners"
    • Partner disks will reside in one or more separate failure groups (otherwise mirror copies would be vulnerable)
    • ASM automatically chooses partners and limits the number of them to less than 10 (varies by RDBMS version) in order to contain the overall impact of multiple disk failures
    • If a disk fails, then ASM updates its extent mapping such that reads will now occur on the surviving partners
      • This is one example when ASM and the RDBMS communicate with each other
      • The failed disk is offlined
    • In 11g, while the disk is offline, any changes to files are tracked so that those changes can be reapplied if the disk is brought back online within a period of time (3.6 hours by default, the value of the DISK_REPAIR_TIME attribute). This can happen in cases of a bad controller or similar problem rather than the failure of the disk itself
      • The tracking occurs via a bitmap of changed file extents; the bitmap tells ASM which extents need to be copied back to the repaired disk from the partners
      • This is called "fast mirror resync" (a sample check and online command appear after this list)
    • In 10g, the disk is offlined and dropped - there is no repair time grace period before dropping.
    • If the disk cannot be onlined, it must be dropped. A new disk will be installed and ASM will copy the data back via a "rebalancing" operation.  This happens automatically in the background
  • Rebalancing
    • "Rebalancing" is the process of moving file extents onto or off of disks for the purpose of evenly distributing the I/O load of the diskgroup
    • It occurs asynchronously in the background and can be monitored
    • In a clustered environment, rebalancing for a disk group is done within a single ASM instance only and cannot be distributed across multiple cluster nodes to speed it up
    • ASM will automatically rebalance data on disks when disks are added or removed
    • The speed and effort placed on rebalancing can be controlled via a POWER LIMIT setting
    • POWER LIMIT controls the number of background processes involved in the rebalancing effort and is limited to 11 (starting with 11.2.0.2 it can range from 0 to 1024).  Level 0 means no rebalancing will occur
    • I/O performance is impacted during rebalancing, but the amount of impact depends on which disks are being rebalanced and how much they contribute to the I/O workload.  The default power limit was chosen so as not to impact application performance (a sample rebalance command and monitoring query appear after this list)
  • Performance
    • ASM will maximize the available bandwidth of disks by striping file extents across all disks in a DG
      • Two stripe widths are available: coarse, with a stripe size of 1 AU, and fine, with a stripe size of 128 KB (the stripe width used for each file type can be seen in V$ASM_TEMPLATE; a sample query appears after this list)
      • Fine striping still uses normally-sized file extents, but the striping occurs in small pieces across these extents in a round-robin fashion
    • ASM does not read from alternating mirror copies since disks contain primary and mirror extents and I/O is already balanced
      • By default the RDBMS will read from a primary extent; in 11.1 this can be changed via the ASM_PREFERRED_READ_FAILURE_GROUPS parameter setting for cases where reading extents from a local node results in lower latency.  Note: This is a special case applicable to "stretch clusters" and not applicable in the general usage of ASM
  • Miscellaneous
    • ASM can work for RAC and non-RAC databases
    • One ASM instance on a node will service any number of instances on that node
    • If using ASM for RAC, ASM must also be clustered to allow instances to update each other when file mapping changes occur
    • In 11.2 onwards, ASM is installed in a grid home along with the clusterware as opposed to an RDBMS home in prior versions.
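
A minimal sketch (diskgroup and disk names are examples only) for checking an offlined disk's remaining repair window and bringing it back online for a fast mirror resync:

-- REPAIR_TIMER shows the seconds left before an offlined disk gets dropped
select group_number, name, mode_status, repair_timer from v$asm_disk;

-- set the repair window (11g diskgroup attribute) and online the repaired disk
alter diskgroup DATA set attribute 'disk_repair_time' = '3.6h';
alter diskgroup DATA online disk DATA_0001;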
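
A minimal sketch (the diskgroup name and power value are examples) of starting a manual rebalance and monitoring its progress:

alter diskgroup DATA rebalance power 4;

-- EST_MINUTES is the estimated time remaining for the operation
select group_number, operation, state, power, sofar, est_work, est_rate, est_minutes
from v$asm_operation;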
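
A minimal sketch showing the stripe width (COARSE or FINE) and redundancy each ASM file template uses per diskgroup:

select g.name diskgroup, t.name template, t.stripe, t.redundancy
from v$asm_template t, v$asm_diskgroup g
where t.group_number = g.group_number
order by g.name, t.name;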

Monday, May 22, 2017

Rapid Clone Ouicli.pl Fails: "Finished OUI CLI cloning


Oracle Applications Manager - Version 12.0.4
Information in this document applies to any platform.

SYMPTOMS

Symptom #1:
While cloning a 12.0.4 environment using the Rapid Clone option "dbTier", the following error occurs when Rapid Clone runs "ouicli.pl":

$ cd /appsutil/clone/bin
$ perl adcfgclone.pl dbTier

/appsutil/jre/bin/java -classpath
/appsutil/java://oui/jlib/OraInstaller.jar://appsutil/java/xmlparserv2.jar
oracle.apps.ad.clone.util.OracleHomeCloner -OUICLI -e

AutoConfig could not successfully execute the following scripts:
ouicli.pl INSTE8_APPLY 255

Symptom #2:
---------

Checking the "ohclone.log" referenced from the "ouicli.pl" output, it has not been recently updated or does not exist at all.

Symptom #3:
----------

Running "perl ouicli.pl" from "$ORACLE_HOME/appsutil/clone", the script exists very quickly and shows the same error

CAUSE
------

The perl program "ouicli.pl" is unable to locate the JDK installation. This could be due to any of the following reasons:

1) There is no JDK/JRE installed under the "/appsutil/jre/bin" directory.

2) There is a problem, such as file corruption, with the JDK.

3) The value of "s_adjvaprg" in the Context File is set to an invalid location.

SOLUTION
1. Ensure there is a working "java" executable under "/appsutil/jre/bin":
$ cd $ORACLE_HOME/appsutil/jre/bin
$ ./java -version
java version "1.6.0_07"
Java(TM) SE Runtime Environment (build 1.6.0_07-b06)
Java HotSpot(TM) Server VM (build 10.0-b23, mixed mode)

If the version information is returned, the JDK/JRE is working. This "java" executable file should be referenced directly in the Context File, as noted in step #2.

2. Check the value of "s_adjvaprg" in the Context File:

$ cd $ORACLE_HOME/appsutil/

$ grep s_adjvaprg *.xml   (assumes there is only 1 ".xml" file)

$ORACLE_HOME/12102/appsutil/jre/bin/java

Note that the value of "s_adjvaprg" points to the full path to the java executable and includes the name of the executable, "java", as well.

3. If there is no "java" under "/appsutil/jre/bin", you will need to replace the "jre" directory and all sub-directories with a full JDK/JRE installation. Generally, you can find this under "$ORACLE_HOME/jdk"

$ cp -r $ORACLE_HOME/jdk/ $ORACLE_HOME/appsutil/jre/

4. If any changes have been made, re-run the "adcfgclone.pl" command.


Executed again:

perl adcfgclone.pl dbTier

Successfully completed

Friday, May 19, 2017

How to Install Oracle 12c and ASM on RedHat

How to Calculate the Number of IOPS and Throughput of a Database

How to Calculate the Number of IOPS and Throughput of a Database (Doc ID 2206831.1)

The goal of this article is to explain how to calculate the IOPS and throughput of a Database.

The information regarding IOPS and Throughput can be found in different places of the AWR report:

  • Instance Activity Stats
  • IO Profile (starting with 11gR2)
  • Load Profile 


Instance Activity Stats


IOPS (Input/Output Operations Per Second) - This is the sum of Physical Read Total IO Requests and Physical Write Total IO Requests.

Throughput - This is the sum of Physical read total bytes and Physical write total bytes.



For example, using values taken from the Instance Activity Stats section of an AWR report:

IOPS            = Physical Read Total IO Requests + Physical Write Total IO Requests
                    = 83.10 + 361.28
                    = 444.38

Throughput  = Physical read total bytes + Physical write total bytes
                    = 19,045,685.11 + 42,594,391.17
                    = 61,640,076.28 bytes
                    = 58.78 MB

IO Profile 

IOPS - Total Requests (this value is the sum of the metrics Physical Read Total IO Requests Per Sec and Physical Write Total IO Requests Per Sec from the Instance Activity Stats area)

Throughput - Total (MB) per second (this value is the sum of the metrics Physical read total bytes/sec and Physical write total bytes/sec from the AWR report)

The same per-second figures can also be pulled straight from V$SYSMETRIC; a sample query follows.
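
A minimal query sketch (assuming access to V$SYSMETRIC; group_id 2 is normally the 60-second interval) that reports the current per-second IOPS and throughput figures without generating an AWR report:

select metric_name, round(value, 2) value_per_sec
from v$sysmetric
where metric_name in ('Physical Read Total IO Requests Per Sec',
                      'Physical Write Total IO Requests Per Sec',
                      'Physical Read Total Bytes Per Sec',
                      'Physical Write Total Bytes Per Sec')
  and group_id = 2;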

Sunday, May 7, 2017

Dataguard Troubleshooting / Commands



select * from v$archive_gap;
select * from v$dataguard_stats;

select flashback_on from v$database;
SELECT * FROM v$block_change_tracking;

show parameter fal;
!tnsping <server/client>
show parameter dump;
show parameter listener;
show parameter service;
show parameter log_archive_dest_2;
show parameter log_archive_dest_state_2;
show parameter dg_broker_start;

SELECT DEST_ID, dest_name, status, type, srl, RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS;

RECOVERY_MODE
-----------------------
MANAGED REAL TIME APPLY

RECOVERY_MODE
-------------
MANAGED

On Primary Database
===================

select DEST_ID, DEST_NAME, DESTINATION, TARGET, STATUS, ERROR from v$archive_dest where dest_id=2;
/
SELECT THREAD# "Thread",SEQUENCE# "Last Sequence generated"  FROM V$ARCHIVED_LOG  WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)  ORDER BY 1
/
select max(sequence#),thread# from gv$log group by thread#;

set numwidth 15
select max(sequence#) current_seq from v$log;
/


On Standby Database
===================

SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP
/
select PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS, DELAY_MINS from v$managed_standby;
/
select max(sequence#), thread# from gv$archived_log where applied='YES' group by thread#;
/
set numwidth 15
select max(applied_seq#) last_seq from v$archive_dest_status;
/

FIND GAP
--------

select thread#, low_sequence#, high_sequence# from v$archive_gap;

LISTENER VERIFICATION FROM PRIMARY DB
------------------------------------
select status,error from v$archive_dest where dest_name='LOG_ARCHIVE_DEST_2';

DEFER Log Shipping
------------------
alter system set log_archive_dest_state_2='DEFER' scope=both;

alter system set dg_broker_start=false;

ENABLE Log Shipping
-------------------
alter system set log_archive_dest_state_2='ENABLE' scope=both;

alter system set dg_broker_start=true;

DELAY CHANGE
------------
SQL> alter system set log_archive_dest_2='ARCH DELAY=15 OPTIONAL REOPEN=60 SERVICE=S1';


ARCHIVE_LAG_TARGET tells Oracle to make sure to switch a log every n seconds
----------------------------------------------------------------------------
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800 SCOPE=BOTH;
This sets the maximum lag to 30 mins.



On Primary to Display info about all log destinations
=====================================================
set pages 300 lines 300
set numwidth 15
column ID format 99
column "SRLs" format 99
column active format 99
col type format a4
select ds.dest_id id,ad.status,ds.database_mode db_mode,
ad.archiver type, ds.recovery_mode, ds.protection_mode,
ds.standby_logfile_count "SRLs",ds.standby_logfile_active active,
ds.archived_seq# from v$archive_dest_status ds,v$archive_dest ad 
where ds.dest_id = ad.dest_id and ad.status != 'INACTIVE'  
order by ds.dest_id
/

On Primary to Display log destinations options
==============================================
set pages 300 lines 300
set numwidth 10
column id format 99
select dest_id id ,archiver,transmit_mode,affirm,
async_blocks async,net_timeout net_time,delay_mins delay,
reopen_secs reopen,register,binding from v$archive_dest order by dest_id
/


==================================================================================================

Standby Database
================
select NAME,DATABASE_ROLE,OPEN_MODE,PROTECTION_MODE,PROTECTION_LEVEL, CURRENT_SCN,FLASHBACK_ON,FORCE_LOGGING from v$database;

Some possible statuses for the MRP
----------------------------------
ERROR - This means that the process has failed. 
See the alert log or v$dataguard_status for further information.

WAIT_FOR_LOG - Process is waiting for the archived redo log to be completed. 
Switch an archive log on the primary and requery 
v$managed_standby to see if the status changes to APPLYING_LOG.

WAIT_FOR_GAP - Process is waiting for the archive gap to be resolved. 
Review the alert log to see if FAL_SERVER has been called to resolve the gap.

APPLYING_LOG - Process is applying the archived redo log 
to the standby database.

CHECK MANAGED RECOVERY PROCESS : SHOWS STATUS OF ARCH,RFS,MRP PROCESS.
------------------------------
select inst_id,process,status,client_process,thread#,sequence#,block#,
blocks,delay_mins from gv$managed_standby;

select * from gv$active_instances;

!ps -ef|grep -i mrp

STARTING MRP0
-------------
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

STOPPING MRP0
------------
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

To Display MRP0 Speed
---------------------
set pages 300 lines 300
Col Values For A65
Col Recover_start For A21
Select To_char(START_TIME,'Dd.Mm.Yyyy Hh24:Mi:ss') "Recover_start",
To_char(Item)||' = '||To_char(Sofar)||' '||To_char(Units)||' '||
 To_char(TIMESTAMP,'Dd.Mm.Yyyy Hh24:Mi') "Values" 
From V$Recovery_progress Where Start_time=(Select Max(Start_time) From V$Recovery_progress);

checking log transfer and apply
-------------------------------
SELECT SEQUENCE#,FIRST_TIME,NEXT_TIME,APPLIED
 FROM V$ARCHIVED_LOG ORDER BY SEQUENCE# ;
select count(*) from V$ARCHIVED_LOG where applied='NO';
/

TIME TAKEN TO APPLY A LOG
-------------------------
set pages 300 lines 300

select TIMESTAMP,completion_time "ArchTime",SEQUENCE#,
round((blocks*block_size)/(1024*1024),1) "SizeM",
round((TIMESTAMP-lag(TIMESTAMP,1,TIMESTAMP) 
OVER (order by TIMESTAMP))*24*60*60,1) "Diff(sec)",
round((blocks*block_size)/1024/ decode(((TIMESTAMP-lag(TIMESTAMP,1,TIMESTAMP) OVER (order by TIMESTAMP))*24*60*60),0,1, (TIMESTAMP-lag(TIMESTAMP,1,TIMESTAMP) OVER (order by TIMESTAMP))*24*60*60),1) "KB/sec", round((blocks*block_size)/(1024*1024)/ decode(((TIMESTAMP-lag(TIMESTAMP,1,TIMESTAMP)
OVER (order by TIMESTAMP))*24*60*60),0,1, (TIMESTAMP-lag(TIMESTAMP,1,TIMESTAMP) OVER (order by TIMESTAMP))*24*60*60),3) "MB/sec",round(((lead(TIMESTAMP,1,TIMESTAMP) over (order by TIMESTAMP))-completion_time)*24*60*60,1) "Lag(sec)" from v$archived_log a, v$dataguard_status dgs where a.name = replace(dgs.MESSAGE,'Media Recovery Log ','') and dgs.FACILITY = 'Log Apply Services' order by TIMESTAMP desc;
/

CHECKING FOR DATAGUARD ERRORS
----------------------------
set pages 300 lines 300
column Timestamp Format a20
column Facility  Format a24
column Severity  Format a13
column Message   Format a80 trunc

Select to_char(timestamp,'YYYY-MON-DD HH24:MI:SS') Timestamp,Facility,Severity,error_code,message_num,Message from v$dataguard_status where severity in ('Error','Fatal') order by Timestamp;

select  *  from v$ARCHIVE_GAP;

--OR---
Here is another script with v$dataguard_status:

select *
  from (select TIMESTAMP,
               completion_time "ArchTime",
               SEQUENCE#,
               round((blocks * block_size) / (1024 * 1024), 1) "Size Meg",
               round((TIMESTAMP - lag(TIMESTAMP, 1, TIMESTAMP)
                      OVER(order by TIMESTAMP)) * 24 * 60 * 60,
                     1) "Diff(sec)",
               round((blocks * block_size) / 1024 /
                     decode(((TIMESTAMP - lag(TIMESTAMP, 1, TIMESTAMP)
                             OVER(order by TIMESTAMP)) * 24 * 60 * 60),
                            0,
                            1,
                            (TIMESTAMP - lag(TIMESTAMP, 1, TIMESTAMP)
                             OVER(order by TIMESTAMP)) * 24 * 60 * 60),
                     1) "KB/sec",
               round((blocks * block_size) / (1024 * 1024) /
                     decode(((TIMESTAMP - lag(TIMESTAMP, 1, TIMESTAMP)
                             OVER(order by TIMESTAMP)) * 24 * 60 * 60),
                            0,
                            1,
                            (TIMESTAMP - lag(TIMESTAMP, 1, TIMESTAMP)
                             OVER(order by TIMESTAMP)) * 24 * 60 * 60),
                     3) "MB/sec",
               round(((lead(TIMESTAMP, 1, TIMESTAMP) over(order by TIMESTAMP)) -
                     completion_time) * 24 * 60 * 60,
                     1) "Lag(sec)"
          from v$archived_log a, v$dataguard_status dgs
         where a.name = replace(dgs.MESSAGE, 'Media Recovery Log ', '')
           and dgs.FACILITY = 'Log Apply Services'
         order by TIMESTAMP desc)
 where rownum < 10;

Finding Missing Logs on Standby
-------------------------------
select local.thread#,local.sequence# from (select thread#,sequence# from v$archived_log where dest_id=1) local where local.sequence# not in (select sequence# from v$archived_log where dest_id=2 and thread# = local.thread#)
/

Check which logs have not been applied
--------------------------------------
alter session set nls_date_format='YYYY-MM-DD HH24:MI.SS';
SELECT SEQUENCE#, APPLIED, completion_time FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

REGISTERING LOGFILE
-------------------
alter database register logfile '/file/path/';

RECOVERY PROGRESS ON STANDBY SITE
---------------------------------
v$managed_standby
v$archived_standby

v$archive_dest_status -  TO FIND THE LAST ARCHIVED LOG RECEIVED AND APPLIED ON THIS SITE.
select archived_thread#,archived_seq#,applied_thread#,applied_seq# from v$archive_dest_status;

v$log_history
select max(sequence#) latest_archive_log from v$log_history;

v$archived_log - individual archive log
select thread#,sequence#,applied,registrar from v$archived_log;

standby_file_management - comes into play when attributes of datafiles are modified on the primary site.
If raw devices are used, STANDBY_FILE_MANAGEMENT should be MANUAL; otherwise it should be AUTO (a sample check and change follow).
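
A minimal sketch (run on the standby; AUTO is only an example value) for checking and changing the parameter:

show parameter standby_file_management

alter system set standby_file_management=AUTO scope=both;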

http://oraclerac.weebly.com/standby.html

==================================================================================================================

TROUBLESHOOTING A PHYSICAL STANDBY DATABASE:

NOTE: Please check MOS Note 232649.1 (Data Guard Gap Detection and Resolution)


On Standby server:

Run the query below to check the type of standby database, PHYSICAL or LOGICAL:

sqlplus "/ as sysdba"
select database_role from v$database;

If Physical Standby then follow:

Step1: Check which logs have not been applied:
======
alter session set nls_date_format='YYYY-MM-DD HH24:MI.SS';
SELECT SEQUENCE#, APPLIED, completion_time FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

Step2:Check if there is a gap in the archive logs:
======
SELECT * FROM V$ARCHIVE_GAP;

If there is a gap, it is most likely that the log has been compressed on the primary server and the standby FAL service cannot retrieve it. If so, temporarily stop the archivelog compression job on the primary and unzip the required archive logs. After a few minutes the FAL service will retrieve the log and the standby apply services will resume. Check the progress by running the SQL in step 1 above.
If the logs haven't been processed after 5-10 minutes, then you will have to perform the following tasks:

Step3: Copy the (zipped) log to the standby archive log destination on the Standby server, unzip it, and register it:

ALTER DATABASE REGISTER LOGFILE '/u01/oradata/stby/arch/arch_1_443.arc';

Step4: Check if this is a 'real-time apply' standby:
=======
select recovery_mode from V$ARCHIVE_DEST_STATUS;

Step5: Stop/restart the standby apply services:
=======
alter database recover managed standby database cancel;

If a real-time apply standby then:
alter database recover managed standby database using current logfile disconnect from session;

Found this:
RECOVER MANAGED STANDBY DATABASE cancel;
ORA-16136: Managed Standby Recovery not active

RECOVER MANAGED STANDBY DATABASE disconnect from session;
Media recovery complete.

Else (non- realtime apply):
alter database recover managed standby database disconnect from session;

Check the progress by running the SQL in step-1 above.

Useful Standby query:
----------------------------
Startup standby database

startup nomount;
alter database mount standby database;
alter database recover managed standby database disconnect;

To remove a delay from a standby
alter database recover managed standby database cancel;
alter database recover managed standby database nodelay disconnect;

Cancel managed recovery
alter database recover managed standby database cancel;

Register a missing log file
alter database register physical logfile '<fullpath/filename>';

If FAL doesn't work and it says the log is already registered
alter database register or replace physical logfile '<fullpath/filename>';

If that doesn't work, try this...

shutdown immediate
startup nomount
alter database mount standby database;
alter database recover automatic standby database;

>> wait for the recovery to finish - then cancel

shutdown immediate
startup nomount
alter database mount standby database;
alter database recover managed standby database disconnect;


Check which logs are missing (Run this on the standby)

select local.thread#, local.sequence# from
       (select thread#, sequence# from  v$archived_log where dest_id=1) local where  local.sequence# not in
       (select sequence# from v$archived_log where dest_id=2 and thread# = local.thread#);

Disable/Enable archive log destinations
alter system set log_archive_dest_state_2 = 'defer';
alter system set log_archive_dest_state_2 = 'enable';


Turn on fal tracing on the primary db
alter system set LOG_ARCHIVE_TRACE = 128;

Stop the Data Guard broker
alter system set dg_broker_start=false;

Show the current instance role
select name, open_mode, database_role from v$database;
=====
Logical standby apply stop/start
Stop Logical standby >> alter database stop logical standby apply;

Start Logical standby >> alter database start logical standby apply;

See how up to date a physical standby is: (Run this on the primary)
set numwidth 15
select    max(sequence#) current_seq from    v$log;

Then run this on the standby
set numwidth 15
select max(applied_seq#) last_seq from v$archive_dest_status;

Display info about all log destinations (run on the primary)

set lines 100
set numwidth 15
column ID format 99
column "SRLs" format 99
column active format 99
col type format a4

select ds.dest_id id , ad.status , ds.database_mode db_mode , ad.archiver type , ds.recovery_mode , ds.protection_mode , ds.standby_logfile_count "SRLs" , ds.standby_logfile_active active , ds.archived_seq# from v$archive_dest_status ds , v$archive_dest ad where ds.dest_id = ad.dest_id and ad.status != 'INACTIVE' order by ds.dest_id;

Display log destinations options (run on the primary)

set numwidth 8 lines 100
column id format 99
select dest_id id , archiver , transmit_mode , affirm , async_blocks async , net_timeout net_time , delay_mins delay , reopen_secs reopen , register,binding from v$archive_dest order by dest_id;

List any standby redo logs
set lines 100 pages 999
col member format a70
select st.group# , st.sequence# , ceil(st.bytes / 1048576) mb , lf.member from v$standby_log st , v$logfile lf where st.group# = lf.group#;

Script for standby archivelog monitoring (duplicate rows removed):

select arch.thread# "Thread", arch.sequence# "Last Sequence Received", appl.sequence# "Last Sequence Applied",  (arch.sequence# - appl.sequence#) "Difference" from
(select thread# ,sequence# from v$archived_log where (thread#,first_time ) in (select thread#,max(first_time) from v$archived_log group by thread#)) arch,
(select thread# ,sequence# from v$log_history where (thread#,first_time ) in (select thread#,max(first_time) from v$log_history group by thread#)) appl
where arch.thread# = appl.thread#
order by 1;

Dataguard Physical Standby Start / Stop

How to start/stop Dataguard Physical Standby Database


SHUTDOWN
========

status of mrp:
NON-RAC:
select process,status,client_process,thread#,
sequence#,block#,blocks,delay_mins from v$managed_standby;

RAC:

select inst_id,process,status,client_process,thread#,
sequence#,block#,blocks,delay_mins from gv$managed_standby;

cancelling mrp:

recover managed standby database cancel;

shut immediate;


STARTUP
=======
startup nomount;

alter database mount standby database;


STARTING MRP0
=============
If this is a non-real-time apply standby, issue the following SQL statement to start Redo Apply:

alter database recover managed standby database disconnect from session;
--OR--
RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION PARALLEL 64;

SQL> 
SELECT open_mode FROM V$DATABASE;

OPEN_MODE
--------------------
MOUNTED

If this is a real-time apply standby, issue the following SQL statement to start Redo Apply:

alter database recover managed standby database using current logfile disconnect from session;

SQL> 
SELECT open_mode FROM V$DATABASE;

OPEN_MODE
--------------------
READ ONLY WITH APPLY

The standby is now in real-time query mode (i.e. it is open read-only while Redo Apply is active).

status of mrp:

NON-RAC:


select process,status,client_process,thread#,sequence#,block#,
blocks,delay_mins from v$managed_standby;

RAC:

select inst_id,process,status,client_process,thread#,sequence#,
block#,blocks,delay_mins from gv$managed_standby;

OPEN_MODE = READ ONLY WITH APPLY
--------------------------------
alter database open read only;

Real-Time Apply is Enabled / Disabled


How to check whether real-time apply is enabled on a Physical Standby Database

To start real-time apply (standby redo logfiles are a must for it; a sample command for adding one follows)
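
A minimal sketch (thread, group number, diskgroup, and size are examples only; standby redo logs should match the online redo log size, with one extra group per thread) for adding and checking standby redo logs:

alter database add standby logfile thread 1 group 10 ('+DATA') size 1g;

select group#, thread#, bytes/1024/1024 mb, status from v$standby_log;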

On Physical Standby
--------------------- 

SQL> select DEST_ID, dest_name, status, type, srl, recovery_mode from v$archive_dest_status;

   DEST_ID DEST_NAME               STATUS    TYPE           SRL RECOVERY_MODE
---------- ----------------------- --------- -------------- --- --------------
         1 LOG_ARCHIVE_DEST_2       VALID     LOCAL          NO  MANAGED

SQL>

alter database recover managed standby database disconnect from session;

On Physical Standby
--------------------- 

SQL> select DEST_ID, dest_name, status, type, srl, recovery_mode from v$archive_dest_status;

 DEST_ID DEST_NAME             STATUS    TYPE      SRL RECOVERY_MODE
-------- -------------------- --------- --------- --- ---------------
     1 LOG_ARCHIVE_DEST_2    VALID     LOCAL     YES  MANAGED REAL TIME APPLY

SQL>


alter database recover managed standby database using current logfile disconnect from session;

Active Data Guard Enabled / Disabled

How to Check Active Data Guard is Enabled or Not


SQL> select 'Using Active Data Guard' ADG from v$managed_standby m, v$database d where m.process like 'MRP%' and d.open_mode like 'READ ONLY%';
        ADG
-----------------------
Using Active Data Guard

or from standby database

SQL> select open_mode,controlfile_type from v$database;

OPEN_MODE              CONTROLFILE
--------------------   ------------------
READ ONLY WITH APPLY   STANDBY

Conversion from Physical standby to Active Data Guard standby
--------------------------------------------------------------


Real-time query
===============


A physical standby database can be opened for read-only access and used to offload queries from a primary database. In addition, if a license for the Oracle Active Data Guard option has been purchased, a physical standby database can also be open while Redo Apply is active. This capability is known as the real-time query feature.

Redo Apply must be stopped before any physical standby database instance is opened.

If one or more physical standby instances are open, those instances must be stopped or restarted in a mounted state before starting Redo Apply

SQL> SELECT open_mode FROM V$DATABASE;

OPEN_MODE
--------------------
MOUNTED

Stop Apply Services

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Database altered.

SQL> SHUT IMMEDIATE

SQL> STARTUP NOMOUNT

SQL> ALTER DATABASE MOUNT STANDBY DATABASE;

Database altered.

SQL> ALTER DATABASE OPEN;

Database altered.

SQL>  SELECT open_mode FROM V$DATABASE;

OPEN_MODE
--------------------
READ ONLY

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE disconnect;

Database altered.

SQL> SELECT open_mode FROM V$DATABASE;

OPEN_MODE
--------------------

READ ONLY WITH APPLY

It enables us to have a physical standby open read-only while redo apply continues in the background.
The Active Data Guard option includes the following features
------------------------------------------------------
Physical Standby with Real-time Query
Fast Incremental Backup on Physical Standby (enabled via block change tracking; see the example below)
Automatic Block Repair
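
A minimal sketch (assumes db_create_file_dest/OMF is set, otherwise add a USING FILE clause; block change tracking on a physical standby requires the Active Data Guard license) of enabling block change tracking so RMAN can take fast incremental backups on the standby:

alter database enable block change tracking;

select status, filename from v$block_change_tracking;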