Friday, April 22, 2022

Oracle E-Business Suite (EBS) System Schema Migration

 


With AD and TXK Delta 13, EBS has implemented a set of structural changes that modernize the EBS database architecture. These changes introduce a new schema named EBS_SYSTEM, which is defined with a least privileges model that utilizes public database APIs. In addition, connections from the application tier to the database have been updated to utilize database service names.

This article describes the migration of Oracle E-Business Suite (EBS) Release 12.2 to the new EBS System Schema (EBS_SYSTEM).

Section 1: Overview of the EBS System Schema

The Release 12.2 database architecture has been modernized by adoption of the Oracle E-Business Suite System Schema, EBS_SYSTEM. 

Prior to the introduction of the EBS_SYSTEM schema, Oracle E-Business Suite installed application objects in the Oracle Database SYS and SYSTEM schemas. 

Migration to the EBS System Schema obviates the need for any EBS-owned objects to reside in the SYS or SYSTEM schemas.

Key characteristics of the EBS System Schema include:

  • Creation of the EBS_SYSTEM schema and associated grant management is performed as follows:
    1. Creation of the EBS_SYSTEM schema is performed by SYS running the adgrants.sql script (supplying the APPS account as the parameter) before applying the AD-TXK Delta 13 RUPs.
    2. Grants required by the APPS account are given by the apps_adgrants.sql script being run automatically by the AD-TXK Delta 13 RUP installation process. This script does not need to be run manually as part of normal patching operations.
  • All EBS database objects that currently reside in the SYS or SYSTEM schemas are migrated to appropriate Oracle E-Business Suite schemas. Depending upon the EBS object type and function, the object is migrated to EBS_SYSTEM, APPS, or APPS_NE.

  • All Oracle E-Business Suite administration actions (such as running adop, adadmin and other utilities) are now performed by EBS_SYSTEM.

  • Access to the Oracle database SYS and SYSTEM accounts, and to the Oracle database server operating system, is no longer required for Oracle E-Business Suite system administrative functions.

  • If any grants need to be fixed after the AD-TXK Delta 13 RUP is applied, the adgrants.sql script can be rerun by SYS.

Key benefits of migrating to the EBS System Schema include support for the following:

  • Public Oracle Database APIs
  • Least Privileges Model for database object access
  • Separation of Duties for administrators
  • Database service names for application tier database connections
  • Oracle Database Unified Auditing
  • Easier interoperability across Oracle Database releases

Diagram 1 - The modernized Oracle E-Business Suite database and its key features


1.1 Public Oracle Database APIs

As part of the Oracle E-Business Suite System Schema Migration, all Oracle E-Business Suite code is updated to map to public Oracle database dictionary objects and APIs. Utilizing public Oracle database APIs provides further capability to lock down EBS runtime accounts.

1.2 Least Privileges Model for Database Object Access

With the migration to the EBS_SYSTEM schema and usage of public Oracle Database APIs, runtime accounts may be constrained even further. As part of this feature, unnecessary privileges are revoked from Oracle E-Business Suite application accounts.

1.3 Separation of Duties for Administrators

Migration to the EBS_SYSTEM schema makes it possible to separate the role of the Oracle E-Business Suite system administrators from database administrators. All Oracle E-Business Suite administration actions (such as running adop, adadmin, and other utilities) will now prompt for the EBS_SYSTEM password instead of the SYSTEM password. Highly privileged operations that were previously run by the SYS or SYSTEM accounts are now run by EBS_SYSTEM.

Access to the Oracle database SYS and SYSTEM accounts, and to the Oracle database server operating system, is no longer required for Oracle E-Business Suite system administration functions. Database patching may be performed by the Oracle database administrator, and Oracle E-Business Suite patching may be performed by the Oracle E-Business Suite system administrator or applications database administrator (DBA).

The passwords for EBS_SYSTEM and SYSTEM must match until after the Completion Patch is successfully applied. Once the Completion Patch has been successfully applied, the password for EBS_SYSTEM should be changed to be different from the SYSTEM schema password.

1.4 Database Service Names for Application Tier Database Connections

As part of modernizing the Oracle E-Business Suite, connections from the Oracle E-Business Suite application tier to the Oracle E-Business Suite database are now performed using database service names.
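As an illustration, a service-name connection can be written as an EZConnect-style descriptor (host:port/service_name) instead of a SID-based one. This is a minimal Python sketch; the host, port, and service name below are hypothetical:

```python
# Build an EZConnect-style descriptor that targets a database service
# name rather than a SID. All values here are hypothetical examples.
def ezconnect(host: str, port: int, service_name: str) -> str:
    """Return a connect string of the form host:port/service_name."""
    return f"{host}:{port}/{service_name}"

# A service-name connection for a hypothetical EBS database:
dsn = ezconnect("ebsdb.example.com", 1521, "ebs_ebsdb")
print(dsn)  # ebsdb.example.com:1521/ebs_ebsdb
```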

1.5 Support for Oracle Database Unified Auditing

Once all requirements are met, Oracle E-Business Suite customers can utilize Unified Auditing, the latest method for auditing an Oracle Database. With Unified Auditing, audit data is combined into a single audit trail. A new schema, AUDSYS, is used for storing the Unified Audit Trail. Separation of duties is achieved with multiple database roles for configuring auditing and viewing the audit data.

1.6 Streamlined Database Directory Objects

Following migration to EBS System Schema (EBS_SYSTEM), the APPS schema will no longer have the ability to create database directory objects. Database directory objects are now created by the EBS_SYSTEM user.

The following four standard new database directory objects are created with the privileges shown:

Object Name     Privileges
EBS_LOG         Read/Write
EBS_TEMP        Read/Write
EBS_INBOUND     Read
EBS_OUTBOUND    Write

By default, the database directory objects are mapped to a temporary directory in the $ORACLE_HOME on the database tier file system. If a large number of files are written to this directory, your $ORACLE_HOME database tier file system may reach capacity: it is therefore recommended that you instead map the database directory objects to a secure location in a separate mount point from your $ORACLE_HOME database tier file system. For more information, refer to the Oracle Database documentation for your specific database release.

1.7 Interoperability Across Oracle Database Releases

Oracle E-Business Suite uptake of new database releases is made easier by EBS referencing only public database views and APIs.


References: Doc ID 2755875.1

AD and TXK Delta 13 for EBS 12.2

 



Original post: AD and TXK Delta 13 Updates Now Available for EBS 12.2

Oracle announced the availability of the latest updates for the Applications DBA (AD) and Technology Stack (TXK) infrastructure components of Oracle E-Business Suite Release 12.2. This Delta 13 release for AD and TXK includes new features, plus performance and stability fixes.

Oracle strongly recommends that you apply these new AD and TXK updates at your earliest convenience.

Refer to the following patches for the updates and instructions on applying them:

Note: You should always apply the AD and TXK updates during the same patching cycle.

New Feature: Oracle E-Business Suite (EBS) System Schema Migration

With AD and TXK Delta 13, EBS has implemented a set of structural changes that modernize the EBS database architecture. These changes introduce a new schema named EBS_SYSTEM, which is defined with a least privileges model that utilizes public database APIs. In addition, connections from the application tier to the database have been updated to utilize database service names.


R12.AD.C.Delta.13, Patch 32394134 and Patch 33401305


R12.TXK.C.Delta.13, Patch 32392507 and Patch 33550674




Monday, April 18, 2022

Oracle EBS Foundation Table

Oracle E-Business Suite (EBS) Foundation Tables

This post is simply a list of Oracle E-Business Suite (EBS) Foundation tables. The Foundation tables contain data that relate to the entire suite of applications – they are not specific to any one module.

Some foundation tables are used in Oracle BI Applications (OBIA), for example, the FND_USER, FND_USER_RESP_GROUPS, and FND_RESPONSIBILITY_VL tables are used in security-related Initialization Blocks.

Foundation Table

Purpose

FND_APPLICATION

Stores applications registered with Oracle Application Object Library.

FND_APPLICATION_TL

Stores translated information about all the applications registered with Oracle Application Object Library.

FND_APP_SERVERS

Tracks the servers used by the E-Business Suite system.

FND_ATTACHED_DOCUMENTS

Stores information relating a document to an application entity.

FND_CONCURRENT_PROCESSES

Stores information about concurrent managers.

FND_CONCURRENT_PROCESSORS

Stores information about immediate (subroutine) concurrent program libraries.

FND_CONCURRENT_PROGRAMS

Stores information about concurrent programs. Each row includes a name and description of the concurrent program.

FND_CONCURRENT_PROGRAMS_TL

Stores translated information about concurrent programs in each of the installed languages.

FND_CONCURRENT_QUEUES

Stores information about concurrent managers.

FND_CONCURRENT_QUEUE_SIZE

Stores information about the number of requests a concurrent manager can process at once, according to its work shift.

FND_CONCURRENT_REQUESTS

Stores information about individual concurrent requests.

FND_CONCURRENT_REQUEST_CLASS

Stores information about concurrent request types.

FND_CONC_REQ_OUTPUTS

Stores output files created by concurrent requests.

FND_CURRENCIES

Stores information about currencies.

FND_DATABASES

Tracks the databases employed by the E-Business Suite. Stores information about the database that is not instance-specific.

FND_DATABASE_INSTANCES

Stores instance-specific information. Every database has one or more instances.

FND_DESCRIPTIVE_FLEXS

Stores setup information about descriptive flexfields.

FND_DESCRIPTIVE_FLEXS_TL

Stores translated setup information about descriptive flexfields.

FND_DOCUMENTS

Stores language-independent information about a document.

FND_EXECUTABLES

Stores information about concurrent program executables.

FND_FLEX_VALUES

Stores valid values for key and descriptive flexfield segments.

FND_FLEX_VALUE_SETS

Stores information about the value sets used by both key and descriptive flexfields.

FND_LANGUAGES

Stores information regarding languages and dialects.

FND_MENUS

Lists the menus that appear in the Navigate Window, as determined by the System Administrator when defining responsibilities for function security.

FND_MENUS_TL

Stores translated information about the menus in FND_MENUS.

FND_MENU_ENTRIES

Stores information about individual entries in the menus in FND_MENUS.

FND_PROFILE_OPTIONS

Stores information about user profile options.

FND_REQUEST_GROUPS

Stores information about report security groups.

FND_REQUEST_SETS

Stores information about report sets.

FND_RESPONSIBILITY

Stores information about responsibilities. Each row includes the name and description of the responsibility, the application it belongs to, and values that identify the main menu, and the first form that it uses.

FND_RESPONSIBILITY_TL

Stores translated information about responsibilities.

FND_RESP_FUNCTIONS

Stores security exclusion rules for function security menus. Security exclusion rules are lists of functions and menus inaccessible to a particular responsibility.

FND_SECURITY_GROUPS

Stores information about security groups used to partition data in a Service Bureau architecture.

FND_SEQUENCES

Stores information about the registered sequences in your applications.

FND_TABLES

Stores information about the registered tables in your applications.

FND_TERRITORIES

Stores information for countries, alternatively known as territories.

FND_USER

Stores information about application users.

FND_VIEWS

Stores information about the registered views in your applications.

FND_USER_RESPONSIBILITY

Stores the responsibilities assigned to application users.

FND_RESPONSIBILITY_VL

View that returns translated responsibility information by joining FND_RESPONSIBILITY and FND_RESPONSIBILITY_TL.

FND_ORACLE_USERID

Stores information about the Oracle database schemas (ORACLE IDs) registered with Oracle Application Object Library.

FND_DATA_GROUP_UNITS

Stores the pairings of applications with ORACLE IDs within data groups.

Sunday, April 17, 2022

AWS Redshift


Amazon Redshift is a data warehouse product developed by Amazon and part of its cloud platform, Amazon Web Services. Redshift is a relational database management system designed specifically for OLAP. It is built on top of PostgreSQL and ParAccel's massively parallel processing technology, leveraging a distributed architecture, columnar storage, and column compression to execute exploratory queries. Because it is based on PostgreSQL, Redshift allows clients to connect and execute DDL and DML SQL statements using JDBC or ODBC.


 What is Amazon Redshift?

Amazon Redshift is a fully managed, scalable cloud data warehouse that accelerates your time to insights with fast, easy, and secure analytics at scale. Thousands of customers rely on Amazon Redshift to analyze data from terabytes to petabytes and run complex analytical queries. You can get real-time insights and predictive analytics on all your data across your operational databases, data lake, data warehouse, and third-party datasets. Amazon Redshift delivers all this at a price performance that’s up to 3x better than other cloud data warehouses out of the box, helping you keep your costs predictable.

Amazon Redshift Serverless makes it easy for you to run petabyte-scale analytics in seconds to get rapid insights without having to configure and manage your data warehouse clusters. Amazon Redshift Serverless automatically provisions and scales the data warehouse capacity to deliver high performance for demanding and unpredictable workloads, and you pay only for the resources you use.

What are the top reasons customers choose Amazon Redshift?

Thousands of customers choose Amazon Redshift to accelerate their time to insights because it’s easy to use, it delivers performance at any scale, and it lets you analyze all your data. Amazon Redshift is a fully managed service and offers both provisioned and serverless options, making it easy for you to run and scale analytics without having to manage your data warehouse. You can choose the provisioned option for predictable workloads or go with the Amazon Redshift Serverless option to automatically provision and scale the data warehouse capacity to deliver high performance for demanding and unpredictable workloads. It delivers performance at any scale with up to 3x better price performance than other cloud data warehouses out of the box, helping you keep your costs predictable. Amazon Redshift lets you get insights from running real-time and predictive analytics on all your data across your operational databases, data lake, data warehouse, and thousands of third-party datasets. Amazon Redshift keeps your data secure at rest and in transit and meets internal and external compliance requirements. It supports industry-leading security to protect your data in transit and at rest and is compliant with SOC1, SOC2, SOC3, and PCI DSS Level 1 requirements. All Redshift security and compliance features are included at no additional cost.

How does Amazon Redshift simplify data warehouse management?

Amazon Redshift is fully managed by AWS so you no longer need to worry about data warehouse management tasks such as hardware provisioning, software patching, setup, configuration, monitoring nodes and drives to recover from failures, or backups. AWS manages the work needed to set up, operate, and scale a data warehouse on your behalf, freeing you to focus on building your applications. Amazon Redshift also has automatic tuning capabilities, and surfaces recommendations for managing your warehouse in Redshift Advisor. For Redshift Spectrum, Amazon Redshift manages all the computing infrastructure, load balancing, planning, scheduling, and execution of your queries on data stored in Amazon S3. The serverless option automatically provisions and scales the data warehouse capacity to deliver high performance for demanding and unpredictable workloads, and you pay only for the resources you use.

 How does the performance of Amazon Redshift compare to that of other data warehouses?

TPC-DS benchmark results show that Amazon Redshift provides the best price performance out of the box, even for a comparatively small 3 TB dataset. Amazon Redshift delivers up to 3x better price performance than other cloud data warehouses. This means that you can benefit from Amazon Redshift’s leading price performance from the start without manual tuning. Get up to 3x better price performance with Amazon Redshift than with other cloud data warehouses | AWS Big Data Blog.

Amazon Redshift uses a variety of innovations to achieve up to 10x better performance than traditional databases for data warehousing and analytics workloads, including efficient read-optimized columnar compressed data storage with massively parallel processing (MPP) compute clusters that scale linearly to hundreds of nodes. Instead of storing data as a series of rows, Amazon Redshift organizes the data by column. When loading data into an empty table, Amazon Redshift automatically samples your data and selects the most appropriate compression scheme.
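The columnar idea can be sketched outside Redshift: storing a table column by column groups identical values together, which makes simple schemes such as run-length encoding very effective. This is a minimal illustration, not Redshift's actual storage format or compression encodings:

```python
# Sketch (not Redshift internals): column-oriented layout plus
# run-length encoding, to show why columnar storage compresses well.
rows = [
    ("2022-04-01", "US", 10),
    ("2022-04-01", "US", 12),
    ("2022-04-01", "EU", 7),
]

# Column-oriented: one list per column instead of one tuple per row.
columns = {name: [r[i] for r in rows]
           for i, name in enumerate(("day", "region", "qty"))}

def rle(values):
    """Run-length encode a list as [(value, run_length), ...]."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

print(rle(columns["day"]))     # [('2022-04-01', 3)]
print(rle(columns["region"]))  # [('US', 2), ('EU', 1)]
```

Repeated values in a column collapse to a single (value, count) pair, whereas the same data interleaved row by row would not compress nearly as well.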

Redshift Spectrum lets you run queries against exabytes of data in Amazon S3. There is no loading or extract, transform, and load (ETL) required. Even if you don’t store any of your data in Amazon Redshift, you can still use Redshift Spectrum to query datasets as large as an exabyte in Amazon S3. Materialized views provide significantly faster query performance for repeated and predictable analytical workloads such as dashboards, queries from business intelligence (BI) tools, and ETL data processing. Using materialized views, you can store the precomputed results of queries and efficiently maintain them by incrementally processing the latest changes made to the source tables. Subsequent queries referencing the materialized views use the precomputed results to run much faster, and automatic refresh and query rewrite capabilities simplify and automate the use of materialized views.
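The incremental-maintenance idea behind materialized views can be sketched as keeping a precomputed aggregate and applying only the latest base-table changes, rather than recomputing from scratch. A minimal illustration (not Redshift's implementation; the table and keys are made up):

```python
# Sketch of incremental materialized-view maintenance: a precomputed
# per-region sum that is refreshed by applying only new rows.
from collections import defaultdict

base = [("US", 10), ("EU", 7)]        # hypothetical base-table rows
mv = defaultdict(int)                 # the "materialized" aggregate
for region, qty in base:
    mv[region] += qty

def apply_delta(mv, delta):
    """Incrementally refresh the aggregate with new base-table rows."""
    for region, qty in delta:
        mv[region] += qty

apply_delta(mv, [("US", 5), ("APAC", 3)])
print(dict(mv))  # {'US': 15, 'EU': 7, 'APAC': 3}
```

Queries against the precomputed sums answer instantly; only the delta is processed at refresh time.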

The compute and storage capacity of on-premises data warehouses are limited by the constraints of the on-premises hardware. Amazon Redshift gives you the ability to scale compute and storage independently as needed to meet changing workloads. With Redshift Managed Storage (RMS), you now have the ability to scale your storage to petabytes using Amazon S3 storage.

Automatic Table Optimization (ATO) is a self-tuning capability that helps you achieve the performance benefits of creating optimal sort and distribution keys without manual effort. ATO observes how queries interact with tables and uses machine learning (ML) to select the best sort and distribution keys to optimize performance for the cluster’s workload. ATO optimizations have been shown to increase cluster performance by 24% and 34% using the 3 TB and 30 TB TPC-DS benchmarks, respectively, versus a cluster without ATO. Additional features such as Automatic Vacuum Delete, Automatic Table Sort, and Automatic Analyze eliminate the need for manual maintenance and tuning of Redshift clusters to get the best performance for new clusters and production workloads.

Workload management allows you to route queries to a set of defined queues to manage the concurrency and resource utilization of the cluster. Today, Amazon Redshift has both automatic and manual configuration types. With manual WLM configurations, you’re responsible for defining the amount of memory allocated to each queue and the maximum number of queries, each of which gets a fraction of that memory, which can run in each of their queues. Manual WLM configurations don’t adapt to changes in your workload and require an intimate knowledge of your queries’ resource utilization to get right. Amazon Redshift Auto WLM doesn’t require you to define the memory utilization or concurrency for queues. Instead, it adjusts the concurrency dynamically to optimize for throughput. Optionally, you can define query priorities to provide queries preferential resource allocation based on your business priority. Auto WLM also provides powerful tools to let you manage your workload. Query priorities let you define priorities for workloads so they can get preferential treatment in Amazon Redshift, including more resources during busy times for consistent query performance, and query monitoring rules offer ways to manage unexpected situations such as detecting and preventing runaway or expensive queries from consuming system resources. The following are key areas of Auto WLM with adaptive concurrency performance improvements: proper allocation of memory, elimination of static partitioning of memory between queues, and improved throughput.
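The routing idea can be sketched as a priority queue: queries submitted with a higher business priority are dequeued first. This is a simplified illustration, not the actual WLM implementation, and the query text and priority names are made up:

```python
# Sketch of priority-based workload routing: higher-priority queries
# are dispatched before lower-priority ones.
import heapq

PRIORITIES = {"highest": 0, "high": 1, "normal": 2, "low": 3}

def submit(queue, sql, priority="normal"):
    """Enqueue a query tagged with its business priority."""
    heapq.heappush(queue, (PRIORITIES[priority], sql))

def next_query(queue):
    """Dequeue the highest-priority query."""
    return heapq.heappop(queue)[1]

q = []
submit(q, "SELECT ... nightly ETL", "low")
submit(q, "SELECT ... exec dashboard", "highest")
submit(q, "SELECT ... ad hoc", "normal")
print(next_query(q))  # SELECT ... exec dashboard
```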

Amazon Redshift Advisor develops customized recommendations to increase performance and optimize costs by analyzing your workload and usage metrics for your cluster. Sign in to the Amazon Redshift console to view Advisor recommendations.

How do I get started with Amazon Redshift?

With just a few clicks in the AWS Management Console, you can start querying data. You can take advantage of pre-loaded sample data sets, including benchmark datasets TPC-H, TPC-DS, and other sample queries to kick start analytics immediately. You can create databases, schemas, tables and load data from Amazon S3, Amazon Redshift data shares, or restore from an existing Amazon Redshift provisioned cluster snapshot. You can also directly query data in open formats, such as Parquet or ORC in Amazon S3 data lake, or query data in operational databases, such as Amazon Aurora, Amazon RDS PostgreSQL and MySQL.

To get started with Amazon Redshift Serverless, choose “Try Amazon Redshift Serverless” and start querying data. Amazon Redshift Serverless automatically scales to meet any increase in workloads.

 What is Advanced Query Accelerator (AQUA) for Amazon Redshift?

Advanced Query Accelerator (AQUA) is a new distributed and hardware-accelerated cache that enables Amazon Redshift to run up to 10x faster than other enterprise cloud data warehouses by automatically boosting certain types of queries. AQUA is available with the RA3.16xlarge, RA3.4xlarge, or RA3.xlplus nodes at no additional charge and with no code changes.

 How do I enable/disable AQUA for my Redshift data warehouse?

For Redshift clusters running on RA3 nodes, you can enable/disable AQUA at the cluster level using the Redshift console, AWS Command Line Interface (CLI), or API. For Redshift clusters running on DC, DS, or older-generation nodes, you must upgrade to RA3 nodes first and enable/disable AQUA.

What type of queries are accelerated by AQUA?

AQUA accelerates analytics queries by running data-intensive tasks such as scans, filtering, and aggregation closer to the storage layer. You’ll see the most noticeable performance improvement on queries that require large scans, especially those with LIKE and SIMILAR_TO predicates. Over time, the types of queries that are accelerated by AQUA will increase.

How do I know which queries on my Redshift cluster are accelerated by AQUA?

You can query the system tables to see the queries accelerated by AQUA.

What is Amazon Redshift managed storage?

Amazon Redshift managed storage is available with serverless and RA3 node types and lets you scale and pay for compute and storage independently so you can size your cluster based only on your compute needs. It automatically uses high-performance SSD-based local storage as tier-1 cache and takes advantage of optimizations such as data block temperature, data block age, and workload patterns to deliver high performance while scaling storage automatically to Amazon S3 when needed without requiring any action.
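The tiering behavior can be sketched as a small hot cache in front of a large cold store, with least-recently-used eviction. This is a simplified illustration of the read path under stated assumptions, not RMS internals:

```python
# Sketch of a two-tier storage read path: a small "local SSD" cache
# in front of a large "S3" tier, with least-recently-used eviction.
from collections import OrderedDict

class TieredStore:
    def __init__(self, cache_size, cold_tier):
        self.cache = OrderedDict()   # block_id -> data (hot tier)
        self.cache_size = cache_size
        self.cold = cold_tier        # block_id -> data (cold tier)

    def read(self, block_id):
        if block_id in self.cache:          # cache hit: refresh recency
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.cold[block_id]          # miss: fetch from cold tier
        self.cache[block_id] = data
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)  # evict least recently used
        return data

store = TieredStore(cache_size=2, cold_tier={"b1": "x", "b2": "y", "b3": "z"})
store.read("b1"); store.read("b2"); store.read("b3")
print(list(store.cache))  # ['b2', 'b3']  (b1 was evicted)
```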


Tuesday, April 5, 2022

Script To Find Redolog Switch History And Archivelog Size For Each Instance



This script reports the following for each instance in a RAC database. It can also be used for a standalone (non-RAC) database:

          (1) Redo log switches on an hourly basis from all instances
          (2) The number and size (MB per hour and MB per day) of the archive logs generated from all instances

Script:


set linesize 200 pagesize 1000
column day format a3
column total format 9999
column h00 format 999
column h01 format 999
column h02 format 999
column h03 format 999
column h04 format 999
column h05 format 999
column h06 format 999
column h07 format 999
column h08 format 999
column h09 format 999
column h10 format 999
column h11 format 999
column h12 format 999
column h13 format 999
column h14 format 999
column h15 format 999
column h16 format 999
column h17 format 999
column h18 format 999
column h19 format 999
column h20 format 999
column h21 format 999
column h22 format 999
column h23 format 999
break on report
compute max of "total" on report
compute max of "h00" on report
compute max of "h01" on report
compute max of "h02" on report
compute max of "h03" on report
compute max of "h04" on report
compute max of "h05" on report
compute max of "h06" on report
compute max of "h07" on report
compute max of "h08" on report
compute max of "h09" on report
compute max of "h10" on report
compute max of "h11" on report
compute max of "h12" on report
compute max of "h13" on report
compute max of "h14" on report
compute max of "h15" on report
compute max of "h16" on report
compute max of "h17" on report
compute max of "h18" on report
compute max of "h19" on report
compute max of "h20" on report
compute max of "h21" on report
compute max of "h22" on report
compute max of "h23" on report
compute sum of NUM on report
compute sum of GB on report
compute sum of MB on report
compute sum of KB on report

REM Script to Report the Redo Log Switch History

alter session set nls_date_format='DD MON YYYY';
select thread#, trunc(completion_time) as "date", to_char(completion_time,'Dy') as "Day", count(1) as "total",
sum(decode(to_char(completion_time,'HH24'),'00',1,0)) as "h00",
sum(decode(to_char(completion_time,'HH24'),'01',1,0)) as "h01",
sum(decode(to_char(completion_time,'HH24'),'02',1,0)) as "h02",
sum(decode(to_char(completion_time,'HH24'),'03',1,0)) as "h03",
sum(decode(to_char(completion_time,'HH24'),'04',1,0)) as "h04",
sum(decode(to_char(completion_time,'HH24'),'05',1,0)) as "h05",
sum(decode(to_char(completion_time,'HH24'),'06',1,0)) as "h06",
sum(decode(to_char(completion_time,'HH24'),'07',1,0)) as "h07",
sum(decode(to_char(completion_time,'HH24'),'08',1,0)) as "h08",
sum(decode(to_char(completion_time,'HH24'),'09',1,0)) as "h09",
sum(decode(to_char(completion_time,'HH24'),'10',1,0)) as "h10",
sum(decode(to_char(completion_time,'HH24'),'11',1,0)) as "h11",
sum(decode(to_char(completion_time,'HH24'),'12',1,0)) as "h12",
sum(decode(to_char(completion_time,'HH24'),'13',1,0)) as "h13",
sum(decode(to_char(completion_time,'HH24'),'14',1,0)) as "h14",
sum(decode(to_char(completion_time,'HH24'),'15',1,0)) as "h15",
sum(decode(to_char(completion_time,'HH24'),'16',1,0)) as "h16",
sum(decode(to_char(completion_time,'HH24'),'17',1,0)) as "h17",
sum(decode(to_char(completion_time,'HH24'),'18',1,0)) as "h18",
sum(decode(to_char(completion_time,'HH24'),'19',1,0)) as "h19",
sum(decode(to_char(completion_time,'HH24'),'20',1,0)) as "h20",
sum(decode(to_char(completion_time,'HH24'),'21',1,0)) as "h21",
sum(decode(to_char(completion_time,'HH24'),'22',1,0)) as "h22",
sum(decode(to_char(completion_time,'HH24'),'23',1,0)) as "h23"
from
v$archived_log
where first_time > trunc(sysdate-10)
and dest_id = (select dest_id from V$ARCHIVE_DEST_STATUS where status='VALID' and type='LOCAL')
group by thread#, trunc(completion_time), to_char(completion_time, 'Dy') order by 2,1;

REM Script to calculate the archive log size generated per day for each Instances.

select THREAD#, trunc(completion_time) as "DATE"
, count(1) num
, trunc(sum(blocks*block_size)/1024/1024/1024) as GB
, trunc(sum(blocks*block_size)/1024/1024) as MB
, sum(blocks*block_size)/1024 as KB
from v$archived_log
where first_time > trunc(sysdate-10)
and dest_id = (select dest_id from V$ARCHIVE_DEST_STATUS where status='VALID' and type='LOCAL')
group by thread#, trunc(completion_time)
order by 2,1
;
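For comparison, the hourly pivot performed by the first query can be sketched in Python: bucket each redo log switch by (thread#, date) and count switches per hour. The sample timestamps below are made up:

```python
# Sketch of the hourly pivot the SQL above performs: group redo log
# switch completion times by (thread#, date), counting per hour.
from collections import defaultdict
from datetime import datetime

switches = [  # hypothetical (thread#, completion_time) rows
    (1, datetime(2018, 3, 17, 10, 5)),
    (1, datetime(2018, 3, 17, 10, 40)),
    (2, datetime(2018, 3, 17, 11, 2)),
]

pivot = defaultdict(lambda: [0] * 24)   # (thread, date) -> per-hour counts
for thread, ts in switches:
    pivot[(thread, ts.date())][ts.hour] += 1

for (thread, day), hours in sorted(pivot.items()):
    print(thread, day, "total", sum(hours), "h10", hours[10], "h11", hours[11])
```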



Output:  

THREAD#    date        Day total  h00  h01  h02  h03  h04  h05  h06  h07  h08  h09  h10  h11  h12  h13  h14  h15  h16  h17  h18  h19  h20  h21  h22  h23
---------- ----------- --- ----- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
         1 17 MAR 2018 Sat    47    0    0    0    0    0    0    0    0    1    0   44    1    0    0    0    0    0    0    0    0    0    1    0    0
         2 17 MAR 2018 Sat   133    0    0    0    0    1    0    0    0    0    2  128    1    0    0    0    0    0    0    1    0    0    0    0    0
         1 18 MAR 2018 Sun    10    0   10    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0
         2 18 MAR 2018 Sun    33    0   33    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0
                            ----      ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
maximum                      133    0   33    0    0    1    0    0    0    1    2  128    1    0    0    0    0    0    0    1    0    0    1    0    0

THREAD#    DATE              NUM          GB         MB         KB
---------- ----------- ---------- ---------- ---------- ----------
         1 17 MAR 2018         47          0         63    64939.5
         2 17 MAR 2018        133          6       6403    6557111
         1 18 MAR 2018         10          0          0         50
         2 18 MAR 2018         33          1       1616  1654888.5
                       ---------- ---------- ---------- ----------
sum                           223          7       8082    8276989

 

-- References Script To Find Redolog Switch History And Find Archivelog Size For Each Instances In RAC (Doc ID 2373477.1)