Oracle Multitenant Cheat Sheet
Author - ADAM CUNNING
| Term | Definition |
|---|---|
| Multitenant | The Oracle architecture consisting of a CDB and one or more PDBs |
| Container Database (CDB) | A traditional database instance, but with the ability to support PDBs |
| Pluggable Database (PDB) | A collection of tablespaces that supports its own independent role and user security, and can be easily moved between CDBs |
| Root Database | The administrative layer that sits above the PDBs; users and roles created here must be prefixed with c## |
| Seed Database | A PDB that remains offline and is used as a template for creating new, empty PDBs |
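As a quick illustration of the c## naming rule for common users, a sketch of creating one from the root container (the user name and password here are hypothetical placeholders):

```sql
-- Run from the root container (CDB$ROOT); common users require the c## prefix
CREATE USER c##dba_admin IDENTIFIED BY "ChangeMe123" CONTAINER=ALL;
GRANT CREATE SESSION TO c##dba_admin CONTAINER=ALL;

-- A local user, by contrast, is created from inside a PDB with no prefix:
-- ALTER SESSION SET container=<pdb_name>;
-- CREATE USER app_user IDENTIFIED BY "ChangeMe123";
```

Attempting to create an unprefixed user in the root, or a c## user in a PDB, raises an error, which is why the distinction matters when scripting user setup.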
Connect to Container or PDB

CONN <user>/<pwd>@//<host>:<listener port>/<service> {AS SYSDBA};
CONN <user>/<pwd>@<tns_entry> {AS SYSDBA};
Display Current Container or PDB

SHOW CON_NAME;
SHOW CON_ID;
SELECT SYS_CONTEXT('USERENV','CON_NAME') FROM DUAL;
List Containers and PDBs on Instance

SELECT PDB_NAME, Status FROM DBA_PDBS ORDER BY PDB_Name;
SELECT Name, Open_Mode FROM V$PDBS ORDER BY Name;
SELECT Name, PDB FROM V$SERVICES ORDER BY Name;
Change Container or PDB

ALTER SESSION SET container=<name>;
ALTER SESSION SET container=cdb$root;
Creating a PDB

First, set your source and target datafile paths:

ALTER SESSION SET PDB_FILE_NAME_CONVERT='</seed path/>','</target path/>';

Then run the create command from the target root container:

CREATE PLUGGABLE DATABASE <New PDB Name> ADMIN USER <Username> IDENTIFIED BY <Password> FROM <Source PDB[@dblink]>;

Finally, open the newly created database:

ALTER PLUGGABLE DATABASE <target pdb> OPEN;
Startup and Shutdown

Startup and shutdown of a multitenant database work the same as on a regular database; however, if connected to a pluggable database, only that pluggable database shuts down, while if connected to the root container database the entire instance shuts down. Pluggable databases also have their own commands, which can be run from the root container or from another pluggable database.

ALTER PLUGGABLE DATABASE <name> OPEN READ WRITE {RESTRICTED} {FORCE};
ALTER PLUGGABLE DATABASE <name> OPEN READ ONLY {RESTRICTED} {FORCE};
ALTER PLUGGABLE DATABASE <name> OPEN UPGRADE {RESTRICTED};
ALTER PLUGGABLE DATABASE <name> CLOSE {IMMEDIATE};

To retain the current state as the startup state of the container:

ALTER PLUGGABLE DATABASE <name> SAVE STATE;

Roles and Users

Common users and roles must be created in the root container and prefixed with the characters c##. Local users and roles must be created in the PDB.

Granting Roles and Privileges

GRANT <privilege/role> TO <user> CONTAINER=<PDB name or ALL>;

If the grant is local only, issue it from the PDB and omit the CONTAINER argument.
Backup

With an RMAN connection to the root container, a normal backup captures the full instance.

For just the root container: BACKUP DATABASE ROOT;
For pluggable databases: BACKUP PLUGGABLE DATABASE <pdb1>,<pdb2>,<pdb3>;

An RMAN connection to a pluggable database will only work on that PDB.

Restore

Connect to the root container; a normal restore is a full-instance restore. For a PDB:

RUN {
  ALTER PLUGGABLE DATABASE <pdb> CLOSE;
  SET UNTIL TIME "<timeset value>";
  RESTORE PLUGGABLE DATABASE <pdb>;
  RECOVER PLUGGABLE DATABASE <pdb>;
  ALTER PLUGGABLE DATABASE <pdb> OPEN;
}
Exporting/Unplugging an Existing PDB

To unplug a database, use the following commands. It is recommended that the path used match the datafile storage location.

ALTER PLUGGABLE DATABASE <pdb_name> CLOSE;
ALTER PLUGGABLE DATABASE <pdb_name> UNPLUG INTO '</path/><name>.xml';
DROP PLUGGABLE DATABASE <pdb_name> KEEP DATAFILES;

Importing/Plugging a PDB into a CDB

Before plugging a PDB into a CDB, run a small procedure to validate the integrity and compatibility of the PDB.

SET SERVEROUTPUT ON
DECLARE
  l_result BOOLEAN;
BEGIN
  l_result := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                pdb_descr_file => '</path/><name>.xml',
                pdb_name       => '<name>');
  IF l_result THEN
    DBMS_OUTPUT.PUT_LINE('Compatible, OK to Proceed');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Incompatible, See PDB_PLUG_IN_VIOLATIONS for details');
  END IF;
END;
/

If the PDB is validated, use the following commands to plug it in, referencing the XML file path specified during export and the datafile path:

CREATE PLUGGABLE DATABASE <new_pdb_name> USING '</path/><name>.xml'
  FILE_NAME_CONVERT=('</source path/>','</dest path/>');
ALTER PLUGGABLE DATABASE <new_pdb_name> OPEN;
Proxy PDBs

A special type of PDB is the Proxy PDB: a PDB that is linked to another PDB. If a PDB is being migrated to another environment and you do not want to modify all source code to the new location references first, clients can keep using the old references against the proxy, and the actions take place on the new database. To set one up, first create a dblink to the pluggable target, then:

CREATE PLUGGABLE DATABASE <proxy pdb name> AS PROXY FROM <target pdb>@<dblink>;

NOTE: The dblink may be dropped after the proxy database is created.
Cloning from a Non-CDB to a CDB

The non-CDB must support multitenant; use a dblink on the non-CDB to connect. The dblink user must have the CREATE SESSION and CREATE PLUGGABLE DATABASE privileges.

CREATE PLUGGABLE DATABASE <new_pdb> FROM NON$CDB@<dblink>
  FILE_NAME_CONVERT=('</source datafile path/>','</target datafile path/>');
@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE <target pdb> OPEN;

Moving a PDB

CREATE PLUGGABLE DATABASE <new pdb> FROM <old pdb>@<dblink> RELOCATE;
ALTER PLUGGABLE DATABASE <new pdb> OPEN;

Removing a PDB

ALTER PLUGGABLE DATABASE <name> CLOSE;
DROP PLUGGABLE DATABASE <name> INCLUDING DATAFILES;

Exporting/Unplugging a PDB to a single compressed file

ALTER PLUGGABLE DATABASE <pdb_name> UNPLUG INTO '</path/><filename>.pdb';

Importing/Plugging in a PDB from a single compressed file

CREATE PLUGGABLE DATABASE <new pdb name> USING '</path/><filename>.pdb';
The focus of this article is an undocumented script, UTLSYXSZ.SQL, located in the $ORACLE_HOME/rdbms/admin directory.
When the UTLSYXSZ.SQL script is run it creates a PL/SQL package named UTLSYXSZ_UTIL. The script then runs an anonymous block that collects user information and outputs a multi-section report. The final action the script performs is to drop the built-in package and undefine the variables it created.
SQL> @?/rdbms/admin/utlsyxsz.sql
Run the script above to report current SYSAUX tablespace utilization and an estimate of the AWR space needed for future growth. It is a very useful aid when planning the size of the SYSAUX tablespace before creating it.
The report depends on several input values, such as:

- Interval Setting (minutes)
- Retention Setting (days)
- Number of Instances
- Average Number of Active Sessions
- Number of Datafiles
Parameter Collection
After the script defines a number of session variables, the package it creates is called to gather metadata and produce the following reports:
- SYSAUX Size Estimation Report
- CURRENT SYSAUX Usage Report
- AWR Space Estimation Report
- Optimizer Stat History Space Estimation Report
- Estimated SYSAUX Usage Report
- Summary of SYSAUX Space Estimation

Finally, the script reports the Total Estimated SYSAUX Size.
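Independently of the script, the current breakdown of SYSAUX consumers can be checked directly against the standard dynamic view V$SYSAUX_OCCUPANTS; a minimal sketch (the view and columns are standard, the ordering is just a convenience):

```sql
-- Current SYSAUX space consumers, largest first
SELECT occupant_name,
       occupant_desc,
       ROUND(space_usage_kbytes / 1024) AS space_mb
FROM   v$sysaux_occupants
ORDER  BY space_usage_kbytes DESC;
```

Typically the AWR (SM/AWR) and optimizer statistics history (SM/OPTSTAT) occupants dominate, which is why those two get their own sections in the script's report.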
Some of the information in the reports is collected by querying data dictionary tables and dynamic performance views. Other information is solicited from the user through calls to UTLSYXSZ_UTIL that require a value to be entered (or the default accepted). In a few cases the value is partially hard-coded into the package; for example, the variable ash_hist, which defines how much ASH (Active Session History) data is examined in the report's calculation, can only be changed by editing the script.
Estimated size of Stats history
The current SYSAUX tablespace, as a result of self-sizing, is about 3.9 GB, so an additional 4.6 GB is a reasonable estimate of the space that will be required for expansion should the system suddenly see the anticipated increase in tables, users, and DML activity.
select m.TABLE_OWNER,
m.TABLE_NAME,
m.INSERTS,
m.UPDATES,
m.DELETES,
m.TRUNCATED,
m.TIMESTAMP as LAST_MODIFIED,
round((m.inserts+m.updates+m.deletes)*100/NULLIF(t.num_rows,0),2) as EST_PCT_MODIFIED,
t.num_rows as last_known_rows_number,
t.last_analyzed
From dba_tab_modifications m,
dba_tables t
where m.table_owner=t.owner
and m.table_name=t.table_name
and m.table_owner not in ('SYS','SYSTEM')
and ((m.inserts+m.updates+m.deletes)*100/NULLIF(t.num_rows,0) > 10 or t.last_analyzed is null)
order by timestamp desc;
By default, I filter the tables using a 10%-modified threshold, but you can change this percentage to suit your needs.
Note that this view is not updated in real time, for performance reasons. If you want the latest statistics to be available, run the following command before executing the query:
Exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
select m.TABLE_OWNER,
'NO' as IS_PARTITION,
m.TABLE_NAME as NAME,
m.INSERTS,
m.UPDATES,
m.DELETES,
m.TRUNCATED,
m.TIMESTAMP as LAST_MODIFIED,
round((m.inserts+m.updates+m.deletes)*100/NULLIF(t.num_rows,0),2) as EST_PCT_MODIFIED,
t.num_rows as last_known_rows_number,
t.last_analyzed
From dba_tab_modifications m,
dba_tables t
where m.table_owner=t.owner
and m.table_name=t.table_name
and m.partition_name is null
and m.table_owner not in ('SYS','SYSTEM')
and ((m.inserts+m.updates+m.deletes)*100/NULLIF(t.num_rows,0) > 10 or t.last_analyzed is null)
union
select m.TABLE_OWNER,
'YES' as IS_PARTITION,
m.PARTITION_NAME as NAME,
m.INSERTS,
m.UPDATES,
m.DELETES,
m.TRUNCATED,
m.TIMESTAMP as LAST_MODIFIED,
round((m.inserts+m.updates+m.deletes)*100/NULLIF(p.num_rows,0),2) as EST_PCT_MODIFIED,
p.num_rows as last_known_rows_number,
p.last_analyzed
From dba_tab_modifications m,
dba_tab_partitions p
where m.table_owner=p.table_owner
and m.table_name=p.table_name
and m.PARTITION_NAME = p.PARTITION_NAME
and m.table_owner not in ('SYS','SYSTEM')
and ((m.inserts+m.updates+m.deletes)*100/NULLIF(p.num_rows,0) > 10 or p.last_analyzed is null)
order by 8 desc;
-- Generate gather-statistics commands for stale tables (indexes included via cascade)
select 'exec dbms_stats.gather_table_stats('''||owner||''','''||table_name||''',cascade => TRUE);'
from ALL_TAB_STATISTICS
where OWNER in ('<owner>')
and global_stats='YES' and stale_stats='YES';
-- Find indexes with stale statistics
select * from all_ind_statistics
WHERE OWNER in ('<owner>')
AND stale_stats='YES' and global_stats='YES';
If any rows are returned, gather statistics on the affected table, as in the following example:

SQL> exec dbms_stats.gather_table_stats('<owner>','<table_name>',cascade => TRUE);
These are the important Exadata My Oracle Support (MOS) notes I think you should check on a very regular basis.
| Topic Area | MOS # | My Oracle Support Note Title |
|---|---|---|
| Core | 888828.1 | Exadata Database Machine and Exadata Storage Server Supported Versions |
| | 1270094.1 | Exadata Critical Issues |
| | 556.1 | Oracle Exadata: Exadata and Linux Important Recommended Fixes |
| | 2724126.1 | Exadata X9M/X8M (RoCE-based systems) Software Requirements and Recommendations |
| | 2075007.1 | Exadata System Software Certification |
| | 1570460.1 | Exadata Software and Hardware Support Lifecycle |
| | 1070954.1 | Oracle Exadata Database Machine EXAchk |
| | 1306791.2 | Information Center: Oracle Exadata Database Machine |
| Exadata Updates/Patching/Upgrades | 1262380.1 | Exadata Patching Overview and Patch Testing Guidelines |
| | 2207063.1 | HOWTO: Install ksplice kernel updates for Exadata Database Nodes |
| Security | 2751741.1 | Oracle Exadata Database Machine Security FAQ |
| | 2256887.1 | How to research Common Vulnerabilities and Exposures (CVE) for Exadata packages |
| | 1405320.1 | Responses to common Exadata security scan findings |
| Database & GI | 742060.1 | Release Schedule of Current Database Releases |
| | 2700151.1 | Grid Infrastructure 11.2.0.4 on Exadata / Oracle Linux 7 Transitional Support |
| | 2285040.1 | Release Update Introduction and FAQ |
| Database & GI Upgrade/Updates | 2542082.1 | 19c Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running on Oracle Linux |
| | 1919.2 | 19c Database Self-Guided Upgrade with Best Practices |
Add any notes you identify as specifically important, interesting, useful, or relevant to your own Exadata Database Machine deployment, and union them together with the list above!