Sunday, March 27, 2022

AWS SERVICES CHEAT SHEET



PDF DOWNLOAD TO READ THE AWS SERVICES CHEAT SHEET

Click on the link below




Oracle Multitenant Cheat Sheet


Author - ADAM CUNNING 

Essential Terminology

Multitenant
The Oracle architecture that consists of a CDB and one or more PDBs
Container Database (CDB)
A traditional database instance, but with the ability to support PDBs
Pluggable Database (PDB)
A collection of tablespaces which supports its own independent Role and User security, and can be easily moved between CDBs
Root Database
The instance administrative layer that sits above the PDBs. Users and Roles here must be prefixed with c##
Seed Database
A PDB that remains offline to be used as a template for creating new blank PDBs

Daily Use Commands (from SQL command line)

Connect to Container or PDB
CONN <user>/<pwd>@//<host>:<listener port>/<service> {as sysdba};
CONN <user>/<pwd>@<tns_entry> {as sysdba};
Display Current Container or PDB
SHOW CON_NAME;
SHOW CON_ID;
SELECT SYS_CONTEXT('USERENV','CON_NAME') FROM DUAL;
List Containers and PDBs on Instance
SELECT PDB_Name, Status FROM DBA_PDBS ORDER BY PDB_Name;
SELECT Name, Open_Mode FROM V$PDBS ORDER BY Name;
SELECT Name, PDB FROM V$SERVICES ORDER BY Name;
Change Container or PDB
ALTER SESSION SET container=<name>;
ALTER SESSION SET container=cdb$root;

Cloning/Creating a PDB

First, set your source and target datafile paths...
ALTER SESSION SET PDB_FILE_NAME_CONVERT='</seed path/>','</target path/>';
Then run the create command from the target root container...
CREATE PLUGGABLE DATABASE <New PDB Name>
    ADMIN USER <Username>
        IDENTIFIED BY <Password>
    FROM <Source PDB[@dblink]>;
Finally, open the newly created database...
ALTER PLUGGABLE DATABASE <target pdb> OPEN;
NOTE: Creating a PDB is just cloning from the seed db.
    [@dblink] is optional and used when creating a PDB from an existing PDB on another instance.
    If using a dblink, the link user should be an administrative user on the source PDB.

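As a minimal sketch (the link name and connect string below are illustrative, not part of the original cheat sheet), the dblink used for a remote clone could be created on the target CDB like this:
-- On the target CDB, pointing at the source instance
CREATE DATABASE LINK clone_link
    CONNECT TO <remote admin user> IDENTIFIED BY <password>
    USING '//<source host>:<listener port>/<source service>';
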
Managing a Multitenant Database

Startup and Shutdown
Startup and shutdown of a multitenant database function the same as on a regular database;
however, if connected to a pluggable database, only that pluggable database shuts down.
If connected to the root container database, then the entire instance shuts down.
Pluggable databases also have their own commands that can be run from the root container or
another pluggable db.

ALTER PLUGGABLE DATABASE <name> OPEN READ WRITE {RESTRICTED} {FORCE};
ALTER PLUGGABLE DATABASE <name> OPEN READ ONLY {RESTRICTED} {FORCE};
ALTER PLUGGABLE DATABASE <name> OPEN UPGRADE {RESTRICTED};
ALTER PLUGGABLE DATABASE <name> CLOSE {IMMEDIATE};
To retain the current state as the startup state of the container...
ALTER PLUGGABLE DATABASE <name> SAVE STATE;
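To remove a saved state so the PDB no longer opens automatically at CDB startup (standard companion syntax to SAVE STATE)...
ALTER PLUGGABLE DATABASE <name> DISCARD STATE;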

Roles and Users
Common Users and Roles must be created in the root container and prefixed with the characters c##.
Local Users and Roles must be created in the PDB.
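A minimal sketch of both cases (user names and passwords are illustrative):
-- Common user, created from CDB$ROOT
CREATE USER c##dba_admin IDENTIFIED BY <password> CONTAINER=ALL;
-- Local user, created from inside the PDB
ALTER SESSION SET container=<pdb name>;
CREATE USER app_owner IDENTIFIED BY <password>;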

Granting Roles and Privileges
GRANT <privilege/role> TO <user> CONTAINER=<PDB name or ALL>;
If local only, grant from the PDB and omit the CONTAINER argument.
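For example (privilege and user names are illustrative), a common grant versus a local grant:
-- From CDB$ROOT: grant to a common user in all containers
GRANT CREATE SESSION TO c##dba_admin CONTAINER=ALL;
-- From inside the PDB: local grant, CONTAINER argument omitted
GRANT CREATE SESSION TO app_owner;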

Backup and Recovery

Backup
RMAN connection to the root container...
A normal backup will capture the full instance.
For just the Root Container:
BACKUP DATABASE ROOT;
For Pluggable Databases:
BACKUP PLUGGABLE DATABASE <pdb1,pdb2,pdb3>;

An RMAN connection to a pluggable database will only work on that PDB.
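A sketch of a typical RMAN session from the root container (PDB names are illustrative):
RMAN> CONNECT TARGET /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;       # whole instance (all containers)
RMAN> BACKUP DATABASE ROOT;                  # root container only
RMAN> BACKUP PLUGGABLE DATABASE pdb1, pdb2;  # selected PDBs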

Restore
Connect to the root container. A normal restore is a full instance restore. For a PDB...
RUN {
    ALTER PLUGGABLE DATABASE <pdb> CLOSE;
    SET UNTIL TIME "<timeset value>";
    RESTORE PLUGGABLE DATABASE <pdb>;
    RECOVER PLUGGABLE DATABASE <pdb>;
    ALTER PLUGGABLE DATABASE <pdb> OPEN;
}

Moving PDBs (Unplugging/Plugging in a PDB)

Exporting/Unplugging an Existing PDB
To unplug a database, use the following commands. It is recommended that the path used match
the datafile storage location.
ALTER PLUGGABLE DATABASE <pdb_name> CLOSE;
ALTER PLUGGABLE DATABASE <pdb_name> UNPLUG INTO '</path/><name>.xml';
DROP PLUGGABLE DATABASE <pdb_name> KEEP DATAFILES;

Importing/Plugging a PDB into a CDB
Before importing/plugging a PDB into a CDB, a small procedure should be run to validate the integrity
and compatibility of the PDB.
SET SERVEROUTPUT ON
DECLARE
    l_result BOOLEAN;
BEGIN
    l_result := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                    PDB_DESCR_FILE => '</path/><name>.xml',
                    PDB_NAME       => '<name>');
    IF l_result THEN
        DBMS_OUTPUT.PUT_LINE('Compatible, OK to Proceed');
    ELSE
        DBMS_OUTPUT.PUT_LINE('Incompatible, See PDB_PLUG_IN_VIOLATIONS for details');
    END IF;
END;
/

If the pdb is validated, then use the following commands to import/plug it in. Reference the xml file path specified during export, and the datafile path...
CREATE PLUGGABLE DATABASE <new_pdb_name> USING '</path/><name>.xml'
    FILE_NAME_CONVERT=('</source path/>','</dest path/>');
ALTER PLUGGABLE DATABASE <new_pdb_name> OPEN;

PROXY Database Functionality

A special type of PDB is a Proxy PDB. A Proxy PDB is essentially a PDB that is linked to another PDB.
If a PDB is being migrated to another environment and there is a desire not to modify all source
code to the new location references first, the old references can still be used on the Proxy and the
actions will take place on the new DB.

To set this up, first create a dblink to the pluggable target
CREATE PLUGGABLE DATABASE <proxy pdb name> AS PROXY FROM <target pdb>@<dblink>;
NOTE: The dblink may be dropped after the proxy db is created.
In a proxy DB, the ALTER DATABASE and ALTER PLUGGABLE DATABASE commands apply to the proxy db. All other DDL applies to the target db.

Misc Other Multitenant Management Commands

Cloning from NonCDB to CDB
The non-CDB must support multitenant; a dblink pointing to the non-CDB is used to connect.
The dblink user must have CREATE SESSION and CREATE PLUGGABLE DATABASE privileges.
CREATE PLUGGABLE DATABASE <new_pdb> FROM NON$CDB@<dblink>
    FILE_NAME_CONVERT=('</source datafile path/>','</target datafile path/>');
@?/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE <target pdb> OPEN;

Moving a PDB
CREATE PLUGGABLE DATABASE <new pdb> FROM <old pdb>@<dblink> RELOCATE;
ALTER PLUGGABLE DATABASE <new pdb> OPEN;

Removing a PDB
ALTER PLUGGABLE DATABASE <name> CLOSE;
DROP PLUGGABLE DATABASE <name> INCLUDING DATAFILES;

Exporting/Unplugging a pdb to a single compressed file
ALTER PLUGGABLE DATABASE <pdb_name> UNPLUG INTO '</path/><filename>.pdb';

Importing/Plugging in a pdb from a single compressed file
CREATE PLUGGABLE DATABASE <new pdb name> USING '</path/><filename>.pdb';
Note that compressed pdb files for export and import are suffixed with .pdb and are a zip file format.

Tuesday, March 22, 2022

OCI for AWS Architects










How OCI Global Infrastructure Compares to AWS


OCI ARCHITECTURE


Region - A localized geographic area comprised of one or more Availability Domains (ADs).

AD (Availability Domain) - One or more fault-tolerant data centers located within a region, connected to each other by a low-latency, high-bandwidth network.

FD (Fault Domain) - A grouping of hardware and infrastructure within an AD that provides anti-affinity; in effect, logical data centers.

Choosing a Region
  • Location
  • Data Residency & compliance
  • Service Availability.

AD (Availability Domain)
  • Isolated from each other.
  • Physical Infrastructure not shared.
  • Fault Tolerant

Region --> AD --> FD (3 per AD)



Realm - Realms are isolated from each other; they do not share anything with one another.



IAM CONCEPT (IDENTITY AND ACCESS MANAGEMENT)






further topics in progress 

Saturday, March 19, 2022

SYSAUX Tablespace Sizing


The focus of this article is an undocumented script, UTLSYXSZ.SQL, located in the $ORACLE_HOME/rdbms/admin directory.

When the UTLSYXSZ.SQL script is run it creates a PL/SQL package named UTLSYXSZ_UTIL. The script then runs an anonymous block that collects user information and outputs a multi-section report. The final action the script performs is to drop the built-in package and undefine the variables it created.

SQL> @?/rdbms/admin/utlsyxsz.sql

Run the above script to get the current SYSAUX tablespace utilization and an AWR space estimate for future growth. It is a very useful script for planning the size of the SYSAUX tablespace, and it also provides a prediction of the space needed when creating the SYSAUX tablespace to begin with.


The report depends on many input values, such as:
  - Interval Setting (minutes)
  - Retention Setting (days)
  - Number of Instances
  - Average Number of Active Sessions
  - Number of Datafiles
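Before running the script, the current AWR interval and retention (which drive the first two inputs) can be cross-checked with a simple query against DBA_HIST_WR_CONTROL; this is not part of UTLSYXSZ.SQL, just a convenient sanity check.

SQL> select snap_interval, retention from dba_hist_wr_control;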


Parameter Collection

After the script defines a number of session variables, the package it creates is called to gather metadata and produce the following reports:

 SYSAUX Size Estimation Report

 CURRENT SYSAUX Usage Report

 AWR Space Estimation Report

 Optimizer Stat History Space Estimation Report

 Estimated SYSAUX Usage Report

 Summary of SYSAUX Space Estimation

 Finally reporting out the Total Estimated SYSAUX Size


Some of the information in the reports is collected by querying data dictionary tables and dynamic performance views. Other information is solicited from the user through calls to UTLSYXSZ_UTIL, which require the user to input a value (or accept the default). In a few cases, for example the variable ash_hist, which defines how much ASH (Active Session History) data will be examined as part of the report's calculation, the value is partially hard-coded into the package, and a script edit is required to alter it, as you can see in the listing below.

Estimated size of Stats history


Note that you are not prompted for the number of Indexes or Columns. Generation of these numbers, again, is hard-coded into the package and can be changed before the script is run to produce a value more in line with your actual usage.




At the conclusion of the report, we see that Oracle estimates that a system corresponding to the parameters entered should result in a SYSAUX tablespace sized at about 8.5 GB.

The current SYSAUX, as the result of self-sizing, is about 3.9 GB, so an additional 4.6 GB is a good estimate of the space that will be required for expansion should the system experience the anticipated increase in tables, users, and DML activity.
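Independent of the script, a quick look at what is currently consuming SYSAUX can be taken from the standard V$SYSAUX_OCCUPANTS view; the query below is only an illustrative sketch.

SQL> select occupant_name, schema_name,
            round(space_usage_kbytes/1024) as space_mb
     from   v$sysaux_occupants
     order  by space_usage_kbytes desc;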

Find missing or stale statistics

 



Optimizer statistics are essential for the optimizer to choose the best possible execution plan. Sometimes you encounter a sub-optimal plan and need to find out if the statistics on the tables involved in the statement are up to date. This query will help you find out which tables have been modified significantly since the last statistics gathering.

Script to find missing or stale statistics

select 	m.TABLE_OWNER,
		m.TABLE_NAME,
		m.INSERTS,
		m.UPDATES,
		m.DELETES,
		m.TRUNCATED,
		m.TIMESTAMP as LAST_MODIFIED,		
		round((m.inserts+m.updates+m.deletes)*100/NULLIF(t.num_rows,0),2) as EST_PCT_MODIFIED,
		t.num_rows as last_known_rows_number,
		t.last_analyzed
From 	dba_tab_modifications m,
		dba_tables t
where 	m.table_owner=t.owner
and	m.table_name=t.table_name
and 	table_owner not in ('SYS','SYSTEM')
and 	((m.inserts+m.updates+m.deletes)*100/NULLIF(t.num_rows,0) > 10 or t.last_analyzed is null)
order by timestamp desc;

By default, I filter the tables using the 10%-modified rule, but you can change this percentage and adapt it to your needs.

Note that this view is not updated in real time for performance reasons; if you want the latest statistics available, run the following command before executing the query:

Exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
select 	m.TABLE_OWNER,
		'NO' as IS_PARTITION,
		m.TABLE_NAME as NAME,		
		m.INSERTS,
		m.UPDATES,
		m.DELETES,
		m.TRUNCATED,
		m.TIMESTAMP as LAST_MODIFIED,		
		round((m.inserts+m.updates+m.deletes)*100/NULLIF(t.num_rows,0),2) as EST_PCT_MODIFIED,
		t.num_rows as last_known_rows_number,
		t.last_analyzed
From 	dba_tab_modifications m,
		dba_tables t		
where 	m.table_owner=t.owner
and		m.table_name=t.table_name
and 	m.table_owner not in ('SYS','SYSTEM')
and 	((m.inserts+m.updates+m.deletes)*100/NULLIF(t.num_rows,0) > 10 or t.last_analyzed is null)
union
select 	m.TABLE_OWNER,
		'YES' as IS_PARTITION,		
		m.PARTITION_NAME as NAME,		
		m.INSERTS,
		m.UPDATES,
		m.DELETES,
		m.TRUNCATED,
		m.TIMESTAMP as LAST_MODIFIED,		
		round((m.inserts+m.updates+m.deletes)*100/NULLIF(p.num_rows,0),2) as EST_PCT_MODIFIED,
		p.num_rows as last_known_rows_number,
		p.last_analyzed		
From 	dba_tab_modifications m,		
		dba_tab_partitions p
where 	m.table_owner=p.table_owner
and		m.table_name=p.table_name
and 	m.PARTITION_NAME = p.PARTITION_NAME
and 	m.table_owner not in ('SYS','SYSTEM')
and 	((m.inserts+m.updates+m.deletes)*100/NULLIF(p.num_rows,0) > 10 or p.last_analyzed is null)
order by 8 desc;
-- To gather statistics with indexes (all together)

select 'exec dbms_stats.gather_table_stats('''||owner||''''||','||''''||table_name||''''||','||'cascade => TRUE);'
from ALL_TAB_STATISTICS
where OWNER in('OWNER')
and global_stats='YES' and stale_stats='YES';

-- Find stale stats Index
select * from all_ind_statistics
WHERE OWNER in('OWNER')
AND stale_stats='YES' and global_stats='YES';

If any rows are returned, gather statistics for the respective table as in the following example:

sql> exec dbms_stats.gather_table_stats('owner','table_name',cascade => TRUE);
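Instead of gathering statistics table by table, stale statistics for an entire schema can be refreshed in one call using the GATHER STALE option; the schema name below is illustrative.

sql> exec dbms_stats.gather_schema_stats(ownname => 'APP_OWNER', options => 'GATHER STALE', cascade => TRUE);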

Oracle Multitenant Migration


Multitenant Support 


What does this mean?

1. Oracle Database 19c is the last release to support the non-CDB architecture.

2. Before upgrading to Oracle Database 21c or beyond, you must convert to the multitenant architecture.




MULTITENANT MIGRATION

CDB | Components 

CDB$ROOT must be a superset of all PDBs

Recommendation

 1. Install as many components as required 
2. But no more than that 

The number of components has a big effect on upgrade duration. Components (e.g., JAVAVM) may require regular patching activity.
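The components currently installed in a container can be listed from the standard DBA_REGISTRY view; a minimal check:

SQL> select comp_id, comp_name, version, status
     from dba_registry
     order by comp_id;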

COMPATIBLE: always use the default of a given version
    • Example: 19.0.0
    • Always use three digits only

• Should you change COMPATIBLE after applying a Release Update?
    • Example: 19.10
    • Never
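To verify the current COMPATIBLE setting before and after an upgrade (standard commands, shown as a quick check):

SQL> show parameter compatible
SQL> select value from v$parameter where name = 'compatible';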


Plug In | Compatibility Check 

1. In source, generate manifest file

SQL> exec dbms_pdb.describe('/tmp/DB19.xml');


 2. In CDB, check compatibility

set serveroutput on
BEGIN
IF dbms_pdb.check_plug_compatibility('/tmp/DB19.xml') THEN
dbms_output.put_line('PDB compatible? ==> Yes');
ELSE
dbms_output.put_line('PDB compatible? ==> No');
END IF;
END;
/

3. Always check the details

SQL> select type, message
from PDB_PLUG_IN_VIOLATIONS
where name='DB19' and status<>'RESOLVED';

TYPE         MESSAGE
___________________________________________________________________________________
ERROR '19.9.0.0.0 Release_Update' is installed in the CDB but no release updates are installed in the PDB
ERROR DBRU bundle patch 201020: Not installed in the CDB but installed in the PDB
ERROR PDB's version does not match CDB's version: PDB's version 12.2.0.1.0. CDB's version 19.0.0.0.0.
WARNING CDB parameter compatible mismatch: Previous '12.2.0' Current '19.0.0'
WARNING PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run. 


Plug In | Create PDB

1. Restart database in read-only mode

2. Generate manifest file and shut down

3. In CDB, create PDB from manifest file

SQL> shutdown immediate
SQL> startup mount
SQL> alter database open read only;
SQL> exec dbms_pdb.describe('/tmp/DB19.xml');
SQL> shutdown immediate;
SQL> create pluggable database DB19
using '/tmp/DB19.xml' nocopy tempfile reuse;










Convert | Create PDB


1. Open PDB
2. Convert and restart
3. Restart PDB
4. Check plug-in violations
5. Purge
6. Ensure PDB is open READ WRITE and unrestricted
7. Configure PDB to auto-start

SQL> alter pluggable database DB19 open;
SQL> alter session set container=DB19;
SQL> @?/rdbms/admin/noncdb_to_pdb.sql
SQL> alter pluggable database DB19 close;
SQL> alter pluggable database DB19 open;
SQL> select type, message from pdb_plug_in_violations
     where name='DB19' and status<>'RESOLVED';
SQL> select open_mode, restricted from v$pdbs;
SQL> alter pluggable database DB19 save state;

Convert | noncdb_to_pdb.sql

• Requires downtime
• Runtime varies - typically 10-30 min
• Fix for Bug 25809128 is included since 19.9.0 and adds a significant improvement
• Runs only once in the life of a database
• Irreversible
• Re-runnable from 12.2


Fallback | PDB Downgrade

Downgrade works for CDB/PDB entirely as well as for single/multiple PDBs
• Manual tasks
• catdwgrd.sql in current (after upgrade) environment
• catrelod.sql in previous (before upgrade) environment
• Don't change COMPATIBLE
• datapatch must roll back SPUs/PSUs/BPs manually

MOS Note: 2172185.1
How to Downgrade a Single Pluggable Oracle Database ( PDB ) to previous release


Migration | Last Words

Every migration 

• Is an architectural change 
• Requires downtime 
• Requires a fallback 
• Ends with a backup


How to migrate a non pluggable database that uses TDE to pluggable database ?
 (Doc ID 1678525.1)


Data Guard | Migration Options 

It is possible to preserve the standby database when you migrate from non-CDB to PDB

Special attention is needed.
You don't have to rebuild your standby database, but you might find that it is the easiest solution.



Follow Doc - Multitenant Migration


Thursday, March 17, 2022

Monitoring Oracle Exadata MOS Notes

 



These are important ones that I think you should check on a very regular basis:

Topic Area | MOS # | My Oracle Support Note Title

Core | 888828.1 | Exadata Database Machine and Exadata Storage Server Supported Versions
Core | 1270094.1 | Exadata Critical Issues
Core | 556.1 | Oracle Exadata: Exadata and Linux Important Recommended Fixes
Core | 2724126.1 | Exadata X9M/X8M (RoCE-based systems) Software Requirements and Recommendations
Core | 2075007.1 | Exadata System Software Certification
Core | 1570460.1 | Exadata Software and Hardware Support Lifecycle
Core | 1070954.1 | Oracle Exadata Database Machine EXAchk
Core | 1306791.2 | Information Center: Oracle Exadata Database Machine

Exadata Updates/Patching/Upgrades | 1262380.1 | Exadata Patching Overview and Patch Testing Guidelines
Exadata Updates/Patching/Upgrades | 2207063.1 | HOWTO: Install ksplice kernel updates for Exadata Database Nodes

Security | 2751741.1 | Oracle Exadata Database Machine Security FAQ
Security | 2256887.1 | How to research Common Vulnerabilities and Exposures (CVE) for Exadata packages
Security | 1405320.1 | Responses to common Exadata security scan findings

Database & GI | 742060.1 | Release Schedule of Current Database Releases
Database & GI | 2700151.1 | Grid Infrastructure 11.2.0.4 on Exadata / Oracle Linux 7 Transitional Support
Database & GI | 2285040.1 | Release Update Introduction and FAQ

Database & GI Upgrade/Updates | 2542082.1 | 19c Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running on Oracle Linux
Database & GI Upgrade/Updates | 1919.2 | 19c Database Self-Guided Upgrade with Best Practices

 

Identify any other notes that are specifically important, interesting, useful, and relevant to your Exadata Database Machine deployment, and union them together with the list above!