Wednesday, January 30, 2019

Ports Used in Oracle Apps EBS, portpool details

Ports Used in Oracle Apps EBS, portpool details


PORTPOOL used in EBS R12 


Location of portpool.lst file: 

${INST_TOP}/admin/out/portpool.lst

Friday, January 25, 2019

Output Format in PostgreSQL

Alternate output format for psql

It is often difficult to read query output in psql's default format.
Example :- 



postgres=# SELECT * FROM pg_stat_user_tables;





A simple way to format the output is to switch psql to expanded display:

\x on 
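For example, expanded display makes a wide statistics view readable. The session below is a sketch; the values shown are illustrative, not real output:

```
postgres=# \x on
Expanded display is on.
postgres=# SELECT * FROM pg_stat_user_tables LIMIT 1;
-[ RECORD 1 ]-----+--------------
relid             | 16385
schemaname        | public
relname           | example_table
seq_scan          | 3
...
postgres=# \x off
```

Each column now prints on its own line instead of being squeezed into one very wide row.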



Thursday, January 24, 2019

PostgreSQL DIRECTORY STRUCTURE

All the data needed for a database cluster is stored within the cluster's data directory, commonly referred to as PGDATA.


 pg_serial stores summarized information about committed serializable transactions; this information may still be needed to detect a serialization failure and roll back a not-yet-committed transaction, protecting data integrity.

The catalog cache is information from the system tables which describes the tables, indexes, views, etc. in the database. If you had to re-read that from the system tables each time, it would be slow. Even shared memory would be clumsy for that, so each backend process has its own cache of system catalog data for fast lookup. When anything changes, all backends are sent a signal to update or reload their cache data.

When pages are read or written, they go through the OS cache, which is not directly under PostgreSQL control.

The optimizer needs to keep track of a lot of information while it parses and plans a query, which is why that is shown. A plan has execution nodes, some of which may need to use memory; that is where work_mem comes in -- a sort or hash table (as examples) will try not to exceed work_mem *for that node*. It is significant that one query might use quite a few nodes which each allocate memory up to work_mem. But since most queries are simpler and might not use any work_mem allocations, people often do their calculations based on an expected maximum of one allocation per backend (i.e., per connection). But that could be off by quite a bit if all connections might be running queries with five nodes allocating memory.
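A quick way to see the per-node nature of work_mem in practice is to adjust it for the current session. The settings below are illustrative, not recommendations:

```sql
SHOW work_mem;            -- per-node sort/hash memory limit, typically 4MB by default
SET work_mem = '64MB';    -- session-level change for a query with heavy sorts
EXPLAIN ANALYZE
SELECT relname FROM pg_class ORDER BY relname;
RESET work_mem;           -- back to the configured default
```

In the EXPLAIN ANALYZE output, a Sort node will report whether it sorted in memory (quicksort) or spilled to disk (external merge), which is exactly the work_mem boundary described above.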






PostgreSQL Architecture

When PostgreSQL starts:
The postmaster starts first and allocates the shared memory.
It also accepts connections and spins off a backend for each new connection, so each backend (server process) gets its pointers to shared memory from the postmaster.
It is disastrous if the postmaster dies with backends still running, so it does as little as possible, so that there isn't much which can crash it.
Postgres does have a pool of shared memory; however, it does not have a library or dictionary cache stored in that memory. This means that statements do need to be parsed and planned every time they are entered. If parse/plan overhead is an issue, the use of prepared statements is suggested.
While Oracle is able to avoid the repeated parse/plan overhead, it must still do enough analysis of the query to determine whether the information is present in the library cache, which also consumes some time and CPU resources. The PostgreSQL parser is quite lightweight, so the overhead of parsing the query each time is considered acceptable.
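A sketch of the prepared-statement approach mentioned above (the statement name and query are illustrative):

```sql
-- Parse and plan once...
PREPARE tables_in_schema (name) AS
  SELECT c.relname
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
   WHERE n.nspname = $1;

-- ...then execute many times without re-parsing.
EXECUTE tables_in_schema('public');
EXECUTE tables_in_schema('pg_catalog');

DEALLOCATE tables_in_schema;
```

Note that a prepared statement lives only for the duration of the session that created it.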
1.             Shared Memory:
----------------------
SHARED BUFFERS –
The biggest chunk of shared memory is shared_buffers.
When pages from a table or index are read from the OS, they are read into shared_buffers, and the backends reference the pages and their contents right there in shared memory.
An exception is temporary tables, where (since only the creating backend can reference the temp table) data is accessed in temp_buffers space as much as possible (temp_buffers is separate; it is not in shared memory). It's faster to access process-local memory like that because you don't need to worry about pinning or locking the data, since you are not sharing it.
WAL BUFFERS -- are for buffering data to be written to the WAL files.
CLOG BUFFERS -- are one of the SLRU-style buffers oriented toward circular "rings" of data, like which transaction numbers have been committed or rolled back.
LOCK SPACE -- Memory structures in shared memory are generally protected by "lightweight" locks, which are in shared memory. Tables are protected by "heavyweight" locks which are also in shared memory (and themselves protected by lightweight locks). Of course, lightweight locks are protected by spinlocks. It gets very complicated and fussy. :-)
OTHER BUFFERS -- are probably mostly SLRU buffers besides CLOG (which was the first user of the SLRU system). SLRU is good for data where you mostly want to use recently accessed data and then you are done with it relatively quickly.
The opposite of shared memory is process-local memory -- only the one process that allocates it can access it.
Each SLRU system has a separate subdirectory. Shared memory is memory that all of the backend server processes can directly access. To prevent chaos, access to shared memory must follow some rules which tends to make it a little slower, like locking areas of memory a process will be using. Process-local memory is allocated by one backend server process, and the other backend server processes can't see it or use it, so it's faster to access, and no worries about another process trashing it while you're using it.
Getting back to the shared memory uses I hadn't talked about, CLOG buffers and SLRU buffers like multixact, notify, subtrans, serial, etc. use buffers in memory for recently accessed data, but spill to disk in those subdirectories beyond a limited number of buffers.

2. Utility Processes:

---------------------
With a default configuration you will see the postmaster, the checkpointer process, the background writer process, the WAL writer process, the autovacuum launcher process, and the stats collector process. You will see more processes running if you turn on archiving or streaming replication, and you might also get a process for writing the server log, depending on configuration. As their names say:
WRITER -- writes dirty buffers to the data files.
CHECKPOINTER -- performs checkpoints.
WAL WRITER -- writes dirty WAL buffers to the WAL files.
AUTOVACUUM LAUNCHER -- launches autovacuum workers when required (depending on the autovacuum settings in postgresql.conf).
STATS COLLECTOR -- collects statistics about objects in the database, which the optimizer requires to improve performance.

The checkpointer process is responsible for creating safe points from which a recovery can begin; the background writer tries to keep some pages available for re-use so that processes running queries don't need to wait for page writes in order to have free spots to use in shared buffers. Both the checkpointer and the background writer write to the same files; however, the checkpointer writes all data that was dirty as of a certain time (the start of the checkpoint) regardless of how often it was used since dirtied, while the background writer writes data that hasn't been used recently, regardless of when it was first dirtied. Neither knows or cares whether the data being written was committed, rolled back, or still in progress.
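One way to observe this division of labour is the pg_stat_bgwriter view (column names as found in the 9.x/10.x series; a sketch, not a tuning recipe):

```sql
SELECT checkpoints_timed,      -- checkpoints triggered by checkpoint_timeout
       checkpoints_req,        -- checkpoints requested explicitly or by WAL volume
       buffers_checkpoint,     -- buffers written by the checkpointer
       buffers_clean,          -- buffers written by the background writer
       buffers_backend         -- buffers written directly by backends
  FROM pg_stat_bgwriter;
```

A persistently high buffers_backend relative to buffers_clean suggests backends are doing writes the background writer could not keep up with.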

Article taken from: Memory Structure

PostgreSQL is a client/server type relational database management system with a multi-process architecture that runs on a single host. A collection of multiple processes cooperatively managing one database cluster is usually referred to as a 'PostgreSQL server', and it contains the following types of processes:

·       A postgres server process is the parent of all processes related to the management of a database cluster.

·       Each backend process handles all queries and statements issued by a connected client.

·       Various background processes perform tasks for each feature (e.g., VACUUM and CHECKPOINT) for database management.


·       Replication-associated processes perform streaming replication. Background worker processes, supported since version 9.3, can perform any processing implemented by users.


An example of the process architecture in PostgreSQL (figure not reproduced): a postgres server process, two backend processes, seven background processes, and two client processes, together with the database cluster and the shared memory.



Memory Architecture

Memory architecture in PostgreSQL can be classified into two broad categories:
Local memory area – allocated by each backend process for its own use.
Shared memory area – used by all processes of a PostgreSQL server.



Article taken from - http://bajis-postgres.blogspot.com
                         


PostgreSQL Database and OBJECT Size Scripts


Calculate PostgreSQL Database and OBJECT Size

Scripts which are useful in calculating the size of a PostgreSQL database

Individual PostgreSQL database size
              
SELECT pg_size_pretty(pg_database_size('db_name'));

Get a listing of all of your databases in Postgres and their sizes in GB, ordering by the largest size first

SELECT pg_database.datname as "database_name", pg_database_size(pg_database.datname)/1024/1024/1024 AS size_in_GB FROM pg_database ORDER by size_in_GB DESC;

Get a listing of all of your databases in Postgres and their sizes by using the meta-command 
\l+

Script to find the size of all tables in the current database.

SELECT table_schema || '.' || table_name AS TableName,
       pg_size_pretty(pg_total_relation_size('"' || table_schema || '"."' || table_name || '"')) AS TableSize
  FROM information_schema.tables
 ORDER BY pg_total_relation_size('"' || table_schema || '"."' || table_name || '"') DESC;

Script to find the size of all tables and indexes in the current database.
SELECT
    TableName
    ,pg_size_pretty(pg_table_size(TableName)) AS TableSize
    ,pg_size_pretty(pg_indexes_size(TableName)) AS IndexSize
    ,pg_size_pretty(pg_total_relation_size(TableName)) AS TotalSize
FROM
(
     SELECT ('"' || table_schema || '"."' || table_name || '"') AS TableName
     FROM information_schema.tables
) AS Tables
ORDER BY pg_total_relation_size(TableName) DESC;

Checking table size as well as the size of dependent objects
SELECT schemaname,
       relname AS "Table",
       pg_size_pretty(pg_total_relation_size(relid)) AS "Table_Size",
       pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid)) AS "External_Size"
  FROM pg_catalog.pg_statio_user_tables
 ORDER BY pg_total_relation_size(relid) DESC;



Size of all tables, size of table-related objects, and total table size in the current schema (or any schema)

SELECT stats.relname
           AS table,
       pg_size_pretty(pg_relation_size(statsio.relid))
           AS table_size,
       pg_size_pretty(pg_total_relation_size(statsio.relid)
           - pg_relation_size(statsio.relid))
           AS related_objects_size,
       pg_size_pretty(pg_total_relation_size(statsio.relid))
           AS total_table_size,
       stats.n_live_tup
           AS live_rows
  FROM pg_catalog.pg_statio_user_tables AS statsio
  JOIN pg_stat_user_tables AS stats
 USING (relname)
 WHERE stats.schemaname = current_schema  -- Replace with any schema name
 UNION ALL
SELECT 'TOTAL'
           AS table,
       pg_size_pretty(sum(pg_relation_size(statsio.relid)))
           AS table_size,
       pg_size_pretty(sum(pg_total_relation_size(statsio.relid)
           - pg_relation_size(statsio.relid)))
           AS related_objects_size,
       pg_size_pretty(sum(pg_total_relation_size(statsio.relid)))
           AS total_table_size,
       sum(stats.n_live_tup)
           AS live_rows
  FROM pg_catalog.pg_statio_user_tables AS statsio
  JOIN pg_stat_user_tables AS stats
 USING (relname)
 WHERE stats.schemaname = current_schema  -- Replace with any schema name
 ORDER BY live_rows ASC;



This meta-command is also useful for listing the sizes of tables in the current schema

\d+

This meta-command shows table sizes across all schemas

\dt+ *.*

This meta-command shows table sizes for a particular schema

\dt+ schema_name.*

How to get the current locks in the database:

select * from pg_locks;

How to check the details of replication between the primary and the slave:

select * from pg_stat_replication ;

The above query gives us the state of the replication and lag between the primary and slave servers. 



How to get database statistics such as the cache hit ratio and DML statistics:

SELECT sdb.datname,
       blks_read,
       blks_hit,
       round((blks_hit::float / (blks_read + blks_hit + 1) * 100)::numeric, 2) AS cachehitratio,
       xact_commit,
       xact_rollback,
       tup_returned,
       tup_fetched,
       tup_inserted,
       tup_updated,
       tup_deleted,
       round(100.0 * age(datfrozenxid) /
             (SELECT setting::float FROM pg_settings WHERE name ILIKE 'autovacuum_freeze_max_age'))
  FROM pg_stat_database sdb
 INNER JOIN pg_database db ON sdb.datname = db.datname
 WHERE sdb.datname !~ '^(template(0|1))'
 ORDER BY cachehitratio DESC;

The above query will give you the database statistics. 

Wednesday, January 23, 2019

POSTGRESQL COMMANDS FOR DBA



POSTGRESQL COMMANDS

To list all databases
\l 

To list all tables
\d

To list all functions
\df

To list all aggregate functions
\da

To list all schemas
\dn

To list all users
\du

To reset the query buffer
\r

To list all tablespaces
\db

To list the privileges
\dp
 
To show current data directory.
SHOW data_directory;

The data directory cannot be changed at runtime with SET; it is set with the data_directory parameter in postgresql.conf (or the -D option at server start), and changing it requires a server restart.

To show current server configuration settings:
SHOW ALL;
Or query the catalog:
SELECT name, setting, unit, context FROM pg_settings;

Check the version of PostgreSQL:
SELECT version();
 
Create a database using the script:
CREATE DATABASE Database_Name WITH OWNER User_Name;

Change the owner of the database:
ALTER DATABASE Database_Name OWNER TO User_Name;
 
Create a full copy of the database with structure and data:
CREATE DATABASE NewDB WITH TEMPLATE OldDB;

Create a database User using the script:
CREATE USER TEST WITH password 'testPassword';

Change the password of the User:
ALTER USER postgres WITH PASSWORD 'temproot';

Upgrade the existing User to Superuser:
ALTER USER TestUser WITH SUPERUSER;
 
Reload the PostgreSQL configuration file:
SELECT  pg_reload_conf();
 
Rotate the PostgreSQL Log(pg_log) file:
SELECT pg_rotate_logfile();
 
Find the Postgres server start time:
SELECT pg_postmaster_start_time();
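The start time can also be turned directly into an uptime:

```sql
-- Interval since the postmaster started, i.e. server uptime.
SELECT now() - pg_postmaster_start_time() AS uptime;
```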

POSTGRES SIZE AND LIMITATIONS

POSTGRES SIZE AND LIMITATIONS

  • Maximum Database Size: Unlimited
  • Maximum Table Size: 32 TB
  • Maximum Row Size: 1.6 TB
  • Maximum Field Size: 1 GB
  • Maximum Rows per Table: Unlimited
  • Maximum Columns per Table: 250 - 1600, depending on column types
  • Maximum Indexes per Table: Unlimited

Saturday, January 19, 2019

How to calculate sizes in a PostgreSQL database

List all the databases and their sizes in PostgreSQL


The following query will get you a list of all the databases.

select * from pg_database;    


To list only the database names:

select datname  from pg_database;  

If you execute the above query you will see that Postgres servers have three databases defined by default, named template0, template1 and postgres. template0 and template1 are skeleton databases that are (or can be) used by the CREATE DATABASE command. 
postgres is the default database you will connect to before you have created any other databases.
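template0 is useful in particular when you need an encoding or locale different from the one template1 was created with. The database name and locale below are illustrative:

```sql
-- Create a database from the pristine template0 with an explicit encoding/locale.
CREATE DATABASE app_db
  TEMPLATE   template0
  ENCODING   'UTF8'
  LC_COLLATE 'en_US.UTF-8'
  LC_CTYPE   'en_US.UTF-8';
```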


How to get size of each database in PostgreSQL


Sometimes we need to show the database size in an application; to achieve this, we can execute the query below to get each database's size in a human-readable form.


select pg_size_pretty(pg_database_size(pg_database.datname)) from pg_database;  


Below is the complete query to get database name and their sizes in MBs.

SELECT pg_database.datname, pg_database_size(pg_database.datname), pg_size_pretty(pg_database_size(pg_database.datname)) 
FROM pg_database 
ORDER BY pg_database_size(pg_database.datname) DESC;




Friday, January 18, 2019

What is PostgreSQL?

What is PostgreSQL?


PostgreSQL is a powerful, open source object-relational database system that uses and extends the SQL language combined with many features that safely store and scale the most complicated data workloads.

PostgreSQL has earned a strong reputation for its proven architecture, reliability, data integrity, robust feature set, extensibility, and the dedication of the open source community behind the software to consistently deliver performant and innovative solutions.

Why use PostgreSQL?


PostgreSQL comes with many features aimed at helping developers build applications, helping administrators protect data integrity and build fault-tolerant environments, and helping you manage your data no matter how big or small the dataset. In addition to being free and open source, PostgreSQL is highly extensible. For example, you can define your own data types, build out custom functions, and even write code in different programming languages without recompiling your database!
PostgreSQL tries to conform with the SQL standard where such conformance does not contradict traditional features or could lead to poor architectural decisions.
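A small sketch of that extensibility (the type and function names are made up for illustration):

```sql
-- A custom composite type and a function operating on it.
CREATE TYPE complex AS (r double precision, i double precision);

CREATE FUNCTION complex_add(a complex, b complex) RETURNS complex AS $$
  SELECT (a.r + b.r, a.i + b.i)::complex;
$$ LANGUAGE SQL IMMUTABLE;

SELECT complex_add((1, 2)::complex, (3, 4)::complex);
```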

Below is an inexhaustive list of various features found in PostgreSQL, with more being added in every major release:
  • Data Types
    • Primitives: Integer, Numeric, String, Boolean
    • Structured: Date/Time, Array, Range, UUID
    • Document: JSON/JSONB, XML, Key-value (Hstore)
    • Geometry: Point, Line, Circle, Polygon
    • Customizations: Composite, Custom Types
  • Data Integrity
    • UNIQUE, NOT NULL
    • Primary Keys
    • Foreign Keys
    • Exclusion Constraints
    • Explicit Locks, Advisory Locks
  • Concurrency, Performance
    • Indexing: B-tree, Multicolumn, Expressions, Partial
    • Advanced Indexing: GiST, SP-Gist, KNN Gist, GIN, BRIN, Covering indexes, Bloom filters
    • Sophisticated query planner / optimizer, index-only scans, multicolumn statistics
    • Transactions, Nested Transactions (via savepoints)
    • Multi-Version Concurrency Control (MVCC)
    • Parallelization of read queries and building B-tree indexes
    • Table partitioning
    • All transaction isolation levels defined in the SQL standard, including Serializable
    • Just-in-time (JIT) compilation of expressions
  • Reliability, Disaster Recovery
    • Write-ahead Logging (WAL)
    • Replication: Asynchronous, Synchronous, Logical
    • Point-in-time-recovery (PITR), active standbys
    • Tablespaces
  • Security
    • Authentication: GSSAPI, SSPI, LDAP, SCRAM-SHA-256, Certificate, and more
    • Robust access-control system
    • Column and row-level security
  • Extensibility
    • Stored functions and procedures
    • Procedural Languages: PL/PGSQL, Perl, Python (and many more)
    • Foreign data wrappers: connect to other databases or streams with a standard SQL interface
    • Many extensions that provide additional functionality, including PostGIS
  • Internationalisation, Text Search
    • Support for international character sets, e.g. through ICU collations
    • Full-text search

PostgreSQL is highly extensible: many features, such as indexes, have defined APIs so that you can build out with PostgreSQL to solve your challenges.
PostgreSQL has been proven to be highly scalable both in the sheer quantity of data it can manage and in the number of concurrent users it can accommodate. There are active PostgreSQL clusters in production environments that manage many terabytes of data, and specialized systems that manage petabytes.


Thursday, January 17, 2019

'Internal Error Has Occurred' When Using SQL Tuning Advisor Through OEM





Applies to:

Oracle Database - Enterprise Edition - Version 12.1.0.2 and later
Information in this document applies to any platform.

Symptoms

  • Using Enterprise Manager console UI > DB target homepage > Performance > Top Activity > SQL tuning advisor
  • Enterprise Manager version is 12.1.0.5.0
  • While executing the "SQL tuning advisor" job, the following error is seen:


Cause

From "emoms.trc", the following error is seen:
--------------------------
2016-10-03 17:08:23,474 [Thread-12335531] ERROR sqlt.SqlTuningProcessBean logp.251 - java.sql.SQLException: ORA-06550: line 1, column 7:
PLS-00201: identifier 'DBMS_SCHEDULER' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
 
 The user does not have DBMS_SCHEDULER privileges.

Solution

Grant EXECUTE on DBMS_SCHEDULER to the user:
GRANT EXECUTE ON DBMS_SCHEDULER TO username;