Friday, December 20, 2024

EBS 12.2 Certified with Database Release Update 19.25 - All Platforms (October 2024)

Oracle has announced the October 2024 updates to the E-Business Suite (EBS) Technology Codelevel Checker (ETCC). These updates include the certification of Oracle Database Release Update (DBRU) 19.25 (19.25.0.0.241015, October 2024) with Oracle E-Business Suite Release 12.2 on-premises on all UNIX and Windows platforms.

DBRU 19.25

For additional details, refer to Oracle E-Business Suite Release 12.2: Consolidated List of Oracle Database Patches and Technology Bug Fixes (MOS Note 1594274.1).

Cost Management Strategies for Oracle Cloud Infrastructure

Efficient cost management is one of the critical aspects of cloud adoption for enterprises. While Oracle Cloud Infrastructure (OCI) offers robust features and high performance, organizations must implement effective strategies to manage and optimize costs. This blog provides a detailed guide for CIOs, finance teams, and IT managers to understand and control their OCI expenses.


Understanding OCI Pricing Models

Oracle Cloud Infrastructure uses a transparent and predictable pricing model designed to offer flexibility and control. Here are the key components:

  1. Pay-as-You-Go (PAYG):

    • Allows you to pay only for what you use without upfront commitments.
    • Ideal for organizations with variable workloads or short-term projects.
  2. Bring Your Own License (BYOL):

    • Enables customers to leverage existing on-premises Oracle licenses.
    • Reduces overall costs by reusing licenses in OCI.
  3. Universal Credits:

    • A prepaid model offering flexibility to use any OCI service.
    • Provides cost predictability and discounts for committed usage.
  4. Always Free Tier:

    • Includes free access to compute, storage, and other resources for development and experimentation.

Understanding these models and aligning them with your organization’s needs is the first step toward cost optimization.


Tips for Workload Optimization and Cost Monitoring

Efficient cost management involves optimizing workloads and continuously monitoring expenses. Here are practical strategies:

  1. Right-Sizing Resources:

    • Analyze usage patterns and scale compute and storage resources to actual needs.
    • Use OCI's Auto Scaling feature to dynamically adjust resources.
  2. Instance Flexibility:

    • Choose the right instance type (e.g., VM.Standard, BM.Standard) based on workload requirements.
    • Migrate to Ampere A1 instances for cost-efficient performance.
  3. Leverage Reserved Instances:

    • Commit to reserved instances for predictable, long-term workloads to receive significant discounts.
  4. Utilize Tagging:

    • Implement tagging for resources to track and allocate costs by department, project, or business unit.
  5. Enable Budget Alerts:

    • Set up budgets and notifications in OCI to monitor and control spending (see the CLI sketch after this list).
  6. Optimize Storage:

    • Use Object Storage lifecycle policies to automatically archive or delete unused data.
    • Leverage OCI’s tiered storage options for cost savings.
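As a concrete illustration of the budget-alert tip above (item 5), the OCI CLI can script a budget and an alert rule. The following is a hedged sketch rather than a verified recipe: the flag names follow the oci budgets commands but can vary by CLI version, and the OCIDs and e-mail address are placeholders, so confirm the exact parameters with oci budgets budget create --help and oci budgets alert-rule create --help before relying on it.

#!/bin/bash
# Hedged sketch: create a monthly compartment budget and an 80% actual-spend alert.
TENANCY_OCID="ocid1.tenancy.oc1..aaaa"          # placeholder - your tenancy OCID
COMPARTMENT_OCID="ocid1.compartment.oc1..bbbb"  # placeholder - compartment to track

# A 1,000 (billing currency) monthly budget scoped to one compartment
oci budgets budget create \
  --compartment-id "$TENANCY_OCID" \
  --target-type COMPARTMENT \
  --targets "[\"$COMPARTMENT_OCID\"]" \
  --amount 1000 \
  --reset-period MONTHLY

# Notify the FinOps mailbox when actual spend reaches 80% of the budget
oci budgets alert-rule create \
  --budget-id "<budget-ocid-returned-by-the-previous-command>" \
  --type ACTUAL \
  --threshold 80 \
  --threshold-type PERCENTAGE \
  --recipients "finops@example.com"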

Tools for Cost Analysis and Management

Oracle Cloud provides a suite of tools to help organizations monitor and manage costs effectively:

  1. Oracle Cloud Cost Analysis:

    • Offers detailed insights into usage and spending trends.
    • Visualize costs by service, region, or resource over time.
  2. Budgets and Alerts:

    • Set thresholds for budgets and receive alerts when spending exceeds limits.
    • Helps prevent cost overruns.
  3. Usage Reports:

    • Access detailed usage reports for billing and cost analysis.
    • Integrate reports with third-party tools for advanced analytics.
  4. Resource Manager:

    • Automates resource provisioning and helps ensure that cost-saving configurations are consistently applied.

Real-World Scenarios of Cost Savings

  1. Case Study: Optimizing Compute Costs:

    • A mid-sized e-commerce company transitioned from on-demand compute instances to reserved instances.
    • Result: Achieved 30% cost reduction while maintaining performance.
  2. Case Study: Storage Optimization:

    • A healthcare provider implemented Object Storage lifecycle policies to archive older medical records.
    • Result: Reduced storage costs by 40%.
  3. Case Study: Leveraging BYOL:

    • A global manufacturing firm utilized BYOL to migrate its Oracle Database licenses to OCI.
    • Result: Saved 25% in licensing costs while improving database performance.

Conclusion

Effective cost management in Oracle Cloud Infrastructure requires a combination of understanding pricing models, optimizing workloads, and leveraging OCI tools. By implementing these strategies, CIOs, finance teams, and IT managers can maximize the value of their OCI investments and achieve substantial cost savings.

Start your cost optimization journey today by exploring the powerful tools and resources available within Oracle Cloud Infrastructure.


Wednesday, December 18, 2024

How to Change the Oracle OEM Domain in a URL



Oracle Enterprise Manager (OEM) is a robust tool used for monitoring and managing Oracle environments. Occasionally, you may need to update the domain name in the OEM URL, such as when transitioning to a new network, updating a Fully Qualified Domain Name (FQDN), or integrating with new DNS settings. This blog outlines the step-by-step process to change the domain name in the OEM URL.


Why Change the OEM Domain?

There are several reasons you might need to update the domain in the OEM URL:

  • Network migration: Moving OEM to a new network.

  • Domain standardization: Aligning with organizational naming conventions.

  • SSL updates: Adopting a new domain for secure connections.

For example, changing the URL from https://oem.old-domain.com:7801/em to https://oem.new-domain.com:7801/em.

Steps to Change the Domain

Step 1: Update Hostname in Configuration Files

  1. Locate the OEM Configuration Directory: Navigate to the configuration directory in your Oracle Management Service (OMS) installation:

    cd $OMS_HOME/sysman/config
  2. Modify the emoms.properties File: Open the emoms.properties file:

    vi emoms.properties

    Update the URL to reflect the new domain:

    oracle.sysman.eml.mntr.emdURL=https://<new-domain>:<port>/emd/main

    Save and exit the file.

  3. Check the emomslogging.properties File: Review this file for any references to the old domain and update them if needed.

  4. Backup Configuration Files: Before proceeding further, ensure that you back up all configuration files in the directory:

    cp *.properties /backup/location/
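Before moving on to Step 2, it can also help to confirm that no stale references to the old domain remain under the configuration directory. A simple sketch (the domain string is a placeholder for your old FQDN):

    cd $OMS_HOME/sysman/config
    # List any configuration file that still mentions the old domain
    grep -ril "oem.old-domain.com" .

Any file returned still references the old domain and should be reviewed and updated.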

Step 2: Update the WebLogic Server Configuration

  1. Access the WebLogic Admin Console: Navigate to the WebLogic Admin Console at:

    http://<current-domain>:7001/console

    Log in using your WebLogic admin credentials.

  2. Update the Server URL:

    • Go to Servers > AdminServer > Protocols > HTTP.

    • Update the Listen Address to the new FQDN.

    • Save and apply the changes.

  3. Restart WebLogic Services: Stop and restart the WebLogic Admin and Managed Servers:

    $DOMAIN_HOME/bin/stopWebLogic.sh
    $DOMAIN_HOME/bin/startWebLogic.sh
  4. Verify Changes in WebLogic Logs: Monitor the WebLogic logs to ensure there are no errors during the restart:

    tail -f $DOMAIN_HOME/servers/AdminServer/logs/AdminServer.log

Step 3: Update the Repository Information

  1. Stop OMS:

    $OMS_HOME/bin/emctl stop oms
  2. Update the OMS Configuration: Run the following command to update the domain name in the repository:

    $OMS_HOME/bin/emctl config oms -repos_pwd <repository_password>
  3. Restart OMS:

    $OMS_HOME/bin/emctl start oms
  4. Synchronize Agents: Ensure that all OEM agents are synchronized with the new domain by reconfiguring them:

    <AGENT_HOME>/bin/emctl stop agent
    <AGENT_HOME>/bin/emctl clearstate
    <AGENT_HOME>/bin/emctl start agent

Step 4: Validate the New URL

  1. Access the New OEM URL: Open a browser and visit:

    https://<new-domain>:<port>/em

    Ensure the page loads correctly.

  2. Test Agent Connectivity: Verify that all agents can communicate with the new OMS URL:

    <AGENT_HOME>/bin/emctl status agent
    <AGENT_HOME>/bin/emctl upload
  3. Validate Reporting: Test report generation and dashboard views to ensure the changes did not disrupt critical OEM functionality.


Step 5: Update DNS and SSL Certificates (if Applicable)

  1. Update DNS Records: Ensure the new domain is mapped to the OEM server's IP address in the DNS system.

  2. Regenerate SSL Certificates: If SSL is enabled, regenerate certificates with the new domain name using Oracle Wallet Manager or a tool like OpenSSL. Replace the old certificates on the server.

  3. Restart HTTPS Services: After updating SSL certificates, restart HTTPS services to apply changes:

    $OMS_HOME/bin/emctl secure oms -remove
    $OMS_HOME/bin/emctl secure oms -enable

Verification

  1. Monitor Logs for Errors: Check the OEM logs for any errors during or after the update:

    tail -f $OMS_HOME/sysman/log/emctl.log
  2. Validate with RMAN: Optionally, run an RMAN validation to confirm the database is healthy and free of logical corruption after the change:

    RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;
  3. Perform a Full System Health Check:

    • Run a database diagnostic check using OEM.

    • Confirm all critical alerts and notifications are working as expected.


Best Practices to Prevent Future Issues

  • Backup Configuration Files Regularly: Always take a backup before making any configuration changes.

  • Monitor Undo Tablespace and Redo Logs: Maintain sufficient space to avoid database corruption during recovery.

  • Apply Oracle Patches: Keep your Oracle environment updated with the latest patches.

  • Document Changes: Maintain detailed documentation of all changes made to facilitate troubleshooting and audits.


By following these steps, you can seamlessly change the domain name in the Oracle OEM URL while ensuring minimal disruption to monitoring and management activities. 

Tuesday, December 17, 2024

Step by step to resolve ORA-600 4194 4193 4197 on database crash (Doc ID 1428786.1)


The following error is occurring in the alert.log right before the database crashes.

ORA-00600: internal error code, arguments: [4194], [#], [#], [], [], [], [], []

This error indicates that a mismatch has been detected between redo records and rollback (undo) records.

ARGUMENTS:

Arg [a] - Maximum Undo record number in Undo block
Arg [b] - Undo record number from Redo block

Since we are adding a new undo record to our undo block, we would expect that the new record number is equal to the maximum record number in the undo block plus one. Before Oracle can add a new undo record to the undo block it validates that this is correct. If this validation fails, then an ORA-600 [4194] will be triggered.

This issue generally occurs when there is a power outage or hardware failure that initially crashes the database. On startup, the database does the normal roll forward (redo) and then rollback (undo), this is where the error is generated on the rollback.


This can also be caused by the following defect:

Bug 8240762 Abstract: Undo corruptions with ORA-600 [4193]/ORA-600 [4194] or ORA-600 [4137] after SHRINK

Details:
Undo corruption may be caused after a shrink and the same undo block may be used
for two different transactions causing several internal errors like:
ORA-600 [4193] / ORA-600 [4194] for new transactions
ORA-600 [4137] for a transaction rollback.

Solution
----------

The best practice is to create a new undo tablespace.
This method includes a segment check.

1. Create pfile from spfile to edit
SQL> Create pfile='/tmp/initsid.ora' from spfile;

2. Shutdown the instance

3. Set the following parameters in the pfile /tmp/initsid.ora:
    undo_management = manual
    event = '10513 trace name context forever, level 2'

4. SQL>startup restrict pfile='/tmp/initsid.ora'

5. SQL>select tablespace_name, status, segment_name from dba_rollback_segs where status != 'OFFLINE';

This is critical: we are looking for all undo segments to be OFFLINE (the SYSTEM rollback segment will always remain ONLINE).

If any are 'PARTLY AVAILABLE' or 'NEEDS RECOVERY', open a Service Request with Oracle Support (or update your current SR). There are several options at this point, and an Oracle Support analyst can offer different solutions for the bad undo segments.

If all are offline, continue to the next step.

6. Create new undo tablespace - example
SQL>create undo tablespace <new undo tablespace> datafile <datafile> size 2000M;

7. Drop old undo tablespace
SQL>drop tablespace <old undo tablespace> including contents and datafiles;

8. SQL>shutdown immediate;

9. SQL>startup nomount;  --> Using your original spfile

10. Modify the spfile with the new undo tablespace name

SQL> Alter system set undo_tablespace = '<new tablespace created in step 6>' scope=spfile;

11. SQL>shutdown immediate;

12. SQL>startup;  --> Using spfile
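Optional verification: after the final startup, you can confirm that the instance is back on automatic undo management and is using the new undo tablespace. A simple sanity-check sketch (replace the placeholder with the tablespace created in step 6):

SQL>show parameter undo_management
SQL>show parameter undo_tablespace
SQL>select tablespace_name, segment_name, status from dba_rollback_segs where tablespace_name = '<new undo tablespace>';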
 


The reason we create the new undo tablespace first is so that the new undo segment numbers are higher than those of the segments currently in use. This way, when a transaction later performs block cleanout, the reference to the old undo segment no longer exists and the cleanout simply continues.

Wednesday, November 27, 2024

EBS_SYSTEM Schema Migration

 



Prerequisite: EBS Version 12.2.3+, Database 12.1.0.2 or 19c

MOS Notes:

  • Applying the Latest AD and TXK Release Update Packs to Oracle E-Business Suite Release 12.2 (Doc ID 1617461.1) (AD/TXK Delta 13)
  • Oracle E-Business Suite Release 12.2: Consolidated List of Patches and Technology Bug Fixes (Doc ID 1594274.1)
  • FAQ: Oracle E-Business Suite and System Schema Migration (Doc ID 2758999.1)
  • Oracle E-Business Suite Release 12.2 System Schema Migration (Doc ID 2755875.1)
  • Applying the Oracle E-Business Suite Consolidated Patch for EBS System Schema Migration (Doc ID 2774309.1)
  • Managing Database Privileges in Oracle E-Business Suite Release 12.2 (Running adgrants.sql) (Doc ID 2758993.1)
  • Enabling Unified Auditing in Oracle E-Business Suite Release 12.2 with Oracle Database 19c or 12c (Doc ID 2777404.1) — Beyond the scope of this post

By migrating to the EBS_SYSTEM schema, E-Business Suite gains some new features:

  • Utilizes public Oracle Database APIs
  • Least Privileges model for database object access
  • Separation of duties for database administrators
  • Database service names for application tier database connections
  • Certification with Oracle Database Unified Auditing
  • Simplified integration with Oracle Database Vault
  • Easier interoperability across Oracle Database releases

AD/TXK Delta 13

The first step in the migration is applying AD and TXK Delta 13. This stage involves a few extra tasks because it creates and enables the EBS_SYSTEM schema.

  1. Install the technology stack patches (Note 1594274.1)
    • Do not apply the April 2022 19c RU (not certified with EBS yet anyway)
      • Some users have reported difficulty applying patches to pluggable databases. Make sure this is resolved before moving forward.
    • Get the latest ETCC patch
      • From your staging area, run the patch checker on the apps and database tiers
      • Download the ETCC bundle patches as well as the current database and WebLogic quarterly patches
    • Install the database patches
      • Run checkDBpatch.sh and repeat the process until no new patches are identified
    • Start a patching cycle
      • adop phase=prepare
    • Install the application technology patches
      • Run checkMTpatch.sh and repeat the process until no new patches are identified
  2. Download the patches from Note 1617461.1, section 3.2 step 4 and section 4
  3. Copy the latest adgrants.sql to the database server (at this time, it is in patch 33441060)
    • On the database tier
      • Run it as SYS on the database server; this creates the EBS_SYSTEM user
      • ALTER USER EBS_SYSTEM ACCOUNT UNLOCK;
      • ALTER USER EBS_SYSTEM IDENTIFIED BY same_password_as_system;
        • This is critical: until the schema migration is complete, the passwords must match
    • On the apps tier
      • perl $PATCH_TOP/32394134/ad/bin/adValidateEbssystemSchema.pl
  4. Complete the patches as described in section 3.2

This starts the schema migration. The utilities on the apps tier will now use EBS_SYSTEM instead of SYSTEM, but we are still linked to the SYSTEM schema (and the passwords must still match).

Consolidation Patch

We now need to apply the EBS_SYSTEM Schema Migration Consolidation patch (Note 2774309.1)

  • Included in 12.2.11
  • Patch 31817501 for 12.2.3-12.2.10

While you may proceed to the consolidation patch directly from the AD and TXK patches, you must complete this patching cycle before continuing to the next step.

The completion patch requires that you have done a full cleanup, so go ahead and do that now:

adop phase=cleanup cleanup_mode=full

You may have mandatory post steps based upon your installed products.  For example, Enterprise Command Center must be at least 12.2 V7.

New Directories

The patch has created some new directory objects, which default to /usr/tmp. You should change them to a location visible to the apps tier if the applications DBAs will not have access to the database tier. (In a non-RAC environment, I prefer to use a file system on the database tier that is NFS mounted on the apps tier for this, as well as for the value of APPLPTMP.)

EBS_OUTBOUND
EBS_INBOUND
EBS_TEMP
EBS_LOG
EBS_OUT

For example,
CREATE OR REPLACE DIRECTORY EBS_OUT AS '/oracle_ebs/out';
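To confirm the new locations, you can query DBA_DIRECTORIES from SQL*Plus. A quick verification sketch (the paths returned will be whatever you set above):

SELECT directory_name, directory_path
  FROM dba_directories
 WHERE directory_name IN ('EBS_OUTBOUND','EBS_INBOUND','EBS_TEMP','EBS_LOG','EBS_OUT')
 ORDER BY directory_name;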

Custom Code

Just as when we were getting ready to enable online patching, Oracle has provided a script to help identify custom code that references objects that have been moved to EBS_SYSTEM.

Run
sqlplus apps @$AD_TOP/sql/ADSYSCC

Results will automatically be spooled to adsyscc.lst in the current directory.

Fix any custom objects identified in the report.

Completion Patch

At this point (full cleanup has been done, custom objects have been fixed), we are ready to apply patch 32573930

adop phase=apply patches=32573930 apply_mode=hotpatch options=forceapply,nocopyportion,nogenerationportion

Copy $AD_TOP/patch/115/sql/adsysmigclnup.sql to the database tier.

On the database tier:

sqlplus /nolog
  @adsysmigclnup APPS

Change the system and ebs_system passwords to have different values and make sure only the appropriate people know each password.
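For example, the separation can be done with two ALTER USER statements run as SYS on the database tier (a minimal sketch; the values shown are placeholders for your own strong passwords):

ALTER USER SYSTEM IDENTIFIED BY "<new_system_password>";
ALTER USER EBS_SYSTEM IDENTIFIED BY "<new_ebs_system_password>";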

On the apps tier

adop -validate

At this point, you have completed the schema migration. Note that this is a summary of the MOS notes; be sure to read the actual notes and confirm that none of the steps I have skipped apply to your installation.

 

Thursday, October 17, 2024

Automate user creation and privilege assignment


Here is a shell script for MariaDB DBAs that automates the user creation and privilege assignment process; it can be adapted for various other DBA tasks as well. The script prompts for input, creates the user, grants privileges, and verifies the setup.

Shell Script: create_mariadb_user.sh


#!/bin/bash
# This script automates the process of creating a new MariaDB user and granting privileges.
# Note: passing the root password on the command line is convenient but visible in the
# process list; for production use, consider an option file such as ~/.my.cnf instead.

# Prompt for input
echo "Enter MariaDB root password:"
read -rs ROOT_PASS
echo "Enter the new username:"
read -r USERNAME
echo "Enter the password for the new user:"
read -rs PASSWORD
echo "Enter the database name to grant privileges (leave blank for all databases):"
read -r DB_NAME

# Build the GRANT statement
if [ -z "$DB_NAME" ]; then
    # Grant privileges on all databases
    PRIVILEGES="GRANT ALL PRIVILEGES ON *.* TO '$USERNAME'@'localhost' IDENTIFIED BY '$PASSWORD';"
else
    # Grant privileges on the specified database only
    PRIVILEGES="GRANT ALL PRIVILEGES ON \`$DB_NAME\`.* TO '$USERNAME'@'localhost' IDENTIFIED BY '$PASSWORD';"
fi
FLUSH="FLUSH PRIVILEGES;"

# Execute the SQL commands
mysql -u root -p"$ROOT_PASS" -e "CREATE USER '$USERNAME'@'localhost' IDENTIFIED BY '$PASSWORD'; $PRIVILEGES $FLUSH"

# Verify the user creation by listing the grants for the new account
echo "Verifying the new user setup..."
mysql -u root -p"$ROOT_PASS" -e "SHOW GRANTS FOR '$USERNAME'@'localhost';"
echo "User creation and privilege assignment completed."

How to Use the Script

  1. Save the script as create_mariadb_user.sh.

  2. Make it executable:

     chmod +x create_mariadb_user.sh

  3. Run it:

     ./create_mariadb_user.sh

     Follow the prompts to enter the root password, new username, user password, and database name.

Explanation

  • Variables: The script takes input for the root password, new user details, and the database name.
  • SQL Execution: It combines the input into SQL commands and runs them using the mysql command-line client.
  • Verification: After creating the user, the script verifies the setup by showing the granted privileges for the new user.

This script simplifies user management tasks for DBAs.

Saturday, October 12, 2024

How to Monitor MariaDB Replication


1. Monitoring Replication Status Using SQL Queries

MariaDB provides several SQL commands that allow DBAs to monitor the health of replication. These queries help you understand the status of both the I/O thread and SQL thread on the slave server, as well as other critical replication metrics.

a. SHOW SLAVE STATUS

The most important command for monitoring replication is SHOW SLAVE STATUS. It provides a comprehensive view of the replication status, including the current state of the replication process, errors, and lag.


SHOW SLAVE STATUS\G;

Key fields to monitor:

  • Slave_IO_Running: Indicates whether the slave is connected to the master and reading the binary logs.

    • Expected value: Yes
  • Slave_SQL_Running: Indicates whether the SQL thread is applying the received binary log entries to the slave.

    • Expected value: Yes
  • Seconds_Behind_Master: Shows how much the slave lags behind the master, in seconds. A value of 0 means the slave is caught up with the master.

    • Ideal value: 0 (or a low number)
  • Last_Error: Shows any errors encountered during the replication process. This is useful for diagnosing issues.

  • Read_Master_Log_Pos: The current position in the master’s binary log that the slave has read up to.

  • Exec_Master_Log_Pos: The position in the master’s binary log that the slave has executed.

Example Output:


*************************** 1. row ***************************
          Slave_IO_State: Waiting for master to send event
             Master_Host: 192.168.1.100
             Master_User: replica_user
             Master_Port: 3306
           Connect_Retry: 60
         Master_Log_File: mariadb-bin.000001
     Read_Master_Log_Pos: 574
          Relay_Log_File: mariadb-relay-bin.000001
           Relay_Log_Pos: 714
        Slave_IO_Running: Yes
       Slave_SQL_Running: Yes
   Seconds_Behind_Master: 0
              Last_Error: None



b. SHOW MASTER STATUS (On the Master)

To check the status of the master and ensure it is generating binary logs correctly, you can run:


SHOW MASTER STATUS\G;

This shows the current position in the binary log that the slave should be following. Compare this with the slave's Exec_Master_Log_Pos to detect any replication lag.

2. Monitor for Replication Lag

Replication lag is the time difference between the master and slave in terms of executed transactions. It's essential to monitor replication lag to ensure the slave is as close to real-time as possible.

a. Seconds_Behind_Master

The Seconds_Behind_Master field in SHOW SLAVE STATUS indicates how far behind the slave is compared to the master. A value of 0 means the slave is up to date. A higher value indicates the slave is lagging behind the master.


SHOW SLAVE STATUS\G;

Key metric to monitor:

  • Seconds_Behind_Master: Should ideally be 0. If this is consistently high, the slave is falling behind the master, indicating potential performance issues.
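For a quick command-line check, you can pull just the key fields out of SHOW SLAVE STATUS (a small sketch; adjust the credentials to your environment):

mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'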

3. Monitor Replication Errors

Replication errors can disrupt the replication process. Use the following techniques to catch and troubleshoot errors.

a. Last_SQL_Error and Last_IO_Error

These fields in SHOW SLAVE STATUS provide details on the most recent errors that occurred during the replication process. Regularly monitoring these fields can help detect issues early.


SHOW SLAVE STATUS\G;

Look for:

  • Last_SQL_Error: Indicates errors related to SQL thread execution on the slave.
  • Last_IO_Error: Indicates errors related to reading binary logs from the master.

Example of error output:


Last_IO_Error: error connecting to master 'replica_user@192.168.1.100:3306' - retry-time: 60  retries: 5
Last_SQL_Error: Error executing row event: 'Duplicate entry '1' for key 'PRIMARY''

If errors are found, use STOP SLAVE; to stop the replication process, fix the issue (e.g., fix the duplicate entry or connectivity issue), and restart replication with START SLAVE;.


4. Monitoring System Variables for Replication Health

MariaDB system variables provide important information about replication performance and issues.

a. SHOW STATUS for Replication Variables

Use SHOW STATUS to monitor variables related to replication performance.


SHOW GLOBAL STATUS LIKE 'Slave_running';

This will show whether the slave replication is running.


SHOW GLOBAL STATUS LIKE 'Rpl_%';

Other key replication-related variables to monitor:

  • Rpl_status: Displays replication status.
  • Rpl_rejected_connections: Shows the number of connection attempts that failed due to replication configuration issues.

b. Binary Log File Monitoring

Monitoring the size and number of binary log files can give you insight into how fast changes are occurring on the master. The SHOW BINARY LOGS command shows all binary logs, which can be useful to track how much data is being written.


SHOW BINARY LOGS;

Use this to calculate the rate at which binary logs are generated and identify periods of high transaction volume.


5. Using Tools for Monitoring Replication

There are several tools available that can help you monitor replication in MariaDB in a more automated and graphical way.

a. Percona Monitoring and Management (PMM)

Percona Monitoring and Management (PMM) is a great tool for monitoring MariaDB replication. It provides real-time query performance monitoring and allows you to track replication lag, errors, and other key metrics through an intuitive dashboard.

  • Features:
    • Real-time replication status.
    • Query analytics to see which queries may be slowing down replication.
    • Alerts and notifications for replication lag.

b. Zabbix

Zabbix is a popular open-source monitoring tool that supports MariaDB replication monitoring. You can set up custom templates to monitor replication status, track lag, and set up alerts for failures.

  • Features:
    • Customizable dashboards to monitor replication.
    • Automatic alerts when replication lag exceeds a defined threshold.
    • Visualizations of replication health metrics.

c. Nagios

Nagios is another widely used monitoring solution that can be configured to track MariaDB replication health.

  • Features:
    • Nagios can monitor SHOW SLAVE STATUS output and alert if any issues arise.
    • It can also track replication lag and generate alerts if thresholds are breached.
    • Provides log monitoring for replication errors.

d. ClusterControl

ClusterControl offers an advanced GUI-based approach to managing MariaDB clusters, replication setups, and Galera clusters. It provides real-time monitoring and alerting for replication issues.

  • Features:
    • Real-time replication status tracking.
    • Automatic failover and failback in case of failures.
    • Query analysis for identifying bottlenecks affecting replication.

e. Monitoring via mysqladmin Tool

The mysqladmin tool is a command-line utility to monitor various aspects of MariaDB, including replication.


mysqladmin extended-status | grep -i 'slave'

This command will show various replication-related metrics.


6. Configure Replication Alerts

To ensure that you are always informed about replication issues, it's essential to configure automated alerts. Many monitoring tools such as Zabbix, PMM, and Nagios support alerting based on thresholds.

  • Replication Lag Alert: Set an alert if Seconds_Behind_Master exceeds a certain value (e.g., 30 seconds).
  • Replication Stopped Alert: Set an alert if Slave_IO_Running or Slave_SQL_Running shows No.
  • Replication Error Alert: Monitor the Last_SQL_Error and Last_IO_Error fields for any recent errors and set up alerts for immediate notification.
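If you are not using one of the tools above, a minimal shell script run from cron can cover these three alerts. The script below is a hedged sketch: it assumes credentials are supplied via an option file such as ~/.my.cnf (so mysql runs without -p on the command line), that a local mail command is configured, and that the threshold and recipient address are placeholders you will replace.

#!/bin/bash
# check_replication.sh - minimal MariaDB replication alerting (illustrative sketch)
THRESHOLD=30                      # alert when lag exceeds this many seconds
RECIPIENT="dba-team@example.com"  # placeholder address

STATUS=$(mysql -e "SHOW SLAVE STATUS\G")

IO=$(echo "$STATUS"   | awk '/Slave_IO_Running:/ {print $2}')
SQLR=$(echo "$STATUS" | awk '/Slave_SQL_Running:/ {print $2}')
LAG=$(echo "$STATUS"  | awk '/Seconds_Behind_Master:/ {print $2}')
ERRS=$(echo "$STATUS" | grep -E 'Last_(IO|SQL)_Error:' | grep -vE ': *$')

if [ "$IO" != "Yes" ] || [ "$SQLR" != "Yes" ]; then
    # Replication stopped alert
    echo "$STATUS" | mail -s "Replication STOPPED on $(hostname)" "$RECIPIENT"
elif [ -n "$LAG" ] && [ "$LAG" != "NULL" ] && [ "$LAG" -gt "$THRESHOLD" ]; then
    # Replication lag alert
    echo "Seconds_Behind_Master=$LAG" | mail -s "Replication LAG on $(hostname)" "$RECIPIENT"
elif [ -n "$ERRS" ]; then
    # Replication error alert
    echo "$ERRS" | mail -s "Replication ERROR on $(hostname)" "$RECIPIENT"
fi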

7. Log-Based Monitoring

You can also monitor MariaDB replication using the error logs and binary logs:

  • Error Log: If replication encounters an issue, MariaDB logs it in the error log file. Regularly checking the error logs ensures you catch replication errors early.

tail -f /var/log/mysql/error.log
  • Binary Logs: Regular monitoring of binary logs helps ensure that they are being read and processed correctly by the slave.

SHOW BINARY LOGS;


Conclusion

Monitoring MariaDB replication is critical for maintaining a healthy replication environment. Using SQL queries like SHOW SLAVE STATUS, monitoring system variables, and employing external tools like Percona PMM, Zabbix, and Nagios can provide comprehensive insights into replication health. Setting up automated alerts ensures you catch any replication lag, errors, or stoppages before they become a problem.

Setting Up Master-Slave Replication in MariaDB



1. Prerequisites

Before you start, ensure the following:

  • MariaDB is installed on both the master and slave servers.
  • Both servers can communicate with each other (pingable via IP or hostname).
  • The master and slave have unique server IDs.
  • You have administrative access to both MariaDB servers.

2. Configure the Master Server

Step 2.1: Edit the MariaDB Configuration File

On the master server, edit the my.cnf  file to enable replication settings.


sudo nano /etc/my.cnf   # or /etc/mysql/my.cnf, depending on the OS


Add the following configuration under the [mysqld] section:

[mysqld]
server-id = 1  # Unique ID for the master (can be any number)
log-bin = /var/log/mysql/mariadb-bin  # Path to the binary log
binlog-format = mixed  # Binary log format (can be row, statement, or mixed)

Step 2.2: Restart MariaDB

Restart MariaDB to apply the changes:

sudo systemctl restart mariadb

Step 2.3: Create a Replication User

Create a user on the master server that the slave will use for replication. This user should have the REPLICATION SLAVE privilege:

CREATE USER 'replica_user'@'%' IDENTIFIED BY 'replica_password';
GRANT REPLICATION SLAVE ON *.* TO 'replica_user'@'%';
FLUSH PRIVILEGES;

  • 'replica_user'@'%': % allows the user to connect from any host. You can restrict it to the slave's IP if preferred.
  • REPLICATION SLAVE: Grants the required privileges for replication.
    Step 2.4: Obtain Master Log File Information

    To get the binary log file and position, which will be needed for configuring the slave:

    FLUSH TABLES WITH READ LOCK;
    SHOW MASTER STATUS;

    Note down the File and Position values. These will be used later in the slave configuration.



    Step 2.5: Unlock the Master Tables

    After obtaining the binary log information, release the lock:

    UNLOCK TABLES;

    3. Configure the Slave Server

    Step 3.1: Edit the Slave Configuration File

    On the slave server, edit the my.cnf file:

    sudo nano /etc/my.cnf

    Add the following settings under the [mysqld] section:

    [mysqld]
    server-id = 2  # Unique ID for the slave
    relay-log = /var/log/mysql/relay-bin  # Relay log location
    log_bin = /var/log/mysql/mariadb-bin  # Optional: To make slave a potential future master


  • server-id: This must be unique. Since it's the slave, you can use a different ID than the master.
  • relay-log: The relay log is where the slave stores the incoming replication data from the master.
    Step 3.2: Restart MariaDB

    Restart MariaDB on the slave to apply the changes:

    sudo systemctl restart mariadb

    Step 3.3: Configure the Slave for Replication

    Now, configure the slave with the master’s details and log file information obtained earlier:

    CHANGE MASTER TO
    MASTER_HOST='master_ip',
    MASTER_USER='replica_user',
    MASTER_PASSWORD='replica_password',
    MASTER_LOG_FILE='mariadb-bin.000001',
    MASTER_LOG_POS=120;

    MASTER_HOST: The IP address or hostname of the master server.

    MASTER_LOG_FILE / MASTER_LOG_POS: The binary log file and position noted from the master in Step 2.4.

    Step 3.4: Start Slave Replication

    Once the slave is configured, start the replication process:

    START SLAVE;

    Step 3.5: Verify Slave Status

    To verify that the replication is working, run the following command on the slave:

    SHOW SLAVE STATUS\G;

    Look for the following fields:
    - Slave_IO_Running: Yes
    - Slave_SQL_Running: Yes
    - Seconds_Behind_Master: 0 (or a low number)


    If both Slave_IO_Running and Slave_SQL_Running are Yes, the replication is set up and working correctly.

    4. Monitor Replication

    Step 4.1: Check Replication Health

    On the slave, use the following command to check replication health:

    SHOW SLAVE STATUS\G;

    Monitor the Seconds_Behind_Master value. A consistent value of 0 or a low number indicates that replication is functioning properly.

  • Seconds_Behind_Master: If this is consistently high, it may indicate replication lag.
  • Last_Error: Any errors here will give you clues to troubleshoot issues with replication.
    Step 4.2: Restart Slave if Necessary

    If replication stops for any reason, you can restart the slave with the following commands:

    STOP SLAVE;
    START SLAVE;

    5. Common Troubleshooting Tips

    • Check Permissions: Ensure the replication user has the REPLICATION SLAVE privilege.
    • Firewall Issues: Ensure port 3306 is open between the master and slave.
    • Sync Issues: If replication falls too far behind, consider re-syncing the slave.

     If the slave has fallen too far behind, you may need to reinitialize replication by re-syncing the database from the master (using mysqldump or similar), as sketched below.
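     As a hedged sketch of that re-sync, you can take a consistent dump on the master with the replication coordinates recorded, restore it on the slave, and then re-point the slave (assumes InnoDB tables; adjust credentials and paths to your environment):

     # On the master: consistent dump with the binlog file/position recorded as a comment
     mysqldump -u root -p --all-databases --single-transaction --master-data=2 --routines --events > full_dump.sql

     # On the slave: stop replication, load the dump, then re-run CHANGE MASTER TO with the
     # file/position noted near the top of full_dump.sql, and finally START SLAVE.
     mysql -u root -p -e "STOP SLAVE;"
     mysql -u root -p < full_dump.sql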

    Conclusion

    Setting up master-slave replication in MariaDB is a straightforward process that provides many benefits, such as improved read scalability and high availability. By following these steps and monitoring the health of the replication, you can ensure a stable and reliable replication environment.