Horizontal scaling can improve an application architecture’s responsiveness and read scale. Write operations are directed to a specific system, while read operations can be directed to any of the replicas in the environment, allowing for better query response times than a single system can provide. Horizontal scaling can also provide high availability: if one system goes offline, access to the database is not disrupted.
This post focuses on increasing the operational efficiency of horizontal scaling when using replication technologies with MySQL and MariaDB.
Members of a replication topology fall into two categories: primaries and secondaries. A primary database system (also known as a master) accepts both read and write operations. A secondary database system (also known as a slave) accepts only read operations.
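The primary/secondary split above is typically enforced at the application or proxy layer. As an illustration only (the host names and class are hypothetical, not part of any MySQL driver), a minimal router might send writes to the primary and rotate reads across secondaries:

```python
# A minimal sketch of read/write routing in a replication topology.
# Host names are placeholders; a real application would hold connection
# pools to each server rather than bare strings.
import itertools


class ReplicationRouter:
    """Send writes to the primary, spread reads across secondaries."""

    def __init__(self, primary, secondaries):
        self.primary = primary
        self._reads = itertools.cycle(secondaries)

    def route(self, statement):
        # Only plain SELECTs are safe to offload to a secondary;
        # writes (and anything ambiguous) must go to the primary.
        if statement.lstrip().upper().startswith("SELECT"):
            return next(self._reads)
        return self.primary


router = ReplicationRouter("primary-db", ["replica-1", "replica-2"])
print(router.route("INSERT INTO t VALUES (1)"))  # -> primary-db
print(router.route("SELECT * FROM t"))           # -> replica-1
print(router.route("SELECT * FROM t"))           # -> replica-2
```

Adding more secondaries to the rotation is what horizontal read scaling looks like in practice — which is why the cost of bringing a new replica online matters.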
Several replication technologies are available for MySQL and MariaDB:
- Replication: In asynchronous or semi-synchronous replication, one or more secondary database servers receive updates from a single primary.
- Group Replication: Virtually synchronous replication between replica systems with automatic primary election. Group Replication is a core component of an InnoDB cluster, and you can implement it in a single or multiple primary mode.
- Galera Cluster: Virtually synchronous replication between three or more nodes. Each node in the cluster is a primary.
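To make the differences concrete, the fragments below sketch the server options each technology turns on. All server IDs, UUIDs, paths, and addresses are illustrative placeholders, not a working configuration:

```ini
# Classic asynchronous replication -- a secondary's my.cnf (values are examples):
[mysqld]
server_id = 2
relay_log = relay-bin
read_only = ON

# Group Replication -- additional settings on each member (UUID/addresses are examples):
# group_replication_group_name    = "11111111-2222-3333-4444-555555555555"
# group_replication_local_address = "node1.example.com:33061"
# group_replication_group_seeds   = "node1.example.com:33061,node2.example.com:33061"

# Galera Cluster -- additional settings on each node (paths/addresses are examples):
# wsrep_on              = ON
# wsrep_provider        = /usr/lib/galera/libgalera_smm.so
# wsrep_cluster_address = "gcomm://node1.example.com,node2.example.com,node3.example.com"
```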
When adding additional members to an existing replication topology, you’ll need to perform an initial data synchronization. This involves a donor database system, which provides the data to an intended replica (known as a joiner or recipient). Each replication technology has a different term for this process:
- Replication: Importing existing data to the secondary. Uses mysqldump or a physical file copy method for the initial data transfer.
- Group Replication: Distributed recovery. Uses mysqldump, binary log replication, or the clone plugin for the initial data transfer.
- Galera Cluster: State Snapshot Transfer (SST). Uses mysqldump, rsync, clone, or xtrabackup for the initial data transfer.
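For classic replication, the mysqldump path is the simplest of these. The sketch below only previews (via echo) the typical command pipeline; the host names are placeholders, and the flags match the MySQL 8.0.25-era naming:

```shell
# Sketch of mysqldump-based initial seeding. Host names are placeholders,
# and the pipeline is previewed with echo rather than executed here.
DONOR=db-primary.example.com
JOINER=db-replica-1.example.com

# --single-transaction takes a consistent InnoDB snapshot without blocking
# writes on the donor; --master-data=2 records the binary log coordinates
# the new secondary should start replicating from.
DUMP="mysqldump --host=$DONOR --single-transaction --master-data=2 --all-databases"
LOAD="mysql --host=$JOINER"

echo "$DUMP | $LOAD"
```

Even in this best case, the donor must stream every byte of the dataset over the network — which is exactly the cost the rest of this post is about.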
For the rest of this post, I’ll refer to this initial synchronization process as initial seeding. Initial seeding has several limitations that can affect the efficiency of the topology, including:
- Transferring data from one system to another will take time. For large databases, this further impacts the time to replica availability.
- Some initial seeding techniques block the donor from making updates during the transfer process. This can impact the availability of the donor and replica as both need to “catch up” to a synchronized state.
- In heavily loaded environments, replicas may struggle to keep up with the database state on the primary. Replicas that fall too far behind to “catch up” must be fully resynchronized.
- Some initial seeding techniques allow a donor to provide data to only a single joiner at a time. Adding multiple joiners at once therefore requires multiple donors, which strains resources and lengthens the time to availability.
Each of these issues can affect business processes. Applications that are dependent on MySQL and MariaDB to scale will be impacted. Environments such as web services won’t be able to support an increasing number of users, and growth will be limited.
Solution: Volume Snapshots on Pure FlashArray™
Volume snapshots on Pure Storage® FlashArray eliminate the need to perform initial seeding for Replication, Group Replication, and Galera Cluster nodes.
Adding a new or additional replica simply requires taking a volume snapshot of the volume(s) on the primary MySQL or MariaDB instance. Once you’ve taken the volume snapshot, you can copy it to a new or existing volume for the intended replica.
You can perform each of these steps manually using the Purity GUI or command-line interface (CLI), or automate them using the REST API. The replication architectural view for each replication technology follows a very similar pattern: create a volume snapshot, then copy it:
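As one way to automate the pattern, the snapshot-and-copy steps can be scripted with the `purestorage` Python REST client. This is a sketch under stated assumptions — the array address, API token, volume names, and snapshot suffix are placeholders, and the import is deferred so the function can be defined without the SDK installed:

```python
def seed_replica_from_snapshot(array_host, api_token,
                               primary_volume, replica_volume):
    """Snapshot the primary's data volume and copy it onto the replica's volume.

    All arguments are placeholders for your environment. Requires the
    'purestorage' REST client package and network access to the array.
    """
    import purestorage  # deferred so the sketch stands alone without the SDK

    array = purestorage.FlashArray(array_host, api_token=api_token)
    # 1. Take a crash-consistent snapshot of the primary's data volume.
    snap = array.create_snapshot(primary_volume, suffix="seed")
    # 2. Copy the snapshot onto the replica's volume, overwriting its contents.
    array.copy_volume(snap["name"], replica_volume, overwrite=True)
    return snap["name"]


# Usage (against a real array):
# seed_replica_from_snapshot("flasharray.example.com", "API-TOKEN",
#                            "mysql-primary-data", "mysql-replica1-data")
```

After the copy, the replica’s host rescans and mounts the new volume, and the database instance starts from a byte-identical copy of the primary’s data — no bulk transfer through the database layer.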
Faster and More Efficient
FlashArray volume snapshots are faster and more efficient than other initial seeding techniques. However, when comparing the MySQL clone plugin or rsync SST method to FlashArray volume snapshots, there are some differences:
- With Group Replication and Galera Cluster, the use of the clone plugin or SST method is automated for new nodes/instances joining the topology.
- The speed of using volume snapshots with FlashArray is limited only by how quickly a user can perform the steps in the GUI or CLI. If the process is automated using the REST API, the limit is the speed of code execution.
- All replicas on the same FlashArray will consume no additional storage space due to always-on deduplication.
Comparing Initial Seeding Methods
We found some very interesting results when we analyzed how volume snapshots compare as an initial seeding method. We used the following test setup:
- MySQL (8.0.25) with Group Replication and MariaDB (10.5) with Galera Cluster; each had an initial node/instance populated with data.
- Three servers with 25Gb Ethernet connectivity; storage traffic and application traffic used separate networks
- Two database sizes: 768GB and 20TB
- A single FlashArray//X90R3 connected using NVMe-oF (RoCEv2)
In our tests, initial seeding with the clone plugin and the rsync SST method took increasingly longer as the database size grew. FlashArray volume snapshots, on the other hand, were near-instantaneous and took the same amount of time regardless of database size.
So, why not accelerate your application today with FlashArray or Pure Cloud Block Store™? To get started, fill out a Pricing and Demo Request.