Most Pure FlashArray customers are familiar with the snapshot functionality, which lets them take either a crash-consistent or application-consistent snapshot in a flash (literally and figuratively), irrespective of the database size. While this helps them protect the database from accidental changes or create clones for development and test use, a snapshot cannot be treated as a backup. By definition, a backup has to reside on a second medium, and local snapshots do not meet this requirement. Certainly, customers can extend the snapshots to a second FlashArray using asynchronous replication and treat them as a backup. Alternatively, they can offload the snapshots to NFS targets like FlashBlade using Snap to NFS, or to the cloud using CloudSnap.
Meanwhile, many Oracle customers still run RMAN backups, specifically to validate the backup before offloading it to a secondary medium, and do not want to rely solely on the snapshot functionality. They generally run RMAN backups from their production database to the secondary medium (disk or SBT). This process becomes a challenge when the database is huge and a full backup takes a long time to complete. One option is to increase the number of RMAN channels, provided the production system has enough bandwidth for the data transfer, but adding more channels can introduce performance overhead on the production system.
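For reference, spreading the backup across channels looks like the following. This is a minimal sketch against a disk destination; the channel count and the /x01/backup location are placeholders to tune for your environment:

rman target / << EOF
run {
  # Each channel backs up datafiles in parallel; more channels mean more
  # concurrent I/O against the production system.
  allocate channel c1 type disk;
  allocate channel c2 type disk;
  allocate channel c3 type disk;
  backup as backupset database format '/x01/backup/db_%d_%s_%p_%t';
}
EOF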
This blog covers the process of taking a full backup of a huge Oracle database using Pure FlashArray snapshots as the source, and backing it up with Oracle RMAN from a mount host (a secondary host; some backup vendors refer to it as a proxy host) to the secondary medium.
The following picture shows the overall process for backing up an Oracle database using RMAN from a secondary host. In our case, for rapid backup and restore, we use FlashBlade as the target medium for the RMAN backup.
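As a minimal sketch of that setup, assuming a FlashBlade NFS file system named oracle-backup exported from a data VIP named flashblade-data (both names hypothetical), the mount host would mount it at the /x01/backup location used by the RMAN scripts later in this post. Mounting the same share on the production host also satisfies the prerequisite that the backupsets be reachable for recovery:

# On the mount host (and on the production host, for restores):
sudo mkdir -p /x01/backup
sudo mount -t nfs flashblade-data:/oracle-backup /x01/backup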
Prerequisites
- The mount host should be configured the same as the production host.
- The mount host should have the same Oracle binary code path as the production host.
- The mount host should have the same oracle (or grid) userid and groupid as production.
- The mount host should have the same OS level as production.
- An RMAN recovery catalog is required to store backup and recovery details of the Oracle databases.
- The production and mount hosts should have SQL*Net connectivity to the RMAN catalog.
- The mount host should have a copy of the database init.ora and password file (if one exists).
- Datafiles should be placed in separate volume(s).
- The backupsets written from the mount host should be made accessible to the production host for recovery.
- An ASM instance is set up on the mount host.
- Empty volumes of the same sizes as the production volumes (for data and FRA) are created, connected to the mount host, and discovered (see the sketch below).
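As a minimal sketch of that last prerequisite, assuming the Pure CLI over ssh and a mount host named oramount (the host name and sizes are hypothetical; match the sizes to your production volumes):

# Create empty volumes sized like production and attach them to the mount host
ssh pureuser@flasharray purevol create --size 1T dg_ora_mount_data01
ssh pureuser@flasharray purevol create --size 500G dg_ora_mount_fra01
ssh pureuser@flasharray purevol connect --host oramount dg_ora_mount_data01
ssh pureuser@flasharray purevol connect --host oramount dg_ora_mount_fra01
# Then rescan and label the devices on the mount host so ASM can discover them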
Sample Diskgroups
Production Host
  +DATA -> dg_ora_data01
  +FRA  -> dg_ora_fra01
Mount Host
  +DATA -> dg_ora_mount_data01
  +FRA  -> dg_ora_mount_fra01
Note: If you have a different LUN design, adjust the scripts accordingly.
Full backup – Steps
1. On the production host, take a snapshot of the datafile volume(s). If the datafiles are spread across multiple volumes, use a protection group to group them together so the snapshot can be taken at the pgroup level (see the sketch after the example below).
SUFFIX=`date +%Y-%m-%d-%H%M%S`
ssh pureuser@flasharray purevol snap --suffix $SUFFIX dg_ora_data01
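If the datafiles span multiple volumes, a protection-group snapshot keeps the set crash-consistent. A minimal sketch, assuming a protection group named pg_ora_data that already contains the datafile volumes (the group name is hypothetical):

SUFFIX=`date +%Y-%m-%d-%H%M%S`
# Snapshot all member volumes of the protection group atomically
ssh pureuser@flasharray purepgroup snap --suffix $SUFFIX pg_ora_data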
2. On the production host, force the current log to be archived.
SQL> alter system archive log current;
3. On the production host, take two backups of the control file. One will be used to open the database on the mount host and the other will be part of the RMAN backupset. (The CONTROL_FILES parameter in the mount host init.ora will point to the control_start control file, as shown in the excerpt below.)
export ORACLE_SID=prod
rman target / << EOF
run {
  allocate channel t1 type disk;
  alter system archive log current;
  copy current controlfile to '+FRA/PROD/CONTROLFILE/control_start';
  copy current controlfile to '+FRA/PROD/CONTROLFILE/control_backup';
}
EOF
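The relevant line in the mount host init.ora would look like this minimal excerpt (all other parameters omitted):

# Mount host init.ora excerpt: mount the clone from the control_start copy
control_files='+FRA/PROD/CONTROLFILE/control_start'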
4. On the production host, resynchronize the RMAN catalog with the production database. This adds the most recent archived log info into the RMAN catalog.
export ORACLE_SID=prod
rman target / catalog user/pwd@rmancatalogdb << EOF
resync catalog;
exit;
EOF
5. On the production host, take a snapshot of the FRA volume.
SUFFIX=`date +%Y-%m-%d-%H%M%S`
ssh pureuser@flasharray purevol snap --suffix $SUFFIX dg_ora_fra01
6. On the mount host, shut down the prod database if it is mounted.
export ORACLE_SID=prod
sqlplus -s / as sysdba << EOF
shutdown immediate
exit
EOF
7. On the mount host, dismount the +DATA and +FRA diskgroups from ASM if they are mounted.
srvctl stop diskgroup -g FRA
srvctl stop diskgroup -g DATA
8. On the mount host, force-copy the snapshot of +DATA that was taken in step 1. Pass the full snapshot name (source volume plus suffix) to overwrite the volume on the mount host.
ssh pureuser@flasharray purevol copy --force dg_ora_data01.$SUFFIX dg_ora_mount_data01
9. On the mount host, force-copy the snapshot of +FRA that was taken in step 5. Pass the full snapshot name (source volume plus suffix) to overwrite the volume on the mount host.
ssh pureuser@flasharray purevol copy --force dg_ora_fra01.$SUFFIX dg_ora_mount_fra01
10. On the mount host, mount the diskgroups +DATA and +FRA.
srvctl start diskgroup -g FRA
srvctl start diskgroup -g DATA
11. On the mount host, start the database in mount mode. Make sure the init.ora file points to +FRA/PROD/CONTROLFILE/control_start as specified in step 3.
export ORACLE_SID=prod
sqlplus -s / as sysdba << EOF
startup mount;
exit
EOF
12. On the mount host, back up the database, including the archived logs and the backup control file, using RMAN. Connect to the RMAN recovery catalog and back up the database. This example uses disk as the target media; in your case it may be of type SBT (see the sketch after the example below), so locations like /x01/backup should be updated accordingly.
export ORACLE_SID=prod
rman target / catalog user/pwd@rmancatalogdb << EOF
run {
  allocate channel t1 type disk;
  backup controlfilecopy '+FRA/PROD/CONTROLFILE/control_backup' format '/x01/backup/ctl_%d_%s_%p_%t';
  backup as backupset database format '/x01/backup/db_%d_%s_%p_%t';
  backup archivelog all format '/x01/backup/al_%d_%s_%p_%t';
  release channel t1;
}
exit
EOF
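If the target is a media manager rather than disk, only the channel allocation changes. A minimal sketch; the SBT_LIBRARY path is hypothetical and vendor-specific:

run {
  # Allocate an SBT channel through the vendor's media-management library
  allocate channel t1 type 'sbt_tape' parms 'SBT_LIBRARY=/opt/vendor/lib/libobk.so';
  backup as backupset database format 'db_%d_%s_%p_%t';
  release channel t1;
}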
13. On the production host, delete obsolete archived log files from the +FRA area. Make sure to update the device type here, as for testing we used disk.
export ORACLE_SID=prod
rman target / catalog user/pwd@rmancatalogdb << EOF
delete archivelog all backed up 2 times to device type disk;
exit
EOF
Periodic archived log backup – FlashArray Steps
Perform the following steps periodically, per your requirements; for example, from cron as sketched below. The only difference from the full database backup is that we don't take a snapshot of the +DATA volume.
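A minimal scheduling sketch, assuming the steps below are wrapped in a script (the script and log paths are hypothetical):

# oracle user's crontab: run the archived log backup workflow every 4 hours
0 */4 * * * /home/oracle/scripts/archivelog_backup.sh >> /home/oracle/logs/archivelog_backup.log 2>&1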
1. On the production host, force the current log to be archived.
SQL> alter system archive log current;
2. On the production host, take two backups of the control file. One will be used to open the database on the mount host and the other will be part of the RMAN backupset. (The CONTROL_FILES parameter in the mount host init.ora will point to the control_start control file.)
export ORACLE_SID=prod
rman target / << EOF
run {
  allocate channel t1 type disk;
  alter system archive log current;
  copy current controlfile to '+FRA/PROD/CONTROLFILE/control_start';
  copy current controlfile to '+FRA/PROD/CONTROLFILE/control_backup';
}
EOF
3. On the production host, resynchronize the RMAN catalog with the production database. This adds the most recent archived log info into the RMAN catalog.
export ORACLE_SID=prod
rman target / catalog user/pwd@rmancatalogdb << EOF
resync catalog;
exit;
EOF
4. On the production host, take a snapshot of the FRA volume.
SUFFIX=`date +%Y-%m-%d-%H%M%S`
ssh pureuser@flasharray purevol snap --suffix $SUFFIX dg_ora_fra01
5. On the mount host, shut down the prod database if it is mounted.
export ORACLE_SID=prod
sqlplus -s / as sysdba << EOF
shutdown immediate
exit
EOF
6. On the mount host, dismount the +DATA and +FRA diskgroups from ASM if they are mounted.
srvctl stop diskgroup -g FRA
srvctl stop diskgroup -g DATA
7. On the mount host, force-copy the snapshot of +FRA that was taken in step 4. Pass the full snapshot name (source volume plus suffix) to overwrite the volume on the mount host.
ssh pureuser@flasharray purevol copy --force dg_ora_fra01.$SUFFIX dg_ora_mount_fra01
8. On the mount host, mount the diskgroups +DATA and +FRA.
srvctl start diskgroup -g FRA
srvctl start diskgroup -g DATA
9. On the mount host, start the database in mount mode. Make sure the init.ora file points to +FRA/PROD/CONTROLFILE/control_start as specified in step 2.
export ORACLE_SID=prod
sqlplus -s / as sysdba << EOF
startup mount;
exit
EOF
10. On the mount host, back up the archived logs and control files using RMAN. Connect to the RMAN recovery catalog and perform the backup. This example uses disk as the target media, but it can be of type SBT; locations like /x01/backup should be updated accordingly.
export ORACLE_SID=prod
rman target / catalog user/pwd@rmancatalogdb << EOF
run {
  allocate channel t1 type disk;
  backup controlfilecopy '+FRA/PROD/CONTROLFILE/control_backup' format '/x01/backup/ctl_%d_%s_%p_%t';
  backup archivelog all format '/x01/backup/al_%d_%s_%p_%t';
  release channel t1;
}
exit
EOF
11. On the production host, delete obsolete archived log files from the +FRA area. Make sure to update the device type here, as for testing we used disk. (Note: Follow your standard operating procedure for deleting archived logs rather than the example below, where we delete archived logs after they have been backed up twice.)
export ORACLE_SID=prod
rman target / catalog user/pwd@rmancatalogdb << EOF
delete archivelog all backed up 2 times to device type disk;
exit
EOF