Cloning an Oracle database on ASM with Pure FlashArray onto the same host


Many customers have asked for the steps to clone an Oracle database on ASM hosted on Pure FlashArray onto the same database host, for example cloning a QA database into DEV on the same server.

The problem is that cloning an ASM LUN and mounting it on the same host results in two LUNs with the same ASM header information, pointing to the same diskgroup name and disk name.  This is similar to the duplicate UUID error encountered when cloning LVM volumes onto the same host.

Note that Oracle generally does not support mounting a copy of an ASM LUN on the same server, as it is error-prone and administrators might accidentally rename or destroy the wrong diskgroup.

This can be accomplished by using the renamedg command, which changes the diskgroup information at the header level on the cloned LUN.

Disclaimer:  This post is purely for educational purposes.  Use them at your own risk.

Word of Caution
Get acclimated to the cloning process by testing it in a non-production environment before attempting it in production.  As a precautionary measure, take snapshots of the source volumes.

Assumption

  1.   The following cloning process assumes the device-mapper (DM) devices are named using the alias clause in the /etc/multipath.conf file.
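
For reference, a minimal sketch of such alias entries in /etc/multipath.conf might look like the following.  The WWIDs shown here are placeholders, not real values; substitute the WWIDs reported for your own LUNs (for example from multipath -ll).

```
multipaths {
    multipath {
        wwid   3624a9370aaaaaaaaaaaa000000000001   # placeholder: WWID of the QA DATA LUN
        alias  dg_qa_data
    }
    multipath {
        wwid   3624a9370aaaaaaaaaaaa000000000002   # placeholder: WWID of the cloned DEV DATA LUN
        alias  dg_dev_data
    }
}
```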

First time process

Following are the high-level steps to clone an Oracle database on ASM onto the same server using Pure FlashArray snapshot functionality.  These steps are performed the very first time a source database on ASM is cloned and mounted on the same server.  If you perform this cloning on a regular basis, see the next section.

  1. Take a snapshot of the volumes that belong to the source database, either through the Pure GUI or the CLI/REST-based APIs.  Make the snapshot read/write by copying it to a new LUN, and attach that LUN to the same host.
    purevol copy dg_qa_data dg_dev_data
    purevol copy dg_qa_fra dg_dev_fra
    
  2. Update /etc/multipath.conf with the WWID of each new LUN and a new alias.  (In our case, we set the aliases to dg_dev_data and dg_dev_fra for the cloned LUNs that correspond to DATA and FRA in QA.)
  3. Restart the multipath service as the root user and rescan the SCSI devices by invoking rescan-scsi-bus.sh.  Run multipath -ll to verify that the new disks show up with the aliases provided.
  4. If you are using ASMLib, perform the following to change the disk names of the cloned ASM volumes.
    oracleasm renamedisk -f [full volume label] [new diskname]
    
    [root@donald grid]# oracleasm renamedisk -f /dev/mapper/dg_dev_data DEVDATA1
    Writing disk header: done
    Instantiating disk "DEVDATA1": done
    [root@donald grid]# oracleasm renamedisk -f /dev/mapper/dg_dev_fra DEVFRA1
    Writing disk header: done
    Instantiating disk "DEVFRA1": done
  5. As grid user (or a user with ASM privileges), rename the diskgroup using the renamedg command.  Make sure to provide the full volume alias as the asm_diskstring so that ASM updates the header of the right volume and not the source volume.  (The asm_diskstring argument is available starting with 12cR1.)
    If you want to preview what the command would do without executing it, replace confirm=TRUE with check=TRUE, which lists the changes but does not perform the actual rename.
    renamedg dgname=source newdgname=target confirm=TRUE verbose=TRUE asm_diskstring='volume label'
    
    renamedg dgname=QADATA newdgname=DEVDATA confirm=TRUE asm_diskstring='/dev/mapper/dg_dev_data' verbose=TRUE
    renamedg dgname=QAFRA newdgname=DEVFRA confirm=TRUE asm_diskstring='/dev/mapper/dg_dev_fra' verbose=TRUE
    
  6. Log in to SQL*Plus as SYSASM and mount the new diskgroups in restricted mode.
    alter diskgroup <diskgroup> mount restricted;
    
    SQL> alter diskgroup DEVDATA mount restricted;
    SQL> alter diskgroup DEVFRA mount restricted;
  7. Change the disk names in the new diskgroups using the following command.  This command works on Oracle 12c and above.
    alter diskgroup <new diskgroup> rename disk 'CURRENT DISKNAME' to 'NEW DISKNAME';
    
    SQL> alter diskgroup DEVDATA rename disk 'QADATA1' to 'DEVDATA1';
    SQL> alter diskgroup DEVFRA rename disk 'QAFRA1' to 'DEVFRA1';
    
  8. Dismount the diskgroup and mount it normally.  You should now have the source diskgroup cloned and mounted under a new diskgroup name on the same host.
    alter diskgroup <diskgroup> dismount;
    alter diskgroup <diskgroup> mount;
  9. Now, using the create-controlfile script (generated from the source with "alter database backup controlfile to trace as 'ccf_db.sql';"), recreate the control file for the cloned database, perform recovery, and open the database with RESETLOGS.  You will need a copy of the init.ora file, updated to reflect the clone environment.
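
As a rough illustration of that last step, the edited create-controlfile script for the clone typically takes this shape.  This is a minimal sketch only: the database name DEV, the log file sizes, and the ASM file names below are placeholders, not values from this post; take the real file list from the trace script generated on the source.

```sql
-- Run from SQL*Plus with the clone's init.ora in place, after STARTUP NOMOUNT.
-- SET DATABASE + RESETLOGS gives the clone its new name; paths point at the
-- renamed +DEVDATA/+DEVFRA diskgroups.
CREATE CONTROLFILE SET DATABASE "DEV" RESETLOGS ARCHIVELOG
LOGFILE
    GROUP 1 '+DEVFRA' SIZE 200M,
    GROUP 2 '+DEVFRA' SIZE 200M
DATAFILE
    '+DEVDATA/placeholder/system01.dbf',   -- placeholder: list every datafile here
    '+DEVDATA/placeholder/sysaux01.dbf'
CHARACTER SET AL32UTF8;

RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;
```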

Periodic cloning process

Follow these steps if you are periodically refreshing a cloned database, such as DEV or QA, with the latest copy of production.  In this instance, we are refreshing the DEV database from a copy of QA on the same host.

  1. As the grid user (or the owner of the ASM instance), drop the diskgroups of the environment being refreshed, including their contents.
    SQL> drop diskgroup DEVDATA including contents;
    SQL> drop diskgroup DEVFRA including contents;
  2. If you are using ASMLib, then as the root user delete the ASM disks that were part of the diskgroups dropped above.
    [root #] oracleasm deletedisk DEVFRA1
    [root #] oracleasm deletedisk DEVDATA1
  3. Using the CLI or a REST-based API, take a snapshot of the source volumes and forcefully copy it onto the corresponding destination volumes that are already attached to the host.  Make sure you are not overwriting the source LUNs.
    Note:  The overwrite option is available only through the CLI or REST-based API, not through the GUI, to avoid accidental overwrites.  Even if a volume is overwritten, the prior state of the LUN is preserved for 24 hours and can be found under the Destroyed Volumes tab.
    ssh pureuser@<arrayname> purevol copy --overwrite [source snapshot] [destination volume]
    
    SUFFIX=`date +%Y-%m-%d-%H%M%S`
    ssh pureuser@array-name purevol snap --suffix SNAP-$SUFFIX dg_qa_data dg_qa_fra
    ssh pureuser@array-name purevol copy --overwrite dg_qa_data.SNAP-$SUFFIX dg_dev_data
    ssh pureuser@array-name purevol copy --overwrite dg_qa_fra.SNAP-$SUFFIX dg_dev_fra
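    The snapshot-and-copy step above can be sketched as a small script.  This is a sketch, not a supported tool: run() is a stub that only echoes the array commands so the volume-name mapping can be sanity-checked offline; replace the echo with ssh pureuser@array-name to run it for real.  The volume names follow this post's dg_qa_*/dg_dev_* convention.

```shell
#!/bin/sh
# Sketch of the periodic refresh copy step (stubbed; see note above).
run() { echo "$@"; }              # stub: replace with: ssh pureuser@array-name "$@"

SRC_VOLS="dg_qa_data dg_qa_fra"   # source (QA) volumes from this post
SUFFIX=$(date +%Y-%m-%d-%H%M%S)   # same snapshot-suffix convention as above

run purevol snap --suffix "SNAP-$SUFFIX" $SRC_VOLS

for v in $SRC_VOLS; do
  # Map dg_qa_* to dg_dev_*, and refuse to proceed if the name did not change,
  # so the --overwrite can never target a source LUN.
  dest=$(printf '%s' "$v" | sed 's/_qa_/_dev_/')
  if [ "$dest" = "$v" ]; then
    echo "refusing to overwrite source volume $v" >&2
    exit 1
  fi
  run purevol copy --overwrite "$v.SNAP-$SUFFIX" "$dest"
done
```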
  4. If you are using ASMLib, rename the disks of the cloned ASM volumes.
    [root@donald grid]# oracleasm renamedisk -f /dev/mapper/dg_dev_data DEVDATA1
    Writing disk header: done
    Instantiating disk "DEVDATA1": done
    [root@donald grid]# oracleasm renamedisk -f /dev/mapper/dg_dev_fra DEVFRA1
    Writing disk header: done
    Instantiating disk "DEVFRA1": done
  5. As grid user (or a user with ASM privileges), rename the diskgroups using the renamedg command.  Make sure to provide the full volume alias as the asm_diskstring so that ASM updates the header of the right volume and not the source volume.
    renamedg dgname=source newdgname=target confirm=TRUE verbose=TRUE asm_diskstring='volume label'
    
    renamedg dgname=QADATA newdgname=DEVDATA confirm=TRUE asm_diskstring='/dev/mapper/dg_dev_data' verbose=TRUE
    
    renamedg dgname=QAFRA newdgname=DEVFRA confirm=TRUE asm_diskstring='/dev/mapper/dg_dev_fra' verbose=TRUE
  6. If you are using ASMLib, you can simply mount the diskgroups that were renamed above.
    SQL> alter diskgroup DEVDATA mount;
    SQL> alter diskgroup DEVFRA mount;
  7. If you are using udev, rename the disks inside ASM using the following command, which works on Oracle 12c onwards.  Make sure the diskgroup is mounted in restricted mode.  Once the disk names are changed, dismount and remount the diskgroups.
    alter diskgroup <diskgroup> mount restricted;
    alter diskgroup <new diskgroup> rename disk 'CURRENT DISKNAME' to 'NEW DISKNAME';
    
    alter diskgroup DEVDATA rename disk 'QADATA1' to 'DEVDATA1';
    alter diskgroup DEVFRA rename disk 'QAFRA1' to 'DEVFRA1';
    
    alter diskgroup <diskgroup> dismount;
    alter diskgroup <diskgroup> mount;
  8. Now follow the regular process of recreating the control file, recovering the database, and opening it with RESETLOGS.

  9. Change the DBID of the database using NID utility.  This is a good practice if these databases are backed up using RMAN with a central catalog.
    nid target=user/pwd logfile=dbnamechg.log SETNAME='NO'