Friday, August 10, 2012

Post 43 of Series - Create Oracle ZFS Snapshots & Clones and mount on VMware Machines - Part 2


The Lord, our Help and Support, is always with us, but the mortal does not remember Him. He shows love to his enemies. He lives in a castle of sand. He enjoys the games of pleasure and the tastes of Maya. He believes them to be permanent; this is the belief of his mind. Death does not even come to mind for the fool. Hate, conflict, sexual desire, anger, emotional attachment, falsehood, corruption, immense greed and deceit: so many lifetimes are wasted in these ways. O Nanak: uplift them and redeem them, O Lord; show Your Mercy!



Let's create a snapshot of the share we created earlier. Then we will modify (delete) the current data on the live share and restore it from the snapshot we took. In the next step we will clone the share that holds the backup.

Go to Shares and hover the mouse over the right-most side of your share. I have recreated the share, and it is now called "ZFS0/share0". You can use the same share we built in the last part, or create the one I have using the steps from the last part. Click on edit and you will see the following screen.



Click on Snapshots, then click the "+" sign in front of "Snapshots", and you get the following screen. Enter the name of the snapshot here; I am entering "share0_first_snap". The snapshot of the share is now taken.
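
For reference, the BUI is taking a plain ZFS snapshot under the covers. If you were managing ZFS directly from a shell instead of the appliance BUI, the equivalent would look like the sketch below. The dataset name ZFS0/share0 is an assumption based on the share name; on a real appliance the internal dataset path will differ.

# take a read-only, point-in-time snapshot of the share (assumed dataset name)
zfs snapshot ZFS0/share0@share0_first_snap

# confirm the snapshot exists
zfs list -t snapshot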



Let's modify the data on the share and then restore it from the snapshot. We already have the share mounted on node 1, i.e. rac1.rac.meditate.com, so let's go to that server and modify the data there. The share holds the RMAN backup of the RAC database GSINGH (instance GSINGH1), which runs on two nodes; we took this backup in the last post. One part is a compressed backup set and the other is an image copy of the database.


11:24 : oracle : rac1.rac.meditate.com : @GSINGH1 : /zfs/backup


$ ls -lart

total 2259216

drwxrwxrwx 3 nfsnobody bin 3 Aug 2 09:43 ..

-rw-r-----+ 1 oracle oinstall 3744768 Aug 2 09:44 0vnhl8ir_1_1

-rw-r-----+ 1 oracle oinstall 12274176 Aug 2 09:44 10nhl8is_1_1

-rw-r-----+ 1 oracle oinstall 6493696 Aug 2 09:44 11nhl8j1_1_1

-rw-r-----+ 1 oracle oinstall 192987136 Aug 2 09:47 13nhl8je_1_1

-rw-r-----+ 1 oracle oinstall 110706688 Aug 2 09:47 12nhl8je_1_1

-rw-r-----+ 1 oracle oinstall 98304 Aug 2 09:47 15nhl8os_1_1

-rw-r-----+ 1 oracle oinstall 1114112 Aug 2 09:47 14nhl8or_1_1

-rw-r-----+ 1 oracle oinstall 80384 Aug 2 09:47 16nhl8p9_1_1

-rw-r-----+ 1 oracle oinstall 33792 Aug 2 09:47 17nhl8p9_1_1

-rw-r-----+ 1 oracle oinstall 692068352 Aug 2 09:57 data_D-GSINGH_I-686335423_TS-SYSAUX_FNO-2_1bnhl98s

-rw-r-----+ 1 oracle oinstall 754982912 Aug 2 09:57 data_D-GSINGH_I-686335423_TS-SYSTEM_FNO-1_1anhl98s

-rw-r-----+ 1 oracle oinstall 131080192 Aug 2 09:58 data_D-GSINGH_I-686335423_TS-UNDOTBS1_FNO-3_1dnhl9ci

-rw-r-----+ 1 oracle oinstall 52436992 Aug 2 09:58 data_D-GSINGH_I-686335423_TS-UNDOTBS2_FNO-6_1enhl9dj

-rw-r-----+ 1 oracle oinstall 328343552 Aug 2 09:58 data_D-GSINGH_I-686335423_TS-EXAMPLE_FNO-5_1cnhl9ci

-rw-r-----+ 1 oracle oinstall 5251072 Aug 2 09:59 data_D-GSINGH_I-686335423_TS-USERS_FNO-4_1gnhl9f0

-rw-r-----+ 1 oracle oinstall 18792448 Aug 2 09:59 cf_D-GSINGH_id-686335423_1fnhl9e7

drwxr-xr-x+ 2 oracle oinstall 19 Aug 2 09:59 .

-rw-r-----+ 1 oracle oinstall 98304 Aug 2 09:59 1hnhl9f3_1_1



11:24 : oracle : rac1.rac.meditate.com : @GSINGH1 : /zfs/backup

$


Let's delete some data from this directory:

11:24 : oracle : rac1.rac.meditate.com : @GSINGH1 : /zfs/backup


$ rm data_D-GSINGH_I-686335423_TS-SYSAUX_FNO-2_1bnhl98s data_D-GSINGH_I-686335423_TS-SYSTEM_FNO-1_1anhl98s data_D-GSINGH_I-686335423_TS-UNDOTBS1_FNO-3_1dnhl9ci data_D-GSINGH_I-686335423_TS-UNDOTBS2_FNO-6_1enhl9dj data_D-GSINGH_I-686335423_TS-EXAMPLE_FNO-5_1cnhl9ci



Now confirm that we have removed the files.

11:25 : oracle : rac1.rac.meditate.com : @GSINGH1 : /zfs/backup

$ ls -lart

total 344267

drwxrwxrwx 3 nfsnobody bin 3 Aug 2 09:43 ..

-rw-r-----+ 1 oracle oinstall 3744768 Aug 2 09:44 0vnhl8ir_1_1

-rw-r-----+ 1 oracle oinstall 12274176 Aug 2 09:44 10nhl8is_1_1

-rw-r-----+ 1 oracle oinstall 6493696 Aug 2 09:44 11nhl8j1_1_1

-rw-r-----+ 1 oracle oinstall 192987136 Aug 2 09:47 13nhl8je_1_1

-rw-r-----+ 1 oracle oinstall 110706688 Aug 2 09:47 12nhl8je_1_1

-rw-r-----+ 1 oracle oinstall 98304 Aug 2 09:47 15nhl8os_1_1

-rw-r-----+ 1 oracle oinstall 1114112 Aug 2 09:47 14nhl8or_1_1

-rw-r-----+ 1 oracle oinstall 80384 Aug 2 09:47 16nhl8p9_1_1

-rw-r-----+ 1 oracle oinstall 33792 Aug 2 09:47 17nhl8p9_1_1

-rw-r-----+ 1 oracle oinstall 5251072 Aug 2 09:59 data_D-GSINGH_I-686335423_TS-USERS_FNO-4_1gnhl9f0

-rw-r-----+ 1 oracle oinstall 18792448 Aug 2 09:59 cf_D-GSINGH_id-686335423_1fnhl9e7

-rw-r-----+ 1 oracle oinstall 98304 Aug 2 09:59 1hnhl9f3_1_1

drwxr-xr-x+ 2 oracle oinstall 14 Aug 8 04:25 .



11:25 : oracle : rac1.rac.meditate.com : @GSINGH1 : /zfs/backup

$


Now let's restore the data by rolling the share back to the snapshot we took on ZFS, as follows:

On the following page, place your mouse on the line with the snapshot "share0_first_snap". You will see a rollback button; just click it.

You will get the following screen. Click "OK". We are done with the rollback, and the old files should be restored by now.
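
For reference, the BUI rollback corresponds to the ZFS rollback operation. A minimal sketch on a plain ZFS shell, using the same assumed dataset name as before; note that a rollback discards everything written after the snapshot, and needs -r if newer snapshots exist:

# roll the share back to the snapshot; changes made after it are discarded
zfs rollback ZFS0/share0@share0_first_snap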






Let's check on the server whether the files have been restored.


11:25 : oracle : rac1.rac.meditate.com : @GSINGH1 : /zfs/backup


$ ls -lart

total 2259216

drwxrwxrwx 3 nfsnobody bin 3 Aug 2 09:43 ..

-rw-r-----+ 1 oracle oinstall 3744768 Aug 2 09:44 0vnhl8ir_1_1

-rw-r-----+ 1 oracle oinstall 12274176 Aug 2 09:44 10nhl8is_1_1

-rw-r-----+ 1 oracle oinstall 6493696 Aug 2 09:44 11nhl8j1_1_1

-rw-r-----+ 1 oracle oinstall 192987136 Aug 2 09:47 13nhl8je_1_1

-rw-r-----+ 1 oracle oinstall 110706688 Aug 2 09:47 12nhl8je_1_1

-rw-r-----+ 1 oracle oinstall 98304 Aug 2 09:47 15nhl8os_1_1

-rw-r-----+ 1 oracle oinstall 1114112 Aug 2 09:47 14nhl8or_1_1

-rw-r-----+ 1 oracle oinstall 80384 Aug 2 09:47 16nhl8p9_1_1

-rw-r-----+ 1 oracle oinstall 33792 Aug 2 09:47 17nhl8p9_1_1

-rw-r-----+ 1 oracle oinstall 692068352 Aug 2 09:57 data_D-GSINGH_I-686335423_TS-SYSAUX_FNO-2_1bnhl98s

-rw-r-----+ 1 oracle oinstall 754982912 Aug 2 09:57 data_D-GSINGH_I-686335423_TS-SYSTEM_FNO-1_1anhl98s

-rw-r-----+ 1 oracle oinstall 131080192 Aug 2 09:58 data_D-GSINGH_I-686335423_TS-UNDOTBS1_FNO-3_1dnhl9ci

-rw-r-----+ 1 oracle oinstall 52436992 Aug 2 09:58 data_D-GSINGH_I-686335423_TS-UNDOTBS2_FNO-6_1enhl9dj

-rw-r-----+ 1 oracle oinstall 328343552 Aug 2 09:58 data_D-GSINGH_I-686335423_TS-EXAMPLE_FNO-5_1cnhl9ci

-rw-r-----+ 1 oracle oinstall 5251072 Aug 2 09:59 data_D-GSINGH_I-686335423_TS-USERS_FNO-4_1gnhl9f0

-rw-r-----+ 1 oracle oinstall 18792448 Aug 2 09:59 cf_D-GSINGH_id-686335423_1fnhl9e7

drwxr-xr-x+ 2 oracle oinstall 19 Aug 2 09:59 .

-rw-r-----+ 1 oracle oinstall 98304 Aug 2 09:59 1hnhl9f3_1_1



11:26 : oracle : rac1.rac.meditate.com : @GSINGH1 : /zfs/backup

$

And the files are back, so the restore is complete.


Let's clone our share now and mount the clone on the other RAC node. Click on "Shares".



Click on edit share and you will see the following screen.




Click on Snapshots, hover the mouse over the snapshot, and click the centre option to make a clone; you will see the following window. Enter the name of the clone as "share0_first_clone" and click Apply.


You will see the following screen, and now if you click on "Shares" you will see two mountpoints: one is the clone and one is the real mountpoint.
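
Under the covers, a clone is just a writable dataset created on top of the read-only snapshot. On a plain ZFS shell, the equivalent of the BUI step above would be the following sketch (same assumed dataset name as before):

# create a writable clone from the read-only snapshot
zfs clone ZFS0/share0@share0_first_snap ZFS0/share0_first_clone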



You will see that the size of the clone is very small, because a clone only stores the difference between the base snapshot and itself. As you make changes on the clone, its size will keep increasing.
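
This is copy-on-write behaviour: the clone shares all unmodified blocks with the base snapshot, so at first it consumes almost no space of its own. On a plain ZFS shell you could watch the divergence with zfs list (same assumed dataset names); the USED column for the clone grows only as its data diverges from the snapshot:

# USED shows space unique to each dataset; REFER shows the data it points at
zfs list -o name,used,refer -r ZFS0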





On Node 2:


To see which mountpoints are exported from the ZFS appliance to this node, use the following command:

"showmount -e 192.168.1.195"


11:32 : oracle : rac2.rac.meditate.com : @GSINGH2 : /home/oracle

$ su -

Password:

[root@rac2 ~]# showmount -e 192.168.1.195

Export list for 192.168.1.195:

/export/share0_first_clone (everyone)

/export/share0 (everyone)

[root@rac2 ~]#

===========

Let's mount the clone using the following command.


[root@rac2 ~]# mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 192.168.1.195:/export/share0_first_clone /zfs

[root@rac2 ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

84G 11G 69G 14% /

/dev/sda1 99M 13M 82M 14% /boot

tmpfs 754M 200M 554M 27% /dev/shm

192.168.1.195:/export/share0_first_clone

9.8G 2.2G 7.7G 23% /zfs

[root@rac2 ~]#

===============
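
Note that the mount command above is one-shot and will not survive a reboot of node 2. If you want the clone mounted persistently, a sketch of the matching /etc/fstab entry (same options as the manual mount) would be:

192.168.1.195:/export/share0_first_clone /zfs nfs hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 0 0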

The clone is mounted on node 2. Now, as you keep modifying the clone on node 2, its size will keep growing, as you can see in the screenshot below.




You can see the size increasing.



That's it for this post!

In the next post we will create a database from the image copy on the clone on node 2, and will drop and recreate the database on the clone. We will see the size of the clone increase as we keep using that database.


SHALOK: One who renounces God the Giver, and attaches himself to other affairs: O Nanak, he shall never succeed. Without the Name, he shall lose his honor. ||1||
