Tuesday, July 6, 2010

Install Oracle RAC 11gR2 on VMware with Windows 7 64-bit as host OS and Linux as guest (Part 9: Delete a node and an instance from an existing 11gR2 cluster database)

One can be one with Him only by His grace



Index of all the posts of Gurpartap Singh's Blog

Here we will discuss the steps to permanently delete a node from a 3-node 11gR2 cluster and then delete its instance. I will delete node 1 from the cluster and afterwards remove its instance "simar1".
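Before starting, it helps to record the current layout of the cluster and the database instances. A quick check as the oracle user, assuming the database name "simar" used throughout this series:

$ olsnodes -s -t
$ srvctl config database -d simar
$ srvctl status database -d simar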


Check the following logs before eviction:

Node 2 log, before eviction:

oracle : rac2.rac.meditate.com : @crs : /u02/app/11.2.0.1/grid/log/rac2
$ tail -f alertrac2.log
2010-07-06 15:33:22.103
[ctssd(4976)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 16:02:10.720
[ctssd(4976)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 16:02:26.727
[ctssd(4976)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 18:23:57.553
[ctssd(4976)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 18:24:13.559
[ctssd(4976)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.



Node 3 log, before eviction:

[oracle@rac3 rac3]$ tail -f alertrac3.log
2010-07-06 18:09:50.626
[ctssd(4978)]CRS-2408:The clock on host rac3 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 18:31:35.103
[ctssd(4978)]CRS-2408:The clock on host rac3 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 18:31:51.107
[ctssd(4978)]CRS-2408:The clock on host rac3 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 18:33:35.182
[ctssd(4978)]CRS-2408:The clock on host rac3 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 18:33:51.189
[ctssd(4978)]CRS-2408:The clock on host rac3 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.


In the logs above everything looks normal.

Now shut down node 1: log in as root and shut down the node with the following command:
"shutdown -h now"




Node 2 log after eviction; note that node 1 has been evicted.


Tail on log /u02/app/11.2.0.1/grid/log/rac2/alertrac2.log

[ctssd(4976)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 20:29:52.823
[ctssd(4976)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 20:30:09.118
[ctssd(4976)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 20:39:37.352
[ctssd(4976)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 20:39:53.358
[ctssd(4976)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 20:41:37.075
[cssd(4903)]CRS-1611:Network communication with node rac1 (1) missing for 75% of timeout interval. Removal of this node from cluster in 7.060 seconds
2010-07-06 20:41:41.127
[cssd(4903)]CRS-1610:Network communication with node rac1 (1) missing for 90% of timeout interval. Removal of this node from cluster in 3.000 seconds
2010-07-06 20:41:44.138
[cssd(4903)]CRS-1632:Node rac1 is being removed from the cluster in cluster incarnation 173452718
2010-07-06 20:41:49.198
[cssd(4903)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac2 rac3 .
2010-07-06 20:41:50.825
[ctssd(4976)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac3.
2010-07-06 20:41:52.816
[ctssd(4976)]CRS-2406:The Cluster Time Synchronization Service timed out on host rac2. Details in /u02/app/11.2.0.1/grid/log/rac2/ctssd/octssd.log.
2010-07-06 20:42:45.637
[/u02/app/11.2.0.1/grid/bin/orarootagent.bin(5478)]CRS-5822:Agent '/u02/app/11.2.0.1/grid/bin/orarootagent_root' disconnected from server. Details at (:CRSAGF00117:) in /u02/app/11.2.0.1/grid/log/rac2/agent/crsd/orarootagent_root/orarootagent_root.log.
2010-07-06 20:42:57.363
[ctssd(4976)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 20:43:13.419
[ctssd(4976)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 20:43:29.965
[/u02/app/11.2.0.1/grid/bin/orarootagent.bin(4875)]CRS-5818:Aborted command 'check for resource: ora.crsd 1 1' for resource 'ora.crsd'. Details at (:CRSAGF00113:) in /u02/app/11.2.0.1/grid/log/rac2/agent/ohasd/orarootagent_root/orarootagent_root.log.
2010-07-06 20:43:37.970
[ctssd(4976)]CRS-2408:The clock on host rac2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.




Node 3 log after eviction; note that node 1 has been evicted.


Tail on log /u02/app/11.2.0.1/grid/log/rac3/alertrac3.log

[ctssd(4978)]CRS-2408:The clock on host rac3 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 20:38:42.312
[ctssd(4978)]CRS-2408:The clock on host rac3 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 20:38:58.316
[ctssd(4978)]CRS-2408:The clock on host rac3 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2010-07-06 20:41:37.204
[cssd(4904)]CRS-1611:Network communication with node rac1 (1) missing for 75% of timeout interval. Removal of this node from cluster in 6.930 seconds
2010-07-06 20:41:41.230
[cssd(4904)]CRS-1610:Network communication with node rac1 (1) missing for 90% of timeout interval. Removal of this node from cluster in 2.910 seconds
2010-07-06 20:41:49.198
[cssd(4904)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac2 rac3 .
2010-07-06 20:41:50.156
[ctssd(4978)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac3.






Now on node 2 execute the following command to see which nodes are active:

oracle : rac2.rac.meditate.com : @crs : /u02/app/11.2.0.1/grid/log/rac2
$ olsnodes -s -t
rac1 Inactive Unpinned
rac2 Active Unpinned
rac3 Active Unpinned

oracle : rac2.rac.meditate.com : @crs : /u02/app/11.2.0.1/grid/log/rac2
$

So we can see that node 1 is inactive, which is what we wanted.




Now delete the node using the following command on node 2:

oracle : rac2.rac.meditate.com : @crs : /u02/app/11.2.0.1/grid/log/rac2
$ crsctl delete node -n rac1
CRS-4661: Node rac1 successfully deleted.

oracle : rac2.rac.meditate.com : @crs : /u02/app/11.2.0.1/grid/log/rac2
$

Node removed successfully.
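Note: depending on the environment, "crsctl delete node" may need to be run as the root user from a node that is to remain in the cluster; a sketch, assuming the same Grid home:

# /u02/app/11.2.0.1/grid/bin/crsctl delete node -n rac1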


Now remove the node from oraInventory so that future patching applies only to the remaining nodes.

Now, as oracle, cd to:
cd /u02/app/11.2.0.1/grid/oui/bin

and update oraInventory by running "runInstaller" with the updateNodeList option:

runInstaller -updateNodeList ORACLE_HOME=/u02/app/11.2.0.1/grid "CLUSTER_NODES={rac2,rac3}" CRS=true


oracle : rac2.rac.meditate.com : @crs : /u02/app/11.2.0.1/grid/oui/bin
$ runInstaller -updateNodeList ORACLE_HOME=/u02/app/11.2.0.1/grid "CLUSTER_NODES={rac2,rac3}" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 3505 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

oracle : rac2.rac.meditate.com : @crs : /u02/app/11.2.0.1/grid/oui/bin
$
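Per the standard node-deletion procedure, the same inventory update is usually repeated on each remaining node, so on rac3 as well; a sketch, assuming the same Grid home path:

$ cd /u02/app/11.2.0.1/grid/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=/u02/app/11.2.0.1/grid "CLUSTER_NODES={rac2,rac3}" CRS=true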


So, the node has been removed successfully from oraInventory.

The node is gone now; verify with crs_stat and olsnodes:



oracle : rac2.rac.meditate.com : @crs : /u02/app/11.2.0.1/grid/oui/bin
$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora.DATA1.dg ora....up.type 0/5 0/ ONLINE ONLINE rac2
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE rac2
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE rac3
ora.RECV1.dg ora....up.type 0/5 0/ ONLINE ONLINE rac2
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE rac2
ora.eons ora.eons.type 0/3 0/ ONLINE ONLINE rac2
ora.gsd ora.gsd.type 0/5 0/ ONLINE ONLINE rac2
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE rac2
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE rac2
ora.rac1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac3
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE rac2
ora....C2.lsnr application 0/5 0/0 ONLINE ONLINE rac2
ora.rac2.gsd application 0/5 0/0 ONLINE ONLINE rac2
ora.rac2.ons application 0/3 0/0 ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac2
ora....SM3.asm application 0/5 0/0 ONLINE ONLINE rac3
ora....C3.lsnr application 0/5 0/0 ONLINE ONLINE rac3
ora.rac3.gsd application 0/5 0/0 ONLINE ONLINE rac3
ora.rac3.ons application 0/3 0/0 ONLINE ONLINE rac3
ora.rac3.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac3
ora....ry.acfs ora....fs.type 0/5 0/ ONLINE ONLINE rac2
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE rac3
ora.simar.db ora....se.type 0/2 0/1 ONLINE ONLINE rac2

oracle : rac2.rac.meditate.com : @crs : /u02/app/11.2.0.1/grid/oui/bin
$


and

oracle : rac2.rac.meditate.com : @crs : /u02/app/11.2.0.1/grid/oui/bin
$ olsnodes -s -t
rac2 Active Unpinned
rac3 Active Unpinned

oracle : rac2.rac.meditate.com : @crs : /u02/app/11.2.0.1/grid/oui/bin
$



Now remove the instance using dbca. Run ". oraenv" and set the SID to "simar2", then execute "dbca" and you will get the following:

Select "Oracle real application cluster database".


Select "Instance Management" and click next.



Select "Delete an instance" and click next:




Select your database name and enter username "sys" and its password and click next.




select instance "rac1" and click next:



A warning will appear because node "rac1" is down; don't worry about it and click "OK".






Don't worry about it and click "OK".




and you will see the following:






Since node "rac1" is down, so don't worry and click "continue".
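For reference, the same instance deletion can also be done non-interactively with dbca in silent mode (run from the database home environment); a minimal sketch, assuming database "simar", instance "simar1" on node rac1, and a placeholder sys password:

$ dbca -silent -deleteInstance -nodeList rac1 -gdbName simar -instanceName simar1 -sysDBAUserName sys -sysDBAPassword <sys_password>

Afterwards the remaining instances can be checked with:

$ srvctl config database -d simar
$ srvctl status database -d simar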


6 comments:

  1. Hi, G.Singh!
    Thanks for your efforts.
    You are really doing great job.
    I am going to install and configure using your data. If needed I will contact you.
    Thanks.

  2. Sure, post your questions here and good luck and thanks for your comments.

    Regards
    Gurpartap Singh

    1. Hi Gurpartap,
      Excellent posts. I have difficulty in finding CentOS 5.4, 5.8 is available but no 5.4. Can I use 5.8 instead?

      I don't have a router, do we really need a router or is there a way to avoid the router?

      Thanks
      Muhammad

  3. Hi Muhammad

    Sorry for the late reply. Yes, you can use other versions of CentOS as well. You may or may not see slightly different errors during the install, but all the install steps will remain the same.

    Well, I have a home network with a few PCs and I use a router, but yes, if you have just one laptop we can do this without a router as well.

    Regards
    Gurpartap Singh

  4. Hello Gurpartap Singh,

    I have done all the setup as per your steps and I successfully completed RAC setup in my laptop. Thank you so much for your efforts and really appreciate your blog.

    A small question.. ?
    I have done the complete setup in my home network. Now if I take my laptop to a different place or another home network, will this setup still work? If not, exactly what changes do I need to make for that network? This question might be silly, but I want to know more about the networking side.

    Once again thank you so much for the blog. - GOD BLESS YOU

    Best Wishes,
    Suresh

  5. That's awesome!

    Thanks for the blessings !

    Yes, when you move your computer to another network the MAC addresses of the virtual network cards change. You just need to remove the machine's old virtual network cards, re-add them with the same IPs as we did earlier, and restart the network.

    As root execute on console:
    system-config-network

    Change the IPs here and save, and then as root run
    "service network restart"

    This will restart the network; now reboot the virtual machine. Do this on both machines.


    and do as much meditation as you can and pray for me too.

    Regards,
    Gurpartap Singh
