
How to Modify the Private Network (Subnet ID/Netmask and Private IP) in Oracle Clusterware

November 14, 2013

Generally, choose a private IP address from one of the RFC 1918 private ranges, for example 10.x.x.x or 192.168.x.x, resolved through the /etc/hosts file:

Private     10.0.0.1     /etc/hosts file

Private     10.0.0.2     /etc/hosts file

Class A: 10.0.0.0 to 10.255.255.255

Class B: 172.16.0.0 to 172.31.255.255

Class C: 192.168.0.0 to 192.168.255.255

http://www.subnet-calculator.com/

http://www.subnetmask.info/

http://jodies.de/ipcalc

http://www.subnetonline.com/pages/subnet-calculators/ip-subnet-calculator.php

Or Google search – “subnet calculator”

Class A addresses have their first octet in the range 1 to 126 (binary address begins with 0).

Class B addresses have their first octet in the range 128 to 191 (binary address begins with 10).

Class C addresses have their first octet in the range 192 to 223 (binary address begins with 110).
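If no calculator is handy, the subnet ID can be derived directly: AND each octet of the IP address with the corresponding octet of the netmask. A minimal sketch in awk (nawk on Solaris; the modulo trick emulates the bitwise AND for contiguous masks such as 255.255.0.0, and the sample IP/mask are just examples):

echo "192.168.5.2 255.255.0.0" | nawk '{
  split($1, ip, "."); split($2, nm, ".")
  for (i = 1; i <= 4; i++) o[i] = ip[i] - (ip[i] % (256 - nm[i]))   # AND via modulo
  printf "%d.%d.%d.%d\n", o[1], o[2], o[3], o[4]
}'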

==================================================================================

Let’s change the netmask and private IP on both nodes.

From:

Node1 Priv_IP: 192.168.5.1  Netmask: 255.255.0.0

Node2 Priv_IP: 192.168.5.2  Netmask: 255.255.0.0

To:

Node1 Priv_IP: 10.1.1.1  Netmask: 255.0.0.0

Node2 Priv_IP: 10.1.1.2  Netmask: 255.0.0.0

This is a major change to both the subnet and the netmask. Starting with 11gR2, the private network configuration is stored not only in the OCR but also in the GPnP profile, so it must be changed through Clusterware rather than only at the OS level.
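The in-use profile can also be dumped with the gpnptool utility shipped in the Grid home (a quick sketch; the output is one long line of XML):

$GRID_HOME/bin/gpnptool get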

==================================================================================

(1) As the grid user, back up profile.xml on all cluster nodes before proceeding:

Backup /u01/app/11.2.0.1/grid/gpnp/<hostname>/profiles/peer/profile.xml

ls -lh /u01/app/11.2.0.1/grid/gpnp/`hostname`/profiles/peer/profile.xml

cd /u01/app/11.2.0.1/grid/gpnp/`hostname`/profiles/peer/

ls -lthr

cp -pr profile.xml profile.xml.bkp.$$.old

ls -ltrh *.old
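With passwordless ssh configured for the grid user, the backup can be driven from a single node; a sketch, assuming the node names from this environment:

for node in mgracsolsrv64bit1 mgracsolsrv64bit2
do
  ssh $node 'cd /u01/app/11.2.0.1/grid/gpnp/`hostname`/profiles/peer && cp -p profile.xml profile.xml.bkp.$$.old'
done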

 

 

profile.xml stores each interface name and its subnet ID:

…..

<gpnp:Network-Profile>

<gpnp:HostNetwork id="gen" HostName="*">

<gpnp:Network id="net1" IP="192.168.56.0" Adapter="e1000g0" Use="public"/>

<gpnp:Network id="net2" IP="192.168.0.0" Adapter="e1000g1" Use="cluster_interconnect"/>

…..

(2) Ensure Oracle Clusterware is running on ALL nodes in the cluster:

/u01/app/11.2.0.1/grid/bin/crsctl stat res -t

/u01/app/11.2.0.1/grid/bin/crsctl check crs

/u01/app/11.2.0.1/grid/bin/crsctl check cluster -all
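olsnodes also gives a compact per-node summary (-n node number, -s active/inactive status, -t pinned/unpinned):

/u01/app/11.2.0.1/grid/bin/olsnodes -n -s -t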

(3) As grid user, get the existing information.

(a) For example:

# $GRID_HOME/bin/oifcfg getif

e1000g0  192.168.56.0  global  public

e1000g1  192.168.0.0  global  cluster_interconnect

(b) If the interface is available on the server, the subnet address can be identified with:

# $GRID_HOME/bin/oifcfg iflist

(c) netstat -i

Name  Mtu  Net/Dest      Address        Ipkts  Ierrs Opkts  Oerrs Collis Queue

lo0   8232 loopback      localhost      6983   0     6983   0     0      0

e1000g0 1500 mgracsolsrv64bit2 mgracsolsrv64bit2 1434   0     1083   0     0      0

e1000g1 1500 mgracsolsrv64bit2-priv mgracsolsrv64bit2-priv 15886  0     11124  0     0      0

 

netstat -in

Name  Mtu  Net/Dest      Address        Ipkts  Ierrs Opkts  Oerrs Collis Queue

lo0   8232 127.0.0.0     127.0.0.1      7067   0     7067   0     0      0

e1000g0 1500 192.168.56.0  192.168.56.21  1440   0     1093   0     0      0

e1000g1 1500 192.168.0.0   192.168.5.2    16177  0     11320  0     0      0

(4) Add the new cluster_interconnect information, from one node:

# $GRID_HOME/bin/oifcfg setif -global e1000g1/10.0.0.0:cluster_interconnect
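For reference, the general form of the command (from oifcfg -help) is:

oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}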

 

 

 

 

 

Check the oifcfg client log:

 

# tail -f /u01/app/11.2.0.1/grid/log/mgracsolsrv64bit2/client/oifcfg.log

2013-11-13 18:12:46.540: [    GPnP][1]clsgpnpx_prfGetNNetIntf: [at clsgpnpx.c:2313] Result: (0) CLSGPNP_OK. returning net {e1000g1,10.0.0.0,cluster_interconnect} 1/1, copyout=1

……

# $GRID_HOME/bin/oifcfg getif

e1000g0  192.168.56.0  global  public

e1000g1  192.168.0.0  global  cluster_interconnect            (old)

e1000g1  10.0.0.0  global  cluster_interconnect               (new)

# $GRID_HOME/bin/oifcfg iflist

e1000g0  192.168.56.0

e1000g1  192.168.0.0

mgracsolsrv64bit2:/export/home/grid: ifconfig -a

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1

inet 127.0.0.1 netmask ff000000

e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2

inet 192.168.56.21 netmask ffffff00 broadcast 192.168.56.255

e1000g0:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 2

inet 192.168.56.81 netmask ffffff00 broadcast 192.168.56.255

e1000g0:3: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 2

inet 192.168.56.82 netmask ffffff00 broadcast 192.168.56.255

e1000g0:5: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 2

inet 192.168.56.31 netmask ffffff00 broadcast 192.168.56.255

e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3

inet 192.168.5.2 netmask ffff0000 broadcast 192.168.255.255

Note: if this adds a second private network rather than replacing the existing one, ensure the MTU size of both interfaces is the same; otherwise instance startup will report an error.
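A quick way to compare MTUs across interfaces before adding the second network (a sketch using the interface naming from this environment):

netstat -in | nawk 'NR == 1 || $1 ~ /^e1000g/ {print $1, $2}'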

(5) Verify the change:

mgracsolsrv64bit2:/export/home/grid: $GRID_HOME/bin/oifcfg getif

e1000g0  192.168.56.0  global  public

e1000g1  192.168.0.0  global  cluster_interconnect

e1000g1  10.0.0.0  global  cluster_interconnect

 

 

 

cat /u01/app/11.2.0.1/grid/gpnp/`hostname`/profiles/peer/profile.xml

….

<gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*">

<gpnp:Network id="net1" IP="192.168.56.0" Adapter="e1000g0" Use="public"/>

<gpnp:Network id="net2" IP="192.168.0.0" Adapter="e1000g1" Use="cluster_interconnect"/>

<gpnp:Network id="net3" Adapter="e1000g1" IP="10.0.0.0" Use="cluster_interconnect"/>

….

(6) Shut down Oracle Clusterware on all nodes and disable it, as the root user.

 

Check the current CRS autostart status:

$ more /var/opt/oracle/scls_scr/`hostname`/root/*

::::::::::::::

/var/opt/oracle/scls_scr/mgracsolsrv64bit2/root/crsstart

::::::::::::::

enable

::::::::::::::

/var/opt/oracle/scls_scr/mgracsolsrv64bit2/root/ohasdrun

::::::::::::::

restart

::::::::::::::

/var/opt/oracle/scls_scr/mgracsolsrv64bit2/root/ohasdstr

::::::::::::::

enable

Stop CRS as root

# /u01/app/11.2.0.1/grid/bin/crsctl stop crs

Now check the status again:

# more /var/opt/oracle/scls_scr/`hostname`/root/ohasdrun

stop

Disable CRS

# more /var/opt/oracle/scls_scr/`hostname`/root/ohasdstr

enable

# /u01/app/11.2.0.1/grid/bin/crsctl disable crs

CRS-4621: Oracle High Availability Services autostart is disabled.

# more /var/opt/oracle/scls_scr/`hostname`/root/ohasdstr

disable

Note: the enable/disable autostart status of the Oracle Clusterware daemons can be checked under /var/opt/oracle/scls_scr/`hostname`/root/.

 

(7) Make the network configuration change at the OS level as required, and ensure the interface is available on all nodes after the change.

 

Check the current IPs at the OS level:

# netstat -i | grep priv | awk '{print $1 " " $4}'

e1000g1 mgracsolsrv64bit2-priv

# netstat -in | egrep 'Name|e1000g1' | awk '{print $1 " " $3 " " $4}'

Name Net/Dest Address

e1000g1 192.168.0.0 192.168.5.2

$ ifconfig -a | sed -n -e '/e1000g1/,/broadcast/p'

e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3

inet 192.168.5.2 netmask ffff0000 broadcast 192.168.255.255

As root, change the IP in the OS files below on both nodes:

(i) /etc/hosts

(ii) /etc/hostname.<interface>

ls -ltr /etc/hostname.*

(iii) cat /etc/inet/netmasks

192.168.56.0  255.255.255.0

192.168.0.0  255.255.0.0

Node 1 and 2

vi /etc/hosts

#PRIVATE

10.1.1.1  mgracsolsrv64bit1-priv mgracsolsrv64bit1-priv.mgdom.com

10.1.1.2  mgracsolsrv64bit2-priv mgracsolsrv64bit2-priv.mgdom.com

Node 1

vi /etc/hostname.e1000g1

10.1.1.1 netmask 255.0.0.0

Node 2

vi /etc/hostname.e1000g1

10.1.1.2 netmask 255.0.0.0

Node 1 and 2

vi /etc/inet/netmasks

10.0.0.0  255.0.0.0

 

 

 

Node 1 and 2

cat /etc/hostname.e1000g1

cat /etc/hosts

cat /etc/inet/netmasks

Change the IP address for the current session as root (this takes effect immediately; the /etc/hostname.* files apply at boot):

Node 1:

# ifconfig -a | sed -n -e '/e1000g1/,/broadcast/p'

# ifconfig e1000g1 10.1.1.1 netmask 255.0.0.0 broadcast + up

# ifconfig -a | sed -n -e '/e1000g1/,/broadcast/p'

e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3

inet 10.1.1.1 netmask ff000000 broadcast 10.255.255.255

# ifconfig -a

Node 2:

# ifconfig -a | sed -n -e '/e1000g1/,/broadcast/p'

# ifconfig e1000g1 10.1.1.2 netmask 255.0.0.0 broadcast + up

# ifconfig -a | sed -n -e '/e1000g1/,/broadcast/p'

# ifconfig -a

(8) SSH/Ping from Both Nodes

$ ping <private hostname>

ping -s mgracsolsrv64bit1-priv 1 3

ping -s mgracsolsrv64bit2-priv 1 3

ping -s 10.1.1.1 1 3

ping -s 10.1.1.2 1 3

ssh 10.1.1.1 (from both nodes) => this adds the SSH RSA host keys to /export/home/grid/.ssh/known_hosts

ssh 10.1.1.2 (both nodes)
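If the -priv hostnames already had known_hosts entries under the old addresses, ssh may warn about changed host identification; the stale entries can be cleared first (a sketch, assuming OpenSSH's ssh-keygen is available):

ssh-keygen -R mgracsolsrv64bit1-priv

ssh-keygen -R mgracsolsrv64bit2-priv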

(9) Enable and restart Oracle Clusterware on all nodes as the root user. Tail the logs to watch the startup:

tail -f /u01/app/11.2.0.1/grid/log/`hostname`/alertmgracsolsrv64bit1.log

tail -f /u01/app/11.2.0.1/grid/log/`hostname`/ohasd/ohasd.log

tail -f /u01/app/11.2.0.1/grid/log/`hostname`/crsd/crsd.log

tail -f /u01/app/11.2.0.1/grid/log/`hostname`/cssd/ocssd.log

Enable CRS

# more /var/opt/oracle/scls_scr/`hostname`/root/*

# /u01/app/11.2.0.1/grid/bin/crsctl enable crs

CRS-4622: Oracle High Availability Services autostart is enabled.

# more /var/opt/oracle/scls_scr/`hostname`/root/ohasdstr

 

Start CRS

# /u01/app/11.2.0.1/grid/bin/crsctl start crs

CRS-4123: Oracle High Availability Services has been started.

# tail -f /u01/app/11.2.0.1/grid/log/`hostname`/alertmgracsolsrv64bit2.log

[ohasd(25263)]CRS-2112:The OLR service started on node mgracsolsrv64bit2.

2013-11-13 19:57:04.614

..

[cssd(25359)]CRS-1713:CSSD daemon is started in clustered mode

2013-11-13 19:57:49.834

[cssd(25359)]CRS-1601:CSSD Reconfiguration complete. Active nodes are mgracsolsrv64bit1 mgracsolsrv64bit2 .

2013-11-13 19:57:57.199

# more /var/opt/oracle/scls_scr/`hostname`/root/*

Check that the grid processes are coming up:

ps -ef |grep d.bin|grep -v grep |wc -l; ps -ef |grep pmon |grep -v grep; ps -ef |grep tns|grep -v grep

/u01/app/11.2.0.1/grid/bin/crsctl stat res -t

/u01/app/11.2.0.1/grid/bin/crsctl check crs

/u01/app/11.2.0.1/grid/bin/crsctl check cluster -all

Verify the following on both nodes:

# /u01/app/11.2.0.1/grid/bin/olsnodes -n -i -l -p

mgracsolsrv64bit2       2       10.1.1.2        mgracsolsrv64bit2-vip

# /u01/app/11.2.0.1/grid/bin/oifcfg getif

e1000g0  192.168.56.0  global  public

e1000g1  192.168.0.0  global  cluster_interconnect

e1000g1  10.0.0.0  global  cluster_interconnect

# /u01/app/11.2.0.1/grid/bin/oifcfg iflist -p -n

e1000g0  192.168.56.0  PUBLIC  255.255.255.0

e1000g1  10.0.0.0  PRIVATE  255.0.0.0

# netstat -in | egrep 'Name|e1000g1' | awk '{print $1 " " $3 " " $4}'

Name Net/Dest Address

e1000g1 10.0.0.0 10.1.1.2

mgracsolsrv64bit2:/export/home/grid: netstat -i

Name  Mtu  Net/Dest      Address        Ipkts  Ierrs Opkts  Oerrs Collis Queue

lo0   8232 loopback      localhost      50086  0     50086  0     0      0

e1000g0 1500 mgracsolsrv64bit2 mgracsolsrv64bit2 12993  0     12089  0     0      0

e1000g1 1500 arpanet       mgracsolsrv64bit2-priv 74688  0     91327  0     0      0

mgracsolsrv64bit2:/export/home/grid: netstat -in

Name  Mtu  Net/Dest      Address        Ipkts  Ierrs Opkts  Oerrs Collis Queue

lo0   8232 127.0.0.0     127.0.0.1      50095  0     50095  0     0      0

e1000g0 1500 192.168.56.0  192.168.56.21  12998  0     12095  0     0      0

e1000g1 1500 10.0.0.0      10.1.1.2       74727  0     91370  0     0      0

(10) Remove the old interface; execute from any one node.

$ oifcfg delif -global <if_name>[/<subnet>]

e.g.:

# echo $GRID_HOME

/u01/app/11.2.0.1/grid

Check that the old subnet still exists in the profile:

# cat $GRID_HOME/gpnp/`hostname`/profiles/peer/profile.xml

Delete old interface:

# $GRID_HOME/bin/oifcfg delif -global e1000g1/192.168.0.0

(11)  Verify the changes:

# $GRID_HOME/bin/oifcfg getif                                ==> verify the old subnet is removed

e1000g0  192.168.56.0  global  public

e1000g1  10.0.0.0  global  cluster_interconnect

# $GRID_HOME/bin/oifcfg iflist -p -n

e1000g0  192.168.56.0  PUBLIC  255.255.255.0

e1000g1  10.0.0.0  PRIVATE  255.0.0.0

Verify the old subnet no longer exists in the profile:

# cat $GRID_HOME/gpnp/`hostname`/profiles/peer/profile.xml

Do a clean stop/start as root:

/u01/app/11.2.0.1/grid/bin/crsctl stop cluster -all

/u01/app/11.2.0.1/grid/bin/crsctl start cluster -all
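Once the instances are back up, the interconnect each instance actually registered can be cross-checked from the database side; a sketch, run as SYSDBA (the quoted heredoc keeps the shell from expanding gv$):

sqlplus -s / as sysdba <<'EOF'
set linesize 120
column name format a10
column ip_address format a16
select inst_id, name, ip_address, is_public, source from gv$cluster_interconnects;
EOF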

References:

How to Modify Private Network Information in Oracle Clusterware (Doc ID 283684.1)

