Oracle RAC installation on Solaris SPARC 64-bit

A few weeks back I did a 2-node Oracle RAC installation.

The machines were Solaris 10 SPARC 64-bit (Sun-Fire-T2000).
The shared storage was a NetApp FAS2050.
Even though Solaris 10 uses resource controls, the kernel parameters were added in /etc/system (Metalink Note 367442.1).
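For reference, the 10gR2 entries in /etc/system look roughly like the sketch below; the values here are only illustrative, take the exact ones from the note. A reboot is needed for /etc/system changes to take effect.

set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=256
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmni=100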
The group OINSTALL and the user ORAPRD were created on both nodes.
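The UID and GID values below match the id output shown further down; the home directory and shell are my assumptions:

groupadd -g 300 oinstall
groupadd -g 301 dba
useradd -u 300 -g oinstall -G dba -m -d /export/home/oraprd -s /usr/bin/bash oraprd
passwd oraprd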

A few network parameters were tuned in /etc/rc2.d/S99nettune so they are reapplied at every boot:

bash-3.00# more /etc/rc2.d/S99nettune
#!/bin/sh
# Disable IP forwarding and source-routed packets
ndd -set /dev/ip ip_forward_src_routed 0
ndd -set /dev/ip ip_forwarding 0

# TCP listen queues, buffers and retransmission intervals
ndd -set /dev/tcp tcp_conn_req_max_q 16384
ndd -set /dev/tcp tcp_conn_req_max_q0 16384
ndd -set /dev/tcp tcp_xmit_hiwat 400000
ndd -set /dev/tcp tcp_recv_hiwat 400000
ndd -set /dev/tcp tcp_cwnd_max 2097152
ndd -set /dev/tcp tcp_ip_abort_interval 60000
ndd -set /dev/tcp tcp_rexmit_interval_initial 4000
ndd -set /dev/tcp tcp_rexmit_interval_max 10000
ndd -set /dev/tcp tcp_rexmit_interval_min 3000
ndd -set /dev/tcp tcp_max_buf 4194304
ndd -set /dev/tcp tcp_maxpsz_multiplier 10

# Oracle required: UDP send/receive buffers for the interconnect
ndd -set /dev/udp udp_recv_hiwat 65535
ndd -set /dev/udp udp_xmit_hiwat 65535

Checked that /etc/system is readable by ORAPRD (otherwise the RDBMS installation will fail):

-rw-r--r--   1 root     root        2561 Apr 17 16:03 /etc/system
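If it is not readable, the usual mode can be restored with:

chmod 644 /etc/system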

Checked the system configuration on both nodes.

For RAM:
/usr/sbin/prtconf | grep "Memory size"
Memory size: 8064 Megabytes

For swap:
/usr/sbin/swap -s
total: 4875568k bytes allocated + 135976k reserved = 5011544k used, 9800072k available

For /tmp:
df -h /tmp
Filesystem             size   used  avail capacity  Mounted on
swap                   9.4G    31M   9.4G     1%    /tmp

For the OS:
/bin/isainfo -kv
64-bit sparcv9 kernel modules

For the user (both the UID and GID of oraprd should be the same on both nodes):
id -a
uid=300(oraprd) gid=300(oinstall) groups=300(oinstall),301(dba),503(tms),504(mscat),102(dwh)

The user nobody should exist:
id -a nobody
uid=60001(nobody) gid=60001(nobody) groups=60001(nobody)

I had the entries below in /etc/hosts on both nodes:
cat /etc/hosts
#Public:
3.208.169.203 myjpsuolicdbt01 myjpsuolicdbt01ipmp0 loghost
3.208.169.207 myjpsuolicdbt02 myjpsuolicdbt02ipmp0 loghost

#Private:
10.47.2.82 myjpsuolicdbt01ipmp1 # e1000g1 - used while installing the cluster
10.47.2.85 myjpsuolicdbt02ipmp1 # e1000g1 - used while installing the cluster

10.47.2.76 myjpsuolicdbt01ipmp2 # e1000g0
10.47.2.79 myjpsuolicdbt02ipmp2 # e1000g0

#Vip:
3.208.169.202 myjpsuolicdbtv01
3.208.169.206 myjpsuolicdbtv02

All the interfaces had their IPMP groups.
Confirmed that the interface names of both the private and public networks were the same across the nodes:
e1000g3 was the public interface on both nodes.
e1000g0 and e1000g1 were the private interfaces on both nodes.
- I had two interfaces for the private interconnect, of which I used e1000g1 during the cluster installation.
- The other one was brought into play later via the init.ora parameter cluster_interconnects (see the sketch below).
- The host names for e1000g1 on each node were myjpsuolicdbt01ipmp1 and myjpsuolicdbt02ipmp1.
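A sketch of the corresponding spfile/init.ora entries (the addresses are the e1000g1 and e1000g0 IPs summarized after the ifconfig output below; treat this as illustrative, not the exact lines I used):

SHCL1DR1.cluster_interconnects='10.47.2.82:10.47.2.76'
SHCL1DR2.cluster_interconnects='10.47.2.85:10.47.2.79'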

Below is the 'ifconfig -a' output from Node1:

myjpsuolicdbt01 [SHCL1DR1]$ ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.47.2.76 netmask ffffffe0 broadcast 10.47.2.95
groupname ipmp2
e1000g0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 2
inet 10.47.2.77 netmask ffffffe0 broadcast 10.47.2.95
e1000g0:2: flags=9040842<BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 2
inet 10.47.2.77 netmask ff000000 broadcast 10.255.255.255
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.47.2.82 netmask ffffffe0 broadcast 10.47.2.95
groupname ipmp1
e1000g1:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
inet 10.47.2.83 netmask ffffffe0 broadcast 10.47.2.95
e1000g1:2: flags=9040842<BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
inet 10.47.2.83 netmask ff000000 broadcast 10.255.255.255
e1000g2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 10.47.2.11 netmask ffffffc0 broadcast 10.47.2.63
groupname ipmp3
e1000g2:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
inet 10.47.2.12 netmask ffffffc0 broadcast 10.47.2.63
e1000g2:2: flags=9040842<BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
inet 10.47.2.12 netmask ff000000 broadcast 10.255.255.255
e1000g3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 5
inet 3.208.169.203 netmask ffffffc0 broadcast 3.208.169.255
groupname ipmp0
e1000g3:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 5
inet 3.208.169.204 netmask ffffffc0 broadcast 3.208.169.255
e1000g3:2: flags=9040842<BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 5
inet 3.208.169.204 netmask ff000000 broadcast 3.255.255.255
nxge0: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 6
inet 10.47.2.13 netmask ffffffc0 broadcast 10.47.2.63
groupname ipmp3
nxge1: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 7
inet 3.208.169.205 netmask ffffffc0 broadcast 3.208.169.255
groupname ipmp0
nxge2: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 8
inet 10.47.2.78 netmask ffffffe0 broadcast 10.47.2.95
groupname ipmp2
nxge3: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 9
inet 10.47.2.84 netmask ffffffe0 broadcast 10.47.2.95
groupname ipmp1


From the above output it's clear that:

On Node1
e1000g0 is 10.47.2.76 and e1000g1 is 10.47.2.82 - for the private interconnect
e1000g2 is 10.47.2.11 - for the shared storage (NetApp)
e1000g3 is 3.208.169.203 - for the public network

On Node2
e1000g0 is 10.47.2.79 and e1000g1 is 10.47.2.85 - for the private interconnect
e1000g2 is 10.47.2.14 - for the shared storage (NetApp)
e1000g3 is 3.208.169.207 - for the public network

Checked for ssh and scp in /usr/local/bin/.
The Cluster Verification Utility (runcluvfy.sh) looks for scp and ssh in /usr/local/bin/, so soft links must exist there:
cd /usr/local/bin/
ls -l
lrwxrwxrwx 1 root root 12 Apr 25 16:57 /usr/local/bin/scp -> /usr/bin/scp
lrwxrwxrwx 1 root root 12 Apr 25 16:57 /usr/local/bin/ssh -> /usr/bin/ssh
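If the links are not there, create them with:

ln -s /usr/bin/ssh /usr/local/bin/ssh
ln -s /usr/bin/scp /usr/local/bin/scp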


Checked SSH equivalency between the nodes

myjpsuolicdbt01 [SHCL1DR1]$ ssh myjpsuolicdbt01 date
ssh_exchange_identification: Connection closed by remote host
myjpsuolicdbt01 [SHCL1DR1]$

Created the SSH keys as per
http://download-uk.oracle.com/docs/cd/B28359_01/rac.111/b28252/preparing.htm#BGBBDHIB
When asked for the "Passphrase", just press [Enter].
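The gist of that procedure, as a sketch (run the key generation on each node as oraprd; the documented version is more thorough about permissions):

mkdir -p ~/.ssh; chmod 700 ~/.ssh
ssh-keygen -t rsa        # press [Enter] at the passphrase prompts
ssh-keygen -t dsa
# on node1: collect the public keys of both nodes into one file
cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh myjpsuolicdbt02 'cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub' >> ~/.ssh/authorized_keys
# distribute the combined file back to node2
scp ~/.ssh/authorized_keys myjpsuolicdbt02:.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys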

From Node1
myjpsuolicdbt01 [SHCL1DR1]$ ssh MYJPSUOLICDBT01 date
myjpsuolicdbt01 [SHCL1DR1]$ ssh MYJPSUOLICDBT02 date
From Node2
myjpsuolicdbt02 [SHCL1DR2]$ ssh MYJPSUOLICDBT01 date
myjpsuolicdbt02 [SHCL1DR2]$ ssh MYJPSUOLICDBT02 date

The time on both nodes was the same at any given moment.
I made sure that from Node1 I could ssh to Node2 as well as to Node1 itself, and the same from Node2.

Checked for the file /usr/lib/libdce.so (Metalink Note 333348.1).
The 10gR2 installer on Solaris 64-bit fails if the file /usr/lib/libdce.so is present; see Metalink Note 333348.1 for the workaround.
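As I recall, the workaround boils down to moving the library out of the way while the installer runs (verify against the note before doing this):

mv /usr/lib/libdce.so /usr/lib/libdce.so.bak
# run the installer, then restore the file if anything on the box needs it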

Configured the .profile of user ORAPRD:

stty cs8 -istrip -parenb
PATH=/usr/bin:/usr/local/bin
EDITOR=/usr/bin/vi
#umask 077
umask 022
ulimit -c 0
export PATH EDITOR
set -o vi

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export OH=$ORACLE_HOME
export ORA_CRS_HOME=$ORACLE_BASE/product/crs
export CH=$ORA_CRS_HOME
export ORACLE_SID=SHCL1DR1
#export NLS_LANG=Japanese_Japan.UTF8
export NLS_LANG=AMERICAN_AMERICA.UTF8
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/sbin:/usr/bin:/usr/ccs/bin:/usr/ucb:/etc:/usr/X/bin:/usr/openwin/bin:/usr/local/bin:/usr/sbin
export PS1=`hostname`" [$ORACLE_SID]\$ "

####
alias bdump='cd /u01/app/oracle/admin/SHCL1/bdump/'
alias talert='tail -f $ORACLE_BASE/admin/SHCL1/bdump/alert_$ORACLE_SID.log'
alias tns='cd $ORACLE_HOME/network/admin'
alias udump='cd /u01/app/oracle/admin/SHCL1/udump/'
alias valert='view $ORACLE_BASE/admin/SHCL1/bdump/alert_$ORACLE_SID.log'
alias home='cd $ORACLE_HOME'

Created the directories on both nodes

For ORACLE_BASE
mkdir -p /u01/app/oracle
chown -R oraprd:oinstall /u01/app/oracle
chmod -R 770 /u01/app/oracle
For ORA_CRS_HOME
mkdir -p /u01/app/oracle/product/crs
chown -R root:oinstall /u01/app/oracle/product/crs

For ORACLE_HOME [RDBMS]
mkdir -p /u01/app/oracle/product/10.2.0/db_1
chown -R oraprd:oinstall /u01/app/oracle/product/10.2.0/db_1

For OCR and Voting disks
mkdir -p /u02/oracle/crs/
mkdir -p /u03/oracle/crs/
mkdir -p /u04/oracle/crs/
Checked the privileges on the directories (they should be oraprd:oinstall).
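Setting the ownership in one pass (mode 770 mirrors what was used for ORACLE_BASE; adjust to your own standards):

chown -R oraprd:oinstall /u02/oracle /u03/oracle /u04/oracle
chmod -R 770 /u02/oracle /u03/oracle /u04/oracle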
Created the OCR and voting disk files.
On Linux, the Oracle-provided cluster file system OCFS2 is used on the shared disk, so simply 'touch'-ing ocr_disk1 and vote_disk1 would do.
But since I used NetApp as the shared storage (NFS-mounted on each node), I had to pre-allocate the OCR and voting disk files with dd (bs=268435456 count=1 makes each file 256 MB).
OCR
dd if=/dev/zero of=/u02/oracle/crs/ocr_disk1 bs=268435456 count=1
dd if=/dev/zero of=/u03/oracle/crs/ocr_disk2 bs=268435456 count=1
chown root:oinstall /u02/oracle/crs/ocr_disk1
chown root:oinstall /u03/oracle/crs/ocr_disk2
chmod 660 /u02/oracle/crs/ocr_disk1
chmod 660 /u03/oracle/crs/ocr_disk2

VOTING DISK
dd if=/dev/zero of=/u02/oracle/crs/vote_disk1 bs=268435456 count=1
dd if=/dev/zero of=/u03/oracle/crs/vote_disk2 bs=268435456 count=1
dd if=/dev/zero of=/u04/oracle/crs/vote_disk3 bs=268435456 count=1

chown oraprd:oinstall /u02/oracle/crs/vote_disk1
chown oraprd:oinstall /u03/oracle/crs/vote_disk2
chown oraprd:oinstall /u04/oracle/crs/vote_disk3

chmod 660 /u02/oracle/crs/vote_disk1
chmod 660 /u03/oracle/crs/vote_disk2
chmod 660 /u04/oracle/crs/vote_disk3

Downloaded and extracted the Oracle 10.2.0.1 installation files:
10gr2_cluster_sol.cpio
10gr2_companion_sol.cpio
10gr2_db_sol.cpio

Ran the Cluster Verification Utility available in 10gr2_cluster_sol.cpio

myjpsuolicdbt01 [SHCL1DR1]$ ./runcluvfy.sh stage -pre crsinst -n MYJPSUOLICDBT01,MYJPSUOLICDBT02 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "myjpsuolicdbt01"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  MYJPSUOLICDBT01                       yes
  MYJPSUOLICDBT02                       yes
Result: Node reachability check passed from node "myjpsuolicdbt01".

Checking user equivalence...

Check: User equivalence for user "oraprd"
  Node Name                             Comment
  ------------------------------------  ------------------------
  MYJPSUOLICDBT02                       passed
  MYJPSUOLICDBT01                       passed
Result: User equivalence check passed for user "oraprd".

Checking administrative privileges...

Check: Existence of user "oraprd"
  Node Name        User Exists   Comment
  ---------------  ------------  ------------
  MYJPSUOLICDBT02  yes           passed
  MYJPSUOLICDBT01  yes           passed
Result: User existence check passed for "oraprd".

Check: Existence of group "oinstall"
  Node Name        Status        Group ID
  ---------------  ------------  ------------
  MYJPSUOLICDBT02  exists        300
  MYJPSUOLICDBT01  exists        300
Result: Group existence check passed for "oinstall".

Check: Membership of user "oraprd" in group "oinstall" [as Primary]
  Node Name        User Exists  Group Exists  User in Group  Primary  Comment
  ---------------  -----------  ------------  -------------  -------  -------
  MYJPSUOLICDBT02  yes          yes           yes            yes      passed
  MYJPSUOLICDBT01  yes          yes           yes            yes      passed
Result: Membership check for user "oraprd" in group "oinstall" [as Primary] passed.

Administrative privileges check passed.

Checking node connectivity...

Interface information for node "MYJPSUOLICDBT02"
  Interface Name    IP Address       Subnet
  ----------------  ---------------  ----------------
  e1000g0           10.47.2.79       10.47.2.64
  e1000g0           10.47.2.80       10.47.2.64
  e1000g0           10.47.2.80       10.0.0.0
  e1000g1           10.47.2.85       10.47.2.64
  e1000g1           10.47.2.86       10.47.2.64
  e1000g1           10.47.2.86       10.0.0.0
  e1000g2           10.47.2.14       10.47.2.0
  e1000g2           10.47.2.15       10.47.2.0
  e1000g2           10.47.2.15       10.0.0.0
  e1000g3           3.208.169.207    3.208.169.192
  e1000g3           3.208.169.208    3.208.169.192
  e1000g3           3.208.169.208    3.0.0.0
  nxge0             10.47.2.16       10.47.2.0
  nxge1             3.208.169.209    3.208.169.192
  nxge2             10.47.2.81       10.47.2.64
  nxge3             10.47.2.87       10.47.2.64

Interface information for node "MYJPSUOLICDBT01"
  Interface Name    IP Address       Subnet
  ----------------  ---------------  ----------------
  e1000g0           10.47.2.76       10.47.2.64
  e1000g0           10.47.2.77       10.47.2.64
  e1000g0           10.47.2.77       10.0.0.0
  e1000g1           10.47.2.82       10.47.2.64
  e1000g1           10.47.2.83       10.47.2.64
  e1000g1           10.47.2.83       10.0.0.0
  e1000g2           10.47.2.11       10.47.2.0
  e1000g2           10.47.2.12       10.47.2.0
  e1000g2           10.47.2.12       10.0.0.0
  e1000g3           3.208.169.203    3.208.169.192
  e1000g3           3.208.169.204    3.208.169.192
  e1000g3           3.208.169.204    3.0.0.0
  nxge0             10.47.2.13       10.47.2.0
  nxge1             3.208.169.205    3.208.169.192
  nxge2             10.47.2.78       10.47.2.64
  nxge3             10.47.2.84       10.47.2.64

Check: Node connectivity of subnet "10.47.2.64"
  Source                     Destination                Connected?
  -------------------------  -------------------------  ----------
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT02:e1000g0    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT02:e1000g1    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT02:e1000g1    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT02:nxge2      yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT02:nxge3      yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:nxge2      yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:nxge3      yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT02:e1000g1    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT02:e1000g1    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT02:nxge2      yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT02:nxge3      yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:nxge2      yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:nxge3      yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT02:e1000g1    yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT02:nxge2      yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT02:nxge3      yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:nxge2      yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:nxge3      yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT02:nxge2      yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT02:nxge3      yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:nxge2      yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:nxge3      yes
  MYJPSUOLICDBT02:nxge2      MYJPSUOLICDBT02:nxge3      yes
  MYJPSUOLICDBT02:nxge2      MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:nxge2      MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:nxge2      MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:nxge2      MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:nxge2      MYJPSUOLICDBT01:nxge2      yes
  MYJPSUOLICDBT02:nxge2      MYJPSUOLICDBT01:nxge3      yes
  MYJPSUOLICDBT02:nxge3      MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:nxge3      MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:nxge3      MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:nxge3      MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:nxge3      MYJPSUOLICDBT01:nxge2      yes
  MYJPSUOLICDBT02:nxge3      MYJPSUOLICDBT01:nxge3      yes
  MYJPSUOLICDBT01:e1000g0    MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT01:e1000g0    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT01:e1000g0    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT01:e1000g0    MYJPSUOLICDBT01:nxge2      yes
  MYJPSUOLICDBT01:e1000g0    MYJPSUOLICDBT01:nxge3      yes
  MYJPSUOLICDBT01:e1000g0    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT01:e1000g0    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT01:e1000g0    MYJPSUOLICDBT01:nxge2      yes
  MYJPSUOLICDBT01:e1000g0    MYJPSUOLICDBT01:nxge3      yes
  MYJPSUOLICDBT01:e1000g1    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT01:e1000g1    MYJPSUOLICDBT01:nxge2      yes
  MYJPSUOLICDBT01:e1000g1    MYJPSUOLICDBT01:nxge3      yes
  MYJPSUOLICDBT01:e1000g1    MYJPSUOLICDBT01:nxge2      yes
  MYJPSUOLICDBT01:e1000g1    MYJPSUOLICDBT01:nxge3      yes
  MYJPSUOLICDBT01:nxge2      MYJPSUOLICDBT01:nxge3      yes
Result: Node connectivity check passed for subnet "10.47.2.64" with node(s) MYJPSUOLICDBT02,MYJPSUOLICDBT01.

Check: Node connectivity of subnet "10.0.0.0"
  Source                     Destination                Connected?
  -------------------------  -------------------------  ----------
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT02:e1000g1    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT02:e1000g2    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:e1000g0    MYJPSUOLICDBT01:e1000g2    yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT02:e1000g2    yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:e1000g1    MYJPSUOLICDBT01:e1000g2    yes
  MYJPSUOLICDBT02:e1000g2    MYJPSUOLICDBT01:e1000g0    yes
  MYJPSUOLICDBT02:e1000g2    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT02:e1000g2    MYJPSUOLICDBT01:e1000g2    yes
  MYJPSUOLICDBT01:e1000g0    MYJPSUOLICDBT01:e1000g1    yes
  MYJPSUOLICDBT01:e1000g0    MYJPSUOLICDBT01:e1000g2    yes
  MYJPSUOLICDBT01:e1000g1    MYJPSUOLICDBT01:e1000g2    yes
Result: Node connectivity check passed for subnet "10.0.0.0" with node(s) MYJPSUOLICDBT02,MYJPSUOLICDBT01.

Check: Node connectivity of subnet "10.47.2.0"
  Source                     Destination                Connected?
  -------------------------  -------------------------  ----------
  MYJPSUOLICDBT02:e1000g2    MYJPSUOLICDBT02:e1000g2    yes
  MYJPSUOLICDBT02:e1000g2    MYJPSUOLICDBT02:nxge0      yes
  MYJPSUOLICDBT02:e1000g2    MYJPSUOLICDBT01:e1000g2    yes
  MYJPSUOLICDBT02:e1000g2    MYJPSUOLICDBT01:e1000g2    yes
  MYJPSUOLICDBT02:e1000g2    MYJPSUOLICDBT01:nxge0      yes
  MYJPSUOLICDBT02:e1000g2    MYJPSUOLICDBT02:nxge0      yes
  MYJPSUOLICDBT02:e1000g2    MYJPSUOLICDBT01:e1000g2    yes
  MYJPSUOLICDBT02:e1000g2    MYJPSUOLICDBT01:e1000g2    yes
  MYJPSUOLICDBT02:e1000g2    MYJPSUOLICDBT01:nxge0      yes
  MYJPSUOLICDBT02:nxge0      MYJPSUOLICDBT01:e1000g2    yes
  MYJPSUOLICDBT02:nxge0      MYJPSUOLICDBT01:e1000g2    yes
  MYJPSUOLICDBT02:nxge0      MYJPSUOLICDBT01:nxge0      yes
  MYJPSUOLICDBT01:e1000g2    MYJPSUOLICDBT01:e1000g2    yes
  MYJPSUOLICDBT01:e1000g2    MYJPSUOLICDBT01:nxge0      yes
  MYJPSUOLICDBT01:e1000g2    MYJPSUOLICDBT01:nxge0      yes
Result: Node connectivity check passed for subnet "10.47.2.0" with node(s) MYJPSUOLICDBT02,MYJPSUOLICDBT01.

Check: Node connectivity of subnet "3.208.169.192"
  Source                     Destination                Connected?
  -------------------------  -------------------------  ----------
  MYJPSUOLICDBT02:e1000g3    MYJPSUOLICDBT02:e1000g3    yes
  MYJPSUOLICDBT02:e1000g3    MYJPSUOLICDBT02:nxge1      yes
  MYJPSUOLICDBT02:e1000g3    MYJPSUOLICDBT01:e1000g3    yes
  MYJPSUOLICDBT02:e1000g3    MYJPSUOLICDBT01:e1000g3    yes
  MYJPSUOLICDBT02:e1000g3    MYJPSUOLICDBT01:nxge1      yes
  MYJPSUOLICDBT02:e1000g3    MYJPSUOLICDBT02:nxge1      yes
  MYJPSUOLICDBT02:e1000g3    MYJPSUOLICDBT01:e1000g3    yes
  MYJPSUOLICDBT02:e1000g3    MYJPSUOLICDBT01:e1000g3    yes
  MYJPSUOLICDBT02:e1000g3    MYJPSUOLICDBT01:nxge1      yes
  MYJPSUOLICDBT02:nxge1      MYJPSUOLICDBT01:e1000g3    yes
  MYJPSUOLICDBT02:nxge1      MYJPSUOLICDBT01:e1000g3    yes
  MYJPSUOLICDBT02:nxge1      MYJPSUOLICDBT01:nxge1      yes
  MYJPSUOLICDBT01:e1000g3    MYJPSUOLICDBT01:e1000g3    yes
  MYJPSUOLICDBT01:e1000g3    MYJPSUOLICDBT01:nxge1      yes
  MYJPSUOLICDBT01:e1000g3    MYJPSUOLICDBT01:nxge1      yes
Result: Node connectivity check passed for subnet "3.208.169.192" with node(s) MYJPSUOLICDBT02,MYJPSUOLICDBT01.

Check: Node connectivity of subnet "3.0.0.0"
  Source                     Destination                Connected?
  -------------------------  -------------------------  ----------
  MYJPSUOLICDBT02:e1000g3    MYJPSUOLICDBT01:e1000g3    yes
Result: Node connectivity check passed for subnet "3.0.0.0" with node(s) MYJPSUOLICDBT02,MYJPSUOLICDBT01.

Suitable interfaces for VIP on subnet "3.208.169.192":
  MYJPSUOLICDBT02  e1000g3:3.208.169.207  e1000g3:3.208.169.208
  MYJPSUOLICDBT01  e1000g3:3.208.169.203  e1000g3:3.208.169.204

Suitable interfaces for VIP on subnet "3.208.169.192":
  MYJPSUOLICDBT02  nxge1:3.208.169.209
  MYJPSUOLICDBT01  nxge1:3.208.169.205

Exception in thread "main" java.lang.NullPointerException
        at oracle.ops.verification.framework.network.Subnet.<init>(Subnet.java:66)
        at oracle.ops.verification.framework.network.Subnet.getSpanningInterfaces(Subnet.java:286)
        at oracle.ops.verification.framework.network.Subnet.getVIPOkSubnets(Subnet.java:313)
        at oracle.ops.verification.framework.engine.task.TaskNodeConnectivity.verifyNodeCon(TaskNodeConnectivity.java:492)
        at oracle.ops.verification.framework.engine.task.TaskNodeConnectivity.performTask(TaskNodeConnectivity.java:288)
        at oracle.ops.verification.framework.engine.task.Task.perform(Task.java:203)
        at oracle.ops.verification.framework.engine.stage.Stage.verify(Stage.java:362)
        at oracle.ops.verification.framework.engine.ClusterVerifier.verifyStage(ClusterVerifier.java:140)
        at oracle.ops.verification.client.CluvfyDriver.main(CluvfyDriver.java:315)

myjpsuolicdbt01 [SHCL1DR1]$

I ignored the NullPointerException at the end of the VIP interface check, as I knew there wasn't any problem with the VIP setup.

Then started the Oracle Clusterware (CRS) installation from Node1. In the cluster configuration screen, clicked "Add" to add the second node:

Public Node Name: myjpsuolicdbt02
Private Node Name: myjpsuolicdbt02ipmp1
Virtual Host Name: myjpsuolicdbtv02

At the end of the installation, the installer prompts to run orainstRoot.sh and root.sh as root, one node at a time.

On Node1

bash-3.00# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete

bash-3.00# ls -l /u01/app/oracle/product/crs/root.sh
-rwxr-xr-x   1 oraprd   oinstall     105 Apr 30 11:14 /u01/app/oracle/product/crs/root.sh

bash-3.00# /u01/app/oracle/product/crs/root.sh
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: myjpsuolicdbt01 myjpsuolicdbt01ipmp1 myjpsuolicdbt01
node 2: myjpsuolicdbt02 myjpsuolicdbt02ipmp1 myjpsuolicdbt02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /u02/oracle/crs/vote_disk1
Now formatting voting device: /u03/oracle/crs/vote_disk2
Now formatting voting device: /u04/oracle/crs/vote_disk3
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        myjpsuolicdbt01
CSS is inactive on these nodes.
        myjpsuolicdbt02
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

On Node2

bash-3.00# /u01/app/oracle/product/crs/root.sh
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: myjpsuolicdbt01 myjpsuolicdbt01ipmp1 myjpsuolicdbt01
node 2: myjpsuolicdbt02 myjpsuolicdbt02ipmp1 myjpsuolicdbt02
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        myjpsuolicdbt01
        myjpsuolicdbt02
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.

Then upgraded the Clusterware to 10.2.0.3:

1] Stopped CRS on both nodes.
2] Ran the 10.2.0.3 patch set installer.

After the patching, CRS was running on both nodes again.

Then installed the RAC database software (10.2.0.1) and upgraded it to 10.2.0.3 as well:

1] Ran the installer for Oracle RAC while CRS was running on both nodes.
2] Ran the 10.2.0.3 patch set installer against the new ORACLE_HOME.

At the end, the CRS stack, the nodeapps and the database were all running on both nodes.
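A quick sanity check at each stage (crsctl run from the CRS home as root, the rest as oraprd; 'SHCL1' is my assumption for the database name, going by the admin directory paths in the aliases above):

/u01/app/oracle/product/crs/bin/crsctl check crs
crs_stat -t
srvctl status nodeapps -n myjpsuolicdbt01
srvctl status database -d SHCL1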
