Setting up Oracle VM using iSCSI storage

Virtualization technology isn’t new to the technical magazines, but the fervour with which companies are bringing out virtualization products shows how HOT it has become in the data center, now that companies vie with each other to go GREEN.

Oracle has brought out its virtualization product, Oracle VM, built on the Xen hypervisor, an open-source hypervisor widely used across Linux distributions. The product consists of Oracle VM Server and Oracle VM Manager, a Java-based interface for managing the virtualization.

I created my TEST environment using 4 desktops, and below is how I went about it.

[Image: ora_vm_arch – test environment architecture]

As shown in the picture above, I used 4 machines.
Machine Details
————————
Manager
——-
CPU  - Intel P4 3 GHz [1 core]
RAM  - 500 MB
HDD  - 40 GB [SCSI]
OS   - Oracle Enterprise Linux 5.2

Storage
——-
CPU  - Intel P4 3 GHz [1 core]
RAM  - 1 GB
HDD  - 80 GB [SCSI]
OS   - Openfiler 2.1

VMS1
——-
CPU  - Intel P4 3 GHz [2 core]
RAM  - 2 GB
HDD  - 80 GB [SCSI]
OS [Dom0] - Oracle VM Server 2.1.2

VMS2
——-
CPU  - Intel P4 3 GHz [1 core]
RAM  - 2 GB
HDD  - 80 GB [SCSI]
OS [Dom0] - Oracle VM Server 2.1.2

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Files to download
——————
Oracle Enterprise Linux 5.2 (32bit)       – http://edelivery.oracle.com/linux
Oracle VM Server 2.1.2 (32bit)            – http://edelivery.oracle.com/linux
Oracle VM Manager 2.1.2 (32bit)           – http://edelivery.oracle.com/linux
Oracle EL 5.2 Template (32bit)            – http://edelivery.oracle.com/linux
Oracle RDBMS 10g (Linux 32bit)            – http://www.oracle.com/technology/software/products/database/oracle10g/htdocs/10201linuxsoft.html
VNC Viewer plugin for VM Manager (32bit)  – http://oss.oracle.com/oraclevm/manager/RPMS/

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Setting up Manager
——————
Install OEL 5.2

-Firewall     = No
-SELinux      = Disabled
-IP Address   = 192.168.20.99/255.255.255.0
-Hostname     = manager
-Root Passwd  = redhat

Install Oracle VM Manager 2.1.2
http://download.oracle.com/docs/cd/E11081_01/doc/doc.21/e10902/ovmig.htm#BHCDACIH

Install the downloaded VNC viewer plugin. This plugin is required if we want to open the console of the guests from VM Manager.
[Manager:root]$rpm -ivh ovm-console-1.0.0-2.i386.rpm

Stop the iptables and ip6tables services

[Manager:root]$service iptables stop
[Manager:root]$service ip6tables stop

Make sure they don’t start at the next reboot

[Manager:root]$chkconfig --level 345 iptables off
[Manager:root]$chkconfig --level 345 ip6tables off

Add the following to /etc/hosts:
192.168.20.98    storage
192.168.20.99    manager
192.168.20.100    vms1
192.168.20.101  vms2
192.168.20.102  guest1
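
A quick sanity check of these entries from the Manager (the pings will of course only succeed once the corresponding machines are installed and up):

[Manager:root]$ping -c 1 storage
[Manager:root]$ping -c 1 vms1
[Manager:root]$ping -c 1 vms2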

Setting up Storage
————————-

Install Openfiler 2.1

-Select manual partitioning
-Delete all existing partitions (if any exist)
- /           = 20 GB*
- SWAP        = 2 GB
- Keep the rest as free space; this free space will be used as the shared iSCSI storage.
-Firewall     = No
-SELinux      = No
-IP Address   = 192.168.20.98/255.255.255.0
-Hostname     = storage

* Even 1 GB would do for "/" [it used only 500 MB for my installation], and even less for SWAP.

Setting up VMS1
—————–
Install Oracle VM Server 2.1.2

[Follow the documentation: http://download.oracle.com/docs/cd/E11081_01/doc/doc.21/e10899.pdf]
-Go for the default options
-IP Address        =192.168.20.100/255.255.255.0
-Hostname          =VMS1
-Agent Password    =vmagent
-Root Password      =redhat

Stop the iptables and ip6tables

[VMS1:root]$service iptables stop
[VMS1:root]$service ip6tables stop

Make sure they don’t start at the next reboot

[VMS1:root]$chkconfig --level 345 iptables off
[VMS1:root]$chkconfig --level 345 ip6tables off

Setting up VMS2
—————–
Install Oracle VM Server 2.1.2

[Follow the documentation: http://download.oracle.com/docs/cd/E11081_01/doc/doc.21/e10899.pdf]
-Go for the default options
-IP Address        =192.168.20.101/255.255.255.0
-Hostname         =VMS2
-Agent Password    =vmagent
-Root Password      =redhat

Stop the iptables and ip6tables

[VMS2:root]$service iptables stop
[VMS2:root]$service ip6tables stop

Make sure they don’t start at the next reboot

[VMS2:root]$chkconfig --level 345 iptables off
[VMS2:root]$chkconfig --level 345 ip6tables off

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Configure iSCSI Storage
———————–
[See Jeffrey Hunter’s RAC-on-iSCSI article for a more detailed explanation of setting up shared iSCSI storage, along with FAQs and debugging tips.]

From the browser in Manager:
https://192.168.20.98:446/
Userid          =openfiler
Password    =password

Go to SERVICES/ENABLE-DISABLE tab

[Screenshot: op2]

Click "Enable" for the iSCSI target and the status will change from Disabled to Enabled.

On Storage:
[STORAGE:root]$service iscsi-target status
=> it should be running

Go to GENERAL/LOCAL NETWORKS tab
[Screenshot: op4]

Add the network whose machines will mount the storage:
Name          = Network 20
Network/Host  = 192.168.20.0   => all our machines are in this network
Netmask       = 255.255.255.0
Type          = Share
[UPDATE]

Go to VOLUMES/PHYSICAL STORAGE MGMT tab
[Screenshot: op6]

Here we can see the disk /dev/sda of 74.50 GB with 2 partitions.
[click VIEW]   /dev/sda1 for "/" and /dev/sda2 for SWAP

Click /dev/sda under "Edit Disk" and scroll down to "Create a partition in /dev/sda" to use the free space on the disk.
Cylinders 1 to 2107 of /dev/sda are already used by the / and SWAP partitions of Storage.

Mode              = Primary
Partition Type    = Physical volume
Starting Cylinder = <leave as the default>
Ending Cylinder   = <leave as the default>

[CREATE]

[Screenshot: op7]

[Screenshot: op8]

:

Go to GENERAL tab

:

Go to VOLUMES/VOLUME GROUP MGMT tab
[Screenshot: op9]

Enter the Volume Group Name            =data
Select the physical volumes to add     =/dev/sda3

[ADD VOLUME GROUP]

:

Go to VOLUMES/CREATE NEW VOLUME
[Screenshot: op10]

Select the volume group    =data
Create Volume in data
Volume name        =data
Volume Desc         =data
Reqd Space            =<drag the slider to end>
Filesystem Type  =iSCSI

[CREATE]

Go to VOLUMES/LIST EXISTING VOLUMES

[Screenshot: op11]

Click "Edit" for the volume "data", scroll down to 'iSCSI host access configuration for volume "data"', and select "Allow" for Access.
[UPDATE]

[Screenshot: op12]

Make the iSCSI target available to the clients
[STORAGE:root]$service iscsi-target restart

Make sure that the 'iscsi-target' service starts after a reboot
[STORAGE:root]$chkconfig --level 345 iscsi-target on
[STORAGE:root]$chkconfig --list iscsi-target

Discovering Shared iSCSI on VMS1 and VMS2
——————————————–
[do this on both VMS1 and VMS2]
Discover the shared iSCSI target using the 'iscsiadm' utility that ships with VM Server.
[http://download.oracle.com/docs/cd/E11081_01/doc/doc.21/e10898.pdf]

$iscsiadm -m discovery -t sendtargets -p <iscsi server>

[VMS1:root]$iscsiadm -m discovery -t sendtargets -p 192.168.20.98
192.168.20.98:3260,1 iqn.2006-01.com.openfiler:data.data

The discovered disk will appear in /proc/partitions only after restarting the iscsi service:
$cat /proc/partitions
$service iscsi restart
$cat /proc/partitions

[root@vms1 ~]# cat /proc/partitions
major minor  #blocks  name
8     0   78125000 sda
8     1     104391 sda1
8     2   23663745 sda2
8     3   50154930 sda3
8     4          1 sda4
8     5    3148708 sda5
8     6    1052226 sda6
[root@vms1 ~]# service iscsi restart
Stopping iSCSI daemon: /etc/init.d/iscsi: line 33:  3006 Killed         /etc/init.d/iscsid stop
iscsid dead but pid file exists                            [  OK  ]
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
[  OK  ]
Setting up iSCSI targets: Login session [192.168.20.98:3260 iqn.2006-01.com.openfiler:data.data]
[  OK  ]
[root@vms1 ~]#
[root@vms1 ~]# cat /proc/partitions
major minor  #blocks  name
8     0   78125000 sda
8     1     104391 sda1
8     2   23663745 sda2
8     3   50154930 sda3
8     4          1 sda4
8     5    3148708 sda5
8     6    1052226 sda6
8    16   61177856 sdb               ===> the shared disk is discovered    
[root@vms1 ~]#

Create and format the discovered iSCSI device with the OCFS2 file system
———————————————————————
[do this on only one of VMS1 or VMS2]

Create a partition on the shared iSCSI disk:
[VMS1:root]$fdisk /dev/sdb
n -new partition
p -primary partition
w -write and quit

[root@vms1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 59744.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): p
Disk /dev/sdb: 62.6 GB, 62646124544 bytes
64 heads, 32 sectors/track, 59744 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot      Start         End      Blocks   Id  System
Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-59744, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-59744, default 59744):
Using default value 59744

Command (m for help): p
Disk /dev/sdb: 62.6 GB, 62646124544 bytes
64 heads, 32 sectors/track, 59744 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       59744    61177840   83  Linux

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@vms1 ~]#
[root@vms1 ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80000000000 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2            6781        9726    23663745   83  Linux
/dev/sda3              14        6257    50154930   83  Linux
/dev/sda4            6258        6780     4200997+   5  Extended
/dev/sda5            6258        6649     3148708+  83  Linux
/dev/sda6            6650        6780     1052226   82  Linux swap / Solaris
Partition table entries are not in disk order

Disk /dev/sdb: 62.6 GB, 62646124544 bytes
64 heads, 32 sectors/track, 59744 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       59744    61177840   83  Linux
[root@vms1 ~]#

Create the OCFS2 file system on the partition using the mkfs.ocfs2 utility:
[VMS1:root]$mkfs.ocfs2 -b 4K -C 32K -N 4 -L data /dev/sdb1
[VMS1:root]$partprobe

[root@vms1 ocfs2]# mkfs.ocfs2 -b 4K -C 32K -N 4 -L data /dev/sdb1
mkfs.ocfs2 1.2.7
Overwriting existing ocfs2 partition.
Proceed (y/N): y
Filesystem label=data
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=62646091776 (1911807 clusters) (15294456 blocks)
60 cluster groups (tail covers 8703 clusters, rest cover 32256 clusters)
Journal size=268435456
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 3 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful
[root@vms1 ~]#

Configure the shared partition on VMS1 and VMS2
————————————————————————-
[do it on VMS1 and copy the file to VMS2]

Create the cluster.conf file under /etc/ocfs2.
This file holds the details of the nodes participating in the cluster.
In our case the cluster name will be ocfs2 and the number of nodes will be 2 (VMS1 and VMS2). If a new VM Server is later added to the cluster, its details must be added to this cluster.conf as well.
All the machines (nodes) participating in the cluster must have identical cluster.conf entries.

[VMS1:root]$mkdir /etc/ocfs2
[VMS1:root]$cd /etc/ocfs2
[VMS1:root]$touch cluster.conf
<I had the following in my cluster.conf>

[root@vms1 ocfs2]# cat cluster.conf
node:
     ip_port         =7777
     ip_address      =192.168.20.100
     number          =1
     name            =vms1
     cluster         =ocfs2
node:
     ip_port         =7777
     ip_address      =192.168.20.101
     number          =2
     name            =vms2
     cluster         =ocfs2
cluster:
     node_count      =2
     name            =ocfs2
[root@vms1 ocfs2]#

Create the /etc/ocfs2 directory on VMS2 and copy the cluster.conf from VMS1 to VMS2.
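
For example, assuming root ssh access is allowed between the two VM servers, something like this will do (hostnames as in /etc/hosts above):

[VMS2:root]$mkdir /etc/ocfs2
[VMS1:root]$scp /etc/ocfs2/cluster.conf root@vms2:/etc/ocfs2/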

Mounting the shared disk
—————————-
[do below on both VMS1 and VMS2]

[VMS1:root]$service o2cb status
[VMS1:root]$service o2cb load
[VMS1:root]$service o2cb online
[VMS1:root]$service o2cb configure    => go for the defaults
[VMS1:root]$service o2cb start

<if there is any typo in cluster.conf it will throw an error; in that case correct the typo, then offline and unload o2cb and try again>
[VMS1:root]$service o2cb offline
[VMS1:root]$service o2cb unload
:
:

Make sure the o2cb service is started after a reboot
[VMS1:root]$chkconfig --level 345 o2cb on
[VMS1:root]$chkconfig --list o2cb

Try to mount the shared iSCSI disk manually to check:
$mount /dev/sdb1 /OVS -t ocfs2
Unmount it
$umount /dev/sdb1

For High Availability to work, the shared disk must be mounted under /OVS and registered in /etc/ovs/repositories, rather than being mounted through an entry in /etc/fstab.
Make the entries in /etc/ovs/repositories using the /usr/lib/ovs/ovs-makerepo command.
[http://download.oracle.com/docs/cd/E11081_01/doc/doc.21/e10898/ha.htm#CHDEFABI]

$/usr/lib/ovs/ovs-makerepo /dev/sdb1 C "cluster root"
$/usr/lib/ovs/ovs-cluster-check --alter-fstab

<this should be done on all the VM servers (VMS1 and VMS2)>

[root@vms1 OVS]# /usr/lib/ovs/ovs-makerepo /dev/sdb1 C "cluster root"
Initializing NEW repository /dev/sdb1
Updating local repository list.
ovs-makerepo complete
[root@vms1 OVS]#
[root@vms1 OVS]# cat /etc/ovs/repositories
# This configuration file was generated by ovs-makerepo
# DO NOT EDIT
@211419C38BD34CE4A0B9083A47F7AEBF /dev/sdb1
[root@vms1 OVS]#
[root@vms1 OVS]# /usr/lib/ovs/ovs-cluster-check
Need to remove /OVS mount from /etc/fstab, but --alter-fstab not specified.
[root@vms1 OVS]# /usr/lib/ovs/ovs-cluster-check --master --alter-fstab
Backing up original /etc/fstab to /tmp/fstab.BJZxu20292
Removing /OVS mounts in /etc/fstab
O2CB cluster ocfs2 already online
Cluster setup complete.
[root@vms1 OVS]#

REBOOT both VMS1 and VMS2 and confirm that the shared disk is mounted under /OVS on both, using the df -h command.
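
For example, on each server:

[VMS1:root]$df -h /OVS
[VMS2:root]$df -h /OVS

Both should show /dev/sdb1 (the shared iSCSI disk) mounted on /OVS.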

Now we have,
Storage ready with iSCSI
Manager with VM Manager
VMS1 with VM Server and shared iSCSI storage mounted under /OVS
VMS2 with VM Server and shared iSCSI storage mounted under /OVS

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Create/Add Server Pool and Servers from Oracle VM Manager
———————————————————-
On MANAGER:

Server Pools -> Create Pool ->
Create the pool 'Pool1' with VMS1 as Server Pool Master, Utility Server (provide root/<root_passwd>) and VM Server.
Enable HA.

Servers -> Add Server
Select pool ‘Pool1’ and add the server VMS2 as VM Server.

[Screenshot: create_server_pool_2]

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Creating the Guest OS
——————————–

Creating a VM guest is divided into 4 sections:
1]Download and register the template
2]Create the VM guest OS
3]Access VM guest OS
4]Configure Hostname and IP address for Guest OS


1] Download and register the template

Download “Oracle Enterprise Linux 5 Update 2 template -PV Small x86(32 bit).zip” from http://edelivery.oracle.com/linux
Two templates are available for download:
Small – 4 GB
Large – 10 GB
The default root password is ovsroot.

Oracle VM Manager can download/register a template from an HTTP/FTP source, from a machine in the server pool, or via Linux P2V import.

We will download the file and copy it to either VMS1 or VMS2, which are members of the pool ‘Pool1‘.

Create the seed_pool directory to hold the templates so that VM Manager can detect and register the template. The seed_pool directory should be directly under /OVS. Unzip and untar the file in seed_pool, then import and register the template from VM Manager.

[VMS1:root]$mkdir /OVS/seed_pool
[VMS1:root]$mv "Oracle Enterprise Linux 5 Update 2 template -PV Small x86(32 bit).zip" /OVS/seed_pool/
[VMS1:root]$cd /OVS/seed_pool
[VMS1:root]$unzip "Oracle Enterprise Linux 5 Update 2 template -PV Small x86(32 bit).zip"
[VMS1:root]$tar -xzvf OVM_EL5U2_X86_PVM_4GB.tgz

Once OVM_EL5U2_X86_PVM_4GB.tgz is untarred, the template will be available in the OVM_EL5U2_X86_PVM_4GB directory. The extracted template is around 6.5 GB, so make sure there is enough free space under seed_pool before extracting the tar file.

To Discover and Register the template:

VM Manager -> Resources -> Virtual Machine Templates -> Select from Server Pool -> [Next]

[Screenshot: template_registration_1]

[APPROVE]

2] Create the VM guest OS

VM Manager -> Virtual Machines -> [Create Virtual Machine]

[Screenshot: create_vm_1_1]

[NEXT]

Preferred Server = Auto/Manual. I chose Manual and selected VMS1 as the preferred server.

[Screenshot: create_vm_2_2]

[NEXT]

Select the OEL 5.2 template, the one we registered above (4 GB).

[Screenshot: create_vm_3_3]

[NEXT]

Enter the virtual machine name and the console password, and enable HA too. The console password is needed to access the guest OS through vncviewer or the VM Manager Console option. This password is stored in vm.cfg. (Console Password = oracle)

[Screenshot: create_vm_4_4]

[NEXT]

[Screenshot: create_vm_5_5]

[CONFIRM]

[Screenshot: create_vm_6_6]

[CREATE VIRTUAL MACHINE]

After creation, guest2 will be in the 'Powered Off' state.
A directory will be created under /OVS/running_pool/ for this paravirtualized guest:
cd /OVS/running_pool/52_guest2/

What actually happens is that the Manager copies the template from seed_pool to running_pool, so the shared disk needs at least around 6.5 GB of free space, since we use the 'small' template. You can monitor this activity in the ovs_operation.log of the utility server, in our case VMS1.

[VMS1:root]$tail -f /var/log/ovs-agent/ovs_operation.log

You can also monitor the copy progress by watching the disk space used:
[VMS1:root]$cd /OVS/running_pool
[VMS1:root]$du -sh */

[Screenshot: create_vm_7_7]

As mentioned, after creation the virtual machine will be in the "Powered Off" state.
The directory /OVS/running_pool/<vm name>/ will contain the system.img and vm.cfg files for guest2. The vm.cfg can be modified to suit the guest.

I modified my vm.cfg to the following:

[VMS1:root]$cd /OVS/running_pool/52_guest2/
[VMS1:root]$cat vm.cfg
bootloader = '/usr/bin/pygrub'
disk = ['file:/OVS/running_pool/52_guest2/system.img,hda,w']
maxmem = 600
memory = 500
name = '52_guest2'
on_crash = 'restart'
on_reboot = 'restart'
uuid = 'cfd16fd1-1c7f-691a-51b9-2c5cbcc714a9'
vcpus = 1
vfb = ['type=vnc,vncunused=1,vnclisten=0.0.0.0,vncpasswd=oracle']
vif = ['bridge=xenbr0,mac=00:16:3E:5C:1C:F6,type=netfront']
vif_other_config = []

VM Manager-> Select Guest2 and “Power On”

[Screenshot: create_vm_8_8]

The status of the guest OS 'start' operation can be checked in the log file /var/log/ovs-agent/ovs_operation.log on the VM server where the guest is started.

"2008-12-18 10:59:18" INFO=> xen_start_vm: success. vm('/OVS/running_pool/12_guest1')
"2008-12-18 10:59:18" INFO=> start_vm: vm('/OVS/running_pool/12_guest1') on srv('192.168.20.100') => success
"2008-12-18 10:59:18" INFO=> start_vm: success. vm('/OVS/running_pool/12_guest1') ip=''

3] Access the VM guest OS

Once guest2 is running, we can connect to it using one of the following:

(i) The 'Console' option from VM Manager, provided ovm-console-1.0.0-2.i386.rpm is installed on the Manager server.
Select guest2 -> Console -> provide the vncpassword.
If you have forgotten the vncpassword, it is stored in /OVS/running_pool/<guest_name>/vm.cfg.
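
For example, on the VM server hosting the guest:
[VMS1:root]$grep vncpasswd /OVS/running_pool/52_guest2/vm.cfg
vfb = ['type=vnc,vncunused=1,vnclisten=0.0.0.0,vncpasswd=oracle']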

(ii)vncviewer utility
[Manager:root]$vncviewer <vnc_server>:<port>

[Manager:root]$vncviewer 192.168.20.100:5900

The VNC server is the VM server (VMS1 or VMS2) hosting the guest OS.
The first guest on a server will use port 5900, the second 5901, and so on. Provide the VNC password when asked.
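
For example, a hypothetical second guest running on the same server (VMS1) would be reached with:
[Manager:root]$vncviewer 192.168.20.100:5901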

(iii)xm console <domain_name>
[VMS1:root]$xm console 52_guest2
Login: root
Passwd: ovsroot
=> the default root password in the OEL template is ovsroot

4] Configure Hostname and IP address for Guest OS

By default the guest OS created from the template doesn’t have an IP address or hostname set. We need to edit 3 files to set them.

(i)/etc/sysconfig/network
HOSTNAME=guest2
NETWORKING=yes
..

(ii) /etc/hosts
<add the hostname and IP address>
192.168.20.102        guest2

(iii) /etc/sysconfig/network-scripts/ifcfg-eth0
HWADDR should be the same as the MAC address in the guest's vm.cfg on the VM server (/OVS/running_pool/<guest_name>/vm.cfg).
IPADDR should be the same as the address in the /etc/hosts file.

[GUEST2:root]$cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
BROADCAST=192.168.20.255
HWADDR=00:16:3E:5C:1C:F6
IPADDR=192.168.20.102
NETMASK=255.255.255.0
NETWORK=192.168.20.0
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
IPV6INIT=no
PEERDNS=yes

Finally restart the network service

[GUEST2:root]$service network restart
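
A quick check that the new settings took effect (run inside the guest; hostnames as configured above):

[GUEST2:root]$ifconfig eth0
[GUEST2:root]$ping -c 1 manager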

Log files
————-
VM Agent
/opt/ovs-agent-2.2/logs/ovs_trace.log
service ovs-agent status/start/stop/restart
VM Server
/var/log/xen/
/var/log/ovs-agent/ovs_operation.log (on the utility server)
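
For example, to check the agent and follow its operation log on a VM server:

[VMS1:root]$service ovs-agent status
[VMS1:root]$tail -f /var/log/ovs-agent/ovs_operation.log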

Troubleshooting
————————

(i) If for some reason the shared disk is not mounted:

1] check the o2cb status (unload, load and online if required)
2] cat /proc/partitions        => is the shared partition discovered?
3] service iscsi restart
4] cat /proc/partitions        => is the shared partition discovered now?
5] /usr/lib/ovs/ovs-cluster-check => OK?
6] /etc/init.d/ovsrepositories start
(a sketch of this sequence is shown below)
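
A rough sketch of that sequence on one of the VM servers (adjust to whichever step actually fails in your case):

[VMS1:root]$service o2cb status
[VMS1:root]$cat /proc/partitions
[VMS1:root]$service iscsi restart
[VMS1:root]$cat /proc/partitions
[VMS1:root]$/usr/lib/ovs/ovs-cluster-check
[VMS1:root]$/etc/init.d/ovsrepositories start
[VMS1:root]$df -h /OVS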

(ii) Installing Oracle

Most of the kernel parameters are already configured in the template. The template OS doesn’t have an X server for a GUI installation of Oracle RDBMS.

To install Oracle using the GUI, connect over ssh with X forwarding enabled:

[MANAGER:root]$ssh -X oracle@guest1
$./runInstaller

(iii)Mounting CD ROM in Guest OS

Check this site http://www.option-c.com/xwiki/Xen_CDROM_Support

(iv)Uninstall VM Manager

/opt/ovs-manager-2.1/bin/runInstaller
<all files pertaining to the VM Manager will be removed, including the XE database>

(v) Registering existing VM guests in VM Manager

Resources -> Virtual Machine Images -> Import -> Select from Server Pool (discover and register)
The system.img and vm.cfg files are expected to be under /OVS/running_pool/<guest_name>/.
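
For example, to confirm the layout for the guest created earlier before importing it:

[VMS1:root]$ls /OVS/running_pool/52_guest2/
system.img  vm.cfg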

References
http://download.oracle.com/docs/cd/E11081_01/welcome.html
http://forums.oracle.com
http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_iscsi.html

