Hello everyone.
I'm currently involved in a consulting engagement to design and lay out a migration path from a physical environment to a virtualized one.
We are at an initial lab stage, trying to work out a migration plan from physical to virtual while reusing the current physical hosts.
Our current challenge is to find a way to go from a physical cluster running Novell Cluster Services v1.8 on NetWare v6.5 (SP8) accessing a SAN to a virtual environment running Novell Cluster Services on OES11 (SUSE Linux based) accessing the same SAN.
So far we have been provided with VMware ESXi v5.0 as the virtualization platform for our labs, to build a proof of concept and develop the migration plan.
Here's our setup:
Two (2) hosts running ESXi v5.0 for virtualization, connected via HBA (QLogic) to a SAN for shared storage.
Two (2) physical hosts running Novell NetWare v6.5 (SP8) with Novell Cluster Services v1.8, plus HBAs (QLogic) to access the SAN as the shared space for the SBD partition (known in the MS world as the quorum partition) and for the resources (IP addresses, disks) to serve to the network and migrate when necessary.
We have been successful at the following:
1.- Creating and configuring Novell Cluster Services on the physical NetWare boxes (creating and accessing the SBD partitions and shared volumes).
2.- Creating one virtual Novell OES11 machine on each ESXi v5.0 host (finding the SBD partition, joining the cluster and seeing the shared volumes).
We have not been successful at the following:
1.- Having a 2nd virtual Novell OES11 machine on either ESXi v5.0 host join the current cluster. Every time we tried to join, we received the error "Failed to find the SBD partition".
2.- Having another virtual NetWare v6.5 (SP8) machine on either ESXi v5.0 host join the current cluster. In this case NetWare can't find, through the disk management utility (NSSMU), any SBD partition or any cluster-enabled partition.
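As a first diagnostic on the failing OES11 node, it may help to confirm whether the guest can see the SBD LUN at all before blaming the cluster join itself. A minimal sketch, assuming the standard OES11 location of the NCS `sbdutil` tool; the path and device names are assumptions that may differ in your setup:

```shell
# On the OES11 node that fails to join:

# 1. Check that the shared SAN LUN is visible to the guest at all
#    (device names here are only examples):
ls -l /dev/disk/by-id/

# 2. Ask Novell Cluster Services to locate the SBD partition:
/opt/novell/ncs/bin/sbdutil -f
# If this returns nothing, the guest cannot read the SBD LUN,
# which would match the "Failed to find the SBD partition" error.

# 3. If the partition is found, view it to confirm the cluster
#    name and node entries match the physical nodes:
/opt/novell/ncs/bin/sbdutil -v
```

If step 1 shows no SAN device at all, the problem is at the ESXi disk-presentation layer rather than in NCS.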
I found some documentation about configuring Microsoft clustering services on top of ESX hosts, but only for ESX, not ESXi. I also found a very enlightening video on youtube.com about the same configuration: the presenter adds a 2nd SCSI controller to the virtual machine settings to be able to access the quorum partition.
We believe this is our sticking point: we are not able to add a 2nd controller to the VM's settings and map it to the SBD partition on the SAN.
Is this a limitation of this ESXi version? Is it possible to do it through the CLI? Through the GUI? I tried the latter by adding a 2nd hard disk, and a new controller was created automatically, but I could never attach it to the LUN where the SBD resides.
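For what it's worth, the pattern VMware documents for shared quorum/witness disks (for MSCS, but the mechanism is generic) is a physical-compatibility-mode raw device mapping (RDM) attached to a dedicated SCSI controller whose bus sharing is set to physical. A rough sketch of doing this from the ESXi shell; the naa LUN identifier, datastore names, and VM paths below are all placeholders for your environment, not real values:

```shell
# 1. Create a physical-compatibility-mode RDM pointer file for the
#    SAN LUN that holds the SBD partition (-z = physical/passthrough).
#    The naa ID and datastore path are placeholders:
vmkfstools -z /vmfs/devices/disks/naa.60050000000000000000000000000001 \
    /vmfs/volumes/datastore1/shared/sbd-rdm.vmdk

# 2. With the VM powered off, attach the RDM to each cluster VM on
#    its own SCSI controller, with bus sharing set to physical
#    (required when the clustered VMs sit on different ESXi hosts):
cat >> /vmfs/volumes/datastore1/oes11-node2/oes11-node2.vmx <<'EOF'
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "physical"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vmfs/volumes/datastore1/shared/sbd-rdm.vmdk"
scsi1:0.mode = "independent-persistent"
EOF
```

In the vSphere Client GUI the same result should be reachable by adding a hard disk of type "Raw Device Mapping" (physical compatibility) on a new SCSI controller (e.g. SCSI 1:0) and then setting that controller's "SCSI Bus Sharing" to Physical, but I have not verified this on our exact build.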
Thank you in advance.