
vMotion issue since applying ESXi 5.0 Update 1


We have a simple ESXi 5.0 two node cluster.

 

- Both hosts are HP DL360 G7 servers with six NICs. We have three vSwitches in place, each with two physical NICs assigned/bound:

- vSwitch0 (vmnic0 & vmnic1) = VM Network and Management Network on 100.200.1.x/24
- vSwitch1 (vmnic2 & vmnic3) = iSCSI traffic on 192.168.100.x/24
- vSwitch2 (vmnic4 & vmnic5) = vMotion on 192.168.100.x/24

 

Our production network resides on VLAN1 using the 100.200.1.x subnet. VLAN2 (i.e. the 192.168.100.x network) is used for iSCSI/vMotion traffic only. All NICs are connected to a stack of HP ProCurve gigabit switches.
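
For reference, here is roughly how the vmkernel layout can be confirmed from the ESXi shell on each host (these are standard ESXi 5.0 commands; nothing here is specific to our environment):

    esxcfg-vswitch -l          # list vSwitches, port groups and uplinks
    esxcfg-vmknic -l           # list vmkernel NICs with their IPs and netmasks
    esxcli network nic list    # link state, speed and driver for each vmnic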

 

Since updating both hosts to ESXi 5.0 Update 1 we are no longer able to live migrate VMs between hosts. We can, however, power off the VMs and cold migrate them between hosts. If we disable vMotion on vSwitch2 and enable it on vSwitch0, it works fine. All physical NICs are the same make/model and all use the same bnx2 driver. Other than applying U1, nothing has changed on the storage or network infrastructure. All NICs connect to their relevant VLAN'd ports on the same HP ProCurve switches. We logged a call with VMware but they've deemed this a network issue and passed the buck back to us. Has anyone seen anything like this before?
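
In case it helps anyone comparing notes, a basic connectivity check of the vMotion vmkernel interfaces from the ESXi shell would look something like this (the 192.168.100.12 address is an example, not our real one; note that on ESXi 5.0 vmkping picks its outbound vmknic from the vmkernel routing table rather than per interface):

    # ping the other host's vMotion vmkernel address over the vmkernel stack
    vmkping 192.168.100.12
    # same test with a don't-fragment, jumbo-sized payload (only meaningful if
    # jumbo frames are enabled end to end on the vMotion path)
    vmkping -d -s 8972 192.168.100.12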

 

Any advice or suggestions would be welcome.

