Hello,
I'm experiencing some performance issues with a virtualized file server in my lab, using ESXi 5 with StarWind as the iSCSI target.
My Hardware:
SAN - Xeon X3440 w/ 16GB RAM. Three 10K disks in a RAID 5 that benchmarks at over 230 MB/s sequential read and 150 MB/s sequential write.
ESXi - Core i3 with 16GB RAM
Two gigabit NICs dedicated to iSCSI traffic, configured for multipathing with Round Robin. I have the Round Robin IOPS setting in VMware set to 1. Both VMkernel ports are on a vSwitch.
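For reference, I set the path policy and the IOPS value per device from the ESXi shell with commands along these lines (the naa ID below is just a placeholder for my StarWind LUN):

esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1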
I've unchecked Delayed ACK within ESXi.
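(If it's useful, I believe the setting can be double-checked from the ESXi shell by dumping the iSCSI config, roughly vmkiscsid --dump-db | grep Delayed, and confirming the DelayedAck entries show 0; I can post that output if it would help.)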
Here's what's happening...
I have a virtualized file server running Windows 2008 R2. If I try to copy a file (a 4 GB ISO) from my PC to the virtualized file server, I can only drive approximately 35 MB/s. I've run iperf from my station to the VM and I can saturate the gigabit connection. If I transfer that same 4 GB ISO directly from my PC to the SAN (both running Windows), I can also saturate my gigabit Ethernet connection (~110 MB/s).
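In case the methodology matters, the iperf run was just a plain TCP throughput test, along the lines of iperf -s on the file server VM and iperf -c <vm-ip> -t 30 from my PC (those are the rough flags, nothing exotic).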
I've also tried creating a vSwitch on a dedicated NIC, connecting it to my VM, and running an iSCSI initiator inside the VM, but I could not exceed 40 MB/s on reads. Writes were okay.
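(That in-guest test was with the standard Microsoft iSCSI Initiator; the quick-connect equivalent from a command prompt would be roughly iscsicli QAddTargetPortal <portal-ip> followed by iscsicli QLoginTarget <target-iqn>, with the portal IP and IQN being whatever the StarWind target presents.)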
However, I tried installing Hyper-V on one of the ESXi nodes (identical hardware and NICs to my other ESXi host), created a VM within it, and was able to saturate my gigabit Ethernet connection for both reads and writes without any tweaking.
Granted, I don't have any screenshots of my tests, but it seems to me that ESXi is responsible for the performance issue when copying to/from my PC to the virtualized file server. Is there something I need to configure within the VM or the networking to get decent bandwidth? Is this normal behavior? I do not have jumbo frames configured anywhere (either on ESXi or in my Hyper-V test).
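(If jumbo frames end up being the suggestion, my understanding is the ESXi side would be something like esxcli network vswitch standard set --vswitch-name vSwitch1 --mtu 9000 and esxcli network ip interface set --interface-name vmk1 --mtu 9000, using whatever my actual vSwitch/vmkernel names are, plus a matching MTU on the physical switch and the StarWind box, but I haven't gone down that road yet.)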
Does anyone have any ideas? Please let me know if there's more information I can provide...
Thanks!
DarkDiamond