Channel: VMware Communities : All Content - All Communities
traffic separation advice needed


Dear Community,

 

I’m currently studying the NetApp and VMware best practices in order to implement changes that improve the stability, performance and security of our system. At the moment the IP-based storage traffic and the public IP network traffic are mixed together in one subnet.

 

I figured out three “easy to pick” topics and want to get them done very soon:

  1. VLAN (traffic separation)
  2. sFlow
  3. Jumbo Frames

 

The first topic (traffic separation) is definitely the most urgent and important one, but unfortunately it also tends to be quite complicated. That’s why I hope to get some help from you guys. The two other topics are to be implemented later on, seem to be very straightforward and don’t require further clarification.

 

Existing Hardware and cabling:

 

1 x NetApp FAS 2020 (active/active Cluster)

2 x HP 2910al-48G switches

3 x ESXi Hosts (with dual LAN onboard and dual LAN PCIe Intel Server NICs)

Each storage controller is connected to both switch01 and switch02; the trunks are configured as single-mode VIFs (VIF1 and VIF2). The two onboard NIC ports of each ESXi host are connected to switch01 and switch02, and the two ports of the additional PCIe NIC of each ESXi host are likewise connected to switch01 and switch02.

I’m not sure if I really need the additional PCIe NICs, but my initial thought was to use one dual-port NIC for redundant storage traffic and one dual-port NIC for redundant public traffic. If I don’t need them I won’t use them; if I do need them, I have them.
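To make my idea clearer, here is a rough sketch of how I would dedicate the PCIe pair to storage, written as esxcfg commands. The vmnic numbering and the vSwitch assignments are just my assumptions (not yet verified on our hosts):

```shell
# Assumption: vmnic0/vmnic1 = onboard ports, vmnic2/vmnic3 = PCIe ports

# vSwitch0 (onboard pair): management + public VM traffic
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# vSwitch1 (PCIe pair): dedicated to IP storage traffic
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
```

That way each vSwitch would keep one uplink per physical switch for redundancy.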

Now to get back to the main question:

At the moment the storage, the ESXi hosts and the Windows servers are all on the same subnet. Let’s say this “all-in-one” subnet is 10.10.10.0/24. According to the white papers I have to:

  • Create a separate VMkernel port for each storage protocol

  • Place each VMkernel port on a different subnet

  • VMkernel ports can be on the same vSwitch

  • For NFS datastores, each VMkernel port group has only one active vmnic and one or more standby vmnics
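If I understand the bullet points correctly, they would translate into something like the following. The VLAN IDs, port-group names and 192.168.x.x addresses are placeholders I made up, so please correct me if the approach itself is wrong:

```shell
# One port group + VMkernel port per storage protocol,
# each on its own subnet and VLAN (IDs are placeholders)
esxcfg-vswitch -A "VMkernel-NFS" vSwitch1
esxcfg-vswitch -v 20 -p "VMkernel-NFS" vSwitch1
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 "VMkernel-NFS"

esxcfg-vswitch -A "VMkernel-iSCSI" vSwitch1
esxcfg-vswitch -v 30 -p "VMkernel-iSCSI" vSwitch1
esxcfg-vmknic -a -i 192.168.30.11 -n 255.255.255.0 "VMkernel-iSCSI"
```

The active/standby vmnic assignment per port group would then be set in the NIC teaming tab of the vSphere Client, as far as I can tell.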

Furthermore, I want to keep my Windows servers on the 10.10.10.0/24 subnet, and when I look at these two pictures:

 

http://ulmi.org/bilder/netapp/traffic01.jpg

http://ulmi.org/bilder/netapp/traffic02.jpg


 

I ask myself the following:

 

  • If the VMkernel ports for NFS/iSCSI are configured on 192.168.x.x subnets, what will happen to the public traffic of my Windows VMs on 10.10.10.0/24?

  • Are these pictures only showing the storage traffic settings?

  • Are there more configurations required on the VMKernel for the 10.10.10.0 subnet?

  • The Exchange server in particular is critical, because it is still physical (not yet migrated into vSphere) but its databases reside on the filer (10.10.10.x) and are mounted via iSCSI raw device mapping.

Maybe I’m not deep enough into the material yet, but from my perspective this isn’t clearly stated in the documents.

 

Please give me some advice.

 

Kind Regards,

Michael

