Hi Everyone,
Let me preface this post by saying that I have done a lot of installs with 1Gb networking, many of them with ESX and Hyper-V. Traditionally, the iSCSI network has always been on separate switches from the LAN/VM traffic.
With 10Gb switches, I am looking for some general feedback on using 2 x Force10, 2 x Catalyst 5000, or 2 x ProCurve 5900s and separating the traffic via VLANs. I see no reason why this would not work, outside of concerns around backplane throughput, overall switch processing power, and port buffering.
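For clarity, the separation I have in mind is just a dedicated iSCSI VLAN alongside the LAN/VM VLAN on the same pair of switches. A minimal IOS-style sketch, assuming hypothetical VLAN IDs (10 for LAN/VM, 20 for iSCSI) and port numbers:

```
! Hypothetical example - VLAN IDs, names, and interfaces are placeholders
vlan 10
 name LAN-VM
vlan 20
 name iSCSI

! Trunk carrying both VLANs down to a host's converged 10Gb NIC
interface TenGigabitEthernet0/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20

! Jumbo frames for the iSCSI path (exact command varies by platform;
! this is the Catalyst-style global form, and it requires a reload)
system mtu jumbo 9000
```

The hosts would then tag iSCSI traffic onto VLAN 20, so the two traffic types never mix at layer 2 even though they share the physical switch and uplinks.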
Has anyone implemented such a solution outside of core switches, with good success? Any reason you can think of not to, other than possible performance?
We are looking to standardize on 10Gb for this project. We need more than 24 ports per switch, require redundancy, and would prefer to keep costs low while still meeting performance expectations. Performance demands are not through the roof, and in monitoring the set of 4 x Catalyst 3750Gs this workload is coming from, I do not see any kind of immediate performance concern.
Thanks
Sharing a 10Gb Switch With iSCSI / Network on Different VLANs