Continuing from my prior post on iSCSI switches … this time I found some great info searching through the EqualLogic technical whitepapers.
iSCSI traffic is just standard TCP/IP, so there is nothing really special about the packets or
protocol. What is different about iSCSI traffic is the amount of it (usually there's a lot) and
the fact that it must be delivered with minimum delay (latency).
Most network traffic from desktops or small servers is low volume and intermittent, and
most switches are perfectly adequate to handle traffic like this. However, iSCSI tends to
generate lots of traffic, at high speeds, for long periods; for example, it is not unusual for an
EqualLogic array to be receiving and/or sending four simultaneous unbroken streams of
network packets, running at full wire speed (1 Gbps) for minutes or hours at a time. Many
network devices are simply not up to the challenge of delivering that volume of traffic
without having to discard packets.
Manufacturers generally rate their switches according to a category of use. Common terms
such as "workgroup", "wiring closet", "SOHO" and "unmanaged" are generally associated
with switches that may not be adequate for high-traffic networks such as iSCSI.
"Enterprise", "Layer 3" or "L3", "managed", and "datacenter/server farm" are associated
with higher-performance switches that may be acceptable for high-traffic applications.
We have found that the following are requirements for a network switch that will be used
in a high-traffic environment such as iSCSI:
1 - The switch should be able to process and forward a continuous stream of data at full
wire speed (1 Gbps) on all ports simultaneously. Many switches will not in fact do this
even though their advertising copy implies that they can.
Some switches are designed so that multiple Ethernet ports (usually 4 or 8) are assigned to
a "port group", and the amount of bandwidth that can be handled by that port group is
limited. This means that each port can handle the full 1 Gbps when used by itself, but as
soon as you start using two ports in that port group, you find that each one will only move
data at half of that speed (about 500 Mbps), and if you use four ports, you will only see one-fourth
of that speed (about 250 Mbps) on each port. This will give unacceptably poor performance in
high-traffic applications such as iSCSI.
In some switches, the internal data channels are not fast enough to handle the full flow of
data. The internal backplane or "fabric" should be rated for at least 1 Gbps per port. So, on
a 24-port switch, the internal backplane should be designed to handle 24 Gbps or more.
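If you want to sanity-check a spec sheet yourself, the math is simple enough to script. Here's a quick Python sketch of both checks; the 4-port group sharing a single 1 Gbps path is a made-up example, not the spec of any particular switch:

```python
# Back-of-envelope checks for requirement #1. The port-group figures
# below are examples for illustration, not real switch specs.

def per_port_bandwidth_mbps(group_bandwidth_mbps, active_ports):
    """Effective bandwidth per port when a port group shares one path
    into the switch fabric."""
    return group_bandwidth_mbps / active_ports

def min_backplane_gbps(port_count, port_speed_gbps=1):
    """Minimum fabric rating: full wire speed on every port at once."""
    return port_count * port_speed_gbps

# A hypothetical 4-port group sharing a single 1 Gbps path:
for active in (1, 2, 4):
    mbps = per_port_bandwidth_mbps(1000, active)
    print(f"{active} active port(s): {mbps:.0f} Mbps each")
# 1 -> 1000 Mbps, 2 -> 500 Mbps, 4 -> 250 Mbps

print(f"24-port switch needs >= {min_backplane_gbps(24)} Gbps of backplane")
```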
2 - The switch must, at a minimum, support "receive flow control", meaning that if it is sending
traffic to another device, and that device sends a "pause frame" back to the switch to tell it
to stop sending, the switch must stop sending traffic until the receiving device tells the
switch to proceed. This is necessary to prevent the receiving device from having to discard
packets because it is getting overwhelmed.
Some switches also support "send flow control"; that is, if the switch decides that it is being
overwhelmed by some sender, it can send "pause frames" to the sender to ask it to
momentarily stop sending traffic. This is a benefit if available but is not required.
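For a sense of scale on what flow control buys you: an 802.3x pause frame carries a 16-bit pause_time counted in quanta of 512 bit times, and at 1 Gbps a bit time is 1 ns. A quick Python sketch of the numbers:

```python
# Rough 802.3x math at gigabit speed.

LINK_BPS = 1_000_000_000   # 1 Gbps wire speed
QUANTUM_BITS = 512         # one pause quantum = 512 bit times

def pause_duration_s(pause_time_quanta):
    """How long the sender must stay quiet for a given pause_time."""
    return pause_time_quanta * QUANTUM_BITS / LINK_BPS

def deferred_data_kb(seconds):
    """Traffic that would have arrived in that window at full wire speed."""
    return LINK_BPS * seconds / 8 / 1024

longest = pause_duration_s(0xFFFF)   # largest possible pause_time
print(f"max pause: {longest * 1000:.1f} ms")                 # ~33.6 ms
print(f"data deferred: {deferred_data_kb(longest):.0f} KB")  # ~4096 KB
```

In other words, a single maximum-length pause defers about 4 MB of traffic that the receiver would otherwise have had to buffer or drop.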
3 - The switch must have adequate, dedicated buffer space *per port* to allow it to buffer
bursts of packets that it receives but momentarily cannot forward, or to buffer packets that
it is holding on behalf of a receiver that has used flow control to momentarily pause
transmission. We have found that a minimum of 256 KB *per port* is a desirable value.
Note that many switches have a buffer allocation *per port group* or *for the entire
switch*, which means that the entire group of ports (4 or 8), or every port on the switch,
shares the same buffer space. If this amount of buffer space, divided by the number of
ports in the port group or on the entire switch, does not give at least 256 KB per port, then
the switch will likely be a poor performer in a high-traffic application like iSCSI.
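Again, trivial math, but worth doing before you buy. A sketch with made-up switch figures:

```python
# Requirement #3 arithmetic: divide the shared buffer by the ports that
# share it and compare against the 256 KB-per-port guideline.
# The switch figures below are made up for illustration.

MIN_PER_PORT_KB = 256

def per_port_buffer_kb(shared_buffer_kb, ports_sharing):
    return shared_buffer_kb / ports_sharing

def buffer_adequate(shared_buffer_kb, ports_sharing):
    return per_port_buffer_kb(shared_buffer_kb, ports_sharing) >= MIN_PER_PORT_KB

print(buffer_adequate(2048, 8))   # 2 MB shared by an 8-port group -> 256 KB each -> True
print(buffer_adequate(4096, 24))  # 4 MB for a whole 24-port switch -> ~171 KB -> False
```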
4 - If you are using your switch to route iSCSI traffic, you must ensure that it has adequate
speed and throughput to route the packets at wire speed with minimal delay (latency).
Many switches that advertise L3 routing capabilities cannot in fact perform the task
adequately for an iSCSI network.
So what switches are good? Great question! In the past I heard an EqualLogic engineer rate switches roughly like this:
Dell 54XX - good
Dell 62XX - best
HP 2800 - bad
HP 2900 - ok
HP 3500 - better
How exactly he arrived at that list I don’t know and haven’t investigated, so you’ll have to do your own homework ;-)
We’ve been using a pair of Cisco 24-port 2960G-24TC switches for our iSCSI network since installing our first EqualLogic array in July 2006. We were recommended this model by our EqualLogic SE when we purchased the array.
I was curious whether they met the above recommendations, so I went googling…
* 32 Gbps switching fabric and 32 Gbps forwarding bandwidth… looks like that satisfies #1
* We’re definitely using flow control… but nothing I could find states whether it supports both directions
* 64 MB DRAM & 32 MB flash… again, nothing I could find specifically about per-port buffers, but I did find this in a forum post: “On 2960 switches each ASIC has shared buffers (ingress of 192KB and egress of 384KB).” Not so helpful :-) (quick math on that below)
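For what it's worth, here's the earlier math applied to those numbers. The fabric check is straightforward; the buffer check isn't, because nothing says how many ports share one ASIC's buffers, so the ports-per-ASIC value below is purely a guess:

```python
# Checking the 2960G-24TC numbers quoted above against the earlier rules.

FABRIC_GBPS = 32
PORTS = 24  # gigabit ports

# Rule #1: backplane of at least 1 Gbps per port.
print(FABRIC_GBPS >= PORTS * 1)   # True: 32 Gbps covers 24 Gbps

# Rule #3: the forum post gives shared ASIC buffers but not how many
# ports share them, so this ports-per-ASIC value is a GUESS, not a spec.
ingress_kb, egress_kb = 192, 384
ports_per_asic = 8                # hypothetical
print((ingress_kb + egress_kb) / ports_per_asic)   # 72.0 KB per port, if so
```

With that guess the per-port number would land well under the 256 KB guideline, but without the real ASIC layout it doesn't prove anything either way.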
Although we’ve not done any performance testing, we’ve not noticed any speed issues with these switches in almost 4 years. If we were looking for iSCSI switches today, we’d look at HP since we’re using HP switches for everything else.
There ya go. Hope that’s helpful. Maybe some switching gurus will chime in with additional info. Of course this will all change when 10Gig becomes cheap enough to be mainstream… which I hope is very soon :-)
[ FYI - if you're investigating EqualLogic Storage I'd love to chat with you and provide quotes. I've been moonlighting for a Dell Premier Partner since 2008 doing EqualLogic sales and implementation with customers large and small all across the country. And yes, we beat Dell direct pricing :-) Shoot an email to jason.powell at vr6systems dot com - thanks! ]