I'm very pleased to report that we had an uneventful Easter weekend in IT ... things worked as expected, although our credit card transactions are still laggy.
Today Ed and I spent about 5 hours talking about next steps for our network infrastructure. Beforehand I spent some time cleaning up my office and rearranging it a bit so we have more wall space for brainstorming ... time to get another 4x8 whiteboard for me crib :-)
A couple of people left comments on my prior posts asking about our network diagram ... here's the basic IDF diagram in all its whiteboard glory.
We have 5 IDFs (aka wiring closets/rooms) ... we don't count our server room. No, the "new IDF" is not really new anymore; it is the most recent, though, and is the future network core and server room. Even our Cat5e IDF ties are short enough that we can deliver gigabit to the desktop for everyone.
After chatting with Terry last week, and with Ed and I pondering this over the weekend, here's what our dream scenario would look like:
It includes putting a new HP 5406 + GBIC module in the new IDF, which then frees up managed Dell 5324 switches to displace the remaining unmanaged 2624s we have. We'd add 2 new fiber runs and end up with fiber home runs from each switch back to the 5406. The server rack gets 6 Cat6 direct runs to the 5406 (only 4 are currently needed). This would give us a fast, fully managed switch environment and the shortest path from clients to servers ... and the big key is finally moving us to routable VLANs. The problem is the price tag ... CHA-CHING!
So instead, for the immediate future, we'll put a 5406 in the new IDF, replace all the 2624s with 5324s, run the straight server shots, and start adding lots of VLANs. We'll work on the rest as time and budget allow.
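For the curious, here's a rough sketch of what routable VLANs might look like on the 5406 (ProCurve CLI from memory; the VLAN IDs, names, addresses, and ports below are made up for illustration, so don't treat this as our actual config):

    ProCurve(config)# ip routing
    ProCurve(config)# vlan 10
    ProCurve(vlan-10)# name "Servers"
    ProCurve(vlan-10)# untagged a1-a4
    ProCurve(vlan-10)# ip address 10.10.10.1 255.255.255.0
    ProCurve(vlan-10)# exit
    ProCurve(config)# vlan 20
    ProCurve(vlan-20)# name "Staff"
    ProCurve(vlan-20)# tagged b1
    ProCurve(vlan-20)# ip address 10.10.20.1 255.255.255.0

With ip routing turned on, the 5406 routes between the VLANs itself, so client-to-server traffic never has to leave the core.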
So now to adding more VLANs ... we currently have 2. We basically started with a clean slate and listed what devices or "unique" end users are attached to each IDF. Then we wrote down which devices/users should be in their own VLAN for management, monitoring, security, performance, etc. Finally we assigned IP ranges to each VLAN group based on a few criteria, staying with the 10.10.X.X scheme (because that's my favorite). By 5:30pm we had what we felt was a great draft to work from.
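To make that concrete, here's the kind of mapping we're talking about (these VLAN IDs, names, and subnets are made up for illustration, not our actual draft); matching the VLAN ID to the third octet keeps the plan readable at a glance:

    VLAN 10   Servers          10.10.10.0/24
    VLAN 20   Staff data       10.10.20.0/24
    VLAN 30   VoIP phones      10.10.30.0/24
    VLAN 40   Public WiFi      10.10.40.0/24
    VLAN 50   Building/HVAC    10.10.50.0/24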
On paper we're going from 2 to 18 VLANs, which will really carve up our traffic for much improved network performance and better monitoring/troubleshooting. Now it's time to get bids on the HP 5406 so we can start this work when we get back from the roundtable next week.
Jason,
Perhaps at the Roundtable, on Wednesday during ChMS time, a few of us could break and talk about VLANs. I need to look at that option, but have many holes in my knowledge on implementation. Maybe we could put our heads together and learn.
Chris
Posted by: Chris McGuffin | April 10, 2007 at 05:26 AM
Jason,
I would love to be in a discussion like Chris mentioned. That is definitely an area I need schooled in.
Mike
Posted by: Mike Mayfield | April 10, 2007 at 09:16 AM
Recently muddled my way through a similar plan. The biggest reason for bothering to VLAN was to isolate multicast network traffic. Multicast wasn't an issue until network load balancing and/or virtual machines. We used an HP 4208vl. Seems to have worked pretty well. A side benefit I didn't expect was the way that the smaller HP switches I use as satellites can piggyback the VLAN config from the 4208. Don't know if Dell switches can participate the same way when using an HP core; might want to check that out before choosing a mixed-vendor network. (Also note: I found myself way out of my depth on this stuff, so don't take what I say as gospel.)
Posted by: Dean | April 10, 2007 at 12:03 PM
I've been doing some VLAN work at both the church and the office over the past 6 months, and would love to share ideas.
Our networks aren't so big that we separate by physical location. We tend to stay pretty flat, but VLAN by function such as Data, VoIP, Public WiFi, Private WiFi, Facilities/Building, A/V Gear, etc.
Given the small amount of routing that occurs between the VLANs, we're using our Firebox X700e (office) or our SonicWall Pro 2040 Enhanced (church) as our Layer 3 device. In both cases, it works well. As we grow, I'm sure we'll get an L3 switch, since the Firebox/SonicWall will eventually run out of steam.
Given that your other switches are Dell, any particular reason you're not looking at the PowerConnect 6000 series?
Posted by: Bryan Johnson | April 11, 2007 at 10:33 AM
Greetings all from Bethlehem Baptist in Minneapolis...
We've got Dell switches that we installed two and three years ago for Bethlehem's network. Their VLAN implementation has been such a major pain that I'm planning to replace the whole lot of them with ProCurves or Ciscos. A buddy of mine who helps me work on network infrastructure knows VLANs and told me it's a lot easier on those other manufacturers' devices. I'd stay away from Dell if VLANs are part of your overall infrastructure plans. The devices have been fine otherwise, but VLANs are no fun on the PowerConnect at all.
Posted by: Andy Lang | April 13, 2007 at 04:05 PM
HP? Yuck! Why not go with Cisco or Foundry? Too much $?
Recently found your page and find it informative and entertaining. Keep up the good work.
Posted by: Justyn Fortenberry | July 09, 2007 at 11:38 AM