Update: Forgot to include the specs on our FlexPods. Have included those at the bottom of the post.
Something awesome happened this week – the hardware on which we’ll be running my virtual desktop project and more has arrived!
Here are a couple of pics of one of the pair of FlexPods we purchased a couple months ago. This unit, in two racks, is in the data center in my building, while its twin is located at a data center a few miles away. One of the pics shows both racks, while the other is a close-up crop of just the Cisco UCS compute section.
I’m really excited about being able to deploy these units, so I thought I’d explain why we decided to go with an integrated stack.
Until now, it would be fair to say we’ve been a Dell server shop. Sure, we have a mix of everything, from some IBM AIX systems to a dwindling number of Sun boxes, and even an AS400 slated to be turned off next year. But the bulk of our infrastructure runs on Dell servers – racks and racks of Dell pizza box servers.
Early on, when I was only looking for hardware on which to run my VDI project, our Citrix reseller pitched us on the Cisco UCS system, and I was intrigued by the concept, but doubtful that I could get buy-in from our executive leadership to move away from Dell to what seemed to be a fairly new product line. I’ll admit I also had the thought, “what’s a network company doing making servers?” So for a while, I set thoughts about Cisco UCS aside, believing that when we eventually bought hardware, it would be large numbers of Dell servers.
Once we started seriously looking at the type and amount of storage I would need, in addition to the bulk storage we wanted to deploy for user home areas for everyone on campus, we began talking with all of the major storage vendors. I believe it was at this time that our CTO came to us to talk about something he’d heard about – an integrated stack of server blades, storage, and networking. He was pretty keen on the concept, and soon our CIO and others in leadership were as well. So forget having to sell our C-level folks on the integrated stack idea – they worked on selling it to me and my fellow admins.
I believe the main thing that appealed to our CIO & CTO about the integrated stack was the thought that it would facilitate a transition to offering our IT services in the form of a more standard service catalog, and would also make the future move to a public cloud like Amazon’s EC2 simpler.
Once we settled on the scope for the combined project – support for my large virtual desktop rollout as well as capacity for an additional 900 virtual servers for everything else – the integrated stack concept started to make more and more sense. Our goal was to buy a quantified unit of computing power & storage now, and expand in a predictable manner as both my virtual desktop and our general virtualization needs grow.
I’m excited to be working with the FlexPods, and I continue to be delighted by the sheer geek factor of the Cisco UCS platform. I know some might ask why we elected to go with NetApp’s FlexPod instead of the VCE Vblock, and I’ll address that in a future post.
Update: Thanks for asking about the specs, Shaun. Here they are; keep in mind this is per FlexPod, and we have two.
NetApp FAS6210 controller with 144 15k SAS drives, 600 GB each
Cisco UCS blades:
4 B230 M1 blades with dual Intel 6550 CPUs, 128 gigs of RAM
8 B250 M2 blades with dual Intel 5670 CPUs, 192 gigs of RAM
14 B200 M2 blades with dual Intel 5670 CPUs, 48 gigs of RAM*
* except for 1 of the B200 M2’s, which has 96 gigs of RAM
Update: I was reviewing this post to do some quick math about the total RAM and CPU/core capacity on our FlexPods and realized I’d mistakenly stated we had 6 B250 M2 blades. We actually have 8. If my math is correct, we’re rocking 2.8 TB of RAM per FlexPod, 52 physical CPUs, and 312 cores. Holy cow.
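For anyone who wants to check my arithmetic, here’s a quick Python sketch that tallies the per-FlexPod totals from the blade list above. One assumption to flag: the X5670 is a six-core part, and the 312-core total implies the B230s’ 6550 CPUs were counted as six-core too, so I’ve used six cores per socket across the board.

```python
# Per-FlexPod blade inventory from the spec list above.
# Tuples are (blade count, RAM in GB per blade, cores per socket).
# Core counts are an assumption: six per socket everywhere, which is
# what the post's 312-core total works out to.
blades = [
    (4, 128, 6),   # B230 M1, dual Intel 6550
    (8, 192, 6),   # B250 M2, dual Intel 5670
    (13, 48, 6),   # B200 M2, dual Intel 5670
    (1, 96, 6),    # the one B200 M2 bumped to 96 GB
]

SOCKETS_PER_BLADE = 2  # all of these are dual-socket blades

total_cpus = sum(n * SOCKETS_PER_BLADE for n, _, _ in blades)
total_cores = sum(n * SOCKETS_PER_BLADE * cores for n, _, cores in blades)
total_ram_gb = sum(n * ram for n, ram, _ in blades)

print(total_cpus, total_cores, total_ram_gb)
# 52 CPUs, 312 cores, 2768 GB of RAM
```

2,768 GB rounds to the 2.8 TB figure above – and remember that’s per FlexPod, so double everything for the pair.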