FlexPods have landed

Update: I forgot to include the specs on our FlexPods, so I've added them at the bottom of the post.

Something awesome happened this week – the hardware on which we’ll be running my virtual desktop project and more has arrived!

Here are a couple of pics of one of the pair of FlexPods we purchased a couple of months ago.  This unit, spanning two racks, is in the data center in my building, while its twin is located at a data center a few miles away.  One of the pics shows both racks, and the other is a close-up crop of just the Cisco UCS compute section.

I’m really excited about being able to deploy these units, so I thought I’d explain why we decided to go with an integrated stack.

Until now, it would be fair to say we’ve been a Dell server shop.  Sure, we have a mix of everything, from some IBM AIX systems to a dwindling number of Sun boxes, and even an AS400 slated to be turned off next year.  But the bulk of our infrastructure runs on Dell servers – racks and racks of Dell pizza box servers.

Early on, when I was only looking for hardware on which to run my VDI project, our Citrix reseller pitched us on the Cisco UCS system.  I was intrigued by the concept, but doubtful that I could get buy-in from our executive leadership to move away from Dell to what seemed to be a fairly new product line.  I’ll admit I also had the thought, “what’s a network company doing making servers?”  So for a while, I set thoughts about Cisco UCS aside, believing that when we eventually bought hardware, it would be large numbers of Dell servers.

Once we started seriously looking at the type and amount of storage I would need, in addition to the bulk storage we wanted to deploy for user home areas for everyone on campus, we began talking with all of the major storage vendors.  I believe it was at this time that our CTO came to us to talk about something he’d heard about – an integrated stack of server blades, storage, and networking.  He was pretty keen on the concept, and soon our CIO and others in leadership were as well.  So forget having to sell our C-level folks on the integrated stack idea – they worked on selling it to me and my fellow admins.

I believe the main thing that appealed to our CIO & CTO about the integrated stack was the thought that it would facilitate a transition to offering our IT services in the form of a more standard service catalog, and would also make the future move to a public cloud like Amazon’s EC2 simpler.

Once we settled on the scope for the combined project – support for my large virtual desktop rollout as well as capacity for an additional 900 virtual servers for everything else – the integrated stack concept started to make more and more sense.  Our goal was to buy a quantified unit of computing power & storage now, and expand in a predictable manner as both my virtual desktop and our general virtualization needs grow.

I’m excited to be working with the FlexPods, and I continue to be delighted by the sheer geek factor of the Cisco UCS platform.  I know some might ask why we elected to go with NetApp’s FlexPod instead of the VCE Vblock, and I’ll address that in a future post.

Update: Thanks for asking about the specs, Shaun.  Here they are; keep in mind this is per FlexPod, and we have two.


NetApp FAS6210 controller
144 15k SAS drives, 600 GB ea.

Cisco UCS blades:

4 B230 M1 blades with dual Intel Xeon X6550 CPUs, 128 gigs of RAM
8 B250 M2 blades with dual Intel Xeon X5670 CPUs, 192 gigs of RAM
14 B200 M2 blades with dual Intel Xeon X5670 CPUs, 48 gigs of RAM*
* except for 1 of the B200 M2’s, which has 96 gigs of RAM

Update 7/28/11

I was reviewing this post to do some quick math about the total RAM and CPU/core capacity on our FlexPods and realized I’d mistakenly stated we had 6 B250 M2 blades. We actually have 8.  If my math is correct, we’re rocking 2.8 TB of RAM per FlexPod, 52 physical CPUs, and 312 cores.  Holy cow.
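For anyone who wants to check the arithmetic themselves, the RAM, socket, and raw disk totals fall straight out of the spec list above.  Here's a quick back-of-the-envelope sketch (blade counts and per-blade figures are from the corrected spec list; this is just a sanity check, not anything official):

```python
# Per-FlexPod totals, computed from the spec list in the post.
blades = [
    # (count, sockets_per_blade, ram_gb_per_blade)
    (4,  2, 128),   # B230 M1 blades
    (8,  2, 192),   # B250 M2 blades (8, not 6!)
    (13, 2, 48),    # B200 M2 blades, standard config
    (1,  2, 96),    # the one B200 M2 with extra RAM
]

total_ram_gb = sum(count * ram for count, _, ram in blades)
total_cpus = sum(count * sockets for count, sockets, _ in blades)

print(f"RAM per FlexPod: {total_ram_gb} GB (~{total_ram_gb / 1000:.1f} TB)")
print(f"Physical CPUs per FlexPod: {total_cpus}")

# Raw disk capacity: 144 drives x 600 GB each
print(f"Raw disk per FlexPod: {144 * 600 / 1000:.1f} TB")
```

That works out to 2,768 GB of RAM (the ~2.8 TB quoted above) and 52 sockets per FlexPod, plus 86.4 TB of raw disk before any RAID or WAFL overhead.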


7 Responses to FlexPods have landed

  1. Shaun says:

So what are the specs?? Geek out with RAM, processing, storage, etc…

    I’ve wanted to look at this as well and it could be an option for us in our environment. However if cloud offerings were not part of your end result would you still have picked the FlexPod over separate storage, servers, and networking?

    • Mike Stanley says:

      Doh, serves me right for posting late at night. I’ll update the post in just a bit with specs.

      Would I have picked the FlexPod or any integrated stack over separate components if I were only shopping for hardware for my virtual desktop project? Yeah, I think I would. Lots of efficiencies and it fits with our overall technology strategy to integrate and consolidate when we can.

  2. Shaun Jones says:

    What are your thoughts around endpoints to tie into the flexpod?

    • Mike Stanley says:

Hmmm, I’m guessing you mean client devices? We’ve done limited testing on thin clients from Wyse, HP, and Dell. We’re going to do some more thorough testing of the Wyse Xenith “zero client” soon, and I see that being the most likely candidate for general purpose use. For folks who need multiple beefy monitors, we’re looking at the Wyse R10L, or possibly the Windows Embedded version of that.

      Aside from that, the first several hundred endpoints we’ll be connecting will be Dell SFF Optiplex desktops in our labs, streaming a XenDesktop image via Citrix PVS. Add to that a mix of Macs, laptops (PC & Mac) and everything from the iPhone & iPad to the new Evo3D two of my admin buddies just got this weekend.

  3. Pingback: My Take on the vSphere 5 Licensing Kerfuffle | Single Malt Cloud

  4. Pingback: Quick Geek Shout-out | Single Malt Cloud

  5. Pingback: Cisco Live 2012 – First Time Speaker’s Perspective part II | Single Malt Cloud

Comments are closed.