One of the things I inherited when I took on the current role I’m in at the University of Tennessee Institute of Agriculture was management of our video conferencing infrastructure. We’re a state-wide organization in every sense of the word – we have employees in all 95 counties in TN, in 10 Research and Education Centers, three regional offices, as well as a large presence on the Knoxville campus. Remote meetings of one kind or another, be they video conferences or conference calls, are critical to getting the business of the Institute done, and I appreciate that more today, a little over a year after taking this position, than I did on day one.
The Situation: Aging Legacy H.323 Infrastructure
What I was handed when I walked through the door was a legacy H.323 infrastructure made up entirely of Polycom equipment, including a bridge, a gatekeeper, a “video firewall”, and about a dozen room units in various locations on the Knoxville campus and around the state. While there was a software client available for use with this infrastructure, it was a version behind Polycom’s current product last year, and two versions behind this year, with end-of-life approaching fast. It was also a terrible product – buggy, dependent on Adobe Air, and requiring separate, manually created accounts for every user.
The Problem: Increasing Demand, Disappointed Users
To top it off, our H.323 infrastructure was only capable of supporting six simultaneous connections of good video quality. So at any time, a total of six connections could be made to our bridge, be they room units or software clients, across any number of meetings. If there was a need for more, our users had two options – reschedule their meeting for a different time or pay $40 per hour per connection to Knoxville campus IT to bridge the call. On more than one occasion, we simply had to tell our users we couldn’t meet their needs, and that really bothered me.
One of the first projects our CIO tapped me for was overhauling our video conferencing infrastructure to meet the needs of the Institute. So I put together a plan to address our growing needs, which included more than just bridging meetings between a dozen room units. We needed this new infrastructure to do the following:
- Allow our users to join meetings from anywhere in a reliable way.
- Allow our users to join meetings from more than just their PCs or Macs – tablet and smartphone clients were a must.
- Integrate with our Active Directory.
- Allow for unauthenticated guest access for the external folks our employees work with every day. Given that our Institute includes UT Extension, AgResearch, and 4-H, that potentially means interfacing with every citizen in the state.
- Allow for more simultaneous connections and meetings.
- Be as self-service as possible. My goal was to remove IT from this process and make hosting and participating in video conferences as easy as making a phone call.
Running On-Premises H.323 Infrastructure: Hope You Have Deep Pockets
I took a good hard look at what we had and researched what it would take to build it out and make it better. I located the purchase orders for our Polycom servers – just the equipment needed to provide the backbone to which all of the other Polycom room units and software clients would connect. It was staggeringly expensive: six figures up front, with an annual maintenance cost in the five figures – and that’s not including the cost of the endpoints and their maintenance. I knew I wouldn’t have that much money to spend on this project, but I had also been told Polycom was good about preserving the capital value of its equipment over time, making it possible for customers to upgrade or replace blades or cards without ripping out the entire system.
Polycom’s current generation of hardware and software could meet all of the needs I’d detailed for our overhauled infrastructure. In fact, the Knoxville campus central IT organization (the folks our users had to pay to host video conferences when our system was full) was in the process of deploying a new Polycom system to replace its own aging Tandberg system. I sat in on the implementation of that system and it was capable of meeting the technical requirements I’d established.
So I began working with our Polycom reseller to determine what it would cost to either bring that functionality to our infrastructure, or to combine our systems with that of the main campus. My hope was to get out of the video conferencing business to the extent possible. UTK has an entire team of people dedicated to the service; for us, it was just one of the many hats I wore in my position. It was clear that the UTK system could provide a front end to our bridge, and if I could demonstrate that we would not only improve functionality but also reduce the workload on our side, I knew I’d get more support for whatever capital expenditure might be required.
Only it didn’t work out that way. While it was true that we would be able to re-use some of our existing server equipment, essentially by adding new cards to it, the cards and their licensing would have cost nearly as much as the original equipment itself. Even simply buying the cards to add to UTK’s infrastructure would have cost six figures, and upgrading our own would have cost tens of thousands of dollars more. And here’s the kicker – that pricing was based on absolutely no increase in capacity whatsoever. Six figures to continue to support six simultaneous connections (not meetings).
This was so completely outside the realm of possibility, both from a budget standpoint and a logic standpoint, that I ran the numbers on what it would cost to simply host all of our meetings at the $40 per hour per connection rate, and for the cost of upgrading our system we would have been able to fund the entire thing “as a service” for 5 years. That got me thinking – if we would consider simply converting to “video conferencing as a service” – what were my other options?
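The break-even reasoning here is simple enough to sketch. The figures below are hypothetical placeholders (I’ve only said “six figures” for the upgrade and “five figures” for annual maintenance), but they show the shape of the math: money not spent on an upgrade, plus maintenance avoided each year, buys years of pay-per-use bridging at the campus rate.

```python
# Back-of-envelope cost comparison. All dollar amounts are hypothetical
# placeholders, not our actual budget figures.

HOURLY_RATE = 40             # campus IT's rate: $ per hour per connection
UPGRADE_COST = 150_000       # hypothetical six-figure one-time upgrade
ANNUAL_MAINTENANCE = 15_000  # hypothetical five-figure yearly maintenance

# Hypothetical usage: ~20 bridged connection-hours per week, 50 weeks/year
annual_service_cost = HOURLY_RATE * 20 * 50  # $40,000/year

def years_funded(upgrade, maintenance, service_per_year):
    """How many full years of pay-per-use service the upgrade money would
    cover, counting the maintenance we'd also stop paying each year."""
    remaining = upgrade
    years = 0
    while remaining >= service_per_year:
        remaining += maintenance       # money we'd have spent anyway
        remaining -= service_per_year  # spent on the service instead
        years += 1
    return years

print(years_funded(UPGRADE_COST, ANNUAL_MAINTENANCE, annual_service_cost))
# With these placeholder numbers, roughly 5 years of service
```

Under these assumed figures the upgrade money funds about five years of “as a service” bridging, which matches the conclusion that pushed me toward looking at service options.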
This ended up being pretty long, so I’m breaking it into separate posts. You can find part II here.