Virtualization: We've Only Scratched the Surface.
In my role at Park Place International, I am fortunate to meet frequently with customers to discuss real-world healthcare IT problems. We help them create phased technology migration and adoption plans, or roadmaps, some simple and some not so simple.
When members of the Park Place team attend partner technical conferences for technology updates, training, and certifications, we usually get a heavy dose of the partner's technology vision or roadmap. One of the useful things we do here at Park Place is connect customer needs constructively to those emerging partner roadmaps. Six of our team members, including me, recently returned from EMC World 2013. (PPI team members will participate in similar forums across our entire partner ecosystem over the next several months.) Floating above the buzz of deep-dive sessions on particular technologies and new product roadmaps was a broader theme: taking virtualization to the next level.
What does that mean? By some estimates, 80% of the servers running critical production applications in enterprises large and small are now virtual. We are so used to that in the IT business that we sometimes forget the fundamentals of virtualization: those workloads are essentially liquid, manageable, and transportable across most of our technology infrastructure. By comparison, the other two fundamental building blocks of technology infrastructure, storage and networking, are far less "virtualized" in general. Network routes, identities, and credentials don't quite "flow" through the ether as effortlessly as a virtual server guest workload in the throes of a vMotion. Storage, while effectively virtualized at the system and disk-pool level for several years now, still lacks the fundamental liquidity and transportability available to server workloads.
Needless to say, there is business opportunity in deepening the virtualization of the storage and networking components of technical architecture, and the industry is pursuing it with unbridled zeal. Advancements in these areas could solve fundamental, real-world problems that matter to hospitals, like transporting an entire data center and all its workloads seamlessly from Point A to Point B in the event of a technical, regional, or local disaster. After years of waiting, and sometimes spending vast sums on technologies that only get us part of the way, we are just that close. Deeper virtualization also holds out the promise of solving less dramatic but no less pressing issues, like handling the influx of personally owned touch devices into what needs to be a secure healthcare private cloud. Another promise, and a personal favorite of mine, is the realization of the hybrid cloud.
To me, hybrid cloud isn't "you do this and I do that and we're all securely connected over the Internet," as it exists today; it's more "sharing, specialization, and resource pooling for resiliency." For example, in a typical hybrid cloud scenario today, your internal private cloud manages critical production resources, while a healthcare service provider cloud like our OpSus provides only discrete services, such as disaster recovery, archiving, or MEDITECH Infrastructure-as-a-Service. In the not-too-distant future, deeper storage and network virtualization, along with continuing improvements in server virtualization, may let you balance your IT needs between your internal systems and a service provider's systems as easily as you fade the audio mix in your car from left to right or front to back. It will feel as if your private cloud and the service provider's private cloud act as a single, secure hybrid entity, dynamically sharing your IT service workloads. The analogy does little justice to the complexity of the software that makes it possible, but it does properly envision the end goal: IT as a resilient, predictable, and liquid resource that can be managed like a utility.
While we await this approaching nirvana, we should be careful not to forget fundamentals. I never thought kids should get a calculator in school until they could do a certain amount of mental math. Likewise, I'm not sure IT organizations should virtualize an asset behind layers of automated management and provisioning until they understand the nature of that asset thoroughly. A big chunk of what our Technical Consultants find "broken" or underperforming in the field, particularly around storage and virtualization, is not the technology itself but the design, configuration, and routine maintenance, update, and upgrade of that technology. So, while I applaud what will become possible when we "abstract" our network switches, routers, and storage arrays behind increasingly sophisticated network and storage hypervisors, let's not forget that this abstraction layer is at the mercy of simple things like keeping device firmware up to date. As Scotty famously dismissed the Excelsior's transwarp drive in his marvelous Scottish brogue in Star Trek III: The Search for Spock: "The more they overhaul the plumbing, the easier it is to stop up the drain."
I hope to see many of our readers at International MUSE in Washington, DC later this month.