Todd and I did an ITProGuru vs. show together, and as we were prepping for it, Todd had a bunch of really insightful ideas. I asked him to put together his thoughts so I could share them with my audience. Here are those thoughts. This is a must-read… In the words of NavisLearning.com's Todd Cioffi:
Recently, I was asked by Dan Stolts, the “ITProGuru”, to be the inaugural guest for his new online video series “ITProGuru vs.” That discussion can be seen here.
After we spoke, Dan offered me the opportunity to post for his audience of IT professionals about any points that I would want to add, or to expand on what we covered.
Rather than letting the ITProGuru fans have all the fun, I split the post so that Navis readers could see some of the points as well.
To catch the rest, check out Dan’s blog, ITProGuru.com.
Now, today’s topics:
1: Planning for Failure
2: Normalizing Expectations
3: Bits v. Business
1: Planning for Failure
The Cloud shifts how people think about their infrastructure.
At one time, the common service model for hardware was based on buying the most bullet-proof equipment you could afford and planning for it to always work. However, experience (and common sense) has shown that just about everything fails eventually.
It makes more sense, then, to expect inevitable failure and work to minimize its impact. The flexibility of virtual environments allows engineers to focus less on the most expensive hardware and instead spend resources on planning to handle failure gracefully and seamlessly.
Remember that when RAID first came on the scene in the late 1980s, it stood for Redundant Array of Inexpensive Disks. (It was marketing that changed the “I” to “Independent”.) RAID said “Don’t buy one expensive thing that you can’t afford to have break. Buy a bunch of cheap things that back each other up and then replace pieces when they fail.” IT has come a long way in the last three decades – yet much of it is still that simple, just refined.
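The "cheap things that back each other up" idea is easy to see in miniature. Here is a toy sketch of RAID-style XOR parity; the byte strings and function are invented for illustration (a real controller works on disk stripes, not Python bytes), but the math is the same: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors.

```python
def parity(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

disks = [b"AAAA", b"BBBB", b"CCCC"]   # three cheap "disks"
p = parity(disks)                     # parity stored on a fourth disk

# "Disk" 1 fails; rebuild its contents from the survivors plus parity.
rebuilt = parity([disks[0], disks[2], p])
assert rebuilt == disks[1]            # b"BBBB" recovered
```

One expensive, never-fails disk is replaced by several cheap ones plus a little math; when a piece fails, you swap it and rebuild.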
This approach is fundamental to Cloud services. According to NIST, any Cloud resource needs to be, in brief: requestable, available, shareable, scalable, and measurable. All of those attributes involve uncertainty. With the Cloud, you don't know when, for whom, from where, with whom, or for how many, but you still have to be able to run a meter to track utilization. With that many moving parts, you have to plan for something not being around when you're looking for it, and then be able to shrug it off and find it elsewhere.
If you want a great example of planning for routine failure in the Cloud, go do a web search for Netflix’s “Chaos Monkey”. They constantly break their stuff on purpose, just to make sure they don’t care.
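That "shrug it off and find it elsewhere" mindset fits in a few lines of code. The node names and fetch function below are hypothetical stand-ins, just to illustrate the pattern of treating any single failure as routine and moving on to the next replica:

```python
import random

# Hypothetical pool of interchangeable replicas; in a real Cloud
# deployment these would be discovered dynamically, not hard-coded.
REPLICAS = ["node-a", "node-b", "node-c"]

def fetch_from(node, key):
    """Stand-in for a network call that can fail at any time."""
    if random.random() < 0.3:  # simulate a ~30% failure rate
        raise ConnectionError(f"{node} is unavailable")
    return f"{key}@{node}"

def resilient_fetch(key, replicas=REPLICAS):
    """Try each replica in shuffled order; shrug off failures."""
    nodes = list(replicas)
    random.shuffle(nodes)  # spread load instead of hammering one node
    for node in nodes:
        try:
            return fetch_from(node, key)
        except ConnectionError:
            continue  # planned-for failure: move on to the next replica
    raise RuntimeError("all replicas failed; time to page someone")
```

No single node is trusted to always work; the design only cares that *some* node answers, which is exactly the assumption Chaos Monkey exists to verify.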
—
2: Normalizing Expectations
When a successful new service enters the market, it goes through phases of normalization. These phases could be summarized in this way: pay to have, nice to have, expect to have, need to have.
Let’s take hotel WiFi as an example. For those who had gotten used to bringing their own data cable to jack into the Princess phone on the nightstand (itself an improvement over trudging down to the “Business Center”), WiFi was a Great Leap Forward.
In the early days of this offering, marketing could tout WiFi as a draw or frame it as a bonus for special clientele. Guests wouldn’t think twice about being charged a fee for the add-on privilege of connectivity in their room. Then it became a noted treat, a perk to differentiate establishments. Eventually guests came to assume that it would be there, even if they didn’t use it, like a lobby bar or a pool. Finally, they take the service for granted, like a wake-up call – only noticing when it isn’t there.
In a similar vein, in the span of roughly a decade, business websites went from their presence being a customer magnet to their absence being a customer repellent.
Cloud, too, will follow this premium-to-pedestrian trajectory; we just don’t know what all the market expectations will be yet.
We have already seen it become commonplace for people not only to have multiple devices, but also to use them simultaneously. This has begun shifting “normal” from being device-driven to access-driven. It’s becoming less about your hardware and more about your identity.
In the same way that end-users have shifted from boxed, physical-media software packages to instant web downloads, it could be that the entire model of “installing software” will fade as consumers grow more used to “accessing functionality/services” on demand.
It’s too soon to predict how, when, or even whether we’ll all trade in our stiff, heavy, breakable hardware for flexible, fiber-optic, touch fabrics, or HUD glasses and motion-sensing glove controllers, but it’s hard to imagine that they wouldn’t be reaching into the Cloud.
Between now and then? For IT Pros and businesses alike, that’s the fun part.
—
3: Bits v. Business
For those who followed our discussion, you may recall Dan and me talking about some confusion and misperceptions surrounding the differences between Cloud and Virtualization. The whole idea for sharing this discussion with you started when I approached Dan at an event to bounce around some ideas on this very topic.
The trainer in me keeps kicking this particular idea around, trying to summarize it in an easily graspable form. I think I may have found it, so I’ll beta-test it right here. Reduced to its simplest level:
Virtualization is about the bits. Cloud is about the business.
To expand on that concept:
Virtualization is about the back-end technology that makes the electrons dance. Cloud adds an economic and marketing layer that takes that capability and turns it into a profit center.
Virtualization is about enabling all of this functionality, whether infrastructure, platform or software. Cloud is about encapsulating some part of that same functionality into a brandable service.
It’s no accident that everything in the Cloud is billed “*aaS” – meaning “[something] as a Service”. Remember, the last NIST attribute required for a service to be “Cloud” is that you have to be able to monitor usage.
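That "measured service" requirement is simple to picture in code. Everything in this sketch is invented for illustration (the class, the rate, the tenant name); the point is only that every unit of consumption gets recorded so it can later show up on an invoice:

```python
from collections import defaultdict

class UsageMeter:
    """Toy metering layer: record consumption, then bill for it.
    Rates, units, and tenant names are made up for illustration."""

    def __init__(self, rate_per_unit):
        self.rate_per_unit = rate_per_unit   # e.g. dollars per GB-hour
        self.usage = defaultdict(float)      # tenant -> units consumed

    def record(self, tenant, units):
        """Called every time a tenant consumes the service."""
        self.usage[tenant] += units

    def bill(self, tenant):
        """Turn measured usage into a line on the monthly invoice."""
        return self.usage[tenant] * self.rate_per_unit

meter = UsageMeter(rate_per_unit=0.25)
meter.record("acme-corp", 100)   # 100 units this hour
meter.record("acme-corp", 40)    # 40 more later
print(meter.bill("acme-corp"))   # 140 units * $0.25 = 35.0
```

The Virtualization layer makes the units consumable; the metering layer is what turns them into a business.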
For that reason, if “utility” is heard when discussing Virtualization, it likely references usefulness and flexibility, like a “utility infielder” in baseball who can play multiple positions. “Utility” heard when discussing the Cloud is more likely about paying your “Cloud bill” as if it were a public utility like electricity or gas.
Ultimately, perhaps the biggest difference between the two is this:
Virtualization can stay mostly hidden from its consumers. Since it’s out of their control, there are plenty of end-users who don’t want to know anything about all of the shiny boxes with the blinking lights and the miles of cable – or the mystical creatures that manage them. Of course, those technical wizards are often perfectly content to stay isolated in their realm where no one pays attention to the men and women behind the curtain.
By its very definition, however, Cloud cannot stay isolated. It is a consumable, billable resource – ubiquitously available and on-demand. It is this ability to have what you want, when you want it, for only the time that you need it, that brings the near-term decision-making about technology utilization out of the sole realm of IT and delivers it to the market strategist, the project manager, the research scientist, the sole proprietor entrepreneur, … well, everybody.
That elevation of technology, and its exposure to a broader audience, means there is a growing demand not only for technologists who can build the Cloud, but also for technology-aware business process engineers who can explain things like how the Cloud impacts CapEx and OpEx and whether the business timing is right for a public, private, or hybrid cloud.
Of course, helping people build those skills is part of what we at Navis Learning do.
To find out more about building your Cloud skills – and getting certified to boot – contact us at technicaltraining@navislearning.com.