
Chris Uttenweiler over at DLT doesn’t care for all this talk of cloud lock-in. He thinks it’s an inflated problem, blown out of proportion by “third-tier” providers and “self-titled” consultants. This isn’t surprising, since DLT resells Amazon Web Services[1], which is the lock-in poster child.
He starts with a strawman:
> …we have another FUD top-ten hit ripping through the charts: data portability & cloud lock-in.
>
> Simply put, data portability is about how you get your data in and out of the cloud. Cloud lock-in is the fear that you will become so dependent on specialized cloud services that you cannot leave the service. According to some CEO/CTO’s, we did not have these problems back in the day when the only choices were co-location and managed services, these issues are only respective with cloud.
I have not met these CEO/CTOs, and I suspect they don’t exist.
Uttenweiler has misrepresented the situation. It’s not that cloud services create a new lock-in problem; it’s that they hold the promise of mitigating an old one. NIST acknowledged this specifically in its cloud computing roadmap. NIST believes, and I agree, that by encouraging the use of standards we can make the market for cloud services more liquid, reducing costs and improving quality for customers. It’s simple economics.
He’s right that data has mass: it has always been hard to change platforms or providers, and to move your data from one facility to another. His solution, though, is to blame the victim:
> Make the right decisions about your needs and goals and you can mitigate this pain. However, that doesn’t mean you should hide from services that could help you run your IT infrastructure more efficiently and economically, as many sensationalist authors and third-tier CSP’s would have you do.
Actually, hiding from services that encourage lock-in is exactly what it means, and there’s nothing sensational about it. If you are using a service that employs open standards and (even better) open source software, you stand a much better chance of reducing the friction of a change in providers. It’s not a guarantee, but it’s a drastic improvement over a closed, proprietary alternative because it increases the chances that you’ll have an alternative in the first place. If you rely on VMware or Google App Engine, you are probably going to have a harder time switching providers than you might with OpenStack or OpenShift.
The way I explain this problem is in terms of entry and exit costs. Too many folks interested in cloud services focus on the entry costs: per-hour server rates and so on. The Federal acquisition rules encourage this behavior by providing thousands of pages of instruction on how to consume a service, and almost no guidance on how to leave one. Products like Amazon S3 are priced accordingly: it’s super-cheap to put data in, and it costs orders of magnitude more to take it out.
If you put equal weight on the entry and exit costs of a particular good or service, you’ll see that the efficient and economic services Uttenweiler is promoting may not, in fact, be as efficient or economic as you had hoped.
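To make the arithmetic concrete, here’s a minimal back-of-the-envelope sketch. All of the numbers are hypothetical round figures I’ve made up for illustration, not actual rates from Amazon or any other provider; the point is the shape of the math, not the prices.

```python
# Hypothetical, round-number prices for illustration only -- not real
# rates from any provider.

DATA_TB = 50                   # total data stored, in terabytes
INGRESS_PER_TB = 0.0           # entry: providers often charge nothing to upload
STORAGE_PER_TB_MONTH = 25.0    # recurring storage cost per TB per month
EGRESS_PER_TB = 90.0           # exit: bandwidth charges to pull your data back out
MONTHS = 36                    # length of the engagement

entry_cost = DATA_TB * INGRESS_PER_TB
storage_cost = DATA_TB * STORAGE_PER_TB_MONTH * MONTHS
exit_cost = DATA_TB * EGRESS_PER_TB

print(f"Entry:   ${entry_cost:,.0f}")
print(f"Storage: ${storage_cost:,.0f} over {MONTHS} months")
print(f"Exit:    ${exit_cost:,.0f} -- the price of leaving")
```

Run the numbers this way and the exit cost, which never appears on the per-hour rate card, shows up as a real line item in the total cost of the service.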
So I don’t agree that encouraging customers to think about lock-in is “FUD”. I call it good business sense.
[1] DLT is also a great partner of Red Hat’s. Seriously, they’re good people and have been with us since the beginning.