Today, we announced that Red Hat Enterprise Linux is shooting for its 14th Common Criteria certification. My job means I get excited about Common Criteria certifications, which also means I’m unpopular at dinner parties. This certification, though, has me more excited than usual, because it means much more than a rubber stamp from a certification body. With this certification, we’re including the SELinux security system and the KVM virtualization system. In short, it means being able to run many systems on one piece of hardware, and making sure that those systems can’t touch each other.
If you’ve seen me speak in the last few months, you’ve heard me talk about the modular design of open source projects. Staying modular allows new features to emerge from the combination of different, often unrelated components. This “small pieces, loosely coupled” approach also delivers new features faster, because improvements to one component don’t disrupt the whole system. I talk about the Linux community getting their “chocolate in their peanut butter, and their peanut butter in their chocolate,” which is more fun to say than “Linux creates an environment for emergent capabilities.”
Because the architecture permits this kind of innovation, we’re able to solve problems when we weren’t even trying. This combination of SELinux and KVM is a great example: we can now create sandboxes, so that if one computer is infiltrated, the attacker won’t immediately have access to everything on the same machine. This feature, which we call sVirt, is usually interesting only to security nerds. But I think it has the potential to solve some very pressing real-world problems, especially in the military. To illustrate what we’ve made possible, let’s go to Iraq.
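To make the isolation idea concrete, here is a toy Python sketch — not the actual sVirt implementation — of the mechanism sVirt borrows from SELinux Multi-Category Security: each guest is assigned a unique pair of category labels, its disk images get the same labels, and the kernel permits access only when the labels match. The function names here are illustrative, not real libvirt APIs.

```python
import random

def assign_mcs_label(used):
    """Pick a unique, unordered pair of MCS categories (0-1023) for a new
    guest, roughly the way sVirt labels each qemu-kvm process."""
    while True:
        pair = tuple(sorted(random.sample(range(1024), 2)))
        if pair not in used:
            used.add(pair)
            return pair

def may_access(process_cats, image_cats):
    """The MCS check, simplified: a guest process may touch a disk image
    only if their category labels match."""
    return process_cats == image_cats

used = set()
guest1 = assign_mcs_label(used)  # e.g. something like (123, 456)
guest2 = assign_mcs_label(used)  # guaranteed to be a different pair

print(may_access(guest1, guest1))  # True  - a guest reaching its own disk
print(may_access(guest1, guest2))  # False - a guest reaching a neighbor's disk
```

Even if an attacker fully compromises one guest, the compromised process still carries its own categories, so the host kernel refuses to let it open any other guest’s resources.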
The tactical elements in Iraq, like tanks, Humvees and unmanned aerial vehicles, run all kinds of systems in a variety of security enclaves. So the Blue Force Tracker has its own housing, the diagnostics have their own berth, and radios have a box of their own. These systems are, for the most part, physically isolated from each other. In part, this is for security. In part, it’s for modularity — I want to be able to take my radio out of the vehicle if it’s disabled and have it still work. It’s also because different companies built each system. For all these reasons, this is about as inefficient as it could be. The most precious resources in a tactical vehicle, like power and space, aren’t being shared.
This scarcity has a very unpleasant side-effect. It encourages people to make systems that are larger than necessary, so they can capture a “footprint” on the vehicle and thus seek rent on the space they’ve claimed, which makes the problem worse. The shortage of space gets downright tragic when you learn that any individual piece of software is likely using less than 15% of the computing power available, which means that 85% of the power that could be running a computer is instead being burned away as heat.
So power, space, weight, and cooling are inefficient zero-sum competitions on tactical platforms, as long as this scarcity is in place.
For the warfighter, the heat generated by these Balkanized systems, combined with the heat outside the vehicle, means the regular air conditioning is useless. Between running the air conditioners, the computers, and everything else, you can imagine the kind of mileage these vehicles get.
The scarcity also means program managers are not at liberty to adopt the best applications for a particular mission. If a new capability is needed, physical hardware has to be installed. The PM has to account for power, weight, and cooling requirements to add what could be a weightless piece of software. Hardware makes new systems more expensive to procure and slower to deploy.
Integrators who are eager to provide a good idea or better system to the military have to compete with other integrators for space and power on the platform. This friction could prevent the best and most useful software from reaching the field — the integrators should be competing on their capabilities and cost, not real estate.
It’s sad, but this broken market isn’t remarkable, it’s just a grim reality. Having separate boxes works nicely with the acquisition process, and program managers can easily manage each box from each contractor. The physical separation, or “air-gapping”, crudely addresses security concerns by preventing one system from leaking information to another. It certainly encourages modularity. So we live with the platform that we have, and everyone from the contractor to the warfighter has optimized their process around this flawed arrangement.
But look over here…
In the commercial world, though, data centers are embracing virtualization. By hosting many workloads on a single physical machine, we’re able to use 100% of our hardware, instead of just 15%. Virtualization also makes it simple to add new systems to an existing physical infrastructure. Instead of each program bringing their own box, they bring only software, which weighs nothing and consumes no space. As it should.
By using these data center patterns, tactical vehicles can escape the zero-sum trap. Physical space is no longer a limiting factor, removing the perverse incentives for a “land-grab” on the platform. Power use is no longer such a big problem. The heat problem is diminished, because sixteen separate computers can be consolidated onto two redundant computers.
But for all that virtualization can provide the tactical environment, it has not — to date — addressed the security concerns. A secure multi-tenant virtualization environment is still a “dark art” in the security world.
The Open Source community delivers, by accident.
Which brings us back to the Common Criteria certification and this ingenious and mostly accidental combination of SELinux and KVM.
For the program manager, this means that more and better innovations can be delivered more quickly to a tactical platform — without worrying about space and heat demands.
For the integrators, it means they can deliver their products on industry-standard Linux and Windows systems, a known quantity. This multi-tenant platform isn’t anything arcane, expensive, or novel: it’s the same Linux they’ve been working with for years.
Warfighters, of course, are relieved of the odious heat problem, but more importantly: it’s easy to re-provision the hardware with the computing workloads they need to accomplish their mission, without requiring the forklift upgrade which makes new workloads so slow and expensive today. It’s even built on a robust, smart open source platform, so we can be sure that this isn’t the last clever new feature or innovative approach. By moving to SELinux and KVM on Linux, we’ll also be in the best possible position to incorporate new ideas.
Now, those of you who work in this market know that there are still many obstacles between what we have now and the kind of solution I’ve described. There is the matter of redundancy by design, the need to use systems in both mounted and dismounted situations, and the security standards for something like this are still murky. Nevertheless, we’re closer than we’ve ever been. And that’s exciting.
And a word on innovation.
A virtualized system like this could solve a bunch of existing problems, but it could also solve problems we haven’t yet anticipated. A Marine with the unimprovable nickname of Major Neutron once said, “Don’t pack it if you can’t hack it.” Jim Stogdill asserts “coding is maneuver.” In other words, the more we can tinker, the more we can adequately respond to a changing environment.
Having a virtual platform in the hands of a warfighter can encourage Major Neutron’s flavor of innovation in the field by providing a safe place to play. You’ll remember the example of SFC Stadtler, who jerry-rigged WiFi from parts he found in the trash. Think what a soldier like SFC Stadtler could do with a safe sandbox within the computers on his Humvee. With properly mediated access to the radios, maps, and other components, he could actually piece together the systems he needs without having to pull wires out of abandoned buildings. Because his tinkering is inside a sandbox, he could play without fear of breaking anything. So sVirt’s sandboxing isn’t just about consolidation or saving power; it can also be about enabling innovation at the edge.
So I think you can see why I’m excited. I love the idea that the open source community can deliver solutions to practical problems — even when it’s a complete accident.
So what have I missed? What stands between the current platforms and a properly secured virtual platform using this newly certified Red Hat Enterprise Linux? Can you think of other applications for a safe sandbox in virtual environments?
[I want to thank the many folks who gave me comments on the initial draft of this post. It's been greatly improved with your help.]