Tuesday, December 30, 2014

To Share or Not To Share

Sharing, with respect to software, exists on many different levels. Much has been written about the sharing of source code, and this continues to be an interesting topic. However, I wish to take a look at a different level of sharing, with an eye on a trend in "recent history" towards less sharing, partial sharing, or no sharing at all. Primarily this will concern containers, more specifically the much-hyped Docker framework around containers as well as the recently introduced Rocket tool-set. Thus, I will be looking at sharing on the binary level.

Let's start out by taking a ride on the way-back train to when the computing world was very different. Way back when, sharing at runtime basically did not exist. Binaries were compiled code and everything was pretty much one big blob. Every application carried its own copy of standard functionality. Translated into today's terminology, an Independent Software Vendor (ISV) effectively shipped an appliance. The biggest issues with this are fairly obvious. Everyone ships code that they don't really want to ship, i.e. standard functionality. If there is a security issue, the ISV has to rebuild and ship the big ugly blob again and again and again. The ISV is responsible for much more than is desirable, from the ISV's point of view.

Now fast forward to an intermediate point between way back and today and the common use of shared libraries. Sharing at runtime allows ISV application blobs to be significantly smaller. Standard functionality is pulled in at runtime from a library that resides on the system where the application is installed. Not only does the ISV need to ship less stuff, the ISV also has to worry a lot less about code, and associated issues, outside the ISV's field of expertise. The responsibility to worry about security issues in standard functionality moved from the ISV to the customer, as the customer is responsible for maintaining the system. This division of responsibility is especially common for problems that have no real solution. For security issues the real solution would be to have no vulnerabilities. Since this is not possible, the responsibility for worrying about security issues is divided: ISVs worry about the code in their application and the customer worries about the system where the application is installed. The introduction of sharing thus benefited the ISV in that some responsibility moved to the customer. The customer gained the benefit of control, in that the customer does not need to sit around and wait for a new application release to get a security issue in standard functionality fixed. The customer also addresses issues in many applications with only one update. Overall a win-win for ISVs and customers.
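
To make the runtime sharing concrete, here is a minimal sketch, assuming a Linux system with a pre-1.1 OpenSSL libcrypto installed, of an application resolving standard functionality from the system at load time instead of carrying its own copy:

    # Minimal sketch: standard functionality is resolved from the system
    # at runtime instead of being compiled into the application blob.
    # Assumes Linux with an OpenSSL libcrypto shared library installed;
    # SSLeay_version() is the version-reporting call in pre-1.1 OpenSSL.
    import ctypes
    import ctypes.util

    # Ask the system which libcrypto it provides, e.g. "libcrypto.so.1.0.0".
    libname = ctypes.util.find_library("crypto")
    libcrypto = ctypes.CDLL(libname)

    libcrypto.SSLeay_version.restype = ctypes.c_char_p
    print(libcrypto.SSLeay_version(0))  # 0 == SSLEAY_VERSION

    # Whatever version the system ships is what every consumer gets; one
    # library update on the system fixes all applications that load it.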

One pain point introduced with sharing is an increase in the difficulty of moving an application from system A to system B, where system A and system B may have the same OS but different patch levels. The effect may be that the application runs on system A but not on system B.

With this as the background, fast forward to the IT landscape as it exists today. We now live in a world where it is likely that system A runs distribution A and system B runs distribution B, making the portability problem a bit more complicated. Additionally, probably as much code delivered to customers today is written in a scripting language, and sharing for dynamic languages has its own different but similar set of problems. For binaries a partial solution to the portability problem is symbol versioning. Symbol versioning resolves the basic underlying problem of "same name but different behavior", but obviously if a given system does not ship the needed version then the ISV is once again left holding the bag, i.e. the ISV cannot support the system that does not deliver the proper symbol. This also implies that as an ISV one has to take great care in setting up the support matrix and picking the build system. Generally, an ISV compiling on the oldest distribution customers ask for provides binaries that work on more modern distributions as well. I am aware that I am papering over many details, but I do not want to get too far away from the topic at hand, sharing. Thus, we have arrived in a world where application portability has pitfalls and complications with respect to managing dependencies. However, these issues are mostly well understood and ISVs deal with them on a more or less routine basis.
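
To make the "same name but different behavior" mechanics concrete, consider glibc's memcpy on x86-64, which exists under one name in more than one version. The sketch below, assuming Linux with glibc and using the GNU dlvsym() extension, binds to each version explicitly; the version node names are the x86-64 glibc ones and serve only as illustration:

    # Sketch of symbol versioning: one symbol name, several
    # implementations, selected by version node.  Assumes Linux/glibc on
    # x86-64; dlvsym() is the GNU extension for requesting a specific
    # version of a symbol.
    import ctypes

    libdl = ctypes.CDLL("libdl.so.2")
    libdl.dlvsym.restype = ctypes.c_void_p
    libdl.dlvsym.argtypes = [ctypes.c_void_p, ctypes.c_char_p, ctypes.c_char_p]

    libc = ctypes.CDLL("libc.so.6")

    # glibc kept the old memcpy under its original version node when the
    # implementation changed in glibc 2.14: same name, different behavior.
    old = libdl.dlvsym(libc._handle, b"memcpy", b"GLIBC_2.2.5")
    new = libdl.dlvsym(libc._handle, b"memcpy", b"GLIBC_2.14")
    print(old, new)  # two distinct addresses on an x86-64 glibc system

A binary built on a newer system records the newer version node in its dependency information; an older system that lacks that node cannot run the binary, which is exactly why building on the oldest supported distribution works out.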

Enter appliances, in the form of VMs, containers, or other form factors. An appliance allows an ISV to solve the portability issue by shipping along the tested run time environment. The crux is that for appliances in a VM the ISV also ends up shipping the OS, with many moving parts. Once the ISV ships it, there is an implied responsibility not just for the application in the appliance, but for everything delivered with the appliance. As previously discussed, ISVs are not really interested in having this responsibility.

Containers on the other hand, at least according to the proclaimed idea of implementation and use, require an ISV to package only run time dependencies. This is a lot less than an appliance delivered as a VM. However, it is still very much comparable to the pre-sharing days, where an ISV shipped a lot of code that is outside the ISV's core competency.

Rightfully so, container proponents claim that an application installed in a container can be moved from one container host to the next and will work just the same. The container encapsulates all the necessary run time libraries for the application and is thus an independent unit that can be moved about. Effectively, the container replaces what used to be a big ugly binary blob provided by the ISV in the pre-sharing days with a system that has the concept of sharing at runtime on the inside. For the ISV this implies that the application is still as small as in the shared world, and for the customer it means that applications can be isolated from each other with relative ease and mobility.
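
The mobility claim itself is easy to sketch; assuming the docker CLI on both machines (the host and image names below are made up for illustration), an image can be serialized on one host and run unchanged on another:

    # Sketch of container mobility: the image carries its run time
    # libraries with it, so it can be moved between hosts as-is.
    # Assumes the docker CLI on both machines; "hostB" and "appA:1.0"
    # are made-up names.
    import subprocess

    # Serialize the image on host A and load it on host B ...
    subprocess.check_call(
        "docker save appA:1.0 | ssh hostB 'docker load'", shell=True)

    # ... where it runs regardless of which libraries host B ships.
    subprocess.check_call(["ssh", "hostB", "docker", "run", "-d", "appA:1.0"])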

So far so good. However, while this is very neat, easy to talk about, and looks good on paper, none of the problems were really solved. As a matter of fact it is just a reversion to the problems that existed prior to using shared standard functionality on a system. If an ISV ships a container, the ISV takes implicit responsibility for the content of the container, which is beyond the ISV's core competency, the application. While the footprint of responsibility is smaller for an ISV shipping a container than for one shipping a full blown VM, the footprint is still way bigger than the ISV would like. The solution would be for an ISV to deliver a container to a customer and take responsibility only for the application in the container. But this goes against human nature. When one buys a car one expects the dealer to take responsibility for the whole car and not just certain parts of it. If the dealer came up with the proposition that the starter is made by some other manufacturer, is also shared with other brands and models, and therefore the car owner has to deal with the starter manufacturer if something goes wrong with it, we'd all have a hissy fit and would tell the dealer to go.....

Thus containers, while solving the portability issue in a less resource intensive way than VMs, still suffer the same issue as VMs when it comes to getting a portable unit delivered with an application by an ISV.

A logical step is for customers to build their own containers. Building a container is significantly less effort than building a full blown VM, as only the run time dependencies for an application need to be considered. However, this creates the next problem. Each container has its own set of runtime libraries to suit the application inside it. This clearly creates a version tracking problem. With no system in place to track the content of containers and relate it back to potential security issues in each version, this very quickly becomes a systems management nightmare.
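
Absent a purpose-built introspection system, about the best a customer can do is ask each container what it ships. A hypothetical inventory sketch, assuming the docker CLI and rpm-based images (the image names are made up):

    # Hypothetical inventory sketch: query each container image for the
    # OpenSSL package it bundles.  Assumes the docker CLI and rpm-based
    # images; the image names are made up for illustration.
    import subprocess

    IMAGES = ["appA:1.0", "appB:2.3", "appC:0.9"]

    for image in IMAGES:
        # Ask the package database inside the image what it ships.
        version = subprocess.check_output(
            ["docker", "run", "--rm", image,
             "rpm", "-q", "openssl"]).strip().decode()
        print("%s ships %s" % (image, version))

    # Three containers can easily report three different versions, and
    # each one must be checked against security advisories separately.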

One solution would be to enforce that the libraries used at runtime by applications, and thus ending up in containers, all come from the same pool, i.e. the OpenSSL library that ends up in multiple containers is of the same version in all of them. This re-introduces the portability issue for applications. If application A wants OpenSSL version X and application B wants OpenSSL version Y, the "same version in all containers" policy will get tested. Yes, the container itself is still portable, but from an ISV perspective nothing is gained, and from the customer perspective a more complicated and difficult systems management layer is introduced.
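
Building on the inventory sketch above, a "same version in all containers" policy check might look like this; the blessed version string and image names are, again, made up:

    # Sketch of a "same version in all containers" policy check; the
    # blessed build and the image names are made up for illustration.
    import subprocess

    POOL_VERSION = "openssl-1.0.1j-1.2"  # the one build drawn from the pool

    for image in ["appA:1.0", "appB:2.3", "appC:0.9"]:
        shipped = subprocess.check_output(
            ["docker", "run", "--rm", image,
             "rpm", "-q", "openssl"]).strip().decode()
        if shipped != POOL_VERSION:
            print("%s violates the pool policy: %s" % (image, shipped))

    # The first application that hard-requires a different OpenSSL breaks
    # the policy, and the portability problem is back, one level up.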

With no introspection system available, and correlation of versions to security issues being of utmost importance, the best option to get at least a partial handle on version proliferation is to include an update stack in each container. By update stack I mean a stack that manages updates via the tried and true package mechanism used by all Linux distributions. This of course is a major departure from one of the primary advertised benefits of containers, "runtime dependencies only." With the inclusion of the update stack the container just grew significantly, which makes it much less likely that an ISV would want to take on the responsibility of distributing such a container.
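
What that escape hatch looks like in practice can be sketched as follows, assuming a Debian-based image that kept apt inside and the docker CLI; the container name is made up:

    # Sketch of the "update stack in the container" escape hatch: the
    # customer patches a library in a running container without waiting
    # for the ISV.  Assumes a Debian-based image that kept apt inside
    # and the docker CLI; "appA-prod" is a made-up container name.
    import subprocess

    CONTAINER = "appA-prod"

    # The tried and true package mechanism, just inside the container ...
    subprocess.check_call(
        ["docker", "exec", CONTAINER, "apt-get", "update"])
    subprocess.check_call(
        ["docker", "exec", CONTAINER,
         "apt-get", "install", "--only-upgrade", "-y", "openssl"])

    # ... at the price of shipping apt, dpkg, and their dependencies in
    # every container, a major departure from "runtime dependencies only".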

Last but not least, if there exists a group of ISVs that do ship containers to deliver their applications and a security vulnerability is discovered in a runtime library in the container, the customer has to wait until the ISV ships a new container, unless of course the container contains an update stack, in which case the customer may have a chance to fix the vulnerability. In any event the customer can easily get into a "run vulnerable or turn a critical app off" situation.

Thus, the situation overall is not pretty. For customers using containers that are delivered by ISVs the choice may well be to either turn an important container off when a security vulnerability is disclosed or to run in a vulnerable state. That is, of course, if the customer is even aware of the issue. Without an update stack or other introspection system customers have almost no chance of knowing when they are exposed. For containers built by the customer the situation is not much different, due to the library proliferation problem. At least with careful selection of the library pool from which the container builder is allowed to draw, the customer has some chance of knowing about vulnerabilities and getting on top of them. The choice between "run vulnerable or turn it off" is mitigated, as the customer has control of the container content, and the containers affected can be rebuilt with a version of a fixed library in relatively short order. But, as indicated previously, this does not really solve the portability problem for ISVs. The ISV still ends up having to test with a number of incarnations of standard libraries.
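
For customer-built containers the rebuild loop is indeed short. A minimal sketch, assuming the docker CLI and one build context per application (all names and paths are made up):

    # Minimal rebuild sketch for customer-built containers: once the
    # fixed library lands in the pool, rebuild and retag the affected
    # images.  Assumes the docker CLI and one build context per
    # application; names and paths are made up for illustration.
    import subprocess

    AFFECTED = ["appA", "appB"]  # containers known to bundle the bad library

    for app in AFFECTED:
        # Rebuild against the pool that now contains the fixed library.
        subprocess.check_call(
            ["docker", "build", "-t", "%s:rebuilt" % app,
             "/srv/builds/%s" % app])

    # The hard part is not this loop; it is knowing which containers are
    # affected in the first place.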

There are, of course, use cases where containers shine. However, touting application portability as one of the primary use cases and proclaiming that containers solve the application portability problem is, from my point of view, misleading. There are too many issues, as outlined above, that either have no solution or whose solutions lead us right back to where we are, or have been in the past. Being able to increase application density on a machine, as opposed to using VMs, while achieving isolation of those applications is, from my point of view, a much more compelling use case for containers. Yes, containers are portable, but they do not solve the application portability problem, and they create another set of systems management problems unless each container also contains an update stack, or new introspection and management capabilities are developed.

Looking at the status quo that brought us the "application portability problem" one has to conclude that we are actually not in bad shape. On a system where things are shared, a fix of one vulnerable library fixes many applications. While ISVs have an uneasy relationship with the "a library changed underneath me" concept, ISVs have come to the conclusion that it is a necessary and unavoidable mode of operation for customers. Shipping an application in a container resolves the "a library changed underneath me" situation, but creates a plethora of other issues at the customer level. I claim that these new problems are much worse. Solving the new problems with the logical choices leads us right back to the "a library changed underneath me" problem for the ISV.

Containers are not the only solution proposed for the "application portability problem." The proposition of linked systems exhibits the same basic problems as outlined for containers. It is far too easy for a customer to end up in a situation where a critical system has to either be turned off or be left running in a vulnerable state. In a linked systems approach, where an application prescribes a certain tree of the linked system, the customer has no option to swap that tree for a new one with a fixed library unless the ISV provides a new application that accepts the new linked system tree that includes the fixed library. Same result, different approach to getting there.

For both cases, containers and linked systems, the basic problems go away if all applications are open and can be built and delivered at the speed of the disclosure of vulnerabilities. This maintenance can then be performed by customers themselves or by dedicated companies. However, the prospect that all applications a business would ever want and need are open is way off in the future, if realistic at all. Thus, any system proclaiming to solve the "application portability problem" has to take into account that the world does not fit into a neat bucket. The reality is that ISVs are not in a position to chase after every vulnerability in every library they may depend on. With applications not being open source, this implies that the idea of creating a new container or linked system in a hurry has serious issues, and thus the problem is not really solved.

Sharing and versioning problems also exist at the language level. Take node.js with the npm management framework as an example. Each application pulls the versions specified by the application developer into a directory structure used by that application only. This basically creates the same management nightmare as discussed above. Python and Ruby also have certain issues, although there is at least sharing, as opposed to npm where there is no sharing. I could reminisce about these issues, but this would certainly detract from my primary topic, which was to look at the sharing issue by focusing on containers and the much-hyped Docker solution to containers.
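
To illustrate the npm flavor of the problem, here is a sketch that walks a directory of node.js applications and reports which copy of one module each application bundles; the deployment root and the module name are made up:

    # Sketch of the npm situation: every application bundles its own
    # copies of its dependencies under node_modules.  The deployment
    # root and the module name are made up for illustration.
    import json
    import os

    APPS_DIR = "/srv/node-apps"

    for app in sorted(os.listdir(APPS_DIR)):
        pkg = os.path.join(APPS_DIR, app, "node_modules", "express",
                           "package.json")
        if os.path.exists(pkg):
            with open(pkg) as f:
                print("%s bundles express %s" % (app, json.load(f)["version"]))

    # No sharing at all: a fix in the module means touching every
    # application tree, one by one.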

In summary, sharing is an important concept that solves a number of problems and provides control to customers when managing issues. One can argue that on some level containers solve the application portability problem that arises with the concept of sharing. After all, a container can be moved between many hosts with no problems. However, as shown, the portability problem is not really solved. For the application the same issues exist inside a container as they do on the outside. Any library used by an application has the potential of requiring an updated/fixed version for security issues. Therefore, from the ISV perspective little changes. In cases where the application is under complete control of the container builder the basic premise of containers can work. Rebuilding the container for a security fix in an application runtime library is relatively easy, and the container can then be quickly pushed around to testing, staging, and production. However, even in this case great care must be taken concerning library version proliferation and the understanding and management of the content of containers. Indiscriminately rebuilding all containers one might operate for every security fix is certainly not a feasible solution.

1 comment:

  1. Very nice blog post. I kept this tab open for weeks as I wanted to read it in one go with focus. I agree with a lot of the points.

    However, I feel that the model of shipping applications to the customer site and dealing with updates for security issues etc. should end in the near future. I feel that the hosted model, offering services, is a much easier one. Pushing updates that way is incredibly quick and easy.

    Thanks for the nice post.
