Container Management System – From Google Borg to Kubernetes

Since its introduction, Kubernetes has been one of the leading container management systems worldwide. Its rise has been fueled by the widespread adoption of packaging applications in containers, isolating them from other processes so that a failure in one does not interfere with the rest. For such an extensive platform, Kubernetes' growth over the past half-decade has been remarkably rapid. None of this would have been possible without Borg. So how did Kubernetes develop from Borg?

Borg as a Container Management System

Google has been using container technology for a long time: Borg began around 2003–2004 as a small internal project developed alongside Google's search engine. It grew with the search engine, and its container technology came to underpin virtually everything Google runs today; Gmail, Google Docs, GCP, and all of Google's other services run on Borg. Borg's basic structure is a single master, known as the Borgmaster, which relays instructions to agents, known as Borglets, that run on each machine in a cluster. The Borgmaster interacts directly with the user to accept instructions and orchestrate their execution in containers, provided the user has the necessary authority.

Until 2013, this container technology remained entirely internal; then Docker came into the picture. Docker was an open-source tool that let software developers package their applications so they could be moved quickly between machines. Google embraced Docker the following year as a way to show the general public what Borg had made possible. At the time, however, Docker had one key limitation: it could only run on a single node. Automation was therefore not possible, and each deployment had to be packaged and placed manually, which quickly became tedious once workloads grew into the tens or hundreds.

Kubernetes as a Container Management System

Since Google had already explored automating and managing containers at scale with Borg, ideas for bringing this capability to the public began to surface. Google set out to develop an open-source version of Borg, which is now known as Kubernetes. Several of the developers involved in the Borg project moved over to Kubernetes. To eliminate any dependencies on Borg, Kubernetes was written in a different language (Go) than Borg (C++). From the beginning it was designed as an open-source platform, released under an open-source license (Apache) while being integrated into GCP's ecosystem.

Borg is Kubernetes' predecessor. Most of Kubernetes' features originated in Borg, with some modifications. Their basic structures are similar: Kubernetes also has a master node linking the agents (known as Kubelets) and the user. Both rely on a consistent key-value store: a Paxos-based store in Borg and etcd in Kubernetes. What separates Kubernetes from Borg is that it is an open-source solution. It is not Google's project alone; the open-source community has made major contributions to what it is today, and thousands of developers from that community are constantly working on it.

Kubernetes also allows users to label workloads, something that was not extensively explored in Borg; actions can then be executed on all workloads sharing a label at once. Due to its extreme robustness, scale, and breadth of features, Borg is still Google's primary container management system.
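To illustrate the labeling mechanism, here is a minimal sketch of a Kubernetes Pod manifest; the names and label values are illustrative, not taken from any particular deployment:

```yaml
# Hypothetical Pod manifest; labels are arbitrary key/value pairs
# attached to the workload's metadata.
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend
  labels:
    app: web        # groups this Pod with every other "app: web" workload
    tier: frontend
spec:
  containers:
    - name: nginx
      image: nginx:1.25
```

All workloads sharing a label can then be targeted together through a label selector, for example `kubectl get pods -l app=web` or `kubectl delete pods -l app=web`, rather than addressing each Pod by name.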

