LPI 102-500 – 102.6: Linux as a virtualization guest
- Linux as a virtualization guest
This lesson is about virtual machines and containers, the differences between them and how they basically work. This is a completely new topic in the LPIC-1 catalog; when I took the exam myself, the topic didn’t even exist. But LPI is of course moving with the times, because at the moment there is a real container hype, and accordingly it makes sense to include this topic in the exam catalog. LPI won’t expect you to be an absolute expert in virtual machines or container technologies. It’s more about knowing what they are, what the differences are and how they basically function. I would like to show you this briefly using a diagram. We see here a comparison between a virtual machine and LXC. The latter is a Linux container technology; there are other container technologies, but more on that later. First let’s take a look at the virtual machines. If you have installed your Linux test systems via VirtualBox, as I described in the first videos, then you have already got to know virtual machines. In practice, virtual machines are operating systems installed inside another operating system. You can install and use a Windows system on a Linux system, so to speak, and of course it works the other way around as well. How the virtual machines are set up can be seen in the diagram on the left.
At the very bottom we find the physical server hardware. The server operating system is installed on this hardware, directly on the hard drive. Within this host operating system, a so-called hypervisor is installed. The hypervisor is the software that later connects the virtual operating systems with the server hardware and the host operating system. There are various providers of such solutions, for example VirtualBox, KVM or VMware. With the help of the hypervisor we can install another operating system. This virtual operating system is completely installed and it also uses the server hardware, but only indirectly, because the guest operating system cannot access the server hardware directly.
Instead, the hypervisor emulates the corresponding hardware. The hypervisor therefore provides virtual hardware that the guest operating system can use. The guest operating system itself then has the appropriate binaries and libraries installed, as well as various applications. We have three different guest operating systems in this diagram. Of course, every single operating system has its own space on the hard drive, for example 30 GB per system; depending on the system it can also be significantly more. Each system uses the server’s hardware, although only indirectly, and the hypervisor itself also consumes hardware resources. A lot of computing power is required so that the hardware can be emulated and these systems run smoothly and without interference. The server can become significantly slower the more virtual machines you have installed and the more of them are switched on at the same time.
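As a small aside that is not shown in the video: on the host you can check whether the CPU offers the hardware virtualization extensions that hypervisors such as KVM rely on, and inside a guest you can ask systemd whether you are running virtualized at all. This is just a sketch of what you might try on your own test system:

    # Does the CPU expose virtualization extensions (Intel VT-x or AMD-V)?
    grep -Ec '(vmx|svm)' /proc/cpuinfo   # prints a number greater than 0 if supported

    # Inside a guest: what kind of virtualization are we running under?
    systemd-detect-virt                  # e.g. "kvm", "oracle" (VirtualBox), "lxc" or "none"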
On the other side of the diagram we have the Linux containers, and it looks a little different here. Here too, of course, we have the classic server hardware at the bottom. The host operating system is installed directly on the hardware, which in the case of LXC is always Linux. With container technology there is no need for a hypervisor; it is simply no longer necessary. The virtual operating system as we know it is also no longer necessary. In the graphic it is not shown at all, as if there were no virtual operating systems any more. In the case of Docker, this is in fact more or less true: Docker is used to virtualize individual applications, that is, to move applications into self-contained containers. As an example, let’s say we want to run a web server. The Docker container then basically contains just the web server and the files it needs for this web server to work. This means that such a Docker container does not necessarily need an entire operating system for the web server to work, only the hardware itself and the corresponding files that the software needs in order to start. The hardware load is, of course, significantly lower, because even if we ran three web servers and installed the required binaries and libraries three times, two heavyweight components are still missing compared to a virtual machine: the hypervisor and the complete virtual operating system. To put that into perspective, the same server hardware might only manage a maximum of five virtual machines.
On the other hand, we could probably operate 50 containers on it without causing a bottleneck. That’s not a number from a manufacturer; I just threw it out roughly to make the difference clear. With Linux containers, the applications or operating systems that run inside a container access the hardware directly. This means there is no emulated hardware, no virtual hardware created by a hypervisor; each container accesses the physical hardware directly. Accordingly, the entire physical hardware is available to each container.
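To make the web server example from above a bit more concrete, here is a minimal sketch of how such an application container could be started with Docker. The container names and port mappings are purely illustrative, and Docker commands are not something you need to memorize for this objective:

    # Run an Apache web server as a self-contained container
    docker run -d --name web1 -p 8080:80 httpd:2.4

    # A second, completely independent web server from the same image
    docker run -d --name web2 -p 8081:80 httpd:2.4

Each container brings along only the web server and its libraries; the kernel of the host system is shared, which is why starting one takes seconds rather than minutes.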
With LXC, by contrast, it is not just a single application such as a web server that is installed; complete operating systems are actually installed. Compared to virtual machines, however, these are minimalist versions of an operating system. If I would like to run an Apache web server in a container, for example, I do not have to install a Red Hat Enterprise Linux or Ubuntu Server with several gigabytes of storage space. It is sufficient to use a minimal version of a Linux system, for example Alpine Linux, which is only about three megabytes in size. The web server can then be installed within Alpine Linux. This means that a guest operating system is installed, but in principle it is hardly worth mentioning, since the system is kept so minimal that it barely requires any storage space, CPU or memory. Another advantage of containers over virtual machines is that a container is ready and started within a few seconds, whereas a virtual machine can take a few minutes until everything is completely ready for use.
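Purely as an illustration, and assuming an Alpine-based container, installing the Apache web server inside it could look roughly like this (package names follow Alpine’s apk package manager; the exact setup is not part of the exam):

    # Inside an Alpine Linux container: install the Apache web server
    apk update
    apk add apache2
    rc-service apache2 start   # on a full Alpine system with OpenRC; in a very
                               # minimal container you may start the httpd binary directly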
In a moment we will take a look at a small Linux container example. However, we will not discuss the commands in detail; that is not required in the exam and it would also go too far for this course. So I will switch to my terminal now. With the command lxc list, the program shows us which containers we are currently running. We can see here the name, the state, the IP address and the container type. And via the command lxc image list we can see the corresponding images that we have saved on the hard drive. We see here an Ubuntu 16.04 image, and it is only about 170 megabytes. I think you know the size when you download Ubuntu 16.04 from the website; there it should be 2 or 3 GB. Here it is 170 megabytes. We have a Fedora 33 with 97 megabytes, a Debian system with 65 megabytes, another Debian system with 237 megabytes, and here we have some Alpine images with two to five megabytes. So these are very, very small operating systems, and of course they start very quickly. Let’s start the Alpine Linux system here, this one, I think. We just enter the command lxc launch, then the Alpine image, and I choose the container name lpic-alpine, and the container has already started. That’s it. If we now enter lxc list, we see that our new container is running. And now compare that launch time with the installation of a virtual machine.
That’s a huge difference. So let’s log in to the Alpine container, and now we are inside it. In Alpine Linux we see that we can find the familiar FHS directory structure. If we now want to install a web server from in here, we can of course do that; we can store a website, save it, et cetera. And we can start the next container with the same configuration within a few seconds to host another website, and so on. The startup of larger containers takes significantly longer than that of Alpine, maybe 20 seconds instead of 2 seconds. Let’s try it out. I leave the container, and now we can use the Ubuntu 16.04 image here: lxc launch with the image fingerprint and the name lpic-ubuntu, for example. And you see, it takes a little longer, but normally just 20 or 30 seconds at most. So the start of the container is complete. Now lxc list again, and here we have our Ubuntu system, which is running. By the way, every single container that is active is completely independent of its environment. This means that even though each container accesses the hardware of the host operating system directly, if one container produces an error and crashes, the other containers, and even the host system, are never affected. Okay, that was a little digression into LXC container technology. And I’ll see you in the next video.
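For reference, the commands used in this demo correspond roughly to the following. This is a reconstruction, not a verbatim capture: the container names lpic-alpine and lpic-ubuntu match what was typed in the video, while the image names shown here are illustrative aliases from the public LXD image servers; in the video the containers were launched from locally stored images by alias or fingerprint.

    lxc list                                   # show running containers: name, state, IP, type
    lxc image list                             # show locally stored images and their sizes
    lxc launch images:alpine/3.12 lpic-alpine  # start a new Alpine container (a few seconds)
    lxc exec lpic-alpine -- /bin/sh            # log in to the Alpine container
    exit                                       # leave the container again
    lxc launch ubuntu:16.04 lpic-ubuntu        # start the larger Ubuntu container (a bit longer)
    lxc list                                   # both containers are now running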