The “Biome” of Virtualisation

Anyone already working with cloud computing will likely know far more than this article covers. This write-up is intended for a general audience that is not yet familiar with the technical jargon of the ‘Cloud’ and the computing prowess behind it: ‘virtualisation’. There is a lot that can be said about virtualisation, and not all of it would fit into a single book. That said, this article merely skims the surface of the ‘biome’ of virtualisation.

The roots of virtualisation on commodity hardware can be traced back largely to the 1980s; before that decade, the technology lived mainly on mainframes and was in a nascent stage elsewhere. Its rise can be attributed in large part to the Intel 80386 microprocessor, the first 32-bit processor in the x86 family. In layman’s terms, it gave assembly (machine code) programmers a more versatile and flexible instruction set than its 16-bit predecessor, the Intel 80286. The concept of virtual memory was first introduced on the Intel 80286, and support for paging followed in the Intel 80386. Without these, virtualisation would not be what we know it as today: the possibility of creating virtual machines arose from memory and storage virtualisation. However, mainstream adoption did not come until the late ’90s, when work on hypervisors really picked up pace.

Ever created or managed virtual machines on your PC or in your office environment using Microsoft Hyper-V, VMware and the like? Heard of runtimes such as the JVM (Java Virtual Machine) or the CLR (Common Language Runtime)? I can sense system administrators and developers grinning over this. Yes, you got it right: that’s virtualisation. However, this is just the tip of the iceberg of the possibilities and modern-day implementations of virtualisation. As a technology, virtualisation has come a long way from how it was defined in the era when emulators and hypervisors were first introduced.

Let’s not beat around the bush and come straight to the brouhaha around virtualisation, virtual machines and hypervisors.

VMs are nothing but an emulation of a physical computer system, intended to mimic its operation and functionality, albeit in an isolated environment. A VM (virtual machine), or ‘guest’ machine, is designed to function autonomously on a physical ‘host’ and to leverage the underlying hardware of that host. A host, in turn, may run multiple guest machines. Virtual machines are of two types, namely:

System virtual machines: These VMs share hardware resources (memory, CPU, storage, network and others) with the underlying host and are capable of performing tasks and running applications in the same manner as would be possible on the host itself. This sharing of resources is facilitated by a hypervisor, which in turn can run on bare hardware or on top of an operating system such as Linux or Windows. For example, VMware, Hyper-V or VirtualBox may be used to create such VMs.

Process virtual machines: These VMs are application oriented; they provide a platform-independent environment to applications, which in turn ensures their portability. They have a negligible footprint and embody the WORA (Write Once, Run Anywhere) principle. For example, Java leverages the JVM to run compiled ‘bytecode’ on any machine irrespective of the underlying platform or hardware; similarly, Microsoft’s .NET Framework leverages the CLR to the same end. (A short sketch illustrating the idea follows this list.)
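To make the process-VM idea concrete, here is a minimal Python sketch. CPython itself is a process virtual machine: source code is compiled to platform-independent bytecode, which the interpreter’s VM then executes on any OS or CPU, the same WORA idea behind the JVM and CLR.

```python
import dis

def add(a, b):
    """A trivial function whose bytecode we will inspect."""
    return a + b

# CPython has already compiled add() to platform-independent bytecode;
# dis.dis() disassembles it so we can see the instructions the
# interpreter's virtual machine executes, regardless of OS or CPU.
dis.dis(add)
# Typical output (exact opcodes vary between Python versions):
#   LOAD_FAST    a
#   LOAD_FAST    b
#   BINARY_OP    + (BINARY_ADD on older versions)
#   RETURN_VALUE
```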

Now, coming to hypervisors. As mentioned above, it is the hypervisor that is responsible for seamlessly negotiating the required resources for the guest machines from the host. Hypervisors are of two types, namely:

Bare metal/native/type 1: These hypervisors run directly on the hardware of the host and, as a result, are highly efficient and scalable whilst offering a higher degree of isolation and security. Examples include KVM, VMware ESX/ESXi, Citrix XenServer and Hyper-V.

Embedded/hosted/type 2: These hypervisors run on top of the operating system of a host machine and are comparatively slower, as they have an added layer in the form of the OS to communicate with the hardware. On the other hand, they are easier to set up and use. Examples include VirtualBox and VMware Workstation. (A small management sketch follows.)
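As a taste of how such hypervisors are driven in practice, here is a minimal sketch using the libvirt Python bindings to talk to a local KVM/QEMU host. It assumes the libvirt-python package is installed and the libvirtd daemon is running; the setup itself is an assumption, not something prescribed by the article.

```python
import libvirt  # pip install libvirt-python; needs a running libvirtd

# Connect to the local KVM/QEMU hypervisor; 'qemu:///system' is the
# conventional URI for the system-wide instance on this host.
conn = libvirt.open('qemu:///system')

# libvirt calls guest machines 'domains'; list them all, running or not.
for dom in conn.listAllDomains():
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print(f"{dom.name()}: state={state}, vCPUs={vcpus}, max memory={max_mem} KiB")

conn.close()
```

The same script works unchanged against other libvirt-supported hypervisors (Xen, VirtualBox and so on) by swapping the connection URI, which is precisely the kind of abstraction a hypervisor management layer provides.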

Finally, moving to the bigger picture of ‘virtualisation’. Virtualisation allows the creation of emulated environments comprising pools of virtual machines and hypervisors, which in turn are derived from physical hardware systems. Virtualisation takes various forms depending upon the application: OS virtualisation, desktop virtualisation, network virtualisation, storage virtualisation, data virtualisation, hardware virtualisation, memory virtualisation, server virtualisation and I/O virtualisation, to name the notable ones. To keep things simple, let us not venture into each type individually within the scope of this article; the names themselves convey what each type does to a fair extent.

Often, it is the high upfront costs associated with virtualisation that prevent many organisations from venturing into this domain. Yet for medium and large scale companies, especially in the services sector, the pros far outweigh the cons. Virtualisation offers efficient use of resources, reduced costs, increased productivity and scalability, greater data redundancy, and more. For data centres it has become more of a requirement, as the aim is to maximise hardware utilisation by instantiating the maximum feasible number of VMs on each physical server. This translates into reduced CapEx on account of a condensed hardware footprint, and reduced OpEx on account of lower cooling and power requirements. Moreover, managing hundreds of VMs across a cluster, and in turn a data centre, becomes much simpler with a centralised tool. (A rough, back-of-envelope calculation follows.)
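For a sense of scale, here is a back-of-envelope sketch in Python; every figure in it is a purely illustrative assumption, not a vendor number.

```python
# Illustrative consolidation estimate -- all figures are assumptions.
physical_servers_before = 100   # one workload per under-utilised box
vms_per_host = 12               # assumed safe consolidation ratio

# Ceiling division: hosts needed once each workload becomes a VM.
hosts_after = -(-physical_servers_before // vms_per_host)
print(f"Hosts after virtualisation: {hosts_after}")        # -> 9

# Fewer boxes means proportionally less power and cooling (OpEx)
# and a smaller hardware refresh bill (CapEx).
power_per_server_watts = 400
saved_kw = (physical_servers_before - hosts_after) * power_per_server_watts / 1000
print(f"Rough continuous power saved: {saved_kw:.1f} kW")  # -> 36.4 kW
```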

Virtualisation is a means to cloud computing: a way to create your own private or public clouds, which can further be leveraged for applications such as data science, business intelligence, IoT, AI, ML, mining and more. The list is endless. For instance, the Azure cloud uses Hyper-V for its virtualisation needs, and the Red Hat cloud uses KVM-based Red Hat Virtualization. Virtualisation is the base that lends the Cloud its formal definition. Simply put, virtualisation is a technology and cloud computing a methodology; nonetheless, they go hand in hand.


As far as the domains of Big Data, cloud and application development are concerned, they seem set to sail together into the future of technology. The computing power required for processing Big Data is substantially high, which makes it economically, and at times even physically, unviable for many small companies to set up on-premise servers with high-end hardware configurations. Moving to the cloud proves a much cheaper alternative for them, and likewise for organisations or individuals involved in application development. As recent trends show, more and more companies, even the bigger ones, are moving to the cloud, opting for IaaS, PaaS or SaaS models depending upon their needs. Not to forget the solutions that cloud providers such as Google, Azure and AWS already offer for building, training and deploying ML and AI models, moving organisational transactions to blockchain, and deploying application containers, to name just a few from the plethora on offer. There is much to benefit from, as the base frameworks already exist; the best part is that companies no longer need to develop these technologies from scratch, enhancing their productivity manifold.


A bonus: To get started with virtualisation, one could, of course, sign up for a limited-period free trial with VMware or Red Hat for their respective products. However, do check out the ovirt-engine project, with which Red Hat Virtualization shares its roots. oVirt is a commendable project by and for the community. You can get a feel of it on a mere CentOS machine just by sourcing and installing the required packages. (However, if you wish to set up hosts and clusters and take it further from there, you might be better off with a high-spec hardware configuration.) Moreover, the in-depth documentation on oVirt’s webpage contains all the information one would ever need. Go ahead; it’s FOSS! (A minimal SDK snippet follows.)
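Once an engine is up, it can also be scripted. The following is a minimal sketch using oVirt’s Python SDK (the ovirt-engine-sdk-python package); the URL, credentials and certificate path are placeholders for your own deployment, so treat this as an outline rather than a ready recipe.

```python
import ovirtsdk4 as sdk  # pip install ovirt-engine-sdk-python

# Connect to a running oVirt engine. The URL, user and password below
# are placeholders; ca.pem stands in for the engine's CA certificate.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

# Ask the engine for the VMs it currently manages and print their state.
vms_service = connection.system_service().vms_service()
for vm in vms_service.list():
    print(f"{vm.name}: {vm.status}")

connection.close()
```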