The Virtualization & VPS Server Handbook - Chapter 1

created: 02.05.2008 updated: 04.27.2009
 
Chapter 1.1 – Why this handbook?

The Internet has ensnared the entire planet in its world wide web. What was once thought of as the pipe dream of a handful of computer nerds is now a reality. It has become the primary medium of contact and has opened up a virtually seamless world where trade, commerce and interaction take place without the hindrance of political, geographical, cultural and social boundaries.

So much so that recent damage to undersea internet cables in the Mediterranean made front-page news in the world's leading newspapers and caused a 30% drop in productivity in internet-based service industries in Asia. These industries employ hundreds of thousands of people with an annual turnover of over a hundred billion dollars. The impact of the damage was felt across Europe, the US East Coast and the Middle East, especially in Saudi Arabia, the UAE, Yemen, Qatar and Bahrain. India's $11 billion outsourcing industry, which employs 700,000 people, was also badly hit.

Anybody who's anybody is now caught up in the net. It has become a marketplace where ideas, services and goods are bought and sold with the utmost ease. Websites have become an extremely important forum for soliciting and transacting business, and web hosting companies have sprouted faster than mushrooms during the monsoon. It has therefore become almost imperative for all net users to understand the intricacies of web hosting so that they can make the most of what is available.

The purpose of this handbook is to explain the nuts and bolts of the latest phenomenon in web hosting – VPS hosting. This feature is the newest rage in the web world and is sweeping the net, so it's all the more necessary to know the ins and outs of this facility before signing on the dotted line. This handbook will help readers make a better-informed choice of VPS host.

It simply can't be overemphasized that the right choice of host can make or mar an e-business, so it's essential to get to the bottom of this new technology.

 


back to index

 

 

Chapter 1.2 - Virtualization - is it really a brand new concept?

Virtual Private Server (VPS) technology has swept the web hosting world off its feet – everyone is excited about this latest development, but is it really a radically new paradigm?

Well, frankly speaking, the concept of virtualization has been around for the last fifty or so years – it's only recently that it has been put to use in web hosting.

But what is virtualization after all?

Loosely speaking, it lets one computer do the job of multiple computers by sharing the resources of a single machine across multiple environments. Robert P. Goldberg, in his 1974 paper Survey of Virtual Machine Research, described virtualization as a framework or methodology of dividing the resources of a computer into multiple execution environments, by applying one or more concepts or technologies such as hardware and software partitioning, time-sharing, partial or complete machine simulation, emulation, quality of service, and many others.

But the term "virtualization" is not always used to imply partitioning, i.e. breaking something down into multiple entities. It can also mean making multiple physical resources (such as storage devices or servers) appear as a single logical resource. Computer clusters, grid computing, and virtual servers use virtualization techniques to combine multiple discrete computers into larger metacomputers. In fact, this aspect is widely used by VPS hosts that employ "clustering technology". We will discuss it in more detail at a later stage.

The original sense of the term "virtualization" dates from the experimental IBM M44/44X system back in the 1960s, and was concerned with the creation of a virtual machine using a combination of hardware and software. Later, in the early CP-40 days, the creation and management of such virtual machines was also referred to as creating "pseudo" machines. These pseudo machines were the precursor of a revolution in the concepts of "backup" and "restore": virtual machines were thought of as "hot standby" environments for physical production servers, capable of providing backup images that can "boot" into live virtual machines and take over the workload from a production server experiencing an outage. It is no wonder, then, that a virtual machine was defined by Popek and Goldberg as an efficient, isolated duplicate of a real machine.

Generically speaking, in order to virtualize, a layer of software is used to provide the illusion of a "real" machine to multiple instances of "virtual machines". This layer is traditionally called the Virtual Machine Monitor (VMM). A VMM can run directly on the real hardware, without requiring a "host" operating system, in which case the VMM itself is the (minimal) OS. Alternatively, a VMM can be hosted, running entirely as an application on top of a host operating system and using the host OS API to do everything. Furthermore, depending on whether the host's and the virtual machine's architectures are identical or not, instruction set emulation may be involved.
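
As a small practical aside (not part of the original handbook text): whether a VMM has to fall back on instruction set emulation or similar software tricks often depends on whether the CPU exposes hardware virtualization extensions. The minimal Python sketch below assumes a Linux machine, where /proc/cpuinfo lists the CPU feature flags "vmx" (Intel VT-x) and "svm" (AMD-V); on other systems the check would look different.

    # Minimal sketch, assuming Linux: look for the hardware virtualization
    # CPU flags ("vmx" for Intel VT-x, "svm" for AMD-V) in /proc/cpuinfo.
    # Their absence means a VMM would have to rely on software techniques
    # such as emulation or binary translation.

    def hardware_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
        """Return the set of virtualization-related CPU flags found."""
        wanted = {"vmx", "svm"}
        found = set()
        with open(cpuinfo_path) as f:
            for line in f:
                # Each logical CPU has a "flags" line listing its features.
                if line.startswith("flags"):
                    found |= wanted & set(line.split(":", 1)[1].split())
        return found

    if __name__ == "__main__":
        flags = hardware_virtualization_flags()
        if flags:
            print("Hardware virtualization support detected:", ", ".join(sorted(flags)))
        else:
            print("No vmx/svm flag found; a VMM would need software techniques.")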

I suppose it's getting a little too complicated at this stage, so let's get back to how it all began.


back to index
 

 

Chapter 1.3 - History of Virtualization & VPS technology

IBM had provided a 704 computer, along with the services of some of its technicians, to MIT in the early 60s. This led to the development of the Compatible Time Sharing System (CTSS). MIT continued its endeavor to improve the CTSS concept, since the potential and possibilities of virtualization were very well understood by the people at the helm of affairs. This research led to "Multics", but IBM lost out to General Electric's GE 645.

Irrespective of this "loss", IBM remained the most dominant force in this space. Around this time it was developing the 360 family of computers, and a number of IBM virtual machine systems were developed at IBM's Cambridge Scientific Center: CP-40 (developed for a modified IBM 360/40), CP-67 (developed for the IBM 360/67), and the famous VM/370 followed in close succession.

Right from its conception at IBM's Cambridge Scientific Center, CP-40 (the first version of CP/CMS) was intended to implement full virtualization. This required hardware and microcode customization on an S/360-40 to provide the necessary address translation and other virtualization features. The experience gathered in the course of the CP-40 project led to the development of the IBM System/360-67 in 1965, and a new avatar of CP-40, in the form of CP-67, took center stage. By 1968 IBM had made both versions available to customers in source code form, as part of the unsupported IBM Type-III Library.

IBM announced the System/370 in 1970. Initially this series did not include virtual memory, but in 1972 the company shifted its stance and announced that it would be available on all S/370 models. Around this time it also introduced several virtual storage operating systems, including VM/370. By the late 80s VM had become ubiquitous. However, in a series of disputed and bitter battles, time-sharing lost out to batch processing through IBM political infighting, and VM remained IBM's "other" mainframe operating system for decades, losing out to MVS.

Meanwhile, several PhD students from Stanford University had started working on virtualization in the 90s. They formed a company called VMware, and on February 8, 1999, VMware introduced the first of a line of x86 virtualization products, "VMware Virtual Platform". The company kept improving on its original product, and in 2005 VMware decided to provide high-quality virtualization technology to everyone for free. However, the free product omitted the ability to create virtual machines and did not include the acceleration tools that come with VMware Workstation.

Microsoft also had its share of virtualization. Windows NT had several subsystems, or execution environments, such as the virtual DOS machine (VDM), the Windows on Win32 (WOW) virtual machine for 16-bit Windows, the OS/2 subsystem, the POSIX subsystem, and the Win32 subsystem. Similarly, Windows 95 used virtual machines to run older (Windows 3.x and DOS) applications. With the acquisition of Connectix, makers of Virtual PC, in early 2003, Microsoft made virtualization a key component of its server offerings. The idea of running multiple operating systems simultaneously on one machine gained ground with Microsoft's SQL Server 2000, which boasted multiple-instance capability, and the company also started virtualizing its applications. Add to this the fact that Exchange Server, file/print servers, IIS Server, Terminal Server and the like didn't really need virtualization support in the operating system.

During 2006, Microsoft worked on a new Type 1 hypervisor codenamed "Viridian", with the intention of shipping it as "Hyper-V" within 180 days of the release of Windows Server 2008. New versions of the Windows operating system, beginning with Windows Vista, include extensions to boost performance when running on top of the Viridian hypervisor.

The year 2006 also saw the introduction of two new concepts: application virtualization and application streaming. Application streaming really took off with Microsoft's acquisition of Softricity on July 17, 2006. At one stroke, Windows applications overcame the problem of setup requirements – it became just click and run.

Virtualization has now found its way into embedded systems in mobile phones. Hypervisors used in embedded systems have to be real-time capable, which is not a requirement for hypervisors used in other domains. The first hypervisor deployed in a commercially sold mobile embedded system (a Toshiba mobile phone) is OKL4, which supports x86, ARM and MIPS processors. It has been a long journey for virtualization, and it has hardly begun. With such stupendous progress in technology, virtualization will surely scale heights we may not dare to imagine today.


back to index

 
 

CHAPTER 2 ->