
Virtualization, Cloud & Building a Private Cloud

Bachelor Thesis, 2015, 67 pages

Computer Science - Applied

Excerpt

Contents

Introduction

Historical background

The definition of Virtualization

The benefits of virtualization

Virtualization Types

Virtual Machine

Hardware Role in Virtualization

Businesses and Virtualization Providers

Virtualization Management

Challenges facing virtualization in datacenters

Conclusion

Project Title

Introduction

Definition and Background of Cloud Computing

Service Models of Cloud Computing

Cloud Services & Case study

Architect The Cloud

Deployment Models of Cloud Computing

Mistakes you have to avoid

Choosing the right cloud service model

Private Cloud Principles, Concepts and Patterns

Conclusion

References

Project Title

Introduction

Installing and Configuring Hyper-V on Windows Server 2012 R2

Configuring the Network and Storage Fabric

Creating virtual application packages with the Server App-V Sequencer

Installing and Configuring App Controller

Integrating private cloud monitoring with Operations Manager 2012 R2

Installing and Integrating Service Manager

System Center Orchestrator 2012 R2

System Center Data Protection Manager (DPM)

Hybrid Cloud

Conclusion

References

Introduction

Virtualization is one of the biggest technical innovations in computing, with a history reaching back to IBM in the 1960s. Its main purpose is to raise the level of system abstraction so that end users get better performance out of their hardware. For example, a user can run multiple operating systems, such as Linux and Mac OS X, in two different virtual machines on top of a single physical desktop or workstation that runs Windows. This development frees us from the tight coupling between systems, applications, and the hardware characteristics both of them require. It lets us easily move a system from one machine to another, and we can now even virtualize applications in two ways. In the first, we virtualize the whole environment the application runs in, producing a portable application that can run on any machine. In the second, we virtualize the way we interact with the application: the application lives on one server, and any user can use it regardless of the local operating system, whether Windows, Linux, Mac OS X, Android, and so on.

For example, following the first method we can take an application like Adobe Photoshop and turn it into portable software that any user can run without a separate installation or license on the local machine, while with the second method we keep Photoshop shared on a server and users access it remotely, much like Google Docs.

Virtualization is also designed to cut IT department costs as never before, as the previous examples suggest. We do not need to hire many IT specialists to take care of 100 computers; instead we can create all of those computers on top of two or three servers, which makes them easier to manage and operate. It also lets us share hardware and software: a virtual machine can be given more RAM and CPU capacity while the other virtual machines on the same server are idle, so it is not limited to a fixed amount of RAM and CPU. Beyond that, a virtual machine can even move freely from an overloaded server to one with less load, with zero downtime. It sounds interesting, and we will see it for real in the last project, so stay tuned.

In this research paper I will try to explain what virtualization is, how it works, and how it can benefit our organizations. I hope it adds something new for all of you.

Historical background

One of the common misconceptions about virtualization is that its history is part of the history of VMware, which was founded in 1998. In fact, virtualization started back in the early 1960s with companies like General Electric (GE), Bell Labs, and International Business Machines (IBM).

In the early 1960s, IBM sold several different types of systems; each had its own requirements and could run only one process at a time. Most of IBM's customers were scientific laboratories and similar institutions, so these systems were acceptable and matched their needs.

Because of this wide range of hardware requirements, IBM started working on the S/360 mainframe, which was intended to replace many of its existing systems. In 1963, the Massachusetts Institute of Technology (MIT) started a research project that required a computer capable of serving more than one simultaneous user. MIT received proposals from various vendors, including GE and IBM. IBM was not willing to commit to a time-sharing computer because it believed there would not be much demand for such a product; GE thought otherwise, which is why MIT chose GE in the end.

Later on, IBM noticed the demand for such a system when Bell Labs asked for something similar. IBM started a project to build the CP-40 mainframe, which evolved into the CP-67, the first commercial mainframe to support virtualization. The operating system used on the CP-67 was called CP/CMS, where CP stands for Control Program and CMS stands for Console Monitor System. CMS was a small single-user operating system designed to be interactive, while CP was the program that created virtual machines on the mainframe. The interactive part was a big step forward, since none of IBM's previous systems supported it: you had to feed the program into the computer, it would finish processing, and it would hand back the results as output on the screen or on paper. In the new system, the user could interact with the program while it was running.

The traditional approach to time sharing divided the computer's resources among many users, who could then interact concurrently with a single computer; this approach later evolved into Unix. The CP approach to time sharing instead gave each user their own complete operating system. The main advantages of virtual machines over a time-sharing operating system were more efficient use of the system, since virtual machines could share the overall resources of the mainframe instead of having them split equally among all users; better security, since every user had a completely separate operating system; and better reliability, since no single user could crash the entire system, only their own, a problem we have all run into even with recent operating systems from Microsoft.

IBM was the first to bring the concept of the virtual machine to the market, and we still use VMs based on the same concept today, only instead of IBM's mainframes we run them on various kinds of servers, workstations, and sometimes even personal computers.

In 1987, Insignia Solutions demonstrated a software emulator called SoftPC that allowed Unix users to run DOS applications on their Unix machines. At that time a PC running MS-DOS cost around $1,500, while SoftPC gave users with a Unix workstation the ability to run DOS applications for a mere $500. By 1989, Insignia Solutions had released a Mac version of SoftPC.

In 1997, Connectix released a program called Virtual PC for the Macintosh, which allowed users to run Windows on their Mac computers to work around software incompatibilities. VMware was established one year later, in 1998, and started to sell a product called VMware Workstation in 1999. After that, other vendors entered the market, such as Microsoft with Microsoft Virtual PC 2004 and Citrix with XenServer in 2007.


The definition of Virtualization

"Virtualization software makes it possible to run multiple operating systems and multiple applications on the same server at the same time," said Mike Adams, director of product marketing at VMware.

As we know, there are many definitions of virtualization, and most of them reflect what each virtualization vendor provides. We can summarize them by describing virtualization as a way to create a virtual version of something, whether a computer, a storage device, a tablet, a router, a switch, and so on.

Figure 2. Source: http://www.techchangers.com

The benefits of virtualization

1- Underutilized Resources

Many data centers have machines consuming only about 10-15% of their total processing capacity, which means 85-90% of each machine's resources sit unused most of the time.

Sometimes we do this on purpose because of the mistaken belief that the operating system will crash under the load of different applications, especially when some vendors recommend keeping their applications separate from all others.

This wastes not only the machines' resources but also operational resources, since these machines still take up space and consume electricity and cooling.

Figure 3. Source: http://farooqz.com/virtualization-consolidation/

What virtualization does to overcome this problem is enable a single machine to support multiple systems, which maximizes the use of the available resources while keeping the applications separated in different virtual machines and solving incompatibility issues along the way.
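As a rough illustration of the consolidation arithmetic, the short Python sketch below estimates how many lightly loaded workloads can share a host at a sensible target utilization. The utilization figures and server count are hypothetical, chosen only to match the 10-15% estimate above, not measurements from any real data center.

```python
# Rough consolidation estimate with hypothetical numbers (not real measurements).

def vms_per_host(avg_vm_utilization: float, target_host_utilization: float) -> int:
    """How many lightly loaded workloads fit on one host at a target utilization."""
    return int(target_host_utilization // avg_vm_utilization)

avg_util = 0.12        # each legacy server averages ~10-15% CPU utilization (assumed)
target_util = 0.70     # leave headroom on the consolidated host (assumed)
physical_servers = 40  # hypothetical estate size

ratio = vms_per_host(avg_util, target_util)   # -> 5 workloads per host
hosts_needed = -(-physical_servers // ratio)  # ceiling division -> 8 hosts

print(f"Consolidation ratio: {ratio}:1")
print(f"{physical_servers} physical servers -> about {hosts_needed} virtualization hosts")
```

A 5:1 ratio like this is deliberately conservative; a real consolidation project also has to account for RAM, storage, and peak load, not just average CPU.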


2- Go Green

Let's first have a look at these two facts:

- A study commissioned by AMD and performed by a scientist from the Lawrence Berkeley National Laboratory showed that the amount of energy consumed by data centers in the U.S. doubled between 2000 and 2005. Furthermore, energy consumption is expected to increase another 40 percent by the end of the decade. Current energy consumption by data center servers and associated cooling costs represents 1.2 percent of the total energy consumed in the U.S.
- As part of this study, the United States Environmental Protection Agency (EPA) has convened a working group to establish standards for server energy consumption and plans to establish a new “Energy Star” rating for energy efficient servers.

In the end, transforming physical servers into virtual machines and consolidating them onto far fewer physical servers means reducing the monthly power and cooling costs. One of the great features virtualization provides is the ability to move virtual machines onto a smaller number of servers at specific times, mostly at night when the load is lower, and shut down the remaining servers to save electricity. Imagine how powerful that feature is!
3- Reduce the data center footprint

The rise of the Internet has dramatically accelerated the shift from manual to electronic and from paper to digital, and has made the world much smaller. Companies want to communicate with and reach as many customers as they can through the worldwide connectivity of the Internet, which has naturally accelerated the move to computerized business processes. Serving more customers around the world requires more and more servers every day, which means more space to extend our data centers, creating a real estate problem for companies.

Virtualization, by offering the ability to host multiple guest systems on a single physical server, allows organizations to reduce the overall footprint of the data center and avoid the expense of building out more data center space.

That means fewer servers, fewer racks, less storage, and less networking gear, all of which translates into money saved.


4- Testing/Lab Environment

Virtualization allows us to build labs or test environments operating on their own isolated networks, which is really good for companies and individuals since they do not have to worry about the main machine or operating system. As students, we are lucky to have virtualization in our lives.

5- Faster and Easier Management

There are many tasks system administrators have to do every day, including but not limited to monitoring hardware and software performance, replacing defective hardware components, installing and repairing operating systems and application software, backing up the servers' data, protecting the network from internal and external attacks, upgrading software and hardware, and so on.

Virtualization offers the opportunity to reduce overall system administration costs by reducing the number of machines that need to be taken care of, even though many of the tasks associated with system administration continue in a virtualized environment. Virtualization also makes provisioning and deployment of services much faster, since we can easily clone virtual machines from templates and master images in minutes; it takes just a few clicks to get a virtual machine up and running.

6- Incompatibility issues

It is practically impossible for individuals and organizations to tie themselves to only one or a few vendors, yet we all suffer when we need to run an application that is compatible only with specific software or hardware. Here virtualization steps in by abstracting the underlying hardware and replacing it with virtual hardware, which gives us more flexibility to run applications. We no longer need to buy servers that are compatible with our applications, since we can simply create virtual machines with whatever software and hardware the vendor requires. Beyond that, we can now virtualize the whole environment the application interacts with, creating a portable version that runs on any system.

7- High Availability

Virtualization offers many features that cannot be found on physical machines, such as live migration, which lets us move virtual machines from one server to another with zero downtime. We also have storage migration, fault tolerance, and disaster recovery. All these features give virtual machines the ability to move from one server to another and to recover quickly from any unexpected disaster, which increases the availability that every organization is trying to reach.

8- Providing Disaster Recovery

One of the biggest challenges every organization faces is having a disaster recovery plan, which is not easy to afford. Here virtualization provides three features that make DR plans much easier and cheaper.

A- Hardware abstraction removes the dependency on a particular hardware vendor or data center model. Building a disaster recovery site therefore does not require hardware identical to the production environment.
B- Consolidating the infrastructure onto fewer physical servers makes it financially feasible for organizations to maintain a disaster recovery site at all.
C- Almost all virtualization platforms, e.g. Microsoft, VMware, etc., provide automated disaster recovery solutions with which we can easily test how our disaster recovery will work in reality. Implementing these solutions, whether from Microsoft or any other vendor, has become extremely easy, as we will see in the third project.

9- Isolate applications

Let's go back a little to when administrators found out that Windows NT was a monolithic OS: many operations were performed at the kernel level, which meant that when an application froze, it could freeze the entire system. For this reason administrators adopted the one application/one server model to isolate applications. Under this model, if administrators want to try a new technology to meet new business needs, the first thing they think about is buying new hardware so that their existing applications will not be impacted by other applications sharing the same machine.

Over time, Microsoft solved the monolithic OS problem, but people's habits did not change. Even today we see admins who refuse to host their applications alongside others on the same server because they do not trust the stability of the other applications, or because some vendors require their applications to be isolated in order to support them. These habits forced administrators to deploy more and more physical servers, so data centers ran out of space, cooling systems were overrun by the sheer number of physical servers, and power costs went through the roof along with the rising cost of nonrenewable resources. Virtualization is not going to change people's habits; instead it provides application isolation by deploying many virtual machines on top of one physical server, which solves the compatibility issues as well.

10- Support old applications

We often have old applications that do not run on modern operating systems or are not supported on the newest hardware. Here virtualization offers broad solutions for virtualizing the application environment so that an old application can run on any system and any hardware.

11- Move to the cloud

Virtualization is a foundational element of the cloud, but it is not the cloud itself. The cloud can, and most often does, include virtualization solutions with all the features we discussed before.

Virtualization Types

There are three areas of IT where virtualization is applied in the data center: server virtualization, storage virtualization, and network virtualization. There are other applications of virtualization, but we are not going to discuss all of them here.

1- Server Virtualization

There are three types of server virtualization, as follows:

A- Operating system virtualization: a way of splitting a single machine into multiple partitions called Virtual Environments (VE) or Virtual Private Servers (VPS), also known as containers. This method differs from the traditional virtual machine approach in that each partition can run only the same operating system as the host server: if the physical server runs Windows 10, then all the containers must run Windows 10. In the traditional virtual machine approach, each guest OS has to communicate with the host through the virtualization layer, whereas virtual environments communicate directly with the host OS, which makes their performance better. In addition, VEs are much smaller in size, which allows one server to host many of them.


This approach is an excellent choice when we want a set of similar operating systems, as web hosting companies do. They use a container for each website: every website thinks it has complete control over the machine, but in fact it shares the same hardware with many other websites hosted on the same server. Examples of operating system virtualization are Solaris Containers from Sun and Virtuozzo from SWsoft.


B- Hardware Emulation: here we use virtualization software called a hypervisor, also referred to as a Virtual Machine Monitor (VMM), which emulates the hardware presented to each virtual machine.

The virtualization layer makes it possible to host not just multiple operating systems but different ones, e.g. Linux and Windows running simultaneously on top of one machine. That machine may run a host OS such as Windows Server, or no general-purpose OS at all when a bare-metal hypervisor such as Hyper-V or ESX is used. One drawback of this approach is that the OS in each VM runs more slowly, since it has to go through the hypervisor to reach the physical hardware.

Microsoft, VMware, and Citrix are the top vendors providing hardware virtualization solutions.

C- Paravirtualization

Another approach to server virtualization, in which a very thin layer of virtualization software multiplexes access by guest operating systems to the underlying physical machine resources. This sounds the same as hardware emulation, but in fact it is not.

First, its code base is small. Second, we saw that under hardware emulation the guest OS runs more slowly because it interacts with the hypervisor layer all the time, whereas the thin paravirtualization layer simply acts as a traffic guide, allowing one guest OS to access the physical resources while preventing other guest OSs from accessing the same resources at the same time. Examples of paravirtualization are Xen and Virtual Iron.

2- Storage Virtualization

It is used to manage physical storage from multiple devices so that it appears as a single storage pool.

Storage virtualization can take many forms:

A- Direct Attached Storage (DAS)
B- Network Attached Storage (NAS)
C- Storage Area Network (SAN)


These can be connected in many ways, such as Fibre Channel, iSCSI, Fibre Channel over Ethernet, or over the Network File System. The explosion of data, driven by applications that generate more data than we can physically store in one server, by the many applications, especially Internet-based ones, where multiple machines need to access the same data, and by the risk of keeping all the data on one machine, has pushed storage toward virtualization. We use virtualized storage to avoid data access problems, make better use of storage, and simplify data management, since storage virtualization helps automate the expansion of storage capacity and reduces the need for manual provisioning. Storage resources can also be updated on the fly without affecting application performance, which reduces downtime.

We can also see how storage virtualization solves the underutilized resources issue: if you create a LUN of 500 GB and you are using only 200 GB, only 200 GB of actual storage is provisioned, which reduces the cost of storage.
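To make the 500 GB / 200 GB example concrete, here is a minimal, purely illustrative sketch of thin-provisioning accounting. The class and its fields are invented for this example and do not correspond to any vendor's API; real arrays track this internally.

```python
# Minimal sketch of thin-provisioning accounting for the 500 GB LUN example above.

class ThinLun:
    def __init__(self, provisioned_gb: int):
        self.provisioned_gb = provisioned_gb   # size the server *sees*
        self.allocated_gb = 0                  # blocks actually backed by physical disk

    def write(self, gb: int) -> None:
        # Physical capacity is consumed only when data is actually written.
        if self.allocated_gb + gb > self.provisioned_gb:
            raise ValueError("LUN is full from the host's point of view")
        self.allocated_gb += gb

lun = ThinLun(provisioned_gb=500)
lun.write(200)
print(f"Host sees {lun.provisioned_gb} GB, array has committed {lun.allocated_gb} GB")
# -> Host sees 500 GB, array has committed 200 GB
```

The point is simply that the host believes it owns 500 GB while the array has only committed 200 GB; the unwritten capacity can back other LUNs until it is actually used.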

In the end, we can summarize what storage virtualization provides in five points:

Create artificial storage volume.

It helps create one artificial storage volume out of multiple physical storage devices. These physical devices can differ in size and vendor.

Support distributed file systems

Distributed file systems allow end users to interact with storage regardless of its location, which can be local or remote. Virtual storage behaves as if it were directly connected to the device, even though it is not.

Create multiple storage volumes

Just as virtualization helps us create one virtual storage volume out of many physical devices, it also helps us create multiple volumes out of one physical storage device to keep data sets separate from each other.

Reduce underutilized storages

By dividing storage devices into virtual volumes that can be used on their own or as part of a bigger virtual volume.

Incompatibility issue

Every operating system uses a different file system to store and retrieve data, but storage virtualization makes it possible for all these systems to share the same storage device.

3- Network Virtualization

When we apply virtualization to a network, it creates a logical, software-based view of the hardware networking resources, e.g. switches, routers, etc. This enables NV to support the complex requirements of multi-tenant environments. By using NV solutions, network resources can be deployed and managed as logical or virtual services rather than physical resources. As a result, organizations can:

A- Enhance enterprise agility
B- Improve network efficiency
C- Reduce capital and operational cost
D- Provide high standards of security, scalability, and availability.

Breaking a physical network into multiple virtual networks that are isolated from each other is called external network virtualization, while network virtualization can also be applied within virtual servers to create networks between virtual machines, which is called internal network virtualization.

In some cases, having multiple virtual networks is highly recommended for security, isolating traffic in VLANs that block broadcast traffic between them. Network virtualization also improves utilization of the network by supporting additional workloads on the same hardware.
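As a toy model of that isolation, the sketch below delivers traffic only between ports tagged with the same VLAN ID. The port names and VLAN numbers are made up for illustration; a real virtual switch implements this in its forwarding logic, not in Python.

```python
# Toy model of VLAN isolation in a virtual switch (illustrative names only).

vlan_of_port = {
    "web-vm1": 10,   # tenant A
    "web-vm2": 10,   # tenant A
    "db-vm1":  20,   # tenant B
}

def can_reach(src: str, dst: str) -> bool:
    """A frame is forwarded only within its own VLAN (no inter-VLAN routing here)."""
    return vlan_of_port[src] == vlan_of_port[dst]

print(can_reach("web-vm1", "web-vm2"))  # True  - same VLAN, shared broadcast domain
print(can_reach("web-vm1", "db-vm1"))   # False - isolated unless a router joins the VLANs
```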

Network virtualization, like any other application of virtualization, requires redundancy, since any failure will take down the network and the service. Redundant switches and routers, combined with failover techniques, can shift traffic when a fault occurs.

There are several important things to take care of when we plan a virtual network, e.g. the bandwidth and the number of VLANs.

Virtual Machine

Most of us have played with virtual machines before, whether in a virtualized data center or in a home lab, so what is a virtual machine? It is the result of transforming a physical server using hypervisor software, which acts as a coordinator managing multiple operating systems in VMs. Every virtual machine consists of several files (a rough sketch of this file set follows the list):

1- Configuration file: contains the settings of the VM, such as the amount of RAM, storage, and CPU. Because it holds only settings, this file is usually small and in text or XML format.
2- Hard disk file: a virtual hard disk that acts like a physical hard disk. It comes in two main forms, virtual machine disks (VMDK) from VMware and virtual hard disks (VHD) from Microsoft. The size of the hard disk can be dynamic or fixed depending on the virtual machine's configuration, but it is mostly dynamic to avoid underutilized storage.
3- In-memory file: contains the information that is in the RAM of the virtual machine.
4- Virtual machine state file: saves the state of the machine when it goes into standby or hibernation mode.
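The sketch below simply groups those four files for one hypothetical VM. The extensions follow VMware-style naming as an example of what such a file set can look like; Hyper-V and other hypervisors use different names and formats, so treat the specifics as assumptions.

```python
# Sketch of the file set making up one virtual machine (illustrative names/extensions).

from dataclasses import dataclass

@dataclass
class VirtualMachineFiles:
    config: str        # settings: RAM, vCPUs, attached disks (small text/XML file)
    virtual_disk: str  # the guest's hard disk contents (VHD/VHDX or VMDK)
    memory: str        # contents of the guest's RAM while running or suspended
    saved_state: str   # device state captured when the VM is saved/hibernated

vm = VirtualMachineFiles(
    config="web01.vmx",
    virtual_disk="web01.vmdk",
    memory="web01.vmem",
    saved_state="web01.vmss",
)
print(vm)
```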

Hardware Role in Virtualization

The one application/one server rule is commonly applied in non-virtualized data centers, where organizations tend to assign one physical server to each application. In a virtualized data center, one physical server hosts many virtual machines, which makes the hardware even more important, since many workloads are riding on top of one physical machine.

Having discussed what virtualization is and its benefits, let's see how an organization determines whether or not virtualization is a good choice for it. First we need to consider the cost of the virtualization solution, the level of security available, availability, and scalability.

In general, businesses that run mostly on operational expenditures, with small IT staffs and fewer security concerns, are less likely to adopt virtualization solutions completely.


Businesses that run mostly on capital expenditures and have high security concerns are more likely to adopt virtualization solutions.

Virtualization is almost a necessity for large companies whose IT infrastructure grows very fast. That does not mean small businesses cannot take advantage of virtualization; for example, they can reduce operational costs and increase their availability. However, small businesses need to answer three questions before making a decision about virtualization.

1- Do you have a robust IT environment?

Small businesses with a single application and one or two servers may not find much benefit in virtualization.

2- Will your IT needs change as your business grows?

Virtualization makes it easy for organizations to scale their infrastructure up and down as the business requires. So if your IT needs change over time, then virtualization can benefit your business.

3- Is there a virtualization system that fits your needs?

Most virtualization solutions are provided by vendors like Microsoft, VMware, and Citrix and target large companies and enterprises, where the vendors can make more profit. There are some open source solutions targeting small businesses, but with limited features. I believe that in the future we will have open source solutions with more features targeting the growth of small and medium-sized companies, especially startups.

Businesses and Virtualization Providers

There are a few questions businesses should answer to clarify what they are looking for in a virtualization provider.

- Is there a long term vision of that solution? We have to know if this solution will advance and help our business in the long term.
- What type of support is available for the solution? It is important that the vendor has a wide range of solutions, services, branches, and support.
- Does the solution provide flexibility and integrate with the existing resources? Flexibility supports the growth of your business.

Virtualization Management

Virtualization makes it easy for IT staff to manage their data centers, including but not limited to deploying new computers in the form of virtual machines, backup and restore, updating systems, sharing data, migrating data, access control, and so on.

Ten steps to Virtualization success

There are ten steps recommended by Sun and AMD for virtualization success, which we can summarize as follows:

1- Do not wait to move to virtualization in one big step. Start small now, then expand later.
2- Do not rely on training alone, since the implementation of this technology is new not just to you but to everybody.
3- Do not believe that virtualization is static; it grows as your business does. What you deployed last year may need changes to meet current and future business needs.
4- Do not overlook the business case, since everything changes very fast.
5- Do not ignore the importance of hardware, which has become even more important since it hosts many machines on top of it, not just one system as before.
6- Do not miss any part of your data center that can benefit from virtualization.
7- Find a management solution that matches your virtualization solution. For example, the Sun xVM family provides virtualization and management of both physical and virtual, multi-platform Linux, Windows, and Solaris environments.
8- Collaborate with the leading virtualization providers, such as Microsoft, VMware, and Citrix.
9- Examine your administrative processes to determine which tasks can be replaced or reduced by further virtualization.
10- Celebrate your virtualization success!

Challenges facing virtualization in datacenters

Although virtualization technology has made data center consolidation possible, it has also created many challenges. The infrastructure is complex, and it is difficult and time consuming to scale. Network ownership is split between network and server administrators, security has to be maintained at a new level, and the number of virtual machines grows rapidly. The main idea behind virtualization is to move to a virtual environment that runs more virtual workloads on a smaller number of physical machines; this does not mean the value of hardware goes down. On the contrary, hardware has become more valuable than ever, since we run many computers in the form of virtual machines on top of one physical device. If system administrators do not scale the hardware capabilities to match the virtual needs, they can run into a lot of trouble. It does not matter how many hardware devices you have in your data center; what really matters is using the right physical hardware to support your virtual environment. Another big challenge, especially for small organizations, is old applications that may not perform well in a particular virtualization environment.

In that case, data center administrators have to test the performance of these applications on different virtualization platforms, for example from Microsoft, VMware, and Citrix, to determine which one works best for them.

In traditional data centers we normally secure the operating system along with its applications and data, but in a virtual environment we have one more thing to take care of: the underlying virtualization layer, where vulnerabilities can exist. If something goes wrong in this layer, all the guest machines on top of it are in trouble.

Another key challenge is that the virtualization world is evolving so fast that there are no standards yet to govern it. This can leave organizations locked in to a certain vendor's solutions; that vendor may not be able to support the needs of their business in the future, and moving to another vendor becomes costly. The explosion in the number of virtual machines adds yet another challenge, because it is so easy to deploy new virtual machines: some organizations had a limited number of physical servers before virtualization, but afterwards the number of machines doubled many times over. One of the available remedies is to define virtual machine life cycles, especially when the organization's staff changes all the time (e.g. freelancers).

Licensing is another challenge facing organizations. The licensing system has become more complex: some vendors set the license fee based on the CPU or the number of cores, while others license their solutions by the number of users, the number of virtual machines, or the number of physical devices. The designer of the system has to be careful with licensing so as not to end up spending more money than necessary; a rough cost comparison is sketched below.

Since the whole virtual environment is connected to one or more shared storage systems, the storage devices also become more important than before. Imagine a failure in one of the shared storage systems: the entire virtual environment connected to that storage would go down. For this reason, storage should be a priority when we plan a virtual environment.
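Returning to the licensing point above, a small calculation shows why the model matters. Every price and count here is invented purely for illustration and does not reflect any vendor's actual price list.

```python
# Hypothetical comparison of two licensing models for one virtualization host.
# All prices and counts are invented for illustration; real price lists differ.

sockets_per_host = 2
vms_per_host = 25

price_per_socket = 3000   # e.g. a per-CPU/per-socket licensed hypervisor (assumed price)
price_per_vm = 180        # e.g. a per-virtual-machine licensed suite (assumed price)

per_socket_cost = sockets_per_host * price_per_socket   # 6,000
per_vm_cost = vms_per_host * price_per_vm               # 4,500

print(f"Per-socket model: {per_socket_cost}")
print(f"Per-VM model:     {per_vm_cost}")
# With these made-up numbers, per-VM pricing overtakes per-socket pricing
# once the host runs 34 or more VMs, so the break-even point matters.
```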

Compatibility is still an issue in the virtual environment. One of the greatest advantages of virtualization is the ability to move virtual machines freely between physical servers, which is essential when we have trouble with one physical server, but this ability depends on the processor architecture, Intel or AMD: virtual machines running on a server with an Intel CPU cannot be live-migrated to another server that has an AMD CPU.
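A minimal sketch of the pre-migration check this restriction implies is shown below. The host inventory and field names are invented for illustration; this is not any hypervisor's real API.

```python
# Minimal sketch of a pre-migration compatibility check (illustrative data only):
# live migration is refused when source and destination use CPUs from different vendors.

hosts = {
    "hv-01": {"cpu_vendor": "Intel"},
    "hv-02": {"cpu_vendor": "Intel"},
    "hv-03": {"cpu_vendor": "AMD"},
}

def can_live_migrate(src: str, dst: str) -> bool:
    """Cross-vendor (Intel <-> AMD) live migration is not supported."""
    return hosts[src]["cpu_vendor"] == hosts[dst]["cpu_vendor"]

print(can_live_migrate("hv-01", "hv-02"))  # True  - both Intel
print(can_live_migrate("hv-01", "hv-03"))  # False - would require a cold migration instead
```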

I believe that virtualization is able to face all these challenges, and that these problems will be solved over time.

In the end, I can summarize the challenges virtualization faces in the following eight points:

1- The need for hardware that supports the virtual environment.

2- Some application vendors are unwilling to support their applications on virtual servers, or add restrictive rules regarding the virtualization environment.
3- Security issues.
4- No standards, which can leave organizations locked in to a certain vendor.
5- Growth in the number of virtual machines, which requires planning virtual machine life cycles.
6- Many confusing licensing models: per CPU, per core, per user, per machine, etc.
7- Availability of virtual storage, since all virtual resources access shared storage.
8- Compatibility issues; for example, we cannot build a cluster out of servers whose CPUs come from different vendors (AMD, Intel).

[...]
