
Design, Implementation and Evaluation of Grid Environment for DV to MPEG4 Conversion

by Jagpreet Sidhu (Author) Sarabjeet Singh (Author)

Scientific Study 2017 118 Pages

Computer Science - General

Excerpt

Table of Contents

1 Introduction to Distributed Computing
1.1 Goals and Advantages
1.2 Limitations
1.3 Grid Architecture Types
1.4 Distributed Computing Implementations
1.5 Grid computing
1.5.1 What is Grid Computing?
1.5.2 Virtual Organization
1.5.3 Comparison with other Distributed Computing technologies
1.5.3.1 Comparison with Cluster Computing
1.5.3.2 Comparison with CORBA
1.5.3.3 Comparison with P2P
1.6 Goals of the book

2 Background and Related Work
2.1 Globus
2.1.1 GRAM (Globus Resource Allocation Manager)
2.1.2 MDS (Monitoring and Discovery Service)
2.1.3 GridFTP
2.1.4 Globusrun-ws
2.1.5 LDAP Service
2.1.6 Globus Security Infrastructure
2.1.6.1 Public Key Cryptography
2.1.6.2 Digital Signatures
2.1.6.3 Certificates
2.1.6.4 Mutual Authentication
2.1.6.5 Confidential Communication
2.1.6.6 Securing Private Keys
2.1.6.7 Delegation, Single Sign-On and Proxy Certificates
2.2 Sun Grid Engine
2.3 Video formats
2.3.1 Digital video
2.3.2 Moving Picture Experts Group (MPEG)
2.3.2.1 MPEG Standards

3 Problem Formulation & Objectives
3.1 Problem Formulation
3.2 Objectives

4 Design of Grid Environment for DV to MPEG4 Conversion
4.1 Challenges in Design of Grid
4.2 Challenges in Video Conversion
4.3 Proposed Work Flow
4.3.1 Components of proposed solution
4.3.2 Proposed Solution

5 Implementation and Configuration Setup
5.1 Installing Grid Environment
5.1.1 Installing and Configuring Linux
5.1.1.1 Installing Linux
5.1.1.2 Setting up Accounts
5.1.2 Deploying Torque (Open PBS)
5.1.2.1 Deploying RSH
5.1.2.2 Downloading and Installing Torque
5.1.2.3 Configuring and Deploying Torque
5.1.2.4 Starting PBS
5.1.3 Deploying Sun Grid Engine
5.1.3.1 Downloading SGE
5.1.3.2 Unpacking the SGE distribution
5.1.3.3 Installing and Configuring SGE
5.1.3.4 Deploying The PostgreSQL Relational Database
5.1.4 Initializing Postgres
5.1.4.1 Starting the database
5.1.5 Fixing Java and ANT
5.1.5.1 Removing default Java
5.1.5.2 Java installation
5.1.5.3 Fixing Java and ANT
5.1.6 Deploying The Globus Toolkit
5.1.6.1 Preliminaries
5.1.6.2 Building and Installing
5.1.6.3 Creating a Certificate Authority
5.1.6.4 Obtaining a Host Certificate on nodeB and making copy of Container Certificate and gridmap file
5.1.6.5 Configuring the RFT Service and SUDO (/etc/sudoers) file
5.1.6.6 Starting the Container and Starting globus-gridftp-server
5.1.6.7 Repeating the steps for nodeA with some configuration changes
5.1.6.8 Completing Deployment on nodeC
5.1.7 Connecting Globus Gram WS and Torque (Open PBS)
5.1.8 Connecting Globus Gram WS and SUN Grid Engine
5.1.8.1 Turning on reporting for SGE
5.1.8.2 Building the WS GRAM SGE jobmanager
5.1.9 A Distributed Grid video encoding setting for the book
5.1.9.1 Installing MPlayer and Mencoder
5.2 Video Conversion Process in Grid Environment
5.3 Mencoder

6 Experimental Results
6.1 Configuration of the Environment
6.2 Evaluation of Grid with 6 minute video
6.3 Evaluation of Grid with 8 minute video
6.4 Evaluation of Grid with 10 minute video
6.5 Interpretation of Results

7 Summary, Conclusion and Future Scope
7.1 Summary
7.2 Conclusion
7.3 Future scope

Appendix A Python Source Code of 3 Scripts

Appendix B Linux Commands to Execute Scripts on Grid Client

Appendix C Linux Commands to Execute Scripts on Grid Client with Output Results

References

Abstract

The term "grid computing" has become one of the latest buzzwords in the IT industry. Grid computing is an innovative approach that leverages existing IT infrastructure to optimize compute resources and manage data and computing workloads. According to Gartner, "A grid is a collection of resources owned by multiple organizations that is coordinated to allow them to solve a common problem".

Video encoding is a lengthy, CPU-intensive task involving the conversion of video media from one format to another. Furthermore, and of particular interest to distributed processing, input video files can easily be broken down into work units. These factors make the distribution of video encoding processes viable. The majority of research on distributed processing has focused on supercomputing and parallel computing; in practice, however, grid services tend to be flexible and heterogeneous, and they offer good support for CPU-intensive work by breaking a task into small subtasks that can be submitted individually to the job schedulers in a grid cluster. Consequently, grid technology can efficiently exploit the growing pool of resources available around the edge of the Internet on home PCs by pooling them for CPU-intensive jobs and using the idle or sleeping cycles of remote computers that participate in the grid. These are the resources that the grid uses to provide a distributed video encoding service, and the same grid can easily be extended by adding more nodes and their unattended CPU cycles.

The entire grid has been developed using open source tools available over the Internet. The Globus Toolkit 4.0.1 is the main toolkit that has been used for evaluation purposes.

All experiments have been conducted and results recorded in the University's computer labs. The clients, certificate authorities, grid nodes and other resources required during evaluation have been configured and drawn from different labs of the CSE department.

Acknowledgements

The greatest thanks and that which overarches everything, goes to WahaGuru, without whom, neither I, nor anything of this work, nor the universe in which we live, would exist. Thank you for inspiring me to do everything with all my heart; for you and not for men. Thank you for always being present in every aspect of this past year and for allowing me to come to you with all my worries and concerns. This work was only possible because of your love.

Secondly, I must thank Dr. Sarabjeet Singh, who has been the greatest supervisor and co-author of this book. He knows that I like structure and has provided it in the most positive way. I hope this work brings much credit to you as supervisor; it wouldn’t have been possible without your invaluable advice.

To the lab attendants Mr. Santosh Kumar and Mr. Amar, thank you for putting up with me through the long hours of grid execution and experiment evaluation. You were always there for me, and your constant support has made this work possible, thank you!

I hope that this is a work that you can be proud of in years to come and that all I have achieved over the past four years will continue to bring you joy.

I would like to acknowledge the financial support of my father Mr. Habans Singh Sidhu who encourages me to leave my job and complete this book on time by providing me the financial and mental support.

Lastly, to my many other friends, particularly Manu Bansal, Deepinderdeep Singh, Kuldeep Chand and Anil Kumar Goel, thank you for your support and friendship which has made this year a happy and fruitful one.

Table of Contents for Figures

1 Figure 1.1: Distributed Computing

2 Figure 1.2: The Reliability of Distributed Computing

3 Figure 2.1: Globus Architecture

4 Figure 2.2: Delegating user Credentials

5 Figure 2.3: Mpeg Compression methodology

6 Figure 4.1: Video compression by single stream

7 Figure 4.2: Video compression by proposed grid computing system

8 Figure 5.1: Grid Design plan for the book work

9 Figure 5.2: Video compression by proposed grid computing system

10 Figure 5.3: System Components

11 Figure 5.4: Components of Conversion server

12 Figure 6.1: Physical Layout of Grid Environment in University Lab

13 Figure 6.2: Evaluation of 6min video on grid nodes

14 Figure 6.3: Evaluation of 8min video on grid nodes

15 Figure 6.4: Evaluation of 10min video on grid nodes

Table of Contents for Tables

1 Table 5.1: Network and Host configuration details

2 Table 6.1: Evaluation of 6min video on grid nodes

3 Table 6.2: Evaluation of 8min video on grid nodes

4 Table 6.3: Evaluation of 10min video on grid nodes

5 Table A.1: Encoding scripts summary

6 Table B.1: Linux commands to schedule the job on grid

Chapter 1 Introduction

1. Introduction to Distributed Computing

Distributed computing deals with hardware and software systems containing more than one processing element or storage element, concurrent processes, or multiple programs, running under a loosely or tightly controlled regime [1].

In distributed computing a program is split up into parts that run simultaneously on multiple computers communicating over a network. Distributed computing is a form of parallel computing, but parallel computing is most commonly used to describe program parts running simultaneously on multiple processors in the same computer. Both types of processing require dividing a program into parts that can run simultaneously, but distributed programs often must deal with heterogeneous environments, network links of varying latencies, and unpredictable failures in the network or the computers [1].

illustration not visible in this excerpt

Figure 1.1: Distributed Computing [2]
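As a toy sketch of the splitting described above (my own illustration, not taken from the book), a program can be divided into parts that run simultaneously using Python's standard multiprocessing module; the chunking scheme and work function here are hypothetical:

```python
from multiprocessing import Pool

def process_chunk(chunk):
    """Hypothetical work unit: here, just sum a slice of numbers."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1000))
    # Divide the input into four independent parts.
    chunks = [data[i::4] for i in range(4)]
    # Run the parts simultaneously on separate worker processes.
    with Pool(processes=4) as pool:
        partials = pool.map(process_chunk, chunks)
    total = sum(partials)
    print(total)  # 499500, same result as a sequential sum
```

In a true distributed setting, the parts would run on separate machines over a network rather than on local processes, and would additionally have to cope with varying latencies and node failures.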

Distributed computing is the next step in computer progress, where computers are not only networked but also smartly distribute their workload across one another so that they stay busy and don't squander the electrical energy they consume. This setup rivals even the fastest commercial supercomputers built by companies like IBM or Cray. When you combine the concept of distributed computing with the tens of millions of computers connected to the Internet, you've got the fastest computer on Earth [2].

1.1 Goals and Advantages

There are many different types of distributed computing systems and many challenges to overcome in successfully designing one.

1. The main goal of a distributed computing system is to connect users and resources in a transparent, open, and scalable way. Ideally this arrangement is drastically more fault tolerant and more powerful than many combinations of stand-alone computer systems.
2. Distributed computing's first and foremost advantage over traditional supercomputers is its frugality: it makes use of every spare moment your computer's processor is idle. The latest Pentium chip sits unused most of the time while your monitor flashes a screen saver or while your keyboard records your typing. These basic functions use very little processing power, while the rest goes to waste. Distributed computing can take full advantage of a computer's capabilities by keeping it busy with numbers to calculate [3].
3. It's also easy on the wallet. If enough users sign up, these linked computers, often referred to as virtual parallel machines, can surpass the fastest supercomputer by as much as four times for a fraction of the supercomputer's cost. What scientist or engineer with a large, overwhelming project could resist the concept of more power for less money? Provide a slick screensaver with an amazing visual of the data being processed, and Internet users will sign up in droves, adding to the computing power. This is one of the reasons the SETI@home project is so popular [3].

illustration not visible in this excerpt

Figure 1.2: The Reliability of Distributed Computing [3]

4. Reliability is no less important than speed (see figure 1.2). With a supercomputer, any one problem may bring the system to its knees. If you distribute the workload across several computers, however, there are fewer problems since each computer is independent and its problems don't affect the other computers. Less time is spent troubleshooting the problem [3].

But, like trying to control a roomful of children, working with individual computers can present many a challenge to those who wish to harness this strong beast of a machine.

1.2 Limitations

If not planned properly, a distributed system has its own limitations, some of which are mentioned below:

1. A poorly planned distributed system can decrease the overall reliability of computations if the unavailability of one node can cause disruption of the other nodes. Leslie Lamport famously quipped that: "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." [4]
2. Troubleshooting and diagnosing problems in a distributed system can also become more difficult, because the analysis may require connecting to remote nodes or inspecting communication between nodes [1].
3. Many types of computation are not well suited for distributed environments, typically owing to the amount of network communication or synchronization that would be required between nodes [1].
4. If bandwidth, latency, or communication requirements are too significant, then the benefits of distributed computing may be negated and the performance may be worse than a non-distributed environment [1].
5. Security is a major issue and concerns both the organization using distributed computing and the user whose computer is doing the work. The organization needs to be able to trust the results that the user's computer provides; past problems have ranged from "tweaking" of the client software to report a faster processing speed to the submission of malicious results with fake alerts, which then had to be recalculated [5].

1.3 Grid Architecture Types

Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely-coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system.

Distributed programming typically falls into one of several basic architectures or categories: client-server, 3-tier architecture, N-tier architecture, tightly coupled (clustered), peer-to-peer, and space based [1].

Client-server — Smart client code contacts the server for data, then formats and displays it to the user. Input at the client is committed back to the server when it represents a permanent change.

3-tier architecture — three tier systems move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are 3-Tier.

N-tier architecture — N-Tier refers typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers.

Tightly coupled (clustered) — refers typically to a cluster of machines that closely work together, running a shared process in parallel. The task is subdivided into parts that are carried out individually by each machine and then put back together to produce the final result.

Peer-to-peer — an architecture where there is no special machine or machines that provide a service or manage the network resources. Instead all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and servers.

Space based — refers to an infrastructure that creates the illusion (virtualization) of one single address-space. Data are transparently replicated according to application needs. Decoupling in time, space and reference is achieved.

Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes. Through various message passing protocols, processes may communicate directly with one another, typically in a master/slave relationship. Alternatively, a "database-centric" architecture can enable distributed computing to be done without any form of direct inter-process communication, by utilizing a shared database. [2]

1.4 Distributed Computing Implementations

Concurrency: Distributed computing implements a kind of concurrency. It interrelates tightly with concurrent programming so much that they are sometimes not taught as distinct subjects [6].

Multiprocessor systems: A multiprocessor system is simply a computer that has more than one CPU on its motherboard [7]. If the operating system is built to take advantage of this, it can run different processes (or different threads belonging to the same process) on different CPUs.

Multicore systems: Intel CPUs from the late Pentium 4 era (Northwood and Prescott cores) employed a technology called Hyper-threading that allowed more than one thread (usually two) to run on the same CPU. The more recent Sun UltraSPARC T1, AMD Athlon 64 X2, AMD Athlon FX, AMD Opteron, AMD Phenom, Intel Pentium D, Intel Core, Intel Core 2, Intel Core 2 Quad, and Intel Xeon processors feature multiple processor cores to also increase the number of concurrent threads they can run [8].

Multicomputer systems: A multicomputer may be considered to be either a loosely coupled NUMA computer or a tightly coupled cluster. Multicomputers are commonly used when strong computing power is required in an environment with restricted physical space or electrical power. Common suppliers include Mercury Computer Systems, CSPI, and SKY Computers. Common uses include 3D medical imaging devices and mobile radar. One method of constructing a multicomputer system is to connect a multiplicity of computing units together in multiple dimensions [9].

Computing taxonomies: The types of distributed systems are based on Flynn's taxonomy of systems: single instruction, single data (SISD); single instruction, multiple data (SIMD); multiple instruction, single data (MISD); and multiple instruction, multiple data (MIMD) [10].

Computer clusters: A cluster consists of multiple stand-alone machines acting in parallel across a local high speed network. Distributed computing differs from cluster computing in that computers in a distributed computing environment are typically not exclusively running "group" tasks, whereas clustered computers are usually much more tightly coupled. Distributed computing also often consists of machines which are widely separated geographically. Cluster computing for applications scientists is changing dramatically with the advent of commodity high performance processors, low-latency/high-bandwidth networks, and software infrastructure and development tools to facilitate the use of the cluster [11].

Grid Computing: The term “the Grid” was coined in the mid-1990s to denote a proposed distributed computing infrastructure for advanced science and engineering [12]. A grid uses the resources of many separate computers, loosely connected by a network (usually the Internet), to solve large-scale computation problems. Public grids may use idle time on many thousands of computers throughout the world. Such arrangements permit handling of data that would otherwise require the power of expensive supercomputers or would have been impossible to analyze.

1.5 Grid computing

In the last few years, a crucial gap has developed between the advance of networking capability (the bits per second a network can handle) and microprocessor speed (based on the number of transistors per integrated circuit). Networking capability essentially doubles every nine months today, although historically this growth was much slower. And Moore’s Law dictates that the number of transistors per integrated circuit still doubles every 18 months. Therein lies the problem. Moore’s Law is slow compared with the advancement in network capability. If you accept as a given that core networking technology now accelerates at a much faster rate than advances in microprocessor speeds, then it becomes apparent that in order to take advantage of the advances in networking, a more efficient way of harnessing microprocessor capacity is required. This new point of view changes the historical trade-off between networking and processing costs. Similar arguments apply to bulk storage. Grid computing is the means to address this gap, this change in the traditional trade-offs, by tying together distributed resources to form a single virtual computer [13].

1.5.1 What is Grid Computing?

Grid Computing principles focus on large-scale resource sharing in distributed systems in a flexible, secure, and coordinated fashion. This dynamic coordinated sharing results in innovative applications making use of high-throughput computing for dynamic problem solving. Grid computing uses the resources of many separate computers connected by a network to solve large-scale computation problems. The SETI@home project [14], launched in 1999, is a widely-known example of a very simple Grid computing project. Most of these projects work by running as a screensaver on users’ personal computers, which process small pieces of the overall data while the computer is either completely idle or lightly used.

In 1998, it was stated that a computational Grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities [8]. This definition was primarily centered on the computational aspects of Grids. Later iterations broadened this definition with more focus on coordinated resource sharing and problem solving in multi-institutional virtual organizations. Grid computing differentiates itself from other distributed computing technologies through an increased focus on resource sharing, co-ordination, manageability, and high performance. The sharing of resources, ranging from simple file transfers to complex and collaborative problem solving, is accomplished under controlled and well-defined conditions and policies. In this context, the critical problems are resource discovery, authentication, authorization, and access mechanisms.

Grid computing offers a model for solving massive computational problems by making use of the unused resources (CPU cycles and/or disk storage) of large numbers of disparate, often desktop, computers treated as a virtual cluster embedded in a distributed telecommunications infrastructure. Grid computing's focus on the ability to support computation across administrative domains sets it apart from traditional computer clusters or traditional distributed computing. Grids offer a way to solve grand challenge problems like protein folding, financial modeling, earthquake simulation, climate/weather modeling etc. Grids offer a way of using information technology resources optimally inside an organization. They also provide a means for offering information technology as a utility bureau for clients, who pay only for what they use, as with electricity or water. Grid computing has the design goal of solving problems too big for any single supercomputer, whilst retaining the flexibility to work on multiple smaller problems. Thus Grid computing provides a multi-user environment. It involves sharing heterogeneous resources (based on different platforms, hardware/software architectures, and computer languages), located in different places belonging to different administrative domains over a network using open standards. In short, it involves virtualizing computing resources.

The following are the characteristics of a Grid [15]:

1) Coordinates resources that are not under centralized control.
2) Uses standard, open, general-purpose protocols and interfaces.
3) Delivers non-trivial qualities of service.

The growing popularity of Grid computing has resulted in various kinds of Grids, common ones being known as Data Grids, Computational Grids, Bio Grids, Cluster Grids and Science Grids. Functionally, one can classify Grids into several types:

1) Computational Grids, which focus primarily on computationally intensive operations.
2) Data Grids, for the controlled sharing and management of large amounts of distributed data.
3) Equipment Grids, which have a primary piece of equipment, e.g. a telescope, where the surrounding Grid is used to control the equipment remotely and to analyze the data produced.

1.5.2 Virtual Organization

A virtual organization (VO) is a dynamic group of individuals, groups, or organizations who define the conditions and rules for sharing resources. The concept of the VO is the key to Grid computing. These are some of the common characteristics that typically exist among participants of a VO:

1. Common concerns and requirements exist regarding resource sharing.
2. Resource sharing is conditional, time-bound, and rules-driven.
3. The collection of participating individuals and/or institutions is dynamic.
4. The sharing relationship among participants is peer-to-peer in nature.
5. Resource sharing is based on an open and well-defined set of interactions and access rules.

All VOs share some characteristics and issues, including common concerns and requirements that may vary in size, scope, duration, sociology, and structure. The members of any VO negotiate the sharing of resources based upon the rules and conditions defined by the VO, and the members then share the resources in the VO’s constructed resource pool. Assigning users, resources, and organizations from different domains to a VO is one of the key technical challenges in Grid computing. This task includes identification and application of appropriate resource-sharing methods, rules and conditions for member assignment, security delegation, and access control among the participants.

1.5.3 Comparison with other distributed computing technologies

Grid computing has recently enjoyed an increase in popularity as a distributed computing architecture. As Grid computing matures, the application of the technology in additional areas will increase. Grid computing can be differentiated from almost all distributed computing paradigms by this defining characteristic: The essence of Grid computing lies in the efficient and optimal utilization of a wide range of heterogeneous, loosely coupled resources in an organization tied to sophisticated workload management capabilities or information virtualization [13].

1.5.3.1 Comparison with Cluster computing

Grid computing is often confused with cluster computing. The key difference is that the resources which comprise the Grid are not all within the same administrative domain. Grids consist of heterogeneous resources. Cluster computing is primarily concerned with Computational resources; Grid computing integrates storage, networking, and computation resources. Clusters usually contain a single type of processor and operating system; Grids can contain machines from different vendors running various operating systems. Grids are dynamic by their nature. Clusters typically contain a static number of processors and resources; resources come and go on the Grid. Resources are provisioned onto and removed from the Grid on an ongoing basis. Grids are inherently distributed over a local, metropolitan, or wide-area network. Usually, clusters are physically contained in the same complex in a single location; Grids can be (and are) located everywhere.

Cluster interconnection technology delivers extremely low network latency, which can cause problems if clusters are not close together. Grids offer increased scalability. Physical proximity and network latency limit the ability of clusters to scale out; due to their dynamic nature, Grids offer the promise of high scalability.

Cluster and Grid computing are completely complementary; many Grids incorporate clusters among the resources they manage. Indeed, a Grid user may be unaware that his workload is in fact being executed on a remote cluster. And while there are differences between Grids and clusters, the two remain closely related: there will always be a place for clusters, because certain problems will always require a tight coupling of processors. However, as networking capability and bandwidth advance, problems that were previously the exclusive domain of cluster computing could become solvable by Grid computing.

1.5.3.2 Comparison with CORBA

Of all distributed computing environments, CORBA probably shares more surface-level similarities with Grid computing than the others. This is due to the strategic relationship between Grid computing and Web services in the Open Grid Services Architecture (OGSA). Both are based on the concept of service-oriented architecture (SOA). A key distinction between CORBA and Grid computing is that CORBA assumes object orientation, but Grid computing does not. In CORBA, every entity is an object and it supports mechanisms such as inheritance and polymorphism. In OGSA, there are similarities to some object concepts, but there isn’t a presumption of object-oriented implementation in the architecture. The architecture is message oriented; object orientation is an implementation concept. However, the use of a formal definition language (such as WSDL, the Web Services Description Language) in WSRF (the Web Services Resource Framework) means that interfaces and interactions are just as precisely defined as in CORBA, sharing one of the major software engineering benefits also exhibited by object-oriented design.

Another distinction is that OGSA Grid computing is built on a Web services foundation. CORBA integrates with and interoperates with Web services. One of the problems with CORBA was that it assumed too much of the “endpoints,” which are basically all the machines (clients and servers) participating in a CORBA environment. There are also issues of interoperability between vendors’ CORBA implementations, how CORBA nodes are able to interoperate on the Internet, and how endpoints are named. This means that all of the machines in the group had to conform to certain rules and to a certain way of doing things (all assuming the same protocols like IDL, IOR, and IIOP) for CORBA to work. This is an appropriate approach when building high-reliability, tightly coupled, pre-compiled systems.

Another important distinction between Grid computing and CORBA is that OGSA Grid computing defines the following three categories of services: Grid core services, Grid data services and Grid program-execution services. CORBA does not pay specific attention to data or program-execution services, because it is based essentially on remote procedure calls (RPC). An RPC is a protocol that one program can use to request a service from a program located in another computer in a network without having to understand network details. It is a synchronous operation that requires the requesting program to be suspended until the remote procedure returns results. Many of the services specified and implemented in Grid core services (as well as the WSRF) are similar to foundational services found in CORBA. But data and program-execution services are unique to Grid computing.

1.5.3.3 Comparison with P2P

The hallmark of a P2P system is that it lacks a central point of management; this makes it ideal for providing anonymity and offers some protection from being traced. Grid environments, on the other hand, usually have some form of centralized management and security (for instance, in resource management or workload scheduling). This lack of centralization in P2P environments carries two important consequences:

1) P2P systems are generally far more scalable than Grid computing systems. Even when you strike a balance between control and distribution of responsibilities, Grid computing systems are inherently not as scalable as P2P systems.
2) P2P systems are generally more tolerant of single-point failures than Grid computing systems. Although Grids are much more resilient than tightly coupled distributed systems, a Grid inevitably includes some key elements that can become single points of failure.

This means that the key to building Grid computing systems is finding a balance between decentralization and manageability. Also, while dynamic resources are an important characteristic of Grid computing, the resources in P2P systems are even more dynamic and generally more fleeting than resources on a Grid. For both P2P and Grid computing systems, utilization of the distributed resources is a primary objective: given a wealth of computing resources, both of these systems will try to use them as much as possible.

A final distinction between the two systems concerns standards: the general lack of standards in the P2P world contrasts with the host of standards in the Grid universe, where bodies such as the Global Grid Forum refine existing standards and create new ones.

1.6 Goals of the book

The popularization of broadband networks and the development of MPEG-4 compression technology have spurred the growth of multimedia content on the Internet. Among these services, Video-on-Demand is the most popular means by which universities deliver lecture and research content to remote locations around the globe. However, converting audio and video data from the Digital Video (DV) standard format into MPEG-4, the newer compression standard used on the Internet and on mobile devices, takes an extremely long time. Although MPEG-4 improves the compression ratio, massive storage equipment is still needed to hold the audio and video data, and the price of MPEG-4-related hardware equipment remains high. These problems can be addressed with Grid computing technology, or a PC Grid. In this book, we use a Linux PC grid built on the open source Globus Toolkit 4.0.1 middleware to achieve high-performance conversion of very large files from the camcorder (DV) format to the well-compressed MPEG-4 format. Moreover, we build the grid from desktop systems to make the compression environment more convenient and realistic. For the video conversion itself, we use the software tool "mencoder" to perform the conversion in parallel, with the goal of achieving the best execution time by enabling each node to work at its best processing capacity. We propose a grid-based technique, built from open source tools, that reduces the DV to MPEG-4 conversion time, so that more information can reach remote locations around the globe through a better conversion and compression infrastructure.
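The core of the parallel approach described above is dividing a long clip into per-node work units before invoking mencoder on each. The following sketch shows only that splitting logic; the function name and frame counts are illustrative assumptions, not the book's actual implementation:

```python
def split_frames(total_frames, nodes):
    """Divide a frame range into near-equal contiguous chunks, one per node."""
    base, extra = divmod(total_frames, nodes)
    chunks, start = [], 0
    for i in range(nodes):
        size = base + (1 if i < extra else 0)  # spread the remainder evenly
        chunks.append((start, start + size))   # half-open [start, end) range
        start += size
    return chunks

# e.g. a 9000-frame DV clip (5 minutes at 30 fps) across 4 grid nodes:
print(split_frames(9000, 4))
# [(0, 2250), (2250, 4500), (4500, 6750), (6750, 9000)]
```

Each tuple would then parameterize one mencoder invocation on one node; the chunks cover the clip exactly, with no gaps or overlaps.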

The rest of the book is organized as follows: Chapter 2 gives the background and related work for the Conversion Grid infrastructure. The problem formulation, objectives and architecture of the video Conversion Grid are described in Chapters 3 and 4. Details about the implementation of the video Conversion Grid are given in Chapter 5. Chapter 6 evaluates test runs on the Grid. Chapter 7 gives conclusions and discusses future work.

Chapter 2 Background and Related Work

This chapter discusses the background material for the Video Conversion grid infrastructure. The architecture of the Video Conversion grid infrastructure is covered in Chapter 4. Details about Globus (including the LDAP service and the Globus Security Infrastructure), Sun Grid Engine, the Digital Video standard and video compression standards are given in the subsequent sections.

2.1 Globus

The open source Globus Toolkit is a fundamental enabling technology for the "Grid", letting people share computing power, databases, and other tools securely online across corporate, institutional, and geographic boundaries without sacrificing local autonomy [16]. The toolkit includes software for security, information infrastructure, resource management, data management, communication, fault detection, and portability. It is packaged as a set of components that can be used either independently or together to develop applications. The Globus Toolkit was conceived to remove obstacles that prevent seamless collaboration. Its core services, interfaces and protocols allow users to access remote resources as if they were located within their own machine room, while simultaneously preserving local control over who can use resources and when.

The Globus architecture is shown in figure 2.1 [16]. The toolkit has three components known as pillars: Resource Management, Information Services and Data Management. The toolkit uses GSI (the Globus Security Infrastructure) to provide a common security protocol for each of the pillars. GSI is based on public key encryption, X.509 certificates and the Secure Sockets Layer (SSL) protocol.

illustration not visible in this excerpt

Figure 2.1: Globus Architecture [17]

2.1.1 GRAM (Globus Resource Allocation Manager)

The Globus Resource Allocation Manager provides a standard interface to all the local resource management tools a site uses. Globus resource management layers high-level global resource management services on top of local resource-allocation services. The GRAM service is provided by a combination of the gatekeeper and the job manager. The gatekeeper authenticates an inbound request using GSI and maps the user's global Grid ID to a local username. The incoming request specifies a specific local service to be launched, the latter usually being a job manager. The user composes the request in the Resource Specification Language (RSL), which the gatekeeper hands over to the job manager. After parsing the RSL, the job manager translates it into the local scheduler's language. GRAM can also stage in executables or data files using Global Access to Secondary Storage (GASS) [18].
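As an illustration, a minimal GT2-style RSL request might look like the following; the executable and attribute values are hypothetical examples, not taken from the book:

```
& (executable = /bin/ls)
  (arguments = -l)
  (count = 1)
```

The gatekeeper would hand this document to a job manager, which translates the attribute list into the submission syntax of the local scheduler.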

2.1.2 MDS (Monitoring and Discovery Service)

MDS stands for Monitoring and Discovery Service; in GT2 it was called the Meta-computing Directory Service. The features provided in GT2 by the Monitoring and Discovery Service (MDS2) are now provided by the GT3 Information Services component, also known as MDS3. When used in conjunction with standard Open Grid Services Infrastructure (OGSI) mechanisms that provide a consistent way of querying any Grid service about its configuration and status information, these services provide all of the capabilities of MDS2 and more, all within an OGSA-compliant environment [16].

The main part of MDS is the LDAP server. Information is gathered by information repositories (GRIS, the Grid Resource Information Service) and organized in trees. The aggregation of information is facilitated by a registration service (GIIS, the Grid Index Information Service). The information for each node is gathered by launching executables called information providers.

MDS uses slapd as the LDAP directory server. Slapd implements version 3 of the Lightweight Directory Access Protocol and supports certificate-based authentication and data security (integrity and confidentiality) services through the use of TLS (or SSL). Slapd is threaded for high performance: a single multi-threaded slapd process handles all incoming requests using a pool of threads, which reduces system overhead while maintaining high performance.

2.1.3 GridFTP

This is a data transfer protocol based on FTP, highly optimized to give secure and reliable performance in a Grid. Among the various features it provides, the important ones are GSI security, partial file transfers, authenticated data channels and third-party (direct server-to-server) transfers. The protocol also allows developers to add plugins for customized reliability and fault tolerance features [19].

2.1.4 Globusrun-ws

Globusrun-ws (the WS GRAM client) is a program for submitting jobs to a local or remote job host and managing them. WS GRAM provides secure job submission to many types of job scheduler for users who have the right to access a job-hosting resource in a Grid environment. All WS GRAM submission options are supported transparently through the embedded request document input. Globusrun-ws offers additional features to fetch job output files incrementally during the run and to automatically delegate the credentials needed for certain optional WS GRAM features. Online and batch submission modes are supported, with reattachment (recovery) for jobs whether they were started with this client or another WS GRAM client application [20].
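For illustration, a minimal WS GRAM submission pairs a small job description document with globusrun-ws. The file name and the job contents below are hypothetical examples:

```
$ cat date.xml
<job>
    <executable>/bin/date</executable>
</job>
$ globusrun-ws -submit -f date.xml
```

The job description is the "embedded request document" mentioned above; globusrun-ws handles authentication, submission and status reporting for it.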

2.1.5 LDAP Service

LDAP stands for Lightweight Directory Access Protocol. As the name suggests, it is a lightweight protocol for accessing directory services, specifically X.500-based directory services. LDAP runs over TCP/IP or other connection-oriented transfer services [9]. The LDAP information model is based on entries. An entry is a collection of attributes that has a globally-unique Distinguished Name (DN). The DN is used to refer to the entry unambiguously. Each of the entry's attributes has a type and one or more values. The types are typically mnemonic strings, like "cn" for common name or "mail" for email address. The syntax of a value depends on the attribute type. Directory entries are arranged in a hierarchical tree-like structure. Traditionally, this structure reflected geographic and/or organizational boundaries. In addition, LDAP allows us to control which attributes are required and allowed in an entry through a special attribute called objectClass. The values of the objectClass attribute determine the schema rules the entry must obey.

An entry is referenced by its distinguished name, which is constructed by taking the name of the entry itself (called the Relative Distinguished Name or RDN) and concatenating the names of its ancestor entries. Operations are provided for adding and deleting an entry from the directory, changing an existing entry, and changing the name of an entry. Most of the time, though, LDAP is used to search for information in the directory.
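A hypothetical LDIF entry ties these pieces together: the DN, its leading RDN (`cn=Jane Doe`), the typed attributes, and the schema-controlling objectClass. All names and values here are invented for illustration:

```
dn: cn=Jane Doe,ou=People,dc=example,dc=org
objectClass: inetOrgPerson
cn: Jane Doe
sn: Doe
mail: jane@example.org
```

The DN is the RDN `cn=Jane Doe` concatenated with the names of its ancestors (`ou=People`, then `dc=example,dc=org`), and the `inetOrgPerson` object class is what requires the `cn` and `sn` attributes to be present.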

The LDAP search operation allows some portion of the directory to be searched for entries that match some criteria specified by a search filter. Information can be requested from each entry that matches the criteria. LDAP directory service is based on a client-server model. One or more LDAP servers contain the data making up the directory information tree (DIT). The client connects to servers and asks it a question. The server responds with an answer and/or with a pointer to where the client can get additional information (typically, another LDAP server). No matter which LDAP server a client connects to, it sees the same view of the directory; a name presented to one LDAP server references the same entry it would at another LDAP server. This is an important feature of a global directory service, like LDAP. LDAP provides a mechanism for a client to authenticate, or prove its identity to a directory server, paving the way for rich access control to protect the information the server contains. LDAP also supports data security (integrity and confidentiality) services.

2.1.6 Globus Security Infrastructure

GSI uses public key cryptography (also known as asymmetric cryptography) as the basis for its functionality. Many of the terms and concepts used in this description of GSI come from its use of public key cryptography [22].

The primary motivations behind GSI are:

- The need for secure communication (authenticated and perhaps confidential) between elements of a computational Grid.
- The need to support security across organizational boundaries, thus prohibiting a centrally-managed security system.
- The need to support "single sign-on" for users of the Grid, including delegation of credentials for computations that involve multiple resources and/or sites.

2.1.6.1 Public Key Cryptography

The most important thing to know about public key cryptography is that, unlike earlier cryptographic systems, it relies not on a single key (a password or a secret "code"), but on two keys. These keys are numbers that are mathematically related in such a way that if either key is used to encrypt a message, the other key must be used to decrypt it. Also important is the fact that it is next to impossible (with our current knowledge of mathematics and available computing power) to obtain the second key from the first one and/or any messages encoded with the first key [23].

By making one of the keys available publicly (a public key) and keeping the other key private (a private key), a person can prove that he or she holds the private key simply by encrypting a message. If the message can be decrypted using the public key, the person must have used the private key to encrypt the message.

Important: It is critical that private keys be kept private! Anyone who knows the private key can easily impersonate the owner.

2.1.6.2 Digital Signatures

Using public key cryptography, it is possible to digitally "sign" a piece of information. Signing information essentially means assuring a recipient of the information that the information hasn't been tampered with since it left your hands.

To sign a piece of information, first compute a mathematical hash of the information. (A hash is a condensed version of the information. The algorithm used to compute this hash must be known to the recipient of the information, but it isn't a secret.) Using your private key, encrypt the hash and attach it to the message. Make sure that the recipient has your public key [22].

To verify that your signed message is authentic, the recipient of the message will compute the hash of the message using the same hashing algorithm you used, and will then decrypt the encrypted hash that you attached to the message. If the newly-computed hash and the decrypted hash match, then it proves that you signed the message and that the message has not been changed since you signed it.
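The sign-and-verify flow above can be sketched with a textbook-sized RSA key pair. All numbers below are toy values chosen for illustration; real keys are thousands of bits long, and real signature schemes add padding:

```python
import hashlib

# Toy RSA pair: n = 61 * 53, e is the public exponent, d the private one.
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    # Hash the message, then encrypt the hash with the private key.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Recompute the hash and compare it with the decrypted signature.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"grid job request"
sig = sign(msg)
print(verify(msg, sig))          # True
print(verify(b"tampered", sig))
```

If either the message or the signature is altered, the recomputed hash and the decrypted hash no longer match, which is exactly the tamper-evidence property described above.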

2.1.6.3 Certificates

A central concept in GSI authentication is the certificate. Every user and service on the Grid is identified via a certificate, which contains information vital to identifying and authenticating the user or service [22].

A GSI certificate includes four primary pieces of information:

- A subject name, which identifies the person or object that the certificate represents.
- The public key belonging to the subject.
- The identity of a Certificate Authority (CA) that has signed the certificate to certify that the public key and the identity both belong to the subject.
- The digital signature of the named CA.

Note that a third party (a CA) is used to certify the link between the public key and the subject in the certificate. In order to trust the certificate and its contents, the CA's certificate must be trusted. The link between the CA and its certificate must be established via some non-cryptographic means, or else the system is not trustworthy.

GSI certificates are encoded in the X.509 certificate format, a standard data format for certificates established by the Internet Engineering Task Force (IETF). These certificates can be shared with other public key-based software, including commercial web browsers from Microsoft and Netscape.

2.1.6.4 Mutual Authentication

If two parties have certificates, and if both parties trust the CAs that signed each other's certificates, then the two parties can prove to each other that they are who they say they are. This is known as mutual authentication. GSI uses the Secure Sockets Layer (SSL) for its mutual authentication protocol, which is described below. (SSL is also known by a new, IETF standard name: Transport Layer Security, or TLS.)

Before mutual authentication can occur, the parties involved must first trust the CAs that signed each other's certificates. In practice, this means that they must have copies of the CAs' certificates--which contain the CAs' public keys--and that they must trust that these certificates really belong to the CAs.

- To mutually authenticate, the first person (A) establishes a connection to the second person (B).
- To start the authentication process, A gives B his certificate.
- The certificate tells B who A is claiming to be (the identity), what A's public key is, and what CA is being used to certify the certificate.
- B will first make sure that the certificate is valid by checking the CA's digital signature to make sure that the CA actually signed the certificate and that the certificate hasn't been tampered with. (This is where B must trust the CA that signed A's certificate.)
- Once B has checked out A's certificate, B must make sure that A really is the person identified in the certificate.
- B generates a random message and sends it to A, asking A to encrypt it.
- A encrypts the message using his private key, and sends it back to B.
- B decrypts the message using A's public key.
- If this results in the original random message, then B knows that A is who he says he is.
- Now that B trusts A's identity, the same operation must happen in reverse.
- B sends A her certificate, A validates the certificate and sends a challenge message to be encrypted.
- B encrypts the message and sends it back to A, and A decrypts it and compares it with the original.
- If it matches, then A knows that B is who she says she is.

At this point, A and B have established a connection to each other and are certain that they know each others' identities.
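The challenge-response at the heart of this exchange can be sketched with a toy RSA pair. The numbers are illustrative only, and GSI actually performs this step inside the SSL handshake rather than as bare modular arithmetic:

```python
import secrets

# A's toy RSA pair: (n, e) is the public key from A's certificate, d is private.
n, e, d = 3233, 17, 2753

# B generates a random challenge message and sends it to A.
challenge = secrets.randbelow(n)

# A encrypts the challenge with the private key and sends it back.
response = pow(challenge, d, n)

# B decrypts with A's public key; a match proves A holds the private key.
authenticated = pow(response, e, n) == challenge
print(authenticated)  # True
```

Only the holder of `d` can produce a response that decrypts back to the original challenge, so a successful round trip authenticates A without ever transmitting the private key.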

2.1.6.5 Confidential Communication

By default, GSI does not establish confidential (encrypted) communication between parties. Once mutual authentication is performed, GSI gets out of the way so that communication can occur without the overhead of constant encryption and decryption.

GSI can easily be used to establish a shared key for encryption if confidential communication is desired. Recently relaxed United States export laws now allow us to include encrypted communication as a standard optional feature of GSI.

A related security feature is communication integrity. Integrity means that an eavesdropper may be able to read communication between two parties but is not able to modify the communication in any way. GSI provides communication integrity by default. (It can be turned off if desired). Communication integrity introduces some overhead in communication, but not as large an overhead as encryption.

2.1.6.6 Securing Private Keys

The core GSI software provided by the Globus Toolkit expects the user's private key to be stored in a file in the local computer's storage. To prevent other users of the computer from stealing the private key, the file that contains the key is encrypted via a password (also known as a passphrase). To use GSI, the user must enter the passphrase required to decrypt the file containing their private key.

We have also prototyped the use of cryptographic smartcards in conjunction with GSI. This allows users to store their private key on a smartcard rather than in a file system, making it still more difficult for others to gain access to the key.

2.1.6.7 Delegation, Single Sign-On and Proxy Certificates

GSI provides a delegation capability: an extension of the standard SSL protocol which reduces the number of times the user must enter his passphrase. If a Grid computation requires that several Grid resources be used (each requiring mutual authentication), or if there is a need to have agents (local or remote) requesting services on behalf of a user, the need to re-enter the user's passphrase can be avoided by creating a proxy.

A proxy consists of a new certificate and a private key. The key pair that is used for the proxy, i.e. the public key embedded in the certificate and the private key, may either be regenerated for each proxy or obtained by other means. The new certificate contains the owner's identity, modified slightly to indicate that it is a proxy. The new certificate is signed by the owner, rather than a CA. (See diagram below.) The certificate also includes a time notation after which the proxy should no longer be accepted by others. Proxies have limited lifetimes.

illustration not visible in this excerpt

Figure 2.2: Delegating user credentials [22]

The proxy's private key must be kept secure, but because the proxy isn't valid for very long, it doesn't have to be kept quite as secure as the owner's private key. It is thus possible to store the proxy's private key in a local storage system unencrypted, as long as the permissions on the file prevent anyone else from reading it easily. Once a proxy is created and stored, the user can use the proxy certificate and private key for mutual authentication without entering a password.

When proxies are used, the mutual authentication process differs slightly. The remote party receives not only the proxy's certificate (signed by the owner), but also the owner's certificate.

During mutual authentication, the owner's public key (obtained from her certificate) is used to validate the signature on the proxy certificate. The CA's public key is then used to validate the signature on the owner's certificate. This establishes a chain of trust from the CA to the proxy through the owner.
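The two-step chain of trust can be sketched with toy RSA signing pairs. All key values, subject strings and the simplified hash-signature scheme below are illustrative assumptions, not real GSI data structures:

```python
import hashlib

# Two toy RSA signing pairs (textbook-sized; real certificates use large keys).
CA_KEY    = dict(n=3233,  e=17, d=2753)   # certificate authority
OWNER_KEY = dict(n=11413, e=3,  d=7467)   # certificate owner

def _digest(data: bytes, n: int) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes, key) -> int:
    return pow(_digest(data, key["n"]), key["d"], key["n"])

def verify(data: bytes, sig: int, pub) -> bool:
    return pow(sig, pub["e"], pub["n"]) == _digest(data, pub["n"])

# The owner's certificate is signed by the CA; the proxy's by the owner.
owner_cert = b"subject=/O=Grid/CN=Alice pubkey=11413:3"
proxy_cert = b"subject=/O=Grid/CN=Alice/CN=proxy lifetime=12h"
owner_sig = sign(owner_cert, CA_KEY)
proxy_sig = sign(proxy_cert, OWNER_KEY)

# Chain of trust: the CA key validates the owner cert, the owner key the proxy.
chain_ok = (verify(owner_cert, owner_sig, CA_KEY)
            and verify(proxy_cert, proxy_sig, OWNER_KEY))
print(chain_ok)  # True
```

Validating the owner's signature on the proxy, and then the CA's signature on the owner, is precisely the CA-to-owner-to-proxy chain described above.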

2.2 Sun Grid Engine

Sun Grid Engine (SGE), previously known as CODINE (COmputing in DIstributed Networked Environments) or GRD (Global Resource Director)[23], is an open source batch-queuing system, developed and supported by Sun Microsystems. Sun also sells a commercial product based on SGE, also known as N1 Grid Engine (N1GE).

SGE is typically used on a computer farm or high-performance computing (HPC) cluster and is responsible for accepting, scheduling, dispatching, and managing the remote and distributed execution of large numbers of standalone, parallel or interactive user jobs. It also manages and schedules the allocation of distributed resources such as processors, memory, disk space, and software licenses.

SGE is the foundation of the Sun Grid utility computing system, made available over the Internet in the United States in 2006 [24] and later in many other countries.

2.3 Video formats

2.3.1 Digital video

DV is a format for recording and playing back digital video. It was launched in 1995 with joint efforts of leading producers of video camera recorders [25].

The original DV specification, known as the Blue Book, has been standardized within the IEC 61834 family of standards. These standards define common features such as cassettes, the recording modulation method, magnetization and basic system data in part 1, and delve into the specifics of the 525-60 and 625-50 systems in part 2.

DV uses the discrete cosine transform (DCT) to compress every video frame individually. Before DCT compression is applied, some color information is removed from the original video using chroma subsampling to reduce the amount of data to be compressed. Baseline DV uses 4:1:1 subsampling in its 60 Hz variant and 4:2:0 in the 50 Hz variant. The relatively low chroma resolution is one reason DV is sometimes avoided in chroma keying applications, though advances in chroma keying techniques and software have made producing quality keys from DV material possible [26][27].

The sampling raster of baseline DV video is the same as that of ITU-R Rec. 601, with 720 pixels per line for both 4:3 and 16:9 frame aspect ratios, which results in different pixel aspect ratios for full-screen and widescreen video [28][29]. The 60 Hz system has 480 lines per frame, while the 50 Hz system has 576.

The video, the corresponding audio and metadata are packaged into 80-byte Digital Interface Format (DIF) blocks, which are multiplexed in a 150-block sequence. DIF blocks are the basic units of DV streams and can be stored as files in raw form or wrapped in file formats such as AVI, QuickTime and MXF [30][31]. One video frame is formed from either 10 or 12 such sequences, depending on the scanning rate, which results in a data rate of about 25 Mbit/s, not including the audio data rate. When written to tape, each sequence corresponds to one complete track [28].
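The arithmetic behind these figures can be checked directly: 150 blocks of 80 bytes per sequence and 10 sequences per 60 Hz frame give the raw DIF stream rate, which carries audio, subcode and headers on top of the roughly 25 Mbit/s video payload cited above:

```python
DIF_BLOCK_BYTES = 80
BLOCKS_PER_SEQUENCE = 150
SEQUENCES_PER_FRAME = 10        # 60 Hz system (a 50 Hz frame uses 12)
FPS = 30000 / 1001              # 60 Hz system frame rate, ~29.97

frame_bytes = DIF_BLOCK_BYTES * BLOCKS_PER_SEQUENCE * SEQUENCES_PER_FRAME
stream_mbps = frame_bytes * 8 * FPS / 1e6
print(frame_bytes)              # 120000 bytes per frame
print(round(stream_mbps, 1))    # 28.8 Mbit/s for the whole DIF multiplex
```

The total multiplex rate works out slightly above 25 Mbit/s because the DIF sequences also carry the audio and metadata that the 25 Mbit/s video figure excludes.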

For audio, DV allows either two linear PCM channels (usually stereo) at 16-bit resolution and a 48 kHz sampling rate (768 kbit/s per channel, 1.5 Mbit/s stereo), or four nonlinear PCM channels at 12-bit resolution and a 32 kHz sampling rate (384 kbit/s per channel, 1.5 Mbit/s for four channels). For professional or broadcast applications, 48 kHz is used almost exclusively. In addition, the DV specification includes the ability to record 16-bit audio at 44.1 kHz (706 kbit/s per channel, 1.4 Mbit/s stereo), the same sampling rate used for CD audio, but this option is rarely used in practice.
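These per-channel figures follow directly from multiplying the sample rate by the sample depth; the helper function below is just an illustrative check:

```python
def pcm_kbps(sample_rate_hz, bits_per_sample):
    """Uncompressed PCM data rate for one channel, in kbit/s."""
    return sample_rate_hz * bits_per_sample / 1000

print(pcm_kbps(48000, 16))   # 768.0 kbit/s per channel
print(pcm_kbps(32000, 12))   # 384.0 kbit/s per channel
print(pcm_kbps(44100, 16))   # 705.6 kbit/s per channel (cited as ~706)
```

Doubling the first figure for stereo, or quadrupling the second for four channels, reproduces the ~1.5 Mbit/s totals in the text.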

Baseline DV employs unlocked audio, which means that the sound may be up to one third of a frame out of sync with the video. However, this is the maximum drift of the audio/video synchronization; it is not compounded throughout the recording.

2.3.2 Moving Picture Experts Group (MPEG)

The MPEG compression methodology is considered asymmetric in that the encoder is more complex than the decoder [21]. The encoder needs to be algorithmic or adaptive, whereas the decoder is 'dumb' and carries out fixed actions [21]. This is advantageous in applications such as broadcasting, where the number of expensive complex encoders is small but the number of simple inexpensive decoders is large. MPEG's (that is, ISO's) approach to standardization is novel because it is not the encoder that is standardized, but the way a decoder interprets the bit stream. A decoder that can successfully interpret the bitstream is said to be compliant [21]. The advantage of standardizing the decoder is that encoding algorithms can improve over time, yet compliant decoders continue to function with them [21]. The MPEG standards give very little information about the structure and operation of the encoder, so implementers can supply encoders using proprietary algorithms [22]. This leaves scope for competition between different encoder designs: better designs can evolve and users have greater choice, because encoders of different levels of cost and complexity can exist, yet a compliant decoder operates with all of them [22].

MPEG also standardizes the protocol and syntax under which it is possible to combine or multiplex audio data with video data to produce a digital equivalent of a television program. Many such programs can be multiplexed and MPEG defines the way such multiplexes can be created and transported. The definitions include the metadata used by decoders to demultiplex correctly.

illustration not visible in this excerpt

Figure 2.3: MPEG compression methodology [24]

2.3.2.1 MPEG Standards

The MPEG standards consist of different Parts. Each part covers a certain aspect of the whole specification.[35] The standards also specify Profiles and Levels. Profiles are intended to define a set of tools that are available, and Levels define the range of appropriate values for the properties associated with them [36]. Some of the approved MPEG standards were revised by later amendments and/or new editions. MPEG has standardized the following compression formats and ancillary standards [37]:

- MPEG-1 (1993): Coding of moving pictures and associated audio for digital storage media at up to about 1,5 Mbit/s (ISO/IEC 11172). The first MPEG compression standard for audio and video, it was basically designed to allow moving pictures and sound to be encoded into the bitrate of a Compact Disc. It is used on Video CD and SVCD and can be used for low-quality video on DVD Video. It was used in digital satellite/cable TV services before MPEG-2 became widespread. To meet the low bitrate requirement, MPEG-1 downsamples the images and uses picture rates of only 24-30 Hz, resulting in moderate quality [38]. It includes the popular Layer 3 (MP3) audio compression format.

- MPEG-2 (1995): Generic coding of moving pictures and associated audio information (ISO/IEC 13818). Transport, video and audio standards for broadcast-quality television. The MPEG-2 standard was considerably broader in scope and of wider appeal, supporting interlacing and high definition. MPEG-2 is considered important because it has been chosen as the compression scheme for over-the-air digital television (ATSC, DVB and ISDB), digital satellite TV services like Dish Network, digital cable television signals, SVCD, DVD Video and Blu-ray Disc [38].

- MPEG-3: MPEG-3 dealt with standardizing scalable and multi-resolution compression [28] and was intended for HDTV compression, but it was found to be redundant and was merged with MPEG-2; as a result, there is no MPEG-3 standard [38][39]. MPEG-3 is not to be confused with MP3, which is MPEG-1 Audio Layer 3.

- MPEG-4 (1998): Coding of audio-visual objects (ISO/IEC 14496). MPEG-4 uses further coding tools with additional complexity to achieve higher compression factors than MPEG-2 [40]. In addition to more efficient coding of video, MPEG-4 moves closer to computer graphics applications. In more complex profiles, the MPEG-4 decoder effectively becomes a rendering processor, and the compressed bitstream describes three-dimensional shapes and surface texture [40]. MPEG-4 supports Intellectual Property Management and Protection (IPMP), which provides the facility to use proprietary technologies to manage and protect content, such as digital rights management [41]. Several new higher-efficiency video standards (newer than MPEG-2 Video) are included, notably:

- MPEG-4 Part 2 (or Simple and Advanced Simple Profile) and

- MPEG-4 AVC (or MPEG-4 Part 10 or H.264). MPEG-4 AVC may be used on HD DVD and Blu-ray Discs, along with VC-1 and MPEG-2.

Chapter 3 Problem Formulation & Objectives

3.1 Problem Formulation

It is clear from the literature that computing environments have evolved from single-user environments to Massively Parallel Processors (MPPs), clusters of workstations and distributed systems, and most recently to grid computing systems. Every transition has been a revolution, allowing scientists and engineers to solve complex problems and build sophisticated applications that previously could not be tackled. Every transition has also brought new challenges and problems in its wake, as well as the need for technical innovation. The evolution of computing systems has led to the current situation in which millions of machines are interconnected via the Internet, with various hardware and software configurations, capabilities, connection topologies, access policies and so forth. This formidable mix of hardware and software resources on the Internet has fuelled researchers' interest in investigating novel ways to exploit this abundant pool of resources economically and efficiently, and in aggregating these distributed resources for the benefit of a single application.

Grid computing is diverse and heterogeneous in nature, spanning multiple domains whose resources are not owned or managed by a single administrator. This presents grid resource management with many challenges such as site autonomy, heterogeneous substrate and policy extensibility. The Globus [42, 43] middleware toolkit addresses these issues by providing services to assist users in the utilization of grid resources. Users are still exposed to the complexities of grid middleware, however, and there is a substantial burden imposed on them in that they must have extensive knowledge of the various grid middleware components in order to be able to utilize grid resources. Such knowledge ranges from querying information providers, selecting suitable resources for the user's job, forming the appropriate JSDL (Job Submission Description Language), submitting the user's job to the resources and initiating job execution.

It is also clear from the literature survey that grids are being used extensively to carry out compute- and storage-intensive tasks, so there is a need to evaluate the performance of grids with respect to parameters such as processing time, resource utilization, memory statistics and network statistics, so that grid users can easily make decisions regarding the configuration of the environment they have to build or use for the execution of their grid jobs. This thesis therefore focuses on evaluating the performance of a grid (with respect to time) by executing a compute-, storage- and network-bandwidth-intensive job. For this we need to design and configure a grid environment consisting of a CA, client nodes and grid processing nodes, and we require a compute-, storage- and bandwidth-intensive problem on which to evaluate its performance. To carry out these tasks, the following objectives have been set.

3.2 Objectives

The major objectives of the thesis are

1. To study and implement the Grid Environment.
2. To design and implement DV to MPEG 4 conversion process on Grid Environment.
3. To evaluate the performance of Grid Environment for DV to MPEG 4 conversion with respect to time.

The first objective is to study grid computing concepts, grid types, the relationship of grids with other computing technologies, and the open source middleware available for implementation. It also involves the analysis, design and implementation of a grid environment.

The second objective is to study the compute-, storage- and network-intensive problem of DV to MPEG4 video conversion and to implement it on the grid environment designed in the first step.

The third objective is to evaluate the performance of the grid (designed under objective 1) by executing the conversion process (defined under objective 2) with different parameters.

Chapter 4 Design of Grid Environment for DV to MPEG4 Conversion

A true video conversion system requires an enormous storage facility equipped with high-speed access and replication facilities. Centralized storage in such a scenario has its own disadvantages and infrastructure costs. Pre-splitting and distributing the media content across nodes with substantial storage capacity is therefore a viable strategy.

Along with the storage problems, such set-ups are often hit by performance bottlenecks, because a centralized architecture does not scale well when handling simultaneous connection requests and maintaining sessions. A distributed architecture, on the other hand, can handle this load efficiently.

Therefore, a distributed architecture is needed to handle the high compute and storage loads of a video conversion system. Grid computing is considered a highly scalable solution in which compute-intensive processes are split across multiple low-cost systems, so that the overall load is spread and cost is reduced. A grid works very efficiently in a data-parallel scenario, where the data can be split across the different nodes constituting the grid.

The above shows that a grid is indeed a good candidate for a distributed video conversion solution. A grid-based architecture also ensures high availability and reliability through redundancy mechanisms that do not require expensive hardware infrastructure for their deployment.

4.1 Challenges in Design of Grid

A video conversion system designed for grid computing is characterized by the efficiency, resource utilization, fault tolerance and similar qualities it offers its end users. A true grid-based video conversion system should present its end users with the virtualization of a single system and total control over the presentation session.

Some common problems/issues a grid-based video conversion system needs to address are:

1. Security: Heterogeneous machines join together to form a grid, so the grid resources must be secured against unauthenticated access. A standard authentication method is therefore needed that works in a heterogeneous environment such as grid computing.
2. Synchronization: Since the grid is formed over network links, the machines must be synchronized to a common time standard so that they can process jobs in a synchronized fashion.
3. Network Establishment: A network already exists in the laboratories, so a private network must be planned and overlaid on the existing infrastructure.
4. Domain Name Resolution: Network addresses and a domain are already established in the laboratories, so a private domain must be created by assigning network addresses and a domain name to the new network, along with a suitable name resolution method.
5. Bandwidth Availability: Network links are not always consistent, so the bandwidth must be managed as conditions change while still maintaining media quality.
6. Fault Tolerance & Robustness: To support fault tolerance and robustness, jobs must be replicated by script to idle resources.
7. Scalability and Cost Effectiveness: The design must be easily scalable and cost effective, both in resources and financially.

In addition to these parameters, such a set-up needs to be highly user friendly to ensure user satisfaction and ease of use. Several architectures have been proposed that address the above issues by employing expensive hardware infrastructure, such as cluster computing.

Our video conversion grid system takes a distributed approach that takes these problems into consideration in developing a solution prototype.

4.2 Challenges in Video Conversion

A video conversion system typically is characterized by the flexibility it offers to its users. A true video conversion service should provide its local or remote users total control over the presentation session.

Some common problems/issues a video conversion needs to address are:

1. Load distribution on the server: to support multiple connection requests from users and ensure minimum response time.
2. Media content management: this includes high storage space, effective content management, a replication strategy, etc.
3. Adapting to dynamic network bandwidth: as the network link may not always be consistent, the content must be managed as the network changes while still maintaining media quality.
4. Deciding on buffer/cache: to give users better quality and fast response, the system must decide on buffer and cache sizes.
5. Rate control: to adapt to the network, the system may need to vary the transport and encoding rates.
6. Scalability and cost effectiveness.
7. Reliability and availability.

In addition to these parameters, such a set-up needs to be highly fault-tolerant and fairly scalable to ensure user satisfaction. Several architectures have been proposed that address the above issues by employing expensive hardware infrastructure, such as cluster computing.

Our video grid system takes a distributed approach that takes these problems into consideration in developing a solution prototype.

4.3 Proposed Work Flow

4.3.1 Components of proposed solution

Our effort is oriented towards designing a scalable grid-based architecture in which the server itself is distributed among a set of 'non-dedicated', low-cost, off-the-shelf hosts such as PCs and workstations.

The Components

- Media splitter: Works with the content decoder to convert the media into splits (atomic playable parts of the media) and distributes them among the grid nodes.
- Media content decoder: The part of the codec that sits on every node in the grid and converts the video split provided to it into MPEG4. Conversion on a node is independent of the other nodes and of the rest of the grid processing.
- Client scheduler: Keeps track of all the media content, its splits, the nodes and their usage. It is responsible for sending the collated streamed content to the individual grid nodes.
- Remote scheduler: Keeps track of the media content submitted to it by the client and schedules the conversion job on its node. It returns the converted video to the client scheduler from which the conversion command originated.
- Streaming proxy: The client's point of contact with the remote nodes. It forwards requests from the client scheduler to the remote scheduler, carrying the user's proxy credential.
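As a rough illustration of the media content decoder's job, a DV split can be converted to MPEG4 with a single mencoder invocation. The flags below (lavc video codec set to MPEG4, MP3 audio) are typical mencoder options and an assumption here, not necessarily the exact settings used by the thesis scripts:

```shell
# Compose the mencoder command a node would run on one split.
# Flags are typical mencoder options (assumed, not the thesis's exact ones).
build_convert_cmd() {
    in=$1; out=$2
    echo "mencoder $in -ovc lavc -lavcopts vcodec=mpeg4 -oac mp3lame -o $out"
}
```

On a grid node, the remote scheduler would execute the emitted command against its local split.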

4.3.2 Proposed Solution

The process involves the following steps/phases, represented in Figures 4.1 and 4.2.

1. The Initialization/Setup Phase: In this stage the client machine submits the media file(s) to the splitter, where they are split among the nodes in the grid. Using the media content decoder script, the media splitter calculates how many parts each file can be split into.

Splitting policy: Currently the splitter divides and distributes the file randomly to the nodes; this behaviour is statically programmed in the Python script created for the grid environment.

The splitting policy could, however, be extended to use the nodes' resource information dynamically and thereby decide what portion of the file is worth sending to each node.
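The static policy can be sketched in shell as follows. This is an illustrative sketch only: the node names, the fixed chunk size and the use of `split` and `globus-url-copy` are assumptions, and the real splitter cuts on playable DV boundaries via the content decoder rather than at arbitrary byte offsets.

```shell
# Sketch of the static splitting policy: cut the input into fixed-size
# chunks and assign them to nodes round-robin.  Prints the GridFTP
# transfer commands instead of executing them.
split_and_plan() {
    file=$1; chunk=$2; shift 2          # remaining args: node names
    nnodes=$#
    split -b "$chunk" "$file" split.    # produces split.aa, split.ab, ...
    i=0
    for part in split.*; do
        n=$(( i % nnodes + 1 ))
        node=$(eval echo "\${$n}")      # round-robin node pick
        echo "globus-url-copy file://$PWD/$part gsiftp://$node/tmp/$part"
        i=$(( i + 1 ))
    done
}
```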

Note:

1) The setup assumes that each grid node has the coder "mencoder" and a remote scheduler to schedule jobs locally on that node.

2) We are also working on different splitting policies and strategies for effective storage management, as well as improved scheduling for streaming.

2. Client Request: The client looks to the grid to schedule the job. It requests the grid nodes' current CPU utilization so that it can decide whether or not to schedule the split video compression on a given node.

3. Client Scheduling: The client proxy forwards the request to the remote scheduler who maintains the metadata about the nodes.

illustration not visible in this excerpt

Figure 4.1: Video compression by single stream

The client scheduler then submits the split video conversion jobs to the remote schedulers on each of the nodes, sending the streams of sliced video content together with a script that executes the conversion job.

4. Remote Scheduling: Each grid node starts scheduling its job. The remote scheduler on each node handles the job as its own conversion job. It then collects the converted streams and redirects them to the client scheduler from which the job was received.

illustration not visible in this excerpt

Figure 4.2: Video compression by Purposed grid computing system

Note:

The scheduling at the nodes need not be simultaneous. Nodes can be timed to send their streams after a particular delay to avoid buffering; research into this strategy is ongoing.

5. Scheduling Response and Rejoining: In this stage the client scheduler counts the responses from the remote schedulers after conversion, checking whether every job split has come back. If any split is missing, the client scheduler requests the corresponding remote scheduler to complete that conversion at top priority and return the finished split. Once all the split jobs have come back, the rejoining phase starts: the client machine rejoins the splits into a single file, and at the end of this phase we have a converted video of full length.
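The response-counting and rejoining logic of this phase can be sketched as follows. Plain byte concatenation stands in for the container-aware join of the converted MPEG4 parts performed in the real system, and the part-file naming is an illustrative assumption.

```shell
# Sketch of phase 5: count responses, flag any missing split for
# re-request, then rejoin the parts in order.  `cat` stands in for the
# real container-aware join of converted MPEG4 parts.
rejoin_splits() {
    n=$1; prefix=$2; out=$3
    i=1
    while [ "$i" -le "$n" ]; do         # response counting
        if [ ! -f "$prefix.$i" ]; then
            echo "part $i missing - re-request at top priority" >&2
            return 1
        fi
        i=$(( i + 1 ))
    done
    : > "$out"                          # rejoining phase
    i=1
    while [ "$i" -le "$n" ]; do
        cat "$prefix.$i" >> "$out"
        i=$(( i + 1 ))
    done
}
```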

Chapter 5 Implementation and Configuration Setup

The grid environment for the conversion is set up according to the design layout in Figure 5.1. In this grid environment, a grid client (NodeA) connects to a grid that appears as a "cluster" of Torque (PBS) job-managed machines, represented by NodeB, and a "cluster" of Sun Grid Engine (SGE) job-managed machines, represented by NodeC to NodeH. The Globus Toolkit is the middleware of the grid environment.

The installation instructions below assume the following starting design conditions for the grid environment:

1. 12 machines running Fedora Core 4 i386 (FC4), with Globus Toolkit 4.0.1 installed, form the grid environment.
2. A working CA authentication server authenticates the grid hosts and users to use the grid resources.
3. An NTP (Network Time Protocol) server synchronizes the grid nodes to a uniform time standard; each machine runs an NTP client service for this synchronization.

illustration not visible in this excerpt

Figure 5.1: Grid Design plan

4. To establish a network on an existing network, each machine should be configured with a network address, domain name and name resolution method. This was accomplished by configuring the IP address and domain name and editing the /etc/hosts file.

5. RPM (RedHat Package Manager) configured so that administrators are able to install, update, and remove packages using RPM.

6. Root access to all the machines is necessary in order to complete some of the installation and other administrative tasks.

7. Make sure the proper user accounts are set up:

a. Root account: administering the installation, updating and removal of software packages
b. Globus account: administering the installation, updating and removal of the Globus Toolkit 4.0.1 source package, with CA updates
c. Jane account: a user of the grid system

5.1 Installing Grid Environment

The installation has been divided into the 9 sections listed below. We describe only 3 nodes in detail: nodeA (client node), nodeB (CA and NTP server) and nodeC (grid processing node). The other nodes are replicas of nodeC, differing only in the domain name and IP address used in the installation and configuration scripts.

1. Installing and Configuring Linux
2. Deploying Torque (Open PBS)
3. Deploying Sun Grid Engine (SGE)
4. Deploying The PostgreSQL Relational Database
5. Fixing Java and ANT
6. Deploying The Globus Toolkit 4.0.x
7. Connecting Globus Gram WS and Torque (Open PBS)
8. Connecting Globus Gram WS and SUN Grid Engine
9. A Distributed Grid Application

5.1.1 Installing and Configuring Linux (all nodes)

The grid is set up so that it appears as a grid client (NodeA) connecting to a grid composed of a "cluster" of Torque (PBS) job-managed machines, represented by NodeB, and a "cluster" of Sun Grid Engine (SGE) job-managed machines, represented by NodeC to NodeL [45].

5.1.1.1 Installing Linux (all nodes)

When installing Fedora Core 4 Linux, the first thing the installer asks is whether it should test the media before installing; absolutely do this.

Installation is pretty straight forward. The installer asks questions and the user answers.

- Here are some rough notes that should help guide you through configuration:

i. Language: English (default)

ii. Keyboard: US English (default)

- Regardless of what the "Upgrade Examine" dialogue says, I forced a fresh install.
- Installation Type: Server
- Disk Partition:

i. Automatic Partitioning: Remove all partitions.

- Boot loader: GRUB (default): no boot loader password
- Automatically partition under the default partition parameters.
- Network Devices:

illustration not visible in this excerpt

Table 5.1: Network and Host configuration details

- Firewall: No firewall
- Disable SELinux
- Packages to select:
- Xwindows
- Gnome
- Applications:
- Editors
- Graphical Internet
- Text based internet
- Server:
- Server configuration tools
- Web server
- Windows File Server
- Postgres SQL Database
- Development:
- Development tools
- Java Development
- System:
- Admin tools
- System tools
- Printing Support
- Yes to License Agreement and set the nodeB as NTPServer and complete the installation.

5.1.1.2 Setting up Accounts (all nodes)

Create an account that will run the container for Globus grid services. This should not be the root account. This should be done on:

- nodeA : because it will represent a client machine running job submission tools and other tools so it needs to have Globus installed
- nodeB : because it will represent a Linux cluster head node running various Globus web services, including WS GRAM and a GridFTP server
- nodeC : because it will represent a Linux cluster head node running various Globus web services, including WS GRAM and a GridFTP server

Start by editing the file /etc/group and adding the line globus:x:501:

[root@nodeA]# /usr/sbin/useradd -c "Globus User" -g 501 -m -u 501 globus

Next create a generic user account. This account will be the one used to exercise the grid services and tools. This should be done on:

- nodeA - because it will represent a user somewhere on a network exercising and using grid services and grid tools
- nodeB - because it will be helpful for testing Torque (OpenPBS) installation
- nodeC - because it will be helpful for testing SGE installation

Again you can use the 'useradd' command to add the user.

[root@nodeA]# /usr/sbin/useradd -c "Jane User" -g 100 -m -u 101 jane

You must also set the password for the "globus" and "jane" accounts by running the following two commands as root on all three systems.

[root@nodeA]# passwd globus
[root@nodeA]# passwd jane

5.1.2 Deploying Torque (Open PBS) (nodeB)

We deploy Torque (Open PBS, or simply PBS) on nodeB as a "remote" batch system; later we will configure Globus GRAM WS so that jobs can be submitted into PBS via Globus from "the grid".

5.1.2.1 Deploying RSH (nodeB)

By default PBS will want to use the rsh and rcp tools to copy around input and output files, even if jobs are only running on this single node "cluster".

As the user root, begin by making sure that xinetd is installed:

[root@nodeB data]# rpm -qa|grep xinetd
xinetd-2.3.13-6

If xinetd is not installed, install it (together with the rsh packages) using the RPM package manager:

[root@nodeB ]# rpm -ivh xinetd
[root@nodeB ]# rpm -ivh rsh-server
[root@nodeB ]# rpm -ivh rsh

By default the rsh and rlogin services will not be enabled. To enable them edit the files

/etc/xinetd.d/rsh

/etc/xinetd.d/rlogin

and change disable to 'no'.

Then do:

/etc/init.d/xinetd restart

We must configure the machine to allow access via rsh and rlogin only from itself:

[root@nodeB xinetd.d]# cat /etc/hosts.equiv
192.168.31.40

5.1.2.2 Downloading and Installing Torque [46] (nodeB)

Next download the source code (it is freeware) from:

http://www.clusterresources.com/downloads/torque/torque-2.0.0p7.tar.gz

Execute the following commands as root on nodeB only:

[root@nodeB]# tar -zxf torque-2.0.0p7.tar.gz
[root@nodeB]# cd torque-2.0.0p7
[root@nodeB torque-2.0.0p7]# ./configure --prefix=/opt/pbs
[root@nodeB torque-2.0.0p7]# make
[root@nodeB torque-2.0.0p7]# make install

5.1.2.3 Configuring and Deploying Torque (PBS) (nodeB)

As root run the following command to begin the initial configuration of the PBS server:

[root@nodeB torque-2.0.0p7]# /opt/pbs/sbin/pbs_server -t create
[root@nodeB torque-2.0.0p7]# /opt/pbs/bin/qmgr

When qmgr is run it will start a "prompt session" :

Qmgr: set server operators = root@nodeb.ps.univa.com

Qmgr: create queue batch

Qmgr: set queue batch queue_type = Execution

Qmgr: set queue batch started = True

Qmgr: set queue batch enabled = True

Qmgr: set server default_queue = batch

Qmgr: set server resources_default.nodes = 1

Qmgr: set server scheduling = True

Qmgr: quit
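The same queue configuration can also be replayed non-interactively, since Torque's qmgr accepts single commands through its -c option. The helper below is a sketch for re-provisioning nodeB; if qmgr is not present on the machine, it merely prints the commands it would run.

```shell
# Replay the qmgr session above non-interactively via `qmgr -c`.
# Argument: path to qmgr (defaults to the install prefix used above).
pbs_queue_setup() {
    qmgr=${1:-/opt/pbs/bin/qmgr}
    for cmd in \
        "set server operators = root@nodeb.ps.univa.com" \
        "create queue batch" \
        "set queue batch queue_type = Execution" \
        "set queue batch started = True" \
        "set queue batch enabled = True" \
        "set server default_queue = batch" \
        "set server resources_default.nodes = 1" \
        "set server scheduling = True"
    do
        if [ -x "$qmgr" ]; then
            "$qmgr" -c "$cmd"           # apply the setting
        else
            echo "qmgr -c \"$cmd\""     # dry run: qmgr not installed
        fi
    done
}
```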

As user root create the file /usr/spool/PBS/server_priv/nodes

[root@nodeB torque-2.0.0p7]# touch /usr/spool/PBS/server_priv/nodes

Then with vi or a similar editor add the following line to it: nodeB.ps.univa.com

[root@nodeB torque-2.0.0p7]# vi /usr/spool/PBS/server_priv/nodes

Then create the file /usr/spool/PBS/mom_priv/jobs/config and edit it so that it looks similar to this:

[root@nodeB torque-2.0.0p7 ] # cat /usr/spool/PBS/mom_priv/jobs/config

$pbsserver nodeB.ps.univa.com
$logevent 255

5.1.2.4 Starting PBS (nodeB)

Run the following three commands as the root user to start all of the necessary PBS components:

[root@nodeB ]# /opt/pbs/sbin/pbs_mom
[root@nodeB ]# /opt/pbs/bin/qterm -t quick
[root@nodeB ]# /opt/pbs/sbin/pbs_server

After a few seconds you should be able to query to see which nodes are part of the PBS "cluster":

[root@nodeB mom_priv]# /opt/pbs/bin/pbsnodes -a

To start up the default simple scheduler run the following as user root:

[root@nodeB ]# /opt/pbs/sbin/pbs_sched

5.1.3 Deploying Sun Grid Engine (SGE) (nodeC)

SGE is Sun's batch cluster software, which we use for our grid cluster and for batch processing of jobs in the grid system.

5.1.3.1 Downloading SGE (nodeC)

We deploy Sun Grid Engine (SGE) [47] on nodeC as a "remote" batch system; later we will configure Globus GRAM WS so that jobs can be submitted into SGE via Globus from "the grid". Download it (free) from the link below.

http://gridengine.sunsource.net/project/gridengine/dickthru60.html

There are two tarballs that need to be downloaded.

ge-6.1u6_1-bin-lx24-x86.tar.gz

ge-6.1u6-common.tar.gz

5.1.3.2 Unpacking the SGE distribution (nodeC)

After downloading the tarballs create a directory that will serve as the SGE directory. You can do this as the root user:

[root@nodeC ~]# mkdir -p /opt/sge-root
[root@nodeC ~]# cd /opt/sge-root/

Now run the following commands as user root to unpack the tarballs into the directory you created. Change the path to the tarballs as is necessary:

[root@nodeC sge-root]# gzip -dc /root/ge-6.1u6-common.tar.gz | tar xvpf -
[root@nodeC sge-root]# gzip -dc /root/ge-6.1u6_1-bin-lx24-x86.tar.gz | tar xvpf -

Next you need to set the environment variable SGE_ROOT to point to the directory you created and into which you unpacked the tarballs:

[root@nodeC sge-root]# export SGE_ROOT=/opt/sge-root

5.1.3.3 Installing and Configuring SGE (nodeC)

As the root user change into the directory SGE_ROOT and run the following command:

[root@nodeC sge-root]# ./util/setfileperm.sh $SGE_ROOT

Agree to the license agreement and start the installation with the following command. It launches a shell installation environment in which you are asked a number of questions; the main questions, with the answers we gave, are listed below for reference.

[root@nodeC sge-root]# ./install_qmaster

- Grid Engine qmaster host installation : <Enter>
- Choosing Grid Engine admin user account : n
- Checking $SGE_ROOT directory : <Enter>
- Grid Engine TCP/IP service >sge_qmaster< : in another terminal edit /etc/services and add the line "sge_qmaster 30000/tcp", then : <Enter>
- Grid Engine TCP/IP service >sge_execd< : in another terminal edit /etc/services and add the line "sge_execd 30001/tcp", then : <Enter>
- Grid Engine cells : <Enter>
- Grid Engine qmaster spool directory : n
- Windows Execution Host Support : n
- Verifying and setting file permissions : Y
- Select default Grid Engine hostname resolving method : Y
- Making directories : <Enter>
- Setup spooling : <Enter>
- The Berkeley DB spooling method provides two configurations! : n
- Berkeley Database spooling parameters : <Enter>
- Grid Engine group id range : 20000-20500 and : <Enter>
- Grid Engine cluster configuration : <Enter>
- Grid Engine cluster configuration (continued) : <Enter> : n
- Creating local configuration : <Enter>
- qmaster/scheduler startup script : n
- Grid Engine qmaster and scheduler startup : <Enter>
- Adding Grid Engine hosts : n
- Adding admin and submit hosts : : <Enter> : <Enter> (twice) : n
- Creating the default <all.q> queue and <allhosts> hostgroup : <Enter>
- Scheduler Tuning : 1
- Using Grid Engine : <Enter>
- Grid Engine messages : n
- Grid Engine startup scripts: n
- Your Grid Engine qmaster installation is now completed : <Enter>

This completes the first part of the SGE installation and configuration. Before continuing you need to set up your environment by doing the following:

[root@nodeC sge-root]# source /opt/sge-root/default/common/settings.sh
[root@nodeC sge-root]# qconf -sh

Next nodeC needs to be configured as an execution host. Run the following command and again enter the indicated values for each menu choice:

[root@nodeC sge-root]# /opt/sge-root/install_execd

- Welcome to the Grid Engine execution host installation : <Enter>
- Checking $SGE_ROOT directory : <Enter>
- Grid Engine cells : <Enter>
- Checking hostname resolving : <Enter>
- Local execd spool directory configuration : n
- Creating local configuration : <Enter>
- execd startup script : n
- Grid Engine execution daemon startup : <Enter>
- Adding a queue for this host : y
- Using Grid Engine : <Enter>
- Grid Engine messages :
- Grid Engine startup scripts : n

This completes the installation and configuration of SGE.

5.1.3.4 Deploying The PostgreSQL Relational Database (all nodes)

We run the Globus Reliable File Transfer (RFT) service on nodes B and C, since they are the head nodes of our "clusters". RFT requires a relational database backend in order to preserve state across machine shutdowns. Depending on the details of your Fedora Core 4 installation, the PostgreSQL database may already be installed. You can check using the 'rpm' command as shown:

[root@nodeB ~]# rpm -qa|grep postgres

You should get a result similar to this:

postgresql-8.0.3-1
postgresql-server-8.0.3-1
postgresql-libs-8.0.3-1

Once you are confident the packages are installed you want to make sure that a 'postgres' user account is available:

[root@nodeB ~]# grep postgres /etc/passwd

postgres:x:26:26:PostgreSQL Server:/var/lib/pgsql:/bin/bash

Please use the 'chkconfig' command to make sure that postgres will not be started automatically:

[root@nodeB ~]# /sbin/chkconfig --list | grep postgres

postgresql 0:off 1:off 2:off 3:off 4:off 5:off 6:off

5.1.4 Initializing Postgres (all nodes)

As root, begin by creating a directory that is owned by user postgres and in group postgres:

[root@nodeB ~]# mkdir -p /opt/pgsql/data
[root@nodeB ~]# chown -R postgres /opt/pgsql

[root@nodeB ~]# chgrp -R postgres /opt/pgsql

Next become the 'postgres' user:

[root@nodeB ~]# su - postgres

Initialize the database by running the following command

-bash-3.00$ /usr/bin/initdb -D /opt/pgsql/data

5.1.4.1 Starting the database (all nodes)

You should redirect stdout and stderr to a file for logging purposes. Start the database in the background:

-bash-3.00$ /usr/bin/postmaster -i -D /opt/pgsql/data > /opt/pgsql/logfile 2>&1 &

You should see the following or similar processes running after starting the database:

bash-3.00$ ps auwwwwx|grep post

5.1.5 Fixing Java and ANT (all nodes)

We remove the old Java and ANT from the system and install new versions compatible with Globus Toolkit 4.0.1.

5.1.5.1 Removing default Java (all nodes)

First see if the default java is installed:

[root@nodeB ~]# which java
/usr/bin/java

That file is usually a symlink:

[root@nodeB ~]# ls -alh /usr/bin/java
lrwxrwxrwx 1 root root 22 Oct 7 16:06 /usr/bin/java -> /etc/alternatives/java

That link is actually a link to the actual binary:

[root@nodeB ~]# ls -alh /etc/alternatives/java

lrwxrwxrwx 1 root root 35 Oct 7 16:06 /etc/alternatives/java -> /usr/lib/jvm/jre-1.4.2-gcj/bin/java

You can use 'rpm' to query and see which package actually owns the binary:

[root@nodeA ~]# rpm -qf /usr/lib/jvm/jre-1.4.2-gcj/bin/java
java-1.4.2-gcj-compat-1.4.2.0-40jpp_31rh.FC4.2

Uninstall it with the rpm -e option:

[root@nodeA ~]# rpm -ev java-1.4.2-gcj-compat-1.4.2.0-40jpp_31rh.FC4.2

5.1.5.2 Java installation (all nodes)

Download the JDK for Linux from Sun's website (it is free) and start installing it:

[root@nodeB ~]# ls -l jdk-1_5_0_06-linux-i586.bin
[root@nodeB ~]# md5sum jdk-1_5_0_06-linux-i586.bin

Move the downloaded file to the /opt directory and make it executable:

[root@nodeB ~]# mv jdk-1_5_0_06-linux-i586.bin /opt/

[root@nodeB ~]# cd /opt/

[root@nodeB opt]# chmod 755 jdk-1_5_0_06-linux-i586.bin

You can run the file to unpack the binary installation into the directory:

[root@nodeB opt]# ./jdk-1_5_0_06-linux-i586.bin
[root@nodeB opt]# ls -l /opt/jdk1.5.0_06/

Test the installation by running the java virtual machine with the '-version' flag:

[root@nodeB opt]# /opt/jdk1.5.0_06/bin/java -version

[root@nodeB opt]# /opt/jdk1.5.0_06/bin/javac -version

Lastly set up your environment to define JAVA_HOME and to put the java virtual machine and the java compiler on your path.

[root@nodeB ~]# export JAVA_HOME=/opt/jdk1.5.0_06

[root@nodeB ~]# export PATH=$JAVA_HOME/bin:$PATH

[root@nodeB ~]# which java

/opt/jdk1.5.0_06/bin/java

[root@nodeB ~]# which javac

/opt/jdk1.5.0_06/bin/javac
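Note that the two exports above take effect only in the current shell and are lost at logout. One way to make them persistent (a sketch; the thesis setup may handle this differently, and the path should match the JDK build actually unpacked above) is to append them to the account's ~/.bashrc:

```shell
# Persist the Java environment for this account by appending the
# exports to ~/.bashrc (path matches the JDK unpacked above).
cat >> "$HOME/.bashrc" <<'EOF'
export JAVA_HOME=/opt/jdk1.5.0_06
export PATH=$JAVA_HOME/bin:$PATH
EOF
```

Repeat for each account (globus, jane) that needs the Java tools on its path.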

5.1.5.3 Installing ANT (all nodes)

You should download version 1.6.5 or later of the ant tool. You can download it from the web:

http://apache.mirrormax.net/ant/binaries/

The file should look like this:

[root@nodeB opt]# ls -l apache-ant-1.6.5-bin.tar.bz2
[root@nodeB opt]# md5sum apache-ant-1.6.5-bin.tar.bz2

Move the tarball to the /opt directory and unpack it:

[root@nodeB ~]# mv apache-ant-1.6.5-bin.tar.bz2 /opt/

[root@nodeB ~]# cd /opt/

[root@nodeB opt]# tar -jxf apache-ant-1.6.5-bin.tar.bz2

After unpacking, the distribution should look like this:

[root@nodeB opt]# ls -l apache-ant-1.6.5

Next set up your environment so that ANT_HOME is defined and the ant tools are in your PATH:

[root@nodeB opt]# export ANT_HOME=/opt/apache-ant-1.6.5

[root@nodeB opt]# export PATH=$ANT_HOME/bin:$PATH

[root@nodeB opt]# which ant

/opt/apache-ant-1.6.5/bin/ant

[root@nodeB opt]# ant -version

Apache Ant version 1.6.5 compiled on June 2 2005

5.1.6 Deploying The Globus Toolkit 4.0.1 (all nodes)

We now install the Globus Toolkit, the core of the grid installation. We did this in the steps mentioned below:

1. First, we installed it on nodeB and made that node a CA.
2. Secondly, we installed it on nodeA and generated all of its certificates from nodeB, the CA.
3. Thirdly, we installed it on nodeC to nodeH.

5.1.6.1 Preliminaries (all nodes)

The Globus Toolkit should be installed as user 'globus' and not as root. The toolkit should be deployed on

- nodeA since that node will serve as the 'client' machine
- nodeB since that node will host the Globus GRAM WS that front ends the PBS batch system, along with other Globus grid services.
- nodeC since that node will host the Globus GRAM WS that front ends the SGE batch system, along with other Globus grid services.

Begin by making a directory for the installation that is owned by globus and in group globus:

[root@nodeB ~]# mkdir -p /opt/globus-4.0.1
[root@nodeB ~]# chown globus.globus /opt/globus-4.0.1
[root@nodeB ~]# ls -alh /opt | grep globus

drwxr-xr-x 2 globus globus 4.0K Feb 20 12:05 globus-4.0.1

Become user 'globus':

[root@nodeB opt]# su - globus
[globus@nodeB ~]$ cd

Now download the installer into this directory, from the web or from CD:

http://www.globus.org/ftppub/gt4/4.0/4.0.1/installers/

The source tarball should look like this:

[globus@nodeB ~]$ ls -l gt4.0.1-all-source-installer.tar.bz2
[globus@nodeB ~]$ md5sum gt4.0.1-all-source-installer.tar.bz2

5.1.6.2 Building and Installing (all nodes)

Begin by unpacking the compressed tarball:

[globus@nodeB ~]$ tar -jxf gt4.0.1-all-source-installer.tar.bz2

Make sure that for user 'globus' JAVA_HOME is defined and the java tools are in PATH:

[globus@nodeB ~]$ export JAVA_HOME=/opt/jdk1.5.0_06

[globus@nodeB ~]$ export PATH=$JAVA_HOME/bin:$PATH

[globus@nodeB ~]$ which java

/opt/jdk1.5.0_06/bin/java

[globus@nodeB ~]$ which javac

/opt/jdk1.5.0_06/bin/javac

Make sure that for user 'globus' ANT_HOME is defined and the ant tools are in PATH:

[globus@nodeB ~]$ export ANT_HOME=/opt/apache-ant-1.6.5
[globus@nodeB ~]$ export PATH=$ANT_HOME/bin:$PATH

[globus@nodeB ~]$ which ant
/opt/apache-ant-1.6.5/bin/ant

Next change directories into the distribution:

[globus@nodeB ~]$ cd gt4.0.1-all-source-installer

Define GLOBUS_LOCATION to point to the directory into which the toolkit will be installed:

[globus@nodeB gt4.0.1-all-source-installer]$ export GLOBUS_LOCATION=/opt/globus-4.0.1

Configure the distribution. Note that we are not building the RLS component; it is not necessary at this time:

[globus@nodeB gt4.0.1-all-source-installer]$ ./configure --prefix=$GLOBUS_LOCATION --disable-rls

To build the toolkit simply run 'make':

[globus@nodeB gt4.0.1-all-source-installer]$ make

Next run 'make install' to complete the installation:

[globus@nodeB gt4.0.1-all-source-installer]$ make install

Set up your environment so that it is easy to use the Globus tools:

[globus@nodeB ~]$ source /opt/globus-4.0.1/etc/globus-user-env.sh

Download the 2 update packages for the CA part of the toolkit and install them:

1. http://www-unix.globus.org/ftppub/gt4/4.0/4.0.1/updates/src/globus_simple_ca-0.15.tar.gz

2. http://www-unix.globus.org/ftppub/gt4/4.0/4.0.1/updates/src/globus_simple_ca_setup-0.27.tar.gz

[globus@nodeB ~]$ gpt-build -update ./globus_simple_ca-0.15.tar.gz
[globus@nodeB ~]$ gpt-build -update ./globus_simple_ca_setup-0.27.tar.gz

5.1.6.3 Creating a Certificate Authority (nodeB)

Now run the 'setup-simple-ca' command to begin the setup process. This is a short, menu-driven script; the input to type in is shown below the command.

[globus@nodeB gt4.0.1-all-source-installer]$ $GLOBUS_LOCATION/setup/globus/setup-simple-ca

- Enter a unique subject name for this CA : cn=<UIET>,ou=ConsortiumTutorial,ou=GlobusTest,o=Grid

- Enter the email of the CA (this is the email where certificate requests will be sent to be signed by the CA): <jagpreetsidhu@gmail.com>
- Enter PEM pass phrase: <MyPEMpassPhrase> ( password )
- Verifying - Enter PEM pass phrase: <MyPEMpassPhrase> ( password )

In the output you will see that a unique hash number (eac0491e in our case) was created for the CA.

[globus@nodeB ~]$ /opt/globus-4.0.1/setup/globus_simple_ca_eac0491e_setup/setup-gsi -default -nonroot

Now the CA just created is installed and is the default for requesting certificates on nodeB.

5.1.6.4 Obtaining a Host Certificate on nodeB and Making a Copy of the Container Certificate and gridmap File (all nodes)

To request a host certificate become root and begin by setting up the environment properly:

[root@nodeB opt]# export GLOBUS_LOCATION=/opt/globus-4.0.1
[root@nodeB opt]# source /opt/globus-4.0.1/etc/globus-user-env.sh
[root@nodeB opt]# grid-cert-request -host nodeb.ps.univa.com -dir $GLOBUS_LOCATION/etc

To sign the request become user globus again and set up the environment again:

[root@nodeB opt]# su - globus

[globus@nodeB ~]$ export GLOBUS_LOCATION=/opt/globus-4.0.1
[globus@nodeB ~]$ source $GLOBUS_LOCATION/etc/globus-user-env.sh
[globus@nodeB ~]$ grid-ca-sign -in $GLOBUS_LOCATION/etc/hostcert_request.pem -out $GLOBUS_LOCATION/etc/hostcert.pem

The files need to be owned by root with the permissions shown below. You will again have to log in as root to do this.

[root@nodeB opt]# chown root.root /opt/globus-4.0.1/etc/hostcert.pem
[root@nodeB opt]# chmod 644 /opt/globus-4.0.1/etc/hostcert.pem
[root@nodeB opt]# ls -alh /opt/globus-4.0.1/etc/host*.pem

So we need to make a copy of the host certificate that the globus user has access to. As root do:

[root@nodeB opt]# cp /opt/globus-4.0.1/etc/hostcert.pem /opt/globus-4.0.1/etc/containercert.pem

[root@nodeB opt]# chown globus.globus /opt/globus-4.0.1/etc/containercert.pem

[root@nodeB opt]# cp /opt/globus-4.0.1/etc/hostkey.pem /opt/globus-4.0.1/etc/containerkey.pem

[root@nodeB opt]# chown globus.globus /opt/globus-4.0.1/etc/containerkey.pem

With the copy for the container we can, as user globus, edit the security configuration file so that the container can find the certificate and its key.

[globus@nodeB opt]$

vi /opt/globus-4.0.1/etc/globus_wsrf_core/global_security_descriptor.xml

<?xml version="1.0" encoding="UTF-8"?>

<securityConfig xmlns="http://www.globus.org">

<credential>

<key-file value="/opt/globus-4.0.1/etc/containerkey.pem"/>

<cert-file value="/opt/globus-4.0.1/etc/containercert.pem"/>

</credential>

<gridmap value="/opt/globus-4.0.1/etc/grid-mapfile"/>

</securityConfig>
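Note that the descriptor uses the default XML namespace http://www.globus.org, so any script that inspects it must qualify tag names with that namespace. A small sketch using Python's standard ElementTree against an equivalent descriptor:

```python
import xml.etree.ElementTree as ET

NS = "{http://www.globus.org}"

def read_credential_paths(xml_text):
    """Extract key-file, cert-file and gridmap paths from a
    global_security_descriptor.xml document."""
    root = ET.fromstring(xml_text)
    cred = root.find(NS + "credential")
    return {
        "key": cred.find(NS + "key-file").get("value"),
        "cert": cred.find(NS + "cert-file").get("value"),
        "gridmap": root.find(NS + "gridmap").get("value"),
    }

descriptor = """<?xml version="1.0" encoding="UTF-8"?>
<securityConfig xmlns="http://www.globus.org">
  <credential>
    <key-file value="/opt/globus-4.0.1/etc/containerkey.pem"/>
    <cert-file value="/opt/globus-4.0.1/etc/containercert.pem"/>
  </credential>
  <gridmap value="/opt/globus-4.0.1/etc/grid-mapfile"/>
</securityConfig>"""
```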

For now it is easiest to create an empty grid-mapfile. Do this as user 'globus':

[globus@nodeB ~]$ touch $GLOBUS_LOCATION/etc/grid-mapfile
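Each grid-mapfile entry maps a quoted certificate subject (DN) to a local account name; an empty file simply means no user is authorized yet. A hypothetical parser for this one-entry-per-line format, using shlex to honour the quoting (the DN in the test is illustrative):

```python
import shlex

def parse_gridmap(text):
    """Map certificate subject DNs to local user names.

    Each non-blank, non-comment line has the form:
        "/O=Grid/OU=.../CN=Some User" localaccount
    """
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        dn, user = shlex.split(line)
        mapping[dn] = user
    return mapping
```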

5.1.6.5 Configuring the RFT Service and the SUDO (/etc/sudoers) File (all nodes)

First make sure again that Postgres is running:

[root@nodeB opt]# ps auwwwwx | grep postmaster

Become the 'postgres' user:

[root@nodeB opt]# su - postgres

As the 'postgres' user, run the 'createuser' command to create a 'globus' database user with the same password as the globus user on the host machine.

-bash-3.00$ createuser -A -d -P globus
Enter password for new user:

Enter it again:

CREATE USER

Next we need to edit the Postgres permissions file and add a line that allows the 'globus' user to connect to the database from this host. Use any text editor to edit the file:

-bash-3.00$ vi /opt/pgsql/data/pg_hba.conf

Add the following line:

host rftDatabase "globus" "192.168.31.40" 255.255.255.255 md5

After having edited that file we need to restart the Postgres database.

-bash-3.00$ kill -SIGTERM "process id"

Now (as user 'postgres') restart the database server:

-bash-3.00$ /usr/bin/postmaster -i -D /opt/pgsql/data > /opt/pgsql/logfile 2>&1 &

The next step is to create the database and tables that the globus user and the RFT service need.

[root@nodeB ~]# su - globus

[globus@nodeB ~]$ export GLOBUS_LOCATION=/opt/globus-4.0.1
[globus@nodeB ~]$ source $GLOBUS_LOCATION/etc/globus-user-env.sh
[globus@nodeB ~]$ createdb rftDatabase
[globus@nodeB ~]$ psql -d rftDatabase -f $GLOBUS_LOCATION/share/globus_wsrf_rft/rft_schema.sql

With the database and tables created, we need to next edit the RFT configuration file so that it knows the correct password to use when authenticating to the database.

[globus@nodeB ~]$ vi $GLOBUS_LOCATION/etc/globus_wsrf_rft/jndi-config.xml

Replace the placeholder password "foo" in this file with the globus database user's password.

As user root edit the file /etc/sudoers and add the following two lines:

[root@nodeB ~]$ vi /etc/sudoers

globus ALL=(jane) NOPASSWD: /opt/globus-4.0.1/libexec/globus-gridmap-and-execute -g /opt/globus-4.0.1/etc/grid-mapfile /opt/globus-4.0.1/libexec/globus-job-manager-script.pl *
globus ALL=(jane) NOPASSWD: /opt/globus-4.0.1/libexec/globus-gridmap-and-execute -g /opt/globus-4.0.1/etc/grid-mapfile /opt/globus-4.0.1/libexec/globus-gram-local-proxy-tool *

5.1.6.6 Starting the Container and Starting globus-gridftp-server (all nodes)

[globus@nodeB ~]$ export JAVA_HOME=/opt/jdk1.5.0_06
[globus@nodeB ~]$ export PATH=$JAVA_HOME/bin:$PATH
[globus@nodeB ~]$ which java

[globus@nodeB ~]$ export GLOBUS_OPTIONS=-Xmx512M
[globus@nodeB ~]$ /opt/globus-4.0.1/bin/globus-start-container > $HOME/container.out 2>&1 &

[globus@nodeB ~]$ su

[root@nodeB etc]# export GRIDMAP=/opt/globus-4.0.1/etc/grid-mapfile
[root@nodeB etc]# /opt/globus-4.0.1/sbin/globus-gridftp-server -p 2811 -S

5.1.6.7 Repeating the Steps for nodeA with Some Configuration Changes (all nodes except nodeB)

Repeat sections 5.1.6.1 and 5.1.6.2 as for nodeB, but you do not need a new CA. Instead, install the certificate authority files you created on nodeB onto nodeA.

[globus@nodeA ~]$ scp root@nodeb:/home/globus/.globus/simpleCA/globus_simple_ca_eac0491e_setup-0.19.tar.gz .

[globus@nodeA ~]$ gpt-build ./globus_simple_ca_eac0491e_setup-0.19.tar.gz
[globus@nodeA ~]$ gpt-postinstall

[globus@nodeA ~]$ /opt/globus-4.0.1/setup/globus_simple_ca_eac0491e_setup/setup-gsi -default -nonroot

Obtaining credentials for the generic user:

[root@nodeA i386]# su - jane

[jane@nodeA ~]$ export GLOBUS_LOCATION=/opt/globus-4.0.1
[jane@nodeA ~]$ source $GLOBUS_LOCATION/etc/globus-user-env.sh
[jane@nodeA ~]$ grid-cert-request

Begin by going to nodeB and copying over the certificate request for user jane:

[globus@nodeB ~]$ cd .globus/simpleCA/

[globus@nodeB simpleCA]$ scp root@nodea:/home/jane/.globus/usercert_request.pem .
[globus@nodeB simpleCA]$ grid-ca-sign -in ./usercert_request.pem -out ./usercert.pem

With the certificate signed you can go back to nodeA and grab it from nodeB:

[jane@nodeA ~]$ scp root@nodeb:/home/globus/.globus/simpleCA/usercert.pem $HOME/.globus/usercert.pem

Obtaining host credentials for nodeA :

[root@nodeA ~]# export GLOBUS_LOCATION=/opt/globus-4.0.1
[root@nodeA ~]# source $GLOBUS_LOCATION/etc/globus-user-env.sh
[root@nodeA ~]# grid-cert-request -host nodea.ps.univa.com -dir $GLOBUS_LOCATION/etc

Now go to nodeB and as the globus user copy the certificate request from nodeA to nodeB so that it can be signed:

[globus@nodeB simpleCA]$ scp root@nodea:/opt/globus-4.0.1/etc/hostcert_request.pem .

Use the 'grid-ca-sign' command to sign the host request for nodeA. When prompted enter the password for the CA:

[globus@nodeB simpleCA]$ grid-ca-sign -in ./hostcert_request.pem -out ./hostcert.pem

Copy the signed certificate into place back on nodeA:

[globus@nodeB simpleCA]$ scp hostcert.pem root@nodea:/opt/globus-4.0.1/etc/hostcert.pem

5.1.6.8 Completing Deployment on nodeC

At this point all the services are running and tested on nodeB, and we have a generic user configured on nodeA who is able to run jobs, including jobs that involve the staging of files. You should go back through this section and repeat the necessary steps to deploy Globus on nodeC. Again, you do not need to create a new certificate authority; you only need the one already available and owned by the 'globus' user on nodeB. But you will need to get host certificates for nodeC and make a copy that the container on nodeC can use.

5.1.7 Connecting Globus Gram WS and Torque (Open PBS) (nodeB)

We will now connect Globus GRAM WS so that jobs can be submitted into the PBS batch queue.

[globus@nodeB gt4.0.1-all-source-installer]$ export X509_USER_CERT=/opt/globus-4.0.1/etc/containercert.pem

[globus@nodeB gt4.0.1-all-source-installer]$ export X509_USER_KEY=/opt/globus-4.0.1/etc/containerkey.pem

Now as user 'globus' create a proxy certificate:

[globus@nodeB gt4.0.1-all-source-installer]$ grid-proxy-init

Now stop the container on nodeB:

[globus@nodeB gt4.0.1-all-source-installer]$ globus-stop-container

Before building the PBS jobmanager you need to make sure the PBS commands are in the path for the globus user:

[globus@nodeB ]$ export PATH=/opt/pbs/bin:$PATH

[globus@nodeB ]$ which qsub

/opt/pbs/bin/qsub

[globus@nodeB gt4.0.1-all-source-installer]$ which qstat

/opt/pbs/bin/qstat

[globus@nodeB gt4.0.1-all-source-installer]$ which pbsnodes

/opt/pbs/bin/pbsnodes

[globus@nodeB ~]$ export PBS_HOME=/usr/spool/PBS
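The 'which' checks above can also be scripted, which is convenient when preparing several nodes. A small hypothetical helper, using only the standard library, that reports any batch-system tools missing from PATH before the jobmanager build is attempted:

```python
import shutil

def missing_tools(tools):
    """Return the subset of tool names that are not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

# The PBS commands the GRAM jobmanager expects to find:
pbs_tools = ["qsub", "qstat", "pbsnodes"]
```

Running `missing_tools(pbs_tools)` should return an empty list once /opt/pbs/bin is on the globus user's PATH.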

The GRAM WS PBS jobmanager is included in the GT 4.0.1 source so you only need to go back to the source distribution directory and build it:

[globus@nodeB ~]$ cd gt4.0.1-all-source-installer
[globus@nodeB gt4.0.1-all-source-installer]$ make gt4-gram-pbs
[globus@nodeB gt4.0.1-all-source-installer]$ make install

The last step is to configure the jobmanager so that it knows that rsh is being used:

[globus@nodeB globus]$ cd $GLOBUS_LOCATION/setup/globus
[globus@nodeB globus]$ ./setup-globus-job-manager-pbs --remote-shell=rsh

As user globus on nodeB you should start the container again:

[globus@nodeB globus]$ /opt/globus-4.0.1/bin/globus-start-container > /home/globus/container.out 2>&1 &

5.1.8 Connecting Globus Gram WS and Sun Grid Engine (nodeC)

We will now connect Globus GRAM WS on nodeC to SGE so that jobs can be submitted into the SGE batch queue.

5.1.8.1 Turning on reporting for SGE (nodeC)

As user root on nodeC run:

[root@nodeC sge-root]# /opt/sge-root/bin/lx24-x86/qconf -mconf

This invokes the default system editor. Find the 'reporting_params' configuration option in the file and edit it so that it appears as shown below:

- shepherd_cmd none
- qmaster_params none
- execd_params none
- reporting_params accounting=true reporting=true \

flush_time=00:00:15 joblog=true sharelog=00:00:00

- finished_jobs 100
- gid_range 20000-20500
- qlogin_command telnet
- qlogin_daemon /usr/sbin/in.telnetd
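The reporting_params value is a whitespace-separated list of key=value flags. A small hypothetical parser for that style of SGE configuration value, handy when scripting checks of the cluster configuration:

```python
def parse_sge_params(value):
    """Parse an SGE key=value parameter list such as
    'accounting=true reporting=true flush_time=00:00:15'."""
    params = {}
    for token in value.split():
        key, _, val = token.partition("=")
        params[key] = val
    return params
```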

5.1.8.2 Building the WS GRAM SGE jobmanager (nodeC)

As user globus:

[globus@nodeC gt4.0.1-all-source-installer]$ export X509_USER_CERT=/opt/globus-4.0.1/etc/containercert.pem

[globus@nodeC gt4.0.1-all-source-installer]$ export X509_USER_KEY=/opt/globus-4.0.1/etc/containerkey.pem

Now as user 'globus' create a proxy certificate:

[globus@nodeC gt4.0.1-all-source-installer]$ grid-proxy-init

Now stop the container on nodeC:

[globus@nodeC gt4.0.1-all-source-installer]$ globus-stop-container

There are four source files for the WS GRAM SGE jobmanager that need to be downloaded and built as user 'globus' on nodeC.

1. http://www.lesc.ic.ac.uk/projects/globus_gram_job_manager_setup_sge-1.1.tar.gz

2. http://www.lesc.ic.ac.uk/projects/globus_scheduler_event_generator_sge-1.1.tar.gz

3. http://www.lesc.ic.ac.uk/projects/globus_scheduler_event_generator_sge_setup-1.1.tar.gz

4. http://www.lesc.ic.ac.uk/projects/globus_wsrf_gram_service_java_setup_sge-1.1.tar.gz

[globus@nodeC ~]$ source /opt/sge-root/default/common/settings.sh
[globus@nodeC ~]$ export GLOBUS_LOCATION=/opt/globus-4.0.1
[globus@nodeC ~]$ source $GLOBUS_LOCATION/etc/globus-user-env.sh

Now build the source in the following order:

[globus@nodeC ~]$ gpt-build globus_gram_job_manager_setup_sge-1.1.tar.gz
[globus@nodeC ~]$ gpt-build ./globus_scheduler_event_generator_sge-1.1.tar.gz gcc32dbg
[globus@nodeC ~]$ gpt-build globus_scheduler_event_generator_sge_setup-1.1.tar.gz
[globus@nodeC ~]$ gpt-build ./globus_wsrf_gram_service_java_setup_sge-1.1.tar.gz

Run the 'gpt-postinstall' command to finish the installation and configuration:

[globus@nodeC ~]$ gpt-postinstall

5.1.9 A Distributed Grid Video Encoding Setup for the Book (all nodes)

You now have a computational grid in place:

- nodeA has all the necessary client tools, a globus-gridftp-server, and a user with grid credentials
- nodeB represents a remote Linux cluster running the PBS batch system
- nodeC represents a remote Linux cluster running the SGE batch system

To demonstrate the second objective formulated in the last chapter, we have to design and implement a workflow that is "grid enabled". We will use our grid resources to re-encode a movie. Our application will involve multiple stagings of files into and out of grid resources.

5.1.9.1 Installing MPlayer and mencoder (all nodes)

As user root you can install MPlayer and mencoder using the RPM package manager, but they require some dependency libraries first, so you have to find these on the internet, download them, and install them before starting the installation of MPlayer and its encoder.

The dependencies needed are as follows:

- aalib- 1.4.0-5.1.fc3.fr.i386
- faad2-2.0-8.fc3.rf.i386
- fribidi-0.10.4-1.fr4.i386
- lame-3.97-1.fc3.rf.i386
- libdvdcss-1.2.9-1.2.fc4.i386
- libdvdread-0.9.5-2.fc3.rf.i386
- libmad-0.15.1b-4.fc3.rf.i386
- lirc-0.6.6-4.1.fc3.rf.i386
- lzo-1.08-4.1.fc3.fr.i386
- mplayer-fonts-1.1-3.fc.noarch
- xmms-1.2.10-11.1.1.fc3.rf.i386
- xvidcore-1.0.3-1.2.fc4.i386
- faac-1.5-2.fc3.rf.i386
- libXvMCW-0.9.3-1.1.fc3.fr.i386

To install them, either use the GUI RPM package manager from the root account or the command line. For the GUI method, simply download each package into the root user's home directory and double-click on it; the RPM Package Manager will install it. For the command line:

illustration not visible in this excerpt

5.2 Video Compression process in Grid Environment

This section gives an introduction to DV to MPEG-4 conversion in the grid environment. As shown in Figure 5.3, the first step is to split the DV file into a number of chunks according to the number of nodes that the grid environment has assigned to the current job. The size of the divided files is based on information available from MDS about the availability of nodes for processing (Figure 5.3).
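The splitting step divides the input among however many nodes MDS reports as available. The actual slicing in our workflow is done by mencoder (Appendix A); purely as an illustration of the arithmetic involved, the following sketch assigns near-equal contiguous ranges (in frames or bytes) to n nodes:

```python
def chunk_ranges(total, nodes):
    """Split `total` units (frames or bytes) into `nodes` contiguous
    half-open ranges whose sizes differ by at most one."""
    base, extra = divmod(total, nodes)
    ranges, start = [], 0
    for i in range(nodes):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges
```

Giving the first few nodes one extra unit when the division is uneven keeps every slice within one unit of the others.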

illustration not visible in this excerpt

Figure 5.2: Video compression by the proposed grid computing system

Through MDS, the client can gather useful information such as computational capabilities, CPU loading, number of nodes, etc. The client (nodeA) submits the job via a Python script (Appendix A). The slices of DV files are created and remotely transferred (GridFTP) to remote nodes in the grid, where the conversion process is carried out by the local schedulers running the script.

illustration not visible in this excerpt

Figure 5.3: System Components

The remote scheduler GRAM submits the job to SGE (used for batch jobs) for further scheduling on each node (Figure 5.4). SGE schedules the nodes belonging to it for conversion on the local disk (Figure 5.5). After the conversion process, each node returns the converted MPEG-4 file to the client node. The client node collects all job results from the remote schedulers and joins them to form a single converted MPEG-4 video.

illustration not visible in this excerpt

Figure 5.4: Components of Conversion server

5.3 Mencoder

Mencoder is a free command-line video decoding, encoding and filtering tool released under the GNU General Public License. It is a close sibling to MPlayer and can convert all the formats that MPlayer understands into a variety of compressed and uncompressed formats using different codecs [33].

As it is built from the same code as MPlayer, it can read from every source which MPlayer can read, decode all media which MPlayer can decode and it supports all filters which MPlayer can use. MPlayer can also be used to view the output of most of the filters (or of a whole pipeline of filters) before running Mencoder. If the system is not able to process this in real-time, audio can be disabled using -no sound to allow a smooth review of the video filtering results.
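A typical DV-to-MPEG-4 run uses mencoder's libavcodec encoder. The exact options used by our scripts are given in Appendix A; as a hedged illustration, the Python wrapper below builds (but does not run) such a command line, with the bitrate value chosen arbitrarily:

```python
def mencoder_cmd(src, dst, vbitrate=1200):
    """Build an argument list for a DV -> MPEG-4 mencoder run.

    -ovc lavc selects libavcodec video encoding, -lavcopts chooses the
    mpeg4 codec and a bitrate, and -oac mp3lame encodes the audio track.
    """
    return [
        "mencoder", src,
        "-ovc", "lavc",
        "-lavcopts", "vcodec=mpeg4:vbitrate=%d" % vbitrate,
        "-oac", "mp3lame",
        "-o", dst,
    ]
```

The resulting list can be handed to subprocess.call on any node where mencoder is installed.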

This is the tool that has been used in the grid system for slicing, rejoining and conversion of the video files. The installation of this tool in the distributed environment is discussed in Section 5.1, step 9.

The implementation of the grid computing system for video conversion is too lengthy a topic to discuss fully in the current text, so we have provided the Python scripts we used in the evaluation in Appendix A, with the commands and results of execution in Appendices B and C. We hope this will help you in building the video conversion grid system and evaluating the results we propose in the next chapter.

Chapter 6 Experimental Results

This Chapter is a step forward towards achieving the third objective of the research work. In this chapter the evaluation of grid environment for DV to MPEG4 conversion has been done.

The Results of evaluations have been divided into 3 parts according to the length of input video. Following are the 3 cases that have been taken during evaluation.

1. Evaluation of grid with 6 minute job video
2. Evaluation of grid with 8 minute job video
3. Evaluation of grid with 10 minute job video

6.1 Configuration of the Environment

The grid computing environment designed and implemented for this research includes 12 Linux PCs connected in two UIET labs. The blueprint of the environment is shown in Figure 6.1. Following are the details of the nodes:

1. NodeB: a CA authentication server to authenticate the grid hosts and users to use the grid resources.
2. NodeB: NTP (Network Time Protocol) server to synchronize the grid nodes to a uniform time standard.
3. NodeC: the client node used to input the job into the grid.
4. NodeC to NodeL: the grid processing nodes.

illustration not visible in this excerpt

Figure 6.1 : Physical Layout of Grid Environment in UIET Lab.

The implementation in UIET labs:

1. Site 1 (UIET Lab. 212): 5 PCs with single Intel Pentium 4 (3.0 GHz to 2.0) processor, 256MB DDRAM, and 3Com 3c9051 and Intel 82566DM-2 Ethernet Interfaces.
2. Site 2 (UIET Lab. 212):7 PCs with Dual Intel Pentium4 2.0 GHz processor, 256MB SDRAM, and 3Com 3c9051 Ethernet Interfaces.

Sites 1 and 2 are located in different labs of the department in UIET, Panjab University, as shown in Figure 6.1, and are connected with a Fast Ethernet connection.

6.2 Evaluation of grid with 6 minute video

In this evaluation we took a 6-minute digital video in DV format and input it to the grid to generate the test results. The experiment was performed on different numbers of machines (nodes): we added machines from 1 to 10 one by one and noted the time in each case. The table and the graph below show the test results of job execution on the grid nodes.

illustration not visible in this excerpt

Table 6.1: Evaluation of 6min video on grid nodes

illustration not visible in this excerpt

Figure 6.2: Evaluation of 6min video on grid nodes

It is clear from the graph in Figure 6.2 that the grid shows benefits over single-processor systems, but it does not show a linear increase in performance as the number of grid processing nodes increases.
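One way to quantify the gap between observed and ideal scaling is parallel speedup and efficiency. The timings in the example below are made-up placeholders, not values from Table 6.1, but the arithmetic is standard:

```python
def speedup(t_single, t_parallel):
    """Speedup relative to the single-node run time."""
    return t_single / t_parallel

def efficiency(t_single, t_parallel, nodes):
    """Fraction of ideal (linear) speedup actually achieved."""
    return speedup(t_single, t_parallel) / nodes

# Hypothetical example: 600 s on one node, 150 s on six nodes.
# Ideal linear scaling would give 100 s, so this run achieves only
# 4x speedup, i.e. about 67% efficiency.
```

Efficiency well below 1 on every node count is exactly the sub-linear behaviour the graphs show.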

6.3 Evaluation of grid with 8 minute video

In this evaluation we took an 8-minute digital video in DV format and input it to the grid to generate the test results. The experiment was performed on different numbers of machines (nodes): we added machines from 1 to 10 one by one and noted the time in each case. The table and the graph below show the test results of job execution on the grid nodes.

illustration not visible in this excerpt

Table 6.2: Evaluation of 8min video on grid nodes

illustration not visible in this excerpt

Figure 6.3: Evaluation of 8min video on grid nodes

It is clear from the graph in Figure 6.3 that the grid shows benefits over single-processor systems, but it does not show a linear increase in performance as the number of grid processing nodes increases.

6.4 Evaluation of grid with 10 minute video

In this evaluation we took a 10-minute digital video in DV format and input it to the grid to generate the test results. The experiment was performed on different numbers of machines (nodes): we added machines from 1 to 10 one by one and noted the time in each case. The table and the graph below show the test results of job execution on the grid nodes.

illustration not visible in this excerpt

Table 6.3: Evaluation of 10min video on grid nodes

illustration not visible in this excerpt

Figure 6.4 Evaluation of 10min video on grid nodes

It is clear from the graph in Figure 6.4 that the grid shows benefits over single-processor systems, but it does not show a linear increase in performance as the number of grid processing nodes increases.

6.5 Interpretation of Results

After conducting thirty runs over three experiments, taking videos of different lengths on one to ten processing nodes, it is clear that the grid shows benefits over single/centralised systems, but it does not show a linear increase in performance as the number of grid processing nodes increases. Theoretically, the experiment was expected to give a linear increase in performance with more grid processing nodes; this unexpected behaviour is due to the following reasons.

1. Network Bandwidth

The network plays a substantial role in grid processing and performance, so a faster (Gigabit) network is proposed for grid system implementations.

2. Scheduling Type

The default scheduling strategy used by the implementation is random, but better results are expected if other scheduling strategies (e.g. priority based) are used.

3. Switching Fabric

The network cannot be made efficient without making the switching fabric more efficient, because it is the backbone of any network. So an efficient switching fabric is a necessary requirement in grid systems.

4. Node's Local Processing and Availability

Nodes in the grid should have high availability of their resources.

As research continues into better scheduling and higher-bandwidth networks for grid environments, grid systems will one day prove to be equivalent to cluster computing environments.

Chapter 7 Summary, Conclusion and Future Scope

7.1 Summary

The thesis presents the evaluation of a grid for DV to MPEG4 video conversion, with the following objectives:

1. Study and Implementation of the Grid Environment.
2. Design and Implementation of DV to MPEG 4 conversion process on Grid Environment.
3. Evaluation of performance of Grid for DV to MPEG 4 conversion with respect to time.

The first objective intends to study the grid computing concepts, their types, their relationship with other computing technologies, the open source middleware available for its implementation etc. It also involves the analysis, design and implementation of a grid environment.

The second objective intends to study the compute-, storage- and network-intensive problem of DV to MPEG4 video conversion and to implement it on the grid environment designed in the step above.

The third objective intends to evaluate the performance of the grid (designed in step 1) by executing the conversion process (as defined by step 2) with different parameters.

We have developed a Python script (Appendix A) to help us find suitable resources on the grid system and then split a DV video job into job slices to be remotely scheduled and processed at grid nodes. After conducting thirty runs over three experiments, taking videos of different lengths on one to ten processing nodes, it is clear that the grid shows benefits over single/centralised systems but does not show a linear increase in performance as the number of grid processing nodes increases.

7.2 Conclusion

Throughout the experimental evaluation we found that grid computing can utilize the unused resources of a computing infrastructure (harnessing them for processor-intensive jobs), but its performance still falls well short of a cluster computing infrastructure.

After conducting thirty runs over three experiments, taking videos of different lengths on one to ten processing nodes, it has been observed that the grid shows benefits over single/centralised systems, but it does not show a linear increase in performance as the number of grid processing nodes increases. Theoretically, the experiment was expected to give a linear increase in performance with more grid processing nodes; this unexpected behaviour is due to the following reasons.

Network Bandwidth: The network plays a substantial role in grid processing and performance, so a faster (Gigabit) network is proposed for grid system implementations.

Scheduling Type: The default scheduling strategy used by the implementation is random, but better results are expected if other scheduling strategies (e.g. priority based) are used.

Switching Fabric: The network cannot be made efficient without making the switching fabric more efficient, because it is the backbone of any network. So an efficient switching fabric is a necessary requirement in grid systems.

Node's Local Processing and Availability: Nodes in the grid should have high availability of their resources. As research continues into better scheduling and higher-bandwidth networks for grid environments, grid systems will one day prove to be equivalent to cluster computing environments.

7.3 Future Scope

In future, work can be initiated to add a better fault-detection policy to make the system more robust. The Python scripts designed and implemented during the evaluation are very static in job assignment, so in future work we will try to make them more dynamic for dynamic grid computing environments, able to detect failures and re-assign the same job to another computing node listed by the MDS service. We will also try to use better networking infrastructure so that the drawbacks can be minimized at that level.

Appendix A

Python source code: 3 scripts

A total of 30 Python scripts [45] were used to command the grid to encode the video for the evaluation of video conversion in the grid environment. Each successive script adds a new node to the video conversion grid environment. Three scripts are presented in this section.

Scripts details

illustration not visible in this excerpt

Table A.1: Encoding scripts summary

All the scripts are based on the same logic, so only one full script (encodeWorkflow_1) is displayed here, and only the changes in the other scripts are presented. Changes occur at two places:

1. Main function
2. SingleSlice function

The changes are highlighted in encodeWorkflow_1, and the changed portions of the other two scripts are placed below.

illustration not visible in this excerpt

Appendix B

Linux commands were used to schedule the job on the grid. The following table lists some of the commands that were run for the research evaluation:

illustration not visible in this excerpt

Table B.1: Linux commands to schedule the job on Grid

Appendix C

The Linux commands to schedule the job on the grid are discussed in Appendix B. The following section shows the results of those commands as run during the evaluations.

No. of Nodes | Command executed and results

illustration not visible in this excerpt

References

[1] Wikipedia, http://en.wikipedia.org/wiki/Distributed_computing

[2] http://library.thinkquest.org/C007645/english/0-definition.htm.

[3] http://library.thinkquest.org/C007645/english/0-advantages.htm.

[4] Leslie Lamport. "Subject: distribution (Email message sent to a DEC SRC bulletin board at 12:23:29 PDT on 28 May 1987)". Retrieved on 2007-04-28.

[5] http://library.thinkquest.org/C007645/english/0-disadvantages.htm

[6] A database-centric virtual chemistry system, J. Chem., May-Jun 2006; 46(3):1034-9

[7] Alan Joch , Multiprocessing Computing, Nov 27, 2000 [online] , http://www.computerworld.com/s/article/54343/Chip_Multiprocessing?taxonomyId= 064

[8] Rick Merritt, CPU designers debate multi-core future, EE Times ,02 june 2008 , [online] ,http://www.eetimes.com/showArticle.jhtml?articleID=206105179

[9] Mok, Lawrence S. (Brewster, NY, US), Method of constructing a multicomputer system , United States Patent 6867967, 03/15/2005 [online] , http://www.freepatentsonline.com/6867967.html

[10] Wasel Chemij, MPhil, Aberystwyth University ,Parallel Computer Taxonomy, M.Phil thesis., 1994 ,ch7, [online] http://www.gigaflop.demon.co.uk/comp/chapt7.htm

[11] D.A. Bader and R. Pennington, ''Cluster Computing: Applications,'' The International Journal of High Performance Computing, 15(2):181-185, May 2001.

[12] Ian Foster, The Anatomy of the Grid: Enabling Scalable Virtual Organizations, Publisher Springer Berlin / Heidelberg , Volume 2150/2001,page 1-4.

[13] Grid Cafe', http://www.gridcafe.org/.

[14] SETI@home: http://setiathome.ssl.berkeley.edu

[15] A. Abbas ,Grid Computing: Practical Guide To Technology & Applications,1st ed.Charles River Media, 2004.

[16] Globus website, http://www.globus.org

[17] Rich Wolski , CS290I Lecture notes -- Globus: "The" Grid Programming Toolkit ,[online], http://www.cs.ucsb.edu/~rich/class/cs290I-grid/notes/Globus/index.html

[18] Czajkowski, Karl ,"Globus GRAM", Globus Toolkit Developer's Forum. Globus Alliance , January 9, 2006.

[19] Globus project , http://www.globus.org/toolkit/docs/4.0/data/key/index.html

[20] Globus project, http://www.globus.org/toolkit/docs/4.0/execution/key/index.html

[21] LDAP project, http://www.openldap.org/

[22] Globus project, http://www.globus.org/toolkit/docs/4.0/security/key-index.html

[23] "A Little History Lesson". Sun Microsystems. 2006-06-23.

[24] "World's First Utility Grid Comes Alive on the Internet". Sun Microsystems. 2006-03-22.

[25] Wikipedia ,http://en.wikipedia.org/wiki/DV

[26] fxguide ,http://www.fxguide.com/article314.html

[27] adamwilt, http://www.adamwilt.com/DV-FAQ-tech.html#DVchromakey

[28] Sony canda, http://www.sony.ca/dvcam/pdfs/dvcam%20format%20overview.pdf

[29] Dvmp , http://www.dvmp.co.uk/digital-video.htm

[30] http://www.digitalpreservation.gov/formats/fdd/fdd000173.shtml

[31] http://dvswitch.alioth.debian.org/wiki/DV_format/

[32] John Watkinson, The MPEG Handbook, Edition 2, Focal Press, 2004-10-31, p.1

[33] John Watkinson, The MPEG Handbook, Edition 2, Focal Press, 2004-10-31, p.2

[34] http://en.wikipedia.org/wiki/File:MPEG_Compression_Overview.svg

[35] Klaus Diepold,Sebastian Moeritz Understanding MPEG-4,Edition 1, Publisher Focal Press , September 28, 2004 ,p.78

[36] Cliff Wootton. A Practical Guide to Video and Audio Compression. Publisher Focal Press, 2005, p. 665.

[37] http://en.wikipedia.org/wiki/Moving_Picture_Experts_Group

[38] John Watkinson, The MPEG Handbook, Edition 2, Focal Press, 2004-10-31, p.4

[39] Salomon, David (2007). "Video Compression". Data compression: the complete reference, Edition 4, Springer. p. 676.

[40] John Watkinson, The MPEG Handbook, Edition 2, Focal Press, 2004-10-31, p.5-6

[41] Klaus Diepold, Sebastian Moeritz Understanding MPEG-4,Edition 1, Publisher Focal Press , September 28, 2004 ,p.83

[42] I. Foster and C. Kesselman, "The Globus project: a status report," Future Generation Computer Systems, vol. 15, no. 5-6, pp. 607-621, 1999.

[43] G. Alliance, globus Alliance, http://www.globus.org/.

[44] MPlayer and Mencoder Status of codec’s support, http://www.mplayerhq.hu/DOCS/codecs-status.html ,Retrieved on 2009-07-19

[45] http://www.globusconsortium.org/tutorial/

[46] TORQUE Resource Manager , http://www.clusterresources.com/products/torque- resource-manager.php

[47] Grid Engine project , http://gridengine.sunsource.net/
