Web Metrics


Master's Thesis, 2013

66 Pages, Grade: A


Excerpt


Table of Contents

Chapter One: INTRODUCTION
1.1 MOTIVATION OF WORK
1.2 AIM OF WORK
1.3 ORGANIZATION OF THESIS

Chapter Two: Literature Survey
2.1 IMPORTANCE OF WEB METRICS

Chapter Three: Research Background
3.1 WEB PAGE METRICS
3.1.1 Efficiency web metrics
3.1.2 Functionality web metrics
3.1.3 Maintainability web metrics
3.1.4 Portability web metrics
3.1.5 Reliability web metrics
3.1.6 Usability web metrics
3.2 INDEPENDENT AND DEPENDENT VARIABLE
3.3 EMPIRICAL DATA COLLECTION
3.3.1 Categorization of Websites into Good and Bad

Chapter Four: RESEARCH METHODOLOGY
4.1 METHODOLOGY
4.2 TOOL DESCRIPTION
4.3 MACHINE LEARNING ALGORITHMS
4.3.1 Bayes Net
4.3.2 Naïve Bayes
4.3.3 Multilayer Perceptron
4.3.4 Adaboost
4.3.5 Decision Table
4.3.6 Nnge
4.3.7 Part
4.3.8 Bf-tree
4.3.9 J-48
4.3.10 Random forest

Chapter Five: Result Analysis
5.1 DESCRIPTIVE STATISTICS
5.2 LOGISTIC REGRESSION ANALYSIS
5.3 BAYES NET ANALYSIS
5.4 NAÏVE BAYES ANALYSIS
5.5 MULTILAYER PERCEPTRON ANALYSIS
5.6 ADABOOST ANALYSIS
5.7 DECISION TABLE
5.8 NNGE ANALYSIS
5.9 PART ANALYSIS
5.10 BF-TREE ANALYSIS
5.11 J-48 ANALYSIS
5.12 RANDOM FOREST ANALYSIS
5.13 EVALUATION OF MODEL

Chapter Six: CONCLUSION AND FUTURE WORK
6.1 FUTURE WORK

ABSTRACT

The World Wide Web is a source of enormous information and has a massive influence on our lives. A large number of websites today are designed without sufficient resources or professional skills, so evaluating the quality of a website is an important issue; quality is one of the most important attributes for the success of any website. Various guidelines, tools and methodologies have been described by many authors to maintain the quality of websites, but how to implement them is not always clear. Web metrics measure various attributes of websites quantitatively and can therefore be used to evaluate the quality of a website. Assessing websites in this way helps to improve both the quality of websites and the web development process.

In our research we calculated twenty web page metrics using an automated tool, WEB METRICS CALCULATOR, developed in ASP.NET. We collected data from websites of various categories nominated for the Pixel Awards in 2010, 2011 and 2012 and categorized the websites as good or bad. We used logistic regression and 10 machine learning techniques (Bayes net, Naïve Bayes, Multilayer perceptron, Adaboost, Decision table, Nnge, Part, Bf-tree, J-48 and Random forest). The results show that the area under the ROC curve is greatest for Random forest, ranging from 0.842 to 0.891 across the yearly datasets, so the performance of the Random forest model is better than that of all other models.

Chapter One: INTRODUCTION

The World Wide Web is a source of enormous information, and this information is increasing exponentially. The basic purpose of the Web is to provide up-to-date and relevant information to all users. The success of any organization, such as an e-commerce business, depends highly on the quality of its website. Websites are developed by organizations of every size, from very small companies to large organizations with dedicated development teams; small companies often develop websites without sufficient resources and professional skills. Thus it is very important to evaluate the quality of websites and the web development process in order to improve the quality of websites.

The quality of a website can be viewed in terms of internal and external quality. Internal quality refers to cost effectiveness, maintainability and portability, whereas external quality is measured from the user's standpoint (Signore, 2005). Despite the detailed design guidelines and design recommendations provided by various authors, it is very difficult to implement them (Nielsen, 1999) (Nielsen, 2000). Web page metrics play an important role in measuring the quality of a website, as they quantitatively measure the various attributes of a web page that influence its quality.

A large number of web metrics that contribute to the goodness or quality of a website have been proposed by different authors. Web metrics cover almost all aspects of a site, such as page composition, amount of information, presentation, content and size.

1.1 Motivation of work

Although various guidelines have been provided by different authors for designing a quality website, these guidelines are not well defined from an implementation point of view. Thus developers find it difficult to design quality websites by following these guidelines (Nielsen, 1999) (Nielsen, 2000) (Shedroff, 1999) (Friedman, 2008).

The quality of a website is very important for almost every organization's success, regardless of whether the organization's goal is commerce (Amazon) or content presentation (Google). But many smaller sites are designed with a lack of resources and professional skills, leading to poor website quality. Thus, how to improve the design of websites is a very important question. In our research we explore the following issues:

- The relation between web page metrics and the quality of websites
- The accuracy and precision of web page metrics in predicting the quality of websites
- A comparison of the performance of different machine learning techniques and logistic regression in predicting the quality of websites

1.2 Aim of Work

The aim of our research is to find the relationship between web page metrics and the quality of websites, and to compare the performance of various machine learning techniques and the logistic regression technique in order to identify the best model for predicting the quality of a website.

In our research we developed a WEB METRICS CALCULATOR in ASP.NET, which is used to compute 20 web page metrics such as word count, link count, script count, etc. We collected web pages from various categories of the Pixel Awards to evaluate the quality of websites.

These metrics form a subset of metrics related to the quality of web page design. We then applied various machine learning techniques and the logistic regression technique and compared their performance in predicting whether a website is good or bad.

1.3 Organization of Thesis

The remainder of the thesis is organized as follows:

- Chapter 2: Literature survey

This chapter briefly describes the related work that has been done on evaluating websites and the importance of web metrics.

- Chapter 3: Research background

This chapter describes the web page metrics in detail, the independent and dependent variables, and the empirical data collection.

- Chapter 4: Research methodology

This chapter describes the WEB METRICS CALCULATOR used for computing web metrics, the metrics selected for the study, and the various machine learning techniques used in our research.

- Chapter 5: Result Analysis

In this chapter the experimental setup and simulation results are described. Sensitivity and specificity criteria are used to measure the correctness of the models.

- Chapter 6: Conclusion and future work

This chapter summarises the basic goal of this research, which is to categorize websites into good and bad on the basis of web page metrics, and outlines future work.

Chapter Two: Literature Survey

Over the past 20 years more than 350 web metrics have been proposed by different authors to improve the quality of websites and the web development process. Bray made the earliest attempt at global measurements of the web (Bray, 1996). It basically covered general attributes of the web such as page size, site visibility and format distribution.

Many metrics, such as the number of hits and click-through rates, became very popular for quantifying the use of the web. Pitkow identified the problems with treating hit metering as a reliable metric, due to proxy and client caches (Pitkow, 1997). So there is a need for new web metrics that provide a deeper view of both the web as a whole and of individual web pages.

In 2002, Dhyani provided a classification of web metrics on the basis of magnitude and measurement function (Dhyani, Ng, & Bhowmik, 2002).

A lot of existing work has been done on evaluating web page quality, but most quantitative methods for evaluating web sites focus on statistical analysis of usage patterns in server logs (Chi, Pirroli, & Pitkow, 2000) (Drott, 1998) (Fuller & Graff, 1996). Traffic-based analysis (e.g., pages-per-visitor or visitors-per-page) and time-based analysis (e.g., click paths and page-view durations) provide data that must be interpreted in order to identify usability problems. The analysis based on such data is quite uncertain, since web server logs provide incomplete traces of user behavior and timing estimates may be skewed by network latencies.

The above work focuses more on navigation history: explicitly clicked links and the time spent on a web site. Server logs are problematic because they only track unique navigational events (e.g., they do not capture use of the back button) and are hard to interpret because of caching. Another method for automatically evaluating web pages of user interest investigates various factors in a user's browsing behavior, such as the number of scrolls, form input, search text, etc.

Another approach assumed that website evaluation must be rapid and automatic. This approach uses two types of tools and techniques: the first is a usability awareness tool (WebSAT), intended for designers who are not aware of usability issues, and the second is a set of web usability tools and techniques (the NIST web metric tool) intended to help designers improve the usability of a website (Scholtz, Laskowski, & Downey, 1998). Other approaches were inspection-based and rely on assessing static HTML against a number of pre-determined guidelines, such as whether all graphics contain ALT attributes that can be read by screen readers (Velayathan & Yamada, 2006). For example, WebSAT (Web Static Analyzer Tool) is used to check accessibility issues (i.e., support for users with disabilities), use of forms, download speed, maintainability, navigation and readability of web pages. There are many other techniques that compare quantitative web page attributes, such as the number of links or graphics, to thresholds (Thimbleby, 1997). However, there are no clear thresholds established for a wider class of quantitative web page measures.

Simulation has also been used for web site quality evaluation. For example, a simulation approach has been developed for generating navigation paths for a site based on content similarity among pages, server log data, and linking structure (Chi, Pirroli, & Pitkow, 2000). The simulation models hypothetical users who traverse the site from specified start pages, making use of information "scent" (i.e., common keywords between the user's goal and the content of linked pages) to make navigation decisions. The approach does not consider the impact of various web page attributes, such as the amount of text or the layout of links.

Web site effectiveness has also been measured in terms of information and service quality. One study uses two instruments, WEBQUAL and SERVQUAL, which are combined in order to capture the interactivity and service retrieval of the web (Fink, 2001).

The most closely related work is that of Ivory et al. (Ivory, Sinha, & Hearst, 2000) (Ivory, Sinha, & Hearst, 2001), which provides a preliminary analysis of a collection of web pages, captures various web metrics associated with rated websites, and examines how pair-wise correlations are manifested in the layout of the pages of rated and unrated sites. This work does not apply various machine learning algorithms to find the best-suited model that can provide high accuracy.

CBR (Case Based Reasoning) and SWR (Step Wise Regression) techniques have been used to propose size measures and effort predictors for web cost estimation. These methodologies basically try to estimate the design and authoring effort for the Web (Mendes, Mosley, & Counsell, 2003).

The approach presented by G. Velayathan and S. Yamada (Velayathan & Yamada, 2006) analyzes user-log metrics such as the number of scrolls, form input, search text, etc. and extracts effective rules to evaluate web pages using a machine-learning method known as a decision tree. A client-side logging/analyzing tool, GINIS, is used to automatically evaluate web pages using these learned rules. Similarly, M. Zorman et al. (Zorman, Podgorelec, Kokol, & Babic, 1999) proposed an algorithm to find good or relevant websites for keywords provided by the user. They developed an intelligent search tool which employs the TFIDF heuristic for finding term frequency and the decision tree machine learning algorithm for automatic evaluation of the websites.

Another approach was based on applying Ranking SVM (Li & Yamada, 2009) (Li & Yamada, 2010), which is used to extract evaluation criteria from evaluation data for automated web site evaluation. It chooses evaluation criteria, which are discriminant functions learned from a set of ranking information, together with evaluation features such as freshness, accuracy of spelling and grammar, and the top page's global link popularity, collected automatically by web robots. However, it does not consider other algorithms for website evaluation.

The quality of a website can be defined in terms of functional as well as non-functional properties. K. M. Khan (Khan, 2008) derived non-functional attributes such as reliability, usability, efficiency and security and assessed them. The work done in (Khan, 2008) adopts a Goal-Question-Metric (GQM) approach to derive quality metrics. It defines the goals that need to be measured, then develops the questions derived from the goals that are required to determine whether the goals are fulfilled, and finally the measurements that answer those questions, which are known as metrics. For instance, a question related to the goal failure rate could be: what is the percentage of incorrect links on the page?

2.1 Importance of Web Metrics

As more and more websites are created every day, complexity and competition also increase. Web metrics are used to check whether we have created a good website or not; they help us to evaluate a web site. Web metrics vary based on the nature and purpose of the web site.

1. Meta keyword metrics help us to find which keywords users enter in a search engine to locate a particular website. By analyzing the Meta keyword metric we can see for which keywords our website appears in the top 10 results of a search engine.
2. Out-link and in-link metrics of a web site help us to find the paths by which users can enter and leave the site. They also help to find out whether any cycle is created. If a site has more in-links, it will have a better hit rate.
3. Bounce rate metrics help to find the percentage of initial users who bounce away to a different website rather than continuing on your website. A low bounce rate is good for a website, as people are staying longer on it. Identifying the web pages of a website that have a high bounce rate allows us to modify these pages in order to decrease the bounce rate. For example, in e-commerce sites a major benefit of web analytics may be to find out the average amount of time taken to close an online sale.
4. If your web statistics reveal, for example, that 60% of the individuals who watch a demo video also make a purchase, then you'll want to strategize to increase viewership of that video.
5. There are metrics which can show you the percentage of clicks each item on your webpage received. This includes clickable photos, text links in your copy, downloads and, of course, any navigation you may have on the page. Are visitors clicking the most important items?
6. If you utilize advertising options other than web-based campaigns, your web analytics program can capture performance data if you include a mechanism for sending those audiences to your website. Typically, this is a dedicated URL that you include in your advertisement (e.g., "www.example.com/offer50") and that delivers those visitors to a specific landing page. You then have data on how many people responded to that ad by visiting your website.
7. If you are running a banner ad campaign, a search engine advertising campaign or even email campaigns, you can measure individual campaign effectiveness simply by using a dedicated URL, similar to the offline campaign strategy.
8. Analytics permits you to see where your traffic geographically originates, including country, state and city. This can be especially useful if you use geo-targeted campaigns or want to measure your visibility across a region.
9. If you're working to increase visibility, you'll want to study the trends in your New Visitors data. Analytics identifies all visitors as either new or returning.
10. Web traffic generally has peaks at the beginning of the work day, during lunch and toward the end of the work day. It's not unusual, however, to find strong web traffic entering your website up until late evening. You can analyze this data to determine when people browse versus buy, and also make decisions on what hours you should offer customer service.

Chapter Three: Research Background

This research work focuses on the effect of various web page measures on the quality or goodness of a website. Thus it is very important to select web page metrics as independent variables for analyzing websites.

3.1 Web Page Metrics

Web page metrics give quantitative measures of various attributes of a website, such as page size, word count, etc. Ivory (Melody, 2001) provided a list of web interface measures, based on site architecture, page performance, page formatting, text formatting, link formatting, graphic formatting, text elements, link elements and graphic elements, for analyzing the quality of a web page by calculating different web page metrics. These web measures can be divided on the basis of the efficiency, functionality, maintainability, portability, reliability and usability quality characteristics.

3.1.1 Efficiency web metrics

illustration not visible in this excerpt

3.1.2 Functionality web metrics

illustration not visible in this excerpt

3.2 Independent and Dependent variable

This dataset comprises a total of 21 variables, of which 20 are independent and 1 is the dependent variable. Table 7 lists the 20 web page measures that we selected for our study. To compute these web page measures we developed the WEB METRICS CALCULATOR in ASP.NET. We used CFS (Correlation-based Feature Selection) in the WEKA tool to select the subset of independent variables that act as the best predictors among all the independent variables (Hall, 1999). This subset is searched through all possible combinations of variables; CFS provides a good feature subset whose variables are highly correlated with the class while having low correlation with each other.

Table 7: List of Metrics for study

illustration not visible in this excerpt

The dependent variable is Category, which takes two values, good or bad, depending on the judgment of the Pixel Awards.
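To make the feature-selection step concrete, the following is a minimal sketch of how CFS can be run through the WEKA Java API. The file name metrics2010.csv is a hypothetical export of the WEB METRICS CALCULATOR with Category as the last column; the thesis itself used the WEKA tool, so this is only an illustrative equivalent.

```java
import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.BestFirst;
import weka.attributeSelection.CfsSubsetEval;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CfsSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical export of the WEB METRICS CALCULATOR (20 metrics + Category).
        Instances data = DataSource.read("metrics2010.csv");
        data.setClassIndex(data.numAttributes() - 1);   // Category (good/bad) is the last attribute

        AttributeSelection selector = new AttributeSelection();
        selector.setEvaluator(new CfsSubsetEval());     // correlation-based subset evaluation
        selector.setSearch(new BestFirst());            // WEKA's default search for CFS
        selector.SelectAttributes(data);

        // Prints the selected subset of web page metrics.
        System.out.println(selector.toResultsString());
    }
}
```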

3.3 Empirical Data Collection

We selected web pages from the Pixel Awards website. The Pixel Awards, established by Erick and Lisa Laubach in 2006, are given to websites that show excellence in design and development. The judging criteria for websites are Innovation, Content, Navigation, Visual Design, Functionality and Site Experience.

Websites are placed in 24 categories: Agency, Animation, Apps, Art, Blogs, Commerce, Community, Experimental, Fashion, Food & Beverage, Games, Geek, Green, Magazines, Movies, Music, Non-Profit, Personal, Sports, Travel, TV and Weird. These websites are judged against the judging criteria. There are two types of winners in each category: one is the People's Champ and the other is the Winner. The dependent variable takes the value good for both of them and bad for the other websites in the respective category. We took 294 web pages from these categories along with the level-1 pages of these websites: 90 websites were nominated in 2010, 109 in 2011 and 95 in 2012.

3.3.1 Categorization of Websites into Good and Bad

There are 2 awards given in each category: one website is chosen by the judges as the Winner, and another is the People's Champ Winner. We have considered the winner websites in all the categories as good and all the other nominee websites as bad. In 2010, out of 90 websites, 33 were categorized as good and 57 as bad. In 2011, out of 109 websites, 41 were categorized as good and 68 as bad. Similarly, in 2012, out of 95 websites, 31 were categorized as good and 64 as bad. Table 8 shows the website categorization.

Table 8: Categorization Of Websites

illustration not visible in this excerpt

Chapter Four: RESEARCH METHODOLOGY

4.1 Methodology

Our methodology computes a number of web page metrics, such as word count, link count, etc., compares the goodness of different web pages using these metrics, and finally builds models using machine learning and statistical techniques to predict whether a website is good or bad.

Figure 1 shows the basic methodology adopted for our study. The methodology is divided into three sections: empirical data collection, the web metrics calculator, and result analysis.

Empirical data collection: First we select websites from different sub-categories of the 2010, 2011 and 2012 Pixel Awards. The second step is to enter the URL of the website for which we want to calculate the different web metrics.

Web metrics calculator: The web metrics calculator is a tool used to compute different web metrics for the website at the input URL.

Result analysis: The data computed by the web metrics calculator are used to analyze and compare the different machine learning algorithms and logistic regression in predicting the quality of a web page, and to compare the prediction accuracy of the different machine learning algorithms.

illustration not visible in this excerpt

Figure 1: Flowgraph of methodology

4.2 Tool description

Introduction: We have developed the Web Metrics Calculator in ASP.NET; it calculates 20 web page metrics.

Purpose: The idea is to automatically collect information about a web page that gives an idea of the flavour of the page. The web metrics calculated by this tool can be used for the analysis of web site quality attributes.

Advantage:

- It automates the extraction of web metrics rather than manually searching for tags or other information in the HTML page.
- The size of the tool is very small (a few KB).
- We implemented an SQL query that stores the results for all web pages in a .csv file, so there is no need to enter data manually.

Installation:

- Install ASP.NET 2008 (minimum) on the operating system.
- Install SQL Server 2008 on the operating system.

Required operating environment:

- Operating system: the tool can run on Windows XP, Windows 7 or Windows 8.
- Microsoft .NET: ASP.NET 2008 and Microsoft .NET 3.5 (minimum) must be installed to run this tool.
- CPU: a 2.4 GHz processor and 512 MB of RAM (minimum) are required.
- Disk space: at least 40 MB of free disk space is required to run this tool.
- Web connectivity: an active internet connection is required.

Method to calculate Web metrics:

1. Word count: Total number of words displayed on the web page. This can be calculated by counting the displayed words between the <body> and </body> tags.
2. Link count: Total number of links that point to either an external or an internal page. This can be calculated by counting the <a href> tags in the web page.
3. Graphic word count: Total number of words used to describe image files. This can be calculated by counting the words inside alt="" attributes in the web page.
4. Page size: Total size of the web page (in bytes).
5. Script count: Total number of scripts used in the web page. This can be calculated by counting the <script> tags in the web page.
6. Image count: Total number of images on the web page. This can be calculated by counting the occurrences of ".jpg", ".png" and ".gif" in the web page.
7. Inline element count: Total number of inline elements. This can be calculated by counting the <span> tags used in the web page.
8. Class used count: Total number of classes used in the web page. This can be calculated by counting the class="" attributes used in the web page.
9. Exclamation count: Total number of exclamation marks (!) used in the web page. This can be calculated by counting the occurrences of ! in the web page.
10. Load time: Time required to load the web page in a web browser. This can be calculated as end time minus start time.
11. Meta tag count: Total number of meta tags used in the web page. This can be calculated by counting the <meta> tags used in the web page.
12. Page title word count: Total number of words in the page title. This can be calculated by counting the words between the <title> and </title> tags.
13. List items: Total number of list items used in the web page. This can be calculated by counting the <li> tags in the web page.
14. Meta description length: Total number of words used in the meta description.
15. Unordered list: Total number of unordered lists on the web page. This can be calculated by counting the <ul> tags used in the web page.
16. Division count: Total number of div tags used in the web page. This can be calculated by counting the <div> tags used in the web page.
17. Number of headings: Total number of lines that are marked as headings. This can be calculated by counting the <h1>, <h2>, <h3>, <h4>, <h5> and <h6> tags in the web page.
18. Paragraph count: Total number of paragraphs used in the web page. This can be calculated by counting the <p> tags used in the web page.
19. Text link count: Total number of links that are text. This can be calculated by counting the displayed words between <a> and </a> tags.
20. Image link count: Total number of links that are images. This can be calculated by counting the <img> tags between <a> and </a> tags.

The Web Metrics Calculator works by taking the URL of any web page as input and produces the selected web metrics as output. The basic interface of the tool is shown in the following figure:

Illustration not visible in this excerpt

Figure 2: Basic Interface of WEB METRICS CALCULATOR

The Web Metrics Calculator temporarily stores the source code of the URL as a text file in a local directory and then applies parsing techniques to the text file to obtain the desired web metrics. The "SHOW" button of the tool displays the desired web metrics as output.

Illustration not visible in this excerpt

Figure 3: Output Window of WEB METRICS CALCULATOR

The output of the Web Metrics Calculator is automatically saved in a .csv file once we have calculated the web metrics for the desired number of URLs. By clicking the "Download" button, a .csv file is generated in which each column represents a different web metric and each row represents a different URL.

Illustration not visible in this excerpt

Figure 4: .CSV Data file
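The WEB METRICS CALCULATOR itself was written in ASP.NET and is not reproduced in this excerpt. As an illustration only, the sketch below shows the same counting idea in Java: download the page source and count tag occurrences with simple patterns. The URL and the exact patterns are assumptions for the example, not the thesis code.

```java
import java.net.URL;
import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TagCountSketch {
    // Count case-insensitive occurrences of a pattern in the page source.
    static int count(String html, String regex) {
        Matcher m = Pattern.compile(regex, Pattern.CASE_INSENSITIVE).matcher(html);
        int n = 0;
        while (m.find()) n++;
        return n;
    }

    public static void main(String[] args) throws Exception {
        String url = "http://www.example.com";                    // hypothetical input URL
        String html = new Scanner(new URL(url).openStream(), "UTF-8")
                .useDelimiter("\\A").next();                      // whole page source as one string

        System.out.println("Page size (bytes): " + html.getBytes("UTF-8").length);
        System.out.println("Link count:        " + count(html, "<a\\s[^>]*href"));
        System.out.println("Script count:      " + count(html, "<script\\b"));
        System.out.println("Image count:       " + count(html, "<img\\b"));
        System.out.println("Division count:    " + count(html, "<div\\b"));
        System.out.println("Paragraph count:   " + count(html, "<p[\\s>]"));
    }
}
```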

4.3 Machine Learning Algorithms

4.3.1 Bayes Net

Bayesian networks (Pearl, 1988) are a quite powerful probabilistic representation, and that is why they are often used for classification; unfortunately, they can perform poorly when learned in the standard way (Grossman & Domingos, 2004). Bayes nets are a graphical representation of the probabilistic relationships among a set of random variables. Given a finite set X = (X1, X2, ..., Xn) of discrete random variables, where each variable Xi may take values from a finite set denoted by Val(Xi) (bayes nets, 2007), a Bayes net is an annotated DAG (directed acyclic graph) G that encodes a joint probability distribution over X. The nodes of the graph correspond to the random variables X1, X2, ..., Xn, and the links of the graph correspond to direct influence from one variable to another. If there is a directed link from variable Xi to variable Xj, then Xi is a parent of Xj. Each node is annotated with a conditional probability distribution (CPD) that represents p(Xi | Pa(Xi)), where Pa(Xi) denotes the parents of Xi in G. The pair (G, CPD) encodes the joint distribution p(X1, ..., Xn).
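The joint distribution encoded by a Bayes net factorizes over the parent sets described above; in standard notation it is:

```latex
P(X_1, X_2, \ldots, X_n) \;=\; \prod_{i=1}^{n} P\bigl(X_i \mid \mathrm{Pa}(X_i)\bigr)
```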

4.3.2 Naïve Bayes

The Naive Bayes classifier is a statistical classifier as well as a supervised learning method based on Bayes' theorem, given by Thomas Bayes. It predicts class membership probabilities, such as the probability that a given sample belongs to a particular class (Leung, 2007). Given the class variable, a Naive Bayes classifier assumes that the presence of a particular feature of a class is not related to the presence of any other feature. Consider the set of variables

illustration not visible in this excerpt

where C is a dependent class variable with a set of possible outcomes, conditional on several variables.

Using Bayes' theorem,

illustration not visible in this excerpt

Thus, we want to construct the posterior probability of the class C, which can be written as:

Posterior = (Prior × Likelihood) / Evidence

Naïve Bayes classification provides a very useful baseline for understanding and evaluating many other learning algorithms. Naive Bayes classification is very fast, it calculates explicit probabilities, and it is robust to noise.
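Under the conditional-independence assumption stated above, the posterior used for classification takes the standard Naive Bayes form (a textbook restatement, since the original formulas are not visible in this excerpt):

```latex
P(C \mid x_1, \ldots, x_n) \;=\; \frac{P(C)\,\prod_{i=1}^{n} P(x_i \mid C)}{P(x_1, \ldots, x_n)},
\qquad
\hat{c} \;=\; \arg\max_{c}\; P(C = c)\,\prod_{i=1}^{n} P(x_i \mid C = c)
```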

4.3.3 Multilayer Perceptron

A Multilayer Perceptron (MLP) is a feed-forward artificial neural network model that maps input data instances onto a set of appropriate outputs. An MLP consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next one. Each node in every layer except the input layer is a neuron with a nonlinear activation function. An MLP utilizes a supervised learning technique called back-propagation for training the network. The MLP is a modification of the standard linear perceptron and can distinguish data that are not linearly separable (Anderson, 2003).

Algorithm

Multilayer perceptron training is done in two phases:

1. Forward phase
2. Backward phase

In the forward phase the weights are fixed, and the input is propagated layer by layer from the input layer to the output layer.

In the backward phase the error is computed by comparing the actual output with the target response, and this error is propagated layer by layer in the backward direction from the output layer to the input layer.

Weight Adjustment in Backward phase

Assume the input to the input layer is E, the observed output of node i is oi(E), the target output is ti(E), and wij denotes the weight between node i and node j.

- The Error Term for output unit k is

illustration not visible in this excerpt

- The Error Term for hidden unit k is

illustration not visible in this excerpt

- Now for every weight wij between node i and node j we have to calculate

illustration not visible in this excerpt

η =learning rate

illustration not visible in this excerpt

- Now for every weight wij between node i and hidden node j we have to calculate

illustration not visible in this excerpt

h(E) = output of the hidden node for input E

illustration not visible in this excerpt

- Final adjusted weight is

illustration not visible in this excerpt
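Since the weight-update formulas are not visible in this excerpt, the standard textbook back-propagation rules are given below for reference (a sketch assuming sigmoid activations; the thesis may use slightly different notation):

```latex
% Error terms and weight update of standard back-propagation (sigmoid units)
\delta_k = o_k(1 - o_k)\,(t_k - o_k)                \quad \text{(error term for output unit } k\text{)}
\delta_j = o_j(1 - o_j) \sum_{k} w_{jk}\,\delta_k   \quad \text{(error term for hidden unit } j\text{)}
\Delta w_{ij} = \eta\,\delta_j\,o_i, \qquad w_{ij} \leftarrow w_{ij} + \Delta w_{ij}
```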

4.3.4 Adaboost

Adaboost was formulated by Freund and Schapire in 1995. Adaboost is an algorithm for constructing a "strong" classifier as a linear combination of weak classifiers. It can take many other learning algorithms and improve their performance. Initially it chooses the learner that classifies the data more correctly than the others.

Then the data are reweighted so that the "importance" of misclassified instances is increased. This process continues, and the weight of each weak learner is identified.

illustration not visible in this excerpt

With the help of “weak” and “simple” classifiers h (x)

illustration not visible in this excerpt

Some interesting properties of Adaboost:

- Adaboost is a linear classifier.
- The output of Adaboost converges to the logarithm of the likelihood ratio.
- Its generalization properties are good.
- Adaboost produces a sequence of gradually more complex classifiers.
- It is basically a feature selector that minimizes an upper bound on the empirical error.

Algorithm(Matas & Sochman)

illustration not visible in this excerpt

Initialize the weights D1(i) = 1/m.

For t = 1, ..., T:

- Call the weak learner, which returns the weak classifier ht : X → {-1, 1} with minimum error with respect to the distribution Dt.

- Now choose any α ∈ R,

illustration not visible in this excerpt

- Updating the value of Dt+1 with respect to Dt

illustration not visible in this excerpt

where Zt is the normalization factor chosen so that Dt+1 is a distribution.

- Final output of the strong classifier is

illustration not visible in this excerpt
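The quantities hidden in the illustrations above correspond, in the standard formulation by Freund and Schapire, to the following (a textbook restatement, where epsilon_t is the weighted error of h_t and Z_t is the normalization factor):

```latex
\alpha_t = \tfrac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t},\qquad
D_{t+1}(i) = \frac{D_t(i)\,\exp\bigl(-\alpha_t\,y_i\,h_t(x_i)\bigr)}{Z_t},\qquad
H(x) = \operatorname{sign}\!\Bigl(\sum_{t=1}^{T}\alpha_t\,h_t(x)\Bigr)
```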

4.3.5 Decision Table

A decision table basically lists causes and effects in matrix form. It is divided into four parts:

- Condition stub: lists the comparisons and conditions.
- Action stub: comprehensively lists the actions to be taken along the various program branches.
- Condition entries: list in various columns the possible permutations of answers to the questions in the condition stub.
- Action entries: list, in the columns corresponding to the condition entries, the actions contingent upon the set of answers to the questions of that column.

4.3.6 Nnge

Nnge stands for Non-Nested Generalized Exemplars. Nnge is an instance-based machine learning technique. It extends the nearest-neighbour concept by including generalized exemplars. Non-nested generalized exemplar theory was first given by Martin in 1995 and uses both simple instances and generalized exemplars. Nnge is implemented in the WEKA toolkit and has proved to be a very competitive and useful technique (Witten & Frank, 1998).

Algorithm(Zaharie, Perian, & Negru, 2011)

- For every example Ej in the training set do:
- Find the hyper rectangle Hk which is closest to Ej
- IF D(Hk,Ej)=0 then
- IF Class(Ej) ≠ Class(Hk) THEN Split(Hk, Ej)
- ELSE H' := Extend(Hk, Ej)
- IF H’ overlaps with conflicting hyperrectangles
- THEN add Ej as non-generalized exemplar
- ELSE Hk:=H’

Where Ej = training examples

Hk=Generalized exemplars (hyper rectangles)

4.3.7 Part

Part is based on the divide-and-conquer strategy and basically avoids the global optimization step used in C4.5 rules and RIPPER (Witten & Frank, 1998). It produces an unrestricted decision list using the divide-and-conquer strategy. It builds a partial C4.5 decision tree in each iteration and makes the "best" leaf into a rule. Partial decision trees are thus used to obtain rules.

4.3.8 Bf-tree

Bf-tree stands for best-first decision tree; it is a type of decision tree learner. A Bf-tree is constructed using a divide-and-conquer strategy, but splitting is done at the best node among the candidate nodes. In a Bf-tree, every non-terminal node tests an attribute, whereas terminal nodes assign a classification. In the construction of a Bf-tree there are three important aspects that must be taken care of:

- Calculating the best attribute on which to split.
- Deciding which of the nodes competing for splitting should be expanded next.
- The criteria for stopping the growth of the tree.

Selection of the best node is done on the basis of impurity, i.e., the node giving the maximum reduction of impurity is expanded.

4.3.9 J-48

J48 is an open-source implementation of the C4.5 (revision 8) algorithm in the WEKA tool, written in Java; it is a decision-tree-based algorithm that builds the tree in the same way as ID3, along with some improvements. Ross Quinlan developed this algorithm, and it is now widely used for classification. In this algorithm, first the base cases are checked, then for each attribute the normalized information gain is found; the attribute with the highest information gain is made the root node, and this process is applied recursively (C4.5 algorithm). J48 is an evolution and refinement of ID3 that accounts for unavailable values, continuous attribute value ranges, pruning of decision trees, rule derivation, and so on, which makes it more fruitful.

4.3.10 Random forest

The term Random forest comes from "randomized decision forests", first proposed by Tin Kam Ho at Bell Labs in 1995. Random forest is a popular and versatile machine learning classification algorithm, and it can work with many attributes and large datasets. Besides the class labels, it can also provide other important information about the dataset. It consists of bagging of un-pruned decision tree learners with randomized feature selection at each split. Decision trees, such as CART and regression trees, are among the most commonly used methods for data exploration. The forest grows each tree using randomly selected inputs, or combinations of inputs, at each node. Random forest is simple, relatively robust to noise, and gives quite good results on some datasets with fast learning. The accuracy of Random forest is as good as that of Adaboost, and sometimes it gives better results. A further advantage of this algorithm is that it is relatively faster than plain bagging, with better strength, variable importance and correlation estimates.
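As an illustration of how such a forest can be configured in the WEKA toolkit used in this thesis, the sketch below builds a Random Forest on a hypothetical metrics2010.csv export. The number of trees and the random-feature setting are example values, and the option names follow the WEKA 3.6/3.7-era command line (-I for trees, -K for random features per split); this is not the exact setup used for the reported results.

```java
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class RandomForestSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical export of the WEB METRICS CALCULATOR (20 metrics + Category).
        Instances data = DataSource.read("metrics2010.csv");
        data.setClassIndex(data.numAttributes() - 1);              // Category (good/bad)

        RandomForest rf = new RandomForest();
        // -I: number of trees; -K: random features per split (0 = WEKA's default heuristic)
        rf.setOptions(new String[] {"-I", "100", "-K", "0"});
        rf.buildClassifier(data);

        System.out.println(rf);                                    // model summary
    }
}
```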

Chapter Five: Result Analysis

In this research the following measures are used to evaluate the performance of each prediction model.

1. Sensitivity and Specificity: Sensitivity and specificity criteria are used to measure the correctness of the models. Sensitivity and specificity can be defined as follows:

illustration not visible in this excerpt

Sensitivity is also called the TPR (True Positive Rate), and specificity is equal to 1 − FPR, where FPR is the False Positive Rate.

2. ROC (Receiver Operating Characteristic): ROC analysis is used to evaluate the quality and performance of the prediction models. An ROC graph is basically a technique for organizing, visualizing and selecting classifiers on the basis of their performance (Fawcett, 2005). An ROC curve is plotted with 1 − specificity on the x-axis and sensitivity on the y-axis. We can select many cut-off points at which to calculate sensitivity and specificity, but the optimal cut-off point gives the maximum value of both sensitivity and specificity.
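In terms of true/false positives and negatives (TP, FP, TN, FN), the definitions referred to above are the usual ones:

```latex
\text{Sensitivity (TPR)} = \frac{TP}{TP + FN},\qquad
\text{Specificity (TNR)} = \frac{TN}{TN + FP},\qquad
\text{FPR} = 1 - \text{Specificity}
```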

5.1 Descriptive Statistics

Descriptive statistics give simple quantitative measures of the dataset. They provide information such as the minimum, maximum, mean and standard deviation of the datasets for the years 2010, 2011 and 2012.

Table 9: Descriptive statistics of year 2010 data

illustration not visible in this excerpt

Table 10: Descriptive statistics of year 2011 data

illustration not visible in this excerpt

Table 11: Descriptive statistics of year 2012 data

illustration not visible in this excerpt

5.2 Logistic Regression Analysis

Logistic regression is one of the statistical methods of prediction. Table 12 describes the prediction of web pages for all 3 models, and Table 13 describes the 10-fold cross-validation results for all 3 models.

Table 12: Website prediction of logistic regression for model 1, 2, and 3

illustration not visible in this excerpt

Table 13: 10-cross fold results using logistic regression for model 1, 2, and 3

illustration not visible in this excerpt
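A minimal sketch of how 10-fold cross-validation results of this kind can be produced with the WEKA Java API is shown below. The file metrics2010.csv and the class label "good" are assumptions for the example; the thesis obtained its numbers through the WEKA tool rather than this exact code.

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.Logistic;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CrossValidationSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical export of the WEB METRICS CALCULATOR (20 metrics + Category).
        Instances data = DataSource.read("metrics2010.csv");
        data.setClassIndex(data.numAttributes() - 1);                       // Category (good/bad)

        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(new Logistic(), data, 10, new Random(1));   // 10-fold cross-validation

        int good = data.classAttribute().indexOfValue("good");              // assumed class label
        System.out.println("Sensitivity (TPR, good): " + eval.truePositiveRate(good));
        System.out.println("Specificity (TNR, good): " + eval.trueNegativeRate(good));
        System.out.println("Area under ROC (good):   " + eval.areaUnderROC(good));
    }
}
```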

5.3 Bayes Net Analysis

Table 14 describes the prediction of web pages for all 3 models, and Table 15 describes the 10-fold cross-validation results for all 3 models.

Observation made from analysis:

- Out of 33 good websites, 11 are correctly predicted, and out of 57 bad websites, 53 are correctly predicted, which gives a sensitivity of 69.70 and a specificity of 68.40, respectively.

- The same procedure is applied for model 2 and model 3.

Table 14: Website prediction of Bayes net for model 1, 2, and 3

illustration not visible in this excerpt

Table 15: 10-cross fold results using Bayes net for model 1, 2, and 3

illustration not visible in this excerpt

Figure 5: ROC curve of Bayes Net for Model 1

Figure 6: ROC curve of Bayes Net for Model 2

Figure 7: ROC curve of Bayes Net for Model 3

5.4 Naïve Bayes Analysis

Observation made from analysis:

- Out of 33 good websites, 28 are correctly predicted, and out of 57 bad websites, 20 are correctly predicted, which gives a sensitivity of 66.70 and a specificity of 79.10, respectively.

- Out of 41 good websites, 31 are correctly predicted, and out of 68 bad websites, 34 are correctly predicted, which gives a sensitivity of 70.70 and a specificity of 71.60, respectively.

- Out of 31 good websites, 26 are correctly predicted, and out of 64 bad websites, 38 are correctly predicted, which gives a sensitivity of 74.20 and a specificity of 75.00, respectively.

Table 16: Website prediction of Naïve bayes for model 1, 2, and 3

illustration not visible in this excerpt

Table 17: 10-cross fold results using Naïve bayes for model 1, 2, and 3

illustration not visible in this excerpt

Figure 8: ROC curve of Naive Bayes for Model 1

illustration not visible in this excerpt

Figure 9: ROC curve of Naive Bayes for Model 2

illustration not visible in this excerpt

Figure 10: ROC curve of Naive Bayes for Model 3

5.5 Multilayer Perceptron Analysis

Observation made from analysis:

- Out of 33 good websites, 28 are correctly predicted, and out of 57 bad websites, 20 are correctly predicted, which gives a sensitivity of 81.80 and a specificity of 82.50, respectively.
- Out of 41 good websites, 30 are correctly predicted, and out of 68 bad websites, 44 are correctly predicted, which gives a sensitivity of 68.30 and a specificity of 67.20, respectively.
- Out of 31 good websites, 19 are correctly predicted, and out of 64 bad websites, 48 are correctly predicted, which gives a sensitivity of 67.70 and a specificity of 67.20, respectively.

Table 18: Website prediction of Multilayer Perceptron for model 1, 2, and 3

illustration not visible in this excerpt

Table 19: 10-cross fold results using Multilayer Perceptron for model 1, 2, and 3

illustration not visible in this excerpt

Figure 11: ROC curve of Multilayer perceptron for Model 1

illustration not visible in this excerpt

Figure 12: ROC curve of Multilayer perceptron for Model 2

illustration not visible in this excerpt

Figure 13: ROC curve of Multilayer perceptron for Model 3

5.6 Adaboost Analysis

Observation made from analysis:

- Out of 33 good websites, 26 are correctly predicted, and out of 57 bad websites, 46 are correctly predicted, which gives a sensitivity of 81.80 and a specificity of 82.50, respectively.

- Out of 41 good websites, 33 are correctly predicted, and out of 68 bad websites, 54 are correctly predicted, which gives a sensitivity of 80.50 and a specificity of 80.60, respectively.

- Out of 31 good websites, 27 are correctly predicted, and out of 64 bad websites, 55 are correctly predicted, which gives a sensitivity of 83.90 and a specificity of 85.90, respectively.

Table 20: Website prediction of Adaboost for model 1, 2, and 3

illustration not visible in this excerpt

Table 21: 10-cross fold results using Adaboost for model 1, 2, and 3

illustration not visible in this excerpt

Figure 14: ROC curve of Adaboost for Model 1

illustration not visible in this excerpt

Figure 15: ROC curve of Adaboost for Model 2

illustration not visible in this excerpt

Figure 16: ROC curve of Adaboost for Model 3

5.7 Decision Table

Observation made from analysis:

- Out of 33 good websites, 24 are correctly predicted, and out of 57 bad websites, 43 are correctly predicted, which gives a sensitivity of 72.70 and a specificity of 79.10, respectively.
- Out of 41 good websites, 33 are correctly predicted, and out of 68 bad websites, 54 are correctly predicted, which gives a sensitivity of 80.50 and a specificity of 80.60, respectively.
- Out of 31 good websites, 27 are correctly predicted, and out of 64 bad websites, 55 are correctly predicted, which gives a sensitivity of 83.90 and a specificity of 85.90, respectively.

Table 22: Website prediction of Decision table for model 1, 2, and 3

illustration not visible in this excerpt

Table 23: 10-cross fold results using Decision table for model 1, 2, and 3

illustration not visible in this excerpt

Figure 17: ROC curve of Decision table for Model 1

illustration not visible in this excerpt

Figure 18: ROC curve of Decision table for Model 2

illustration not visible in this excerpt

Figure 19: ROC curve of Decision table for Model 3

5.8 Nnge Analysis

Observation made from analysis:

- Out of 33 good websites, 23 are correctly predicted, and out of 57 bad websites, 49 are correctly predicted, which gives a sensitivity of 69.70 and a specificity of 86.00, respectively.
- Out of 41 good websites, 33 are correctly predicted, and out of 68 bad websites, 54 are correctly predicted, which gives a sensitivity of 80.50 and a specificity of 80.60, respectively.
- Out of 31 good websites, 27 are correctly predicted, and out of 64 bad websites, 55 are correctly predicted, which gives a sensitivity of 83.90 and a specificity of 85.90, respectively.

Table 24: Website prediction of Nnge for model 1, 2, and 3

illustration not visible in this excerpt

Table 25: 10-cross fold results using Nnge for model 1, 2, and 3

illustration not visible in this excerpt

Figure 20: ROC curve of Nnge for Model 1

illustration not visible in this excerpt

Figure 21: ROC curve of Nnge for Model 2

illustration not visible in this excerpt

Figure 22: ROC curve of Nnge for Model 3

5.9 Part Analysis

Observation made from analysis:

- Out of 33 good websites, 23 are correctly predicted, and out of 57 bad websites, 48 are correctly predicted, which gives a sensitivity of 69.70 and a specificity of 71.90, respectively.

- Out of 41 good websites, 33 are correctly predicted, and out of 68 bad websites, 48 are correctly predicted, which gives a sensitivity of 70.70 and a specificity of 71.60, respectively.

- Out of 31 good websites, 23 are correctly predicted, and out of 64 bad websites, 52 are correctly predicted, which gives a sensitivity of 74.20 and a specificity of 81.20, respectively.

Table 26: Website prediction of Part for model 1, 2, and 3

illustration not visible in this excerpt

Table 27: 10-cross fold results using Part for model 1, 2, and 3

illustration not visible in this excerpt

Figure 23: ROC curve of Part for Model 1

illustration not visible in this excerpt

Figure 24: ROC curve of Part for Model 2

illustration not visible in this excerpt

Figure 25: ROC curve of Part for Model 3

5.10 Bf-tree Analysis

Observation made from analysis:

- Out of 33 good websites, 23 are correctly predicted, and out of 57 bad websites, 42 are correctly predicted, which gives a sensitivity of 72.70 and a specificity of 71.90, respectively.
- Out of 41 good websites, 30 are correctly predicted, and out of 68 bad websites, 52 are correctly predicted, which gives a sensitivity of 75.60 and a specificity of 76.10, respectively.
- Out of 31 good websites, 23 are correctly predicted, and out of 64 bad websites, 48 are correctly predicted, which gives a sensitivity of 74.20 and a specificity of 75.00, respectively.

Table 28: Website prediction of Bf-tree for model 1, 2, and 3

illustration not visible in this excerpt

Figure 26: ROC curve of Bf-tree for Model 1

illustration not visible in this excerpt

Figure 27: ROC curve of Bf-tree for Model 2

illustration not visible in this excerpt

Figure 28: ROC curve of Bf-tree for Model 3

5.11 J-48 Analysis

Observation made from analysis:

- Out of 33 good websites, 23 are correctly predicted, and out of 57 bad websites, 45 are correctly predicted, which gives a sensitivity of 72.70 and a specificity of 77.20, respectively.
- Out of 41 good websites, 34 are correctly predicted, and out of 68 bad websites, 51 are correctly predicted, which gives a sensitivity of 80.50 and a specificity of 76.10, respectively.
- Out of 31 good websites, 24 are correctly predicted, and out of 64 bad websites, 54 are correctly predicted, which gives a sensitivity of 77.40 and a specificity of 78.10, respectively.

Table 30: Website prediction of J-48 for model 1, 2, and 3

illustration not visible in this excerpt

Table 31: 10-cross fold results using J-48 for model 1, 2, and 3

illustration not visible in this excerpt

Figure 29: ROC curve of J-48 for Model 1

illustration not visible in this excerpt

Figure 30: ROC curve of J-48 for Model 2

illustration not visible in this excerpt

Figure 31: ROC curve of J-48 for Model 3

5.12 Random Forest Analysis

Observation made from analysis:

- Out of 33 good websites, 28 are correctly predicted, and out of 57 bad websites, 48 are correctly predicted, which gives a sensitivity of 84.90 and a specificity of 84.20, respectively.

- Out of 41 good websites, 26 are correctly predicted, and out of 68 bad websites, 56 are correctly predicted, which gives a sensitivity of 80.50 and a specificity of 73.10, respectively.

- Out of 31 good websites, 24 are correctly predicted, and out of 64 bad websites, 54 are correctly predicted, which gives a sensitivity of 83.90 and a specificity of 79.70, respectively.

Table 32: Website prediction of Random forest for model 1, 2, and 3

illustration not visible in this excerpt

Table 33: 10-cross fold results using Random forest for model 1, 2, and 3

illustration not visible in this excerpt

Figure 32: ROC curve of Random forest for Model 1

illustration not visible in this excerpt

Figure 33: ROC curve of Random forest for Model 2

illustration not visible in this excerpt

Figure 34: ROC curve of Random forest for Model 3

5.13 Evaluation of model

For dimensionality reduction we used the CFS technique provided in the WEKA tool, which provides a subset of attributes. When CFS was applied to the 2010 data, the 21 variables were reduced to 5 variables, of which 4 are independent and 1 is the dependent variable. The independent variables selected for the 2010 data are Word Count, Link Count, Script Count and List Item Count. Similarly, for the 2011 dataset the selected independent variables are Link Count, Script Count, Inline Element Count, Load Time, Page Title Word Count and Unordered List Count, and for the 2012 dataset they are Word Count, Page Size, Script Count, Image Count, Load Time and Paragraph Count.

Observations made from the evaluation of the models after applying the CFS technique:

- Script Count is a very significant metric in all three yearly datasets, so it should be considered by designers for good website design.
- Word Count is common to the 2010 and 2012 datasets, and Link Count is common to the 2010 and 2011 datasets.
- The number of significant metrics either stays the same or increases over time.

The cut-off point of each model is computed using ROC analysis, which maintains a balance between websites predicted as good and bad. The area under the curve (AUC) of the ROC is a combined measure of sensitivity and specificity, and the ROC curve is plotted as sensitivity against 1 − specificity. So the area under the ROC curve is used for computing the accuracy of a prediction model.

Table 34 describes the prediction result of 10 machine learning techniques for model 1. Table 35 describes the prediction result of 10 machine learning techniques for model 2. Table 36 describes the prediction result of 10 machine learning techniques for model 3.

Table 34: Prediction results of 10 machine learning techniques of model 1

illustration not visible in this excerpt

Table 35: Prediction results of 10 machine learning techniques of model 2

illustration not visible in this excerpt

Table 36: Prediction results of 10 machine learning techniques of model 3

illustration not visible in this excerpt

We have employed logistic regression and machine learning techniques and evaluated their performance in predicting the quality of websites. The AUC of all the models built using the Random Forest technique is greater than the AUC of the corresponding models built using logistic regression and the other machine learning techniques (Bayes Net, Naïve Bayes, Multilayer Perceptron, Adaboost, Decision Table, Nnge, Part, Bf-tree and J-48).

Model 1, with respect to the 2010 dataset, has an AUC of 0.885 using the Random Forest technique, which is greater than that obtained using the other techniques, and the same trend is seen for the models for the years 2011 and 2012, with AUCs of 0.842 and 0.891, respectively. All the models performed best with the Random Forest classifier, which is reflected in their AUC values.

Both sensitivity and specificity should be high in order to predict good and bad websites well. The models built with the Random Forest technique have higher prediction performance in terms of sensitivity and specificity. For Model 1, the Random Forest classifier provides a sensitivity of 84.90 and a specificity of 84.20. Model 2 has a sensitivity of 80.50 and a specificity of 73.10. For Model 3, Random Forest provides a sensitivity and specificity of 83.90 and 79.70, respectively.

Thus, on an overall basis, in terms of sensitivity, specificity and area under the ROC curve, the best model for predicting the class of websites as good or bad is the Random Forest model. Random Forest is known to outperform more sophisticated classifiers on many datasets, achieving impressive results.

Chapter Six: CONCLUSION AND FUTURE WORK

The basic goal of this research is to categorize websites into good and bad on the basis of web page metrics. We employed 10 machine learning algorithms and the logistic regression method for classifying websites and compared the performance of the 10 machine learning techniques.

So we can finally summarize this work into three sub-parts:

1. We collected 294 web pages and their level-1 pages from various categories of the Pixel Awards website for the years 2010, 2011 and 2012.

2. Then we computed 20 web metrics of these webpages using WEB METRICS CALCULATOR which was developed in ASP.NET.

3. Then we applied logistic regression and 10 machine learning techniques (Bayes net, Naïve Bayes, Multilayer Perceptron, Adaboost, Decision Table, Nnge, Part, Bf-tree, J-48 and Random forest) to classify the websites and compared the accuracy of logistic regression and the 10 machine learning techniques.

Result of this report can be summarized as follows:

1. Script Count is a very significant metric in all three yearly datasets, so it should be considered by designers for good website design.

2. The most significant metrics in 2010 are Word Count, Link Count, Script Count and List Item Count. In 2011 the most significant metrics are Link Count, Script Count, Inline Element Count, Load Time, Page Title Word Count and Unordered List Count. In 2012 the most significant metrics are Word Count, Page Size, Script Count, Image Count, Load Time and Paragraph Count.

3. The performance of the Random Forest technique is better than that of all the other machine learning techniques and logistic regression under ROC analysis. The area under the curve for Random Forest ranges from 0.842 to 0.891.

6.1 FUTURE WORK

This research analysis was conducted on three yearly datasets and computed 20 web page metrics. The analysis should be repeated on larger and different datasets, as well as with more web page metrics, to generalize our results. Further, we plan to extend our research to all levels of web pages instead of only level-1 pages, and to define new web page metrics.

Bibliography

Americo, R. (2010). Websites Quality: Does It depend on the application Domain ? International Conference on the quality of Information and Communications Technology.

Calero, C., Ruiz, J., & Piattini, M. (2005). Classifying web metrics using the web quality model. Emerald Group Publishing, 227- 248.

Chi, E. H., Pirroli, P., & Pitkow, J. (2000). The scent of a site: A system for analyzing and predicting information scent, usage, and usability of a web site. ACM CHI 00 Conference on Conference on Human Factors in Computing Systems.

Dhyani, D., Ng, W., & Bhowmik, S. (2002). A survey of web metrics. ACM Computing Surveys.

Fink, D. (2001). Web Site Effectiveness: A Measure of Information and Service Quality. IRMA International Conference.

Group, M. W. (2005). qualitycommentary050314final. Retrieved from http://www.minervaeurope.org/publications/qualitycommentary/qualitycommentary050314final.pdf

Ivory, M. Y., Sinha, R., & Hearst, M. (2000). Preliminary findings on quantitative measures for distinguishing highly rated information-centric web pages. 6th Conference on Human Factors and the Web.

Ivory, M. Y., Sinha, R., & Hearst, M. A. (2001). Empirically Validated Web Page Design Metrics. SIGCHI Conference on Human Factors in Computing Systems (pp. 53-60). Washington.

Khan, K. M. (2008). Assessing Quality of Web Based System. IEEE/ACS International Conference on Computer Systems and Applications (AICCSA) (pp. 763-769).

Leung, K. M. (2007). naiveBayesianClassifier.pdf. Retrieved 06 04, 2013, from http://cis.poly.edu/~mleung/FRE7851/f07/naiveBayesianClassifier.pdf

Li, P., & Yamada, S. (2009). Automated Web Site Evaluation - An Approach Based on Ranking SVM. IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technologies.

Li, P., & Yamada, S. (2010). Extraction of Web Site Evaluation Criteria and Automatic Evaluation. Journal of Advanced Computational Intelligence and Intelligent Evaluation.

Matas, J., & Sochman, J. (n.d.). Adaboost_matas. Retrieved 06 06, 2013, from http://www.robots.ox.ac.uk/~az/lectures/cv/adaboost_matas.pdf

Melody, I. y. (2001). An empirical foundation for automated web interface evaluation. ACM digital library.

Mendes, E., Mosley, N., & Counsell, S. (2003). Early web size measures and effort prediction for web costimation. IEEE International Software Metrics Symposium(METRICS ’ 2003) (pp. 18-29). Sydney: IEEE CS Press.

Mich, L., Franch, M., & Gaio, L. (2003). Evaluating and Designing the Quality of Web Sites. IEEE Multimedia (pp. 34-43). Ieee computer society.

Olsina, L., & Rossi, G. (2002). Measuring Web Application Quality with WebQEM. Ieee computer society, 20-29.

Pollilo, R. (2005). Un modello di qualità per i siti web. AICA, 32-44.

Scholtz, J., Laskowski, S., & Downey, L. (1998). Developing Usability Tools and Techniques for Designing and Testing Web Sites. 4th Conference on Human Factors & the Web.

Signore, O. (2005). A comprehensive model for Web sites quality. Seventh IEEE International Symposium on Web Site Evolution (pp. 30-38). Budapest.

Thimbleby, H. (1997). Gentler: A tool for systematic web authoring. International Journal of Human- Computer Studies, 139-168.

Velayathan, G., & Yamada, S. (2006). Behavior-Based Web Page Evaluation. IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT Workshops).

Zaharie, D., Perian, L., & Negru, V. (2011). A view inside the classification with non-nested generalized exemplars. IADIS European Conference on Data Mining.

Zorman, M., Podgorelec, V., Kokol, P., & Babic, S. H. (1999). Using machine learning techniques for automatic evaluation of Websites. Third International Conference on Computational Intelligence and Multimedia Applications ICCIMA (pp. 169-173). New Delhi: IEEE Computer Society Press.
