
Measuring the performance of UK Modern Language Departments

An application of Data Envelopment Analysis

Essay, 2004, 33 pages


Excerpt

Contents

1. Introduction

2. Data Envelopment Analysis

3. Methodology and Results
3.1 Data
3.2 Data Envelopment Analysis (DEA)
3.3 Advantages and Limitations of Data Envelopment Analysis

4. Conclusion

References

Appendix
The Guardian University Ranking
Using the tables
University Statistics
Data file
Comparison Additive Model – output-oriented CCR-Model
Weight restrictions
Correlations
Comparison of rankings

Print-out DEAWIN

Additive Model and output-oriented CCR-Model

Additive Model and output-oriented CCR-Model (without teaching assessment)

Weight restrictions

1. Introduction

Based on the Guardian University Ranking 2002 [1], Data Envelopment Analysis will be introduced as an approach to producing a league table ranking 85 modern language departments at UK universities. The results of applying this method will form the basis for a general discussion of Data Envelopment Analysis, elaborating its advantages and disadvantages in comparison with scoring approaches.

2. Data Envelopment Analysis (DEA)

Data Envelopment Analysis (DEA), developed by Charnes, Cooper and Rhodes (1978), is a linear programming procedure for a frontier analysis of inputs and outputs evaluating the relative efficiency of decision-making units (DMUs). DEA measures the technical efficiency of a DMU as a proportionate distance from the efficient surface, (Joro et al., 2002). It generates an aggregate performance measurement (APM) in the form of an efficiency score for the unit under investigation, (Dyson et al., 1990; Sarrico and Dyson, 2000). An efficiency score of less than one is assigned to inefficient units indicating the possibility for performance improvement, (Anderson and Petersen, 1993).

3. Methodology and Results

3.1 Data

Data underlying this analysis was subdivided into inputs and outputs by adopting a student’s perspective [2, 3]. Inputs comprise university spending per student and the average A-level/Highers score. University spending per student was assigned to the inputs as it represents a kind of “investment” in students; a categorisation as an output, as in the study by Sarrico et al. (1997), also seems plausible, however. Classifying the A-level/Highers score as an output would implicitly involve the (for this analysis undesired) assumption that the higher the entry qualification, the more desirable the university is to the applicant. This relationship, however, only holds true for strong applicants, (Sarrico et al., 1997).

[Illustration not included in this excerpt]

The student-staff ratio, the percentage of graduates with two-ones and firsts, the percentage of graduates employed (six months after graduating), the percentage of graduates unemployed (six months after graduating), the student number (full-time undergraduates), the percentage of graduates in further study (postgraduates), and the teaching assessment form the outputs.

Although faculty-wide figures were used where departmental figures were unavailable, sixteen universities still lacked data and would have had to be excluded from the analysis, as the DEA program allows neither zero values nor blank cells. Missing values were therefore replaced with the average score.
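The averaging step can be sketched in a few lines. The function below is a hypothetical illustration only; the name `impute_means` and the use of `None` for blank cells are assumptions, not taken from the DEA program used in the study.

```python
# Hypothetical sketch of mean imputation for blank cells: the DEA program
# accepts neither zero values nor blanks, so each missing entry
# (represented here as None) is replaced by its column average.
def impute_means(rows):
    """rows: list of records, one per university; None marks a missing value."""
    cols = list(zip(*rows))
    means = [sum(v for v in col if v is not None) /
             sum(v is not None for v in col) for col in cols]
    return [[means[i] if v is None else v for i, v in enumerate(row)]
            for row in rows]
```

For example, `impute_means([[55.0, None], [65.0, 3.2]])` fills the blank cell with the column average 3.2.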

3.2 Data Envelopment Analysis (DEA)

As suggested in the literature for any application of DEA (Johnes and Johnes, 1993), a sensitivity analysis [4] was conducted, testing the sensitivity of the results to changes in the input-output specification by comparing the results of the Additive Model – a variant of the BCC-Model – with those derived from the output-oriented CCR-Model.

The Additive Model evaluates performance by simultaneously minimising inputs and maximising outputs, (Joro, 1998). The CCR-Model, in contrast, indicates the efficiency of a unit by measuring the extent to which the output of that unit can be increased without increasing its inputs, (Sarrico and Dyson, 2000). The results of the two models (run in DEAWIN) were compared in a correlation analysis.
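For illustration, the output-oriented CCR-Model can be written as a small linear programme: for the unit under evaluation, maximise the factor φ by which all its outputs can be scaled up while a constant-returns combination of the observed units stays within its inputs; the efficiency score is then 1/φ. The sketch below uses `scipy.optimize.linprog` and toy data – both are assumptions for illustration, not the DEAWIN set-up used in the study.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_output_efficiency(X, Y, o):
    """Output-oriented CCR score for DMU o: how far its outputs can be
    scaled up (factor phi) while a combination of observed units stays
    within its inputs. Returns (phi, efficiency) with efficiency = 1/phi."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n = X.shape[0]
    # decision variables: [phi, lambda_1 .. lambda_n], all >= 0
    c = np.zeros(n + 1); c[0] = -1.0                   # maximise phi
    rows, rhs = [], []
    for i in range(X.shape[1]):                        # sum_j lam_j x_ij <= x_io
        rows.append(np.concatenate([[0.0], X[:, i]])); rhs.append(X[o, i])
    for r in range(Y.shape[1]):                        # phi*y_ro <= sum_j lam_j y_rj
        rows.append(np.concatenate([[Y[o, r]], -Y[:, r]])); rhs.append(0.0)
    res = linprog(c, A_ub=np.array(rows), b_ub=rhs, bounds=(0, None))
    phi = res.x[0]
    return phi, 1.0 / phi
```

With one input and one output, a unit producing 3 units of output from 3 units of input, alongside peers with an output/input ratio of 2, receives φ = 2 and an efficiency score of 0.5.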

[Illustration not included in this excerpt]

The derived Pearson correlation [5] of 0.736 is highly significant (p = 0.000, at a confidence level of 99%), which supports the conclusion that the rankings produced by the two models do not differ significantly from each other.
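The comparison statistic itself is straightforward to compute. The pure-Python sketch below is illustrative; the function name and any scores passed to it are assumptions, not the actual efficiency scores from the study.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equally long lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)
```

Perfectly agreeing score lists give a correlation of 1.0; the 0.736 reported above therefore indicates strong, though not perfect, agreement between the two models.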

Using the output-oriented CCR-Model reduced the number of efficient universities from thirty-five (Additive Model) to nineteen. The universities labelled ‘efficient’ by the output-oriented CCR-Model were also classified as efficient by the Additive Model [6].

Furthermore, the output-oriented CCR-Model was run with four different weight restrictions on virtual outputs [7]. As Sarrico et al. (1997) recommend in their study, a minimum virtual weight of 5% was imposed on the outputs to ensure that none of the categories is disregarded in the evaluation of a university. The correlations [8] between versions two and three (version one is infeasible) and between versions two and four are statistically significant at a significance level of 0.01; the correlation between versions three and four, however, is not.
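Mechanically, a minimum virtual weight enters the multiplier form of the CCR-Model as extra linear constraints: each output's virtual weight u_r·y_ro must make up at least 5% of the unit's total virtual output. The sketch below (input-oriented multiplier form, `scipy.optimize.linprog`, toy data – all assumptions for illustration, not the DEAWIN versions run in the study) shows where such a restriction sits.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_multiplier(X, Y, o, min_virtual=0.0):
    """Input-oriented multiplier-form CCR efficiency of DMU o.
    min_virtual is the minimum share each output's virtual weight
    u_r * y_ro must take of the total virtual output (0.05 = 5%)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [v_1..v_m, u_1..u_s], all >= 0
    c = np.concatenate([np.zeros(m), -Y[o]])             # maximise u . y_o
    A_eq = np.concatenate([X[o], np.zeros(s)])[None, :]  # v . x_o = 1
    rows = [np.concatenate([-X[j], Y[j]]) for j in range(n)]  # u.y_j <= v.x_j
    for r in range(s):  # min_virtual * (u . y_o) - u_r * y_ro <= 0
        row = np.concatenate([np.zeros(m), min_virtual * Y[o]])
        row[m + r] -= Y[o, r]
        rows.append(row)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun  # efficiency score in (0, 1]
```

Without the restriction a unit can put all weight on its single strongest output; the restriction forces every output to contribute to the score, which is how the evaluation guarantees that no category is disregarded.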

3.3 Advantages and Limitations of Data Envelopment Analysis

Data Envelopment Analysis uses linear programming, which is mathematically more sophisticated than a scoring method and allows inputs and outputs to be measured in different units (Charnes et al., 1994; Roberts, 1999). The greater mathematical complexity of the method yields potential for more sophisticated analysis, but it also brings certain limitations. DEA can easily be expanded to handle multiple cost drivers and multiple types of costs while assigning a single efficiency score to each DMU, (Cubbin and Tzanidakis, 1998). However, this can be computationally intensive, as a linear programme is run for each DMU, (Johnes and Johnes, 1995).

Unlike many other higher mathematical procedures, it is not affected by a relatively large number of cost drivers in comparison to the sample size and does not suffer from multi-collinearity, since its algorithm is not sensitive to high correlation among the cost drivers, (Roberts, 1999). DEA does not require a prior specification of the shape of the curve to be fitted, (Cubbin and Tzanidakis, 1998). However, extreme values act as attraction poles for the fitted curve: they pull the curve towards them, minimising the distance between themselves and the curve, which makes them appear efficient, (Roberts, 1999).

Homogeneity is a very strong assumption of this method, since DEA requires that similar units are compared, (Roberts, 1999). Scoring approaches, in contrast, do not make any assumptions about the frequency distribution of the data. Since DEA is a non-parametric technique, statistical tests for homogeneity are difficult, (Roberts, 1999). Additionally, DEA assumes that at least one DMU is technically efficient so that the efficiency frontier can be defined; units on the frontier are thus deemed efficient simply because no more efficient units exist in the sample, (Johnes and Johnes, 1993).

The selection of the inputs and outputs is extremely important, since it directly affects the discriminatory power of DEA. For discrimination to be effective, the number of inputs and outputs should be small compared to the total number of units, (Roberts, 1999). Dyson (1996) therefore suggests, as a rule of thumb, that the number of units should be greater than or equal to twice the product of the numbers of inputs and outputs. Banker et al. (1989) and Golany and Roll (1989) propose other rules of thumb.
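For the present data set this check is simple arithmetic: with 85 departments, two inputs, and seven outputs, Dyson's rule requires at least 2 × 2 × 7 = 28 units, which is comfortably satisfied. A minimal sketch (the function name is illustrative):

```python
def dyson_rule_ok(n_units, n_inputs, n_outputs):
    """Dyson's rule of thumb: number of units >= 2 * inputs * outputs."""
    return n_units >= 2 * n_inputs * n_outputs
```

Here `dyson_rule_ok(85, 2, 7)` returns True, so the discriminatory power of the analysis is not undermined by the number of variables chosen.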

The relative efficiency score achieved by each DMU can be sensitive to the number of inputs and outputs specified, (Johnes and Johnes, 1993; Sexton, 1986; Nunamaker, 1985). The more input and output variables are included in the model, the higher the number of DMUs with an efficiency score equal to unity (Nunamaker, 1985; Tomkins and Green, 1988), since efficiency scores cannot decrease when additional variables (inputs or outputs) are added to the model [9], (Sexton et al., 1986).

Through the choice of inputs and outputs to be included in a DEA, it is often possible to make virtually any DMU appear efficient, i.e. a unit may appear efficient purely because of the pattern of its inputs and outputs, (Johnes and Johnes, 1995; Srinivas, 2000; Tomkins and Green, 1988; Roberts, 1999). A DMU may achieve a high efficiency score merely by being different (in its input or output mix) from other units; it may serve a niche quite well without being a good overall performer, (Johnes and Johnes, 1993; Tomkins and Green, 1988; Roberts, 1999).

As DEA allows flexibility in the weights used in determining the aggregate measure, weight restrictions imposing maximum and minimum weights can be employed to ensure that relevant factors are represented and to prevent over-representation, (Sarrico and Dyson, 2000; Sarrico et al., 1997; Roberts, 1999). Since DEA does not define a unique set of input and output weights, it is not possible to evaluate the marginal impact of each input on each output, (Johnes and Johnes, 1995).

4. Conclusion

The implementation of Data Envelopment Analysis to produce a league table [10] comparable with the Guardian’s University Ranking can be judged successful, especially as the upper parts of the two rankings (the first twenty universities) agree on which are the best modern language departments in the UK. However, there are slight variations in the order of the departments.

Data Envelopment Analysis proves to be a useful technique for assessing the (technical) efficiency of university departments as it allows for comparing homogeneous entities (i.e. decision-making units that use the same inputs to produce the same outputs) in the context of multiple incommensurate outputs and inputs, (Sarrico and Dyson, 2000; Tomkins and Green, 1988). As this method measures the ‘relative’ efficiency of units it functions as an indicator for performance relative to other units and not to a ‘theoretical maximum’, (Tomkins and Green, 1988; Roberts, 1999). Thus, it provides each inefficient DMU (modern language department) with a reference set of relatively efficient DMUs which can serve as a benchmark for improvement, (Srinivas, 2000).

The relatively inefficient units (departments ranked very low) should try to emulate the best observed practice (the university department ranked number one) by varying their outputs and inputs – provided this is within the range of feasible actions for the department – so as to become relatively efficient units themselves, (Tomkins and Green, 1988).

As a consequence, heads of department or the university management can build on the insights gained from the analysis and the resulting ranking, as the strengths and weaknesses of their unit(s) are identified by comparison with other departments elsewhere in the country, (Johnes and Johnes, 1995). This opens up the possibility of transferring resources from poor performers to more promising ones, hence increasing aggregate performance, (Sarrico and Dyson, 2000).

References

ANDERSON, P. and N. C. PETERSEN (1993) A Procedure for Ranking Efficient Units in Data Envelopment Analysis, Management Science, Vol. 39 (10), pp. 1261-1264.

BANKER, R., A. CHARNES, W. COOPER, J. SWARTS and D. THOMAS (1989) An Introduction to Data Envelopment Analysis with Some of its Models and their Uses, Research in Governmental and Non-profit Accounting, Vol. 5, JAI Press, pp. 125-163.

BEASLEY, J. E. (1990) Comparing university departments. OMEGA - The International Journal of Management Science, Vol. 18 (2), pp. 171-183.

CHARNES, A., W. COOPER, and E. RHODES (1978) Measuring the Efficiency of Decision Making Units, European Journal of Operational Research, Vol. 2 (6), pp. 429-444.

CHARNES, A., W. COOPER, A. LEWIN, and L. SEIFORD (1994) Data Envelopment Analysis: Theory, Methodology and Applications, Boston: Kluwer Academic Publishers.

CUBBIN, J. and G. TZANIDAKIS (1998) Techniques for Analysing Company Performance, Business Strategy Review, Vol. 9 (4), pp. 37-46.

DOYLE, J. R. and A. J. ARTHUS (1995) Judging the quality of research in business schools: the UK as a case study, OMEGA - The International Journal of Management Science, Vol. 23 (3), pp. 257-270.

DYSON, R., E. THANASSOULIS and E. BOUSSOFIANE (1990) Data Envelopment Analysis, Operational Research Tutorial Papers, Hendry, L. and R. Eglese (Eds.), The Operational Research Society, London, pp. 13-28.

FUNG, K. K. (1995) Data Envelopment Analysis – Another Paretian Trap? Economics of Education Review, Vol. 14 (3), pp. 315-316.

GOLANY, B. and Y. ROLL (1989) An Application Procedure for DEA, OMEGA - The International Journal of Management Science, Vol. 17 (3), pp. 237-250.

JOHNES, J. (1993) Measuring Teaching Efficiency in Higher Education: An Application of Data Envelopment Analysis to Graduates from UK Universities 1993, LUMS working paper, February (electronically available at http://lums.co.uk/publications).

[...]


[1] See Appendix I.

[2] See Appendix IV.

[3] See Appendix IX.

[4] See Appendix XI.

[5] See Appendix XVII.

[6] See Appendix XI.

[7] See Appendix XIII.

[8] See Appendix XIV and XVII.

[9] See Appendix (data without teaching assessment score).

[10] See Appendix XVIII – XX.

Details

Pages
33
Year
2004
ISBN (eBook)
9783640597581
File size
631 KB
Language
English
Catalog Number
v147739
Institution / College
Swansea University
Grade
First