2. Assessing Selection Methods
3. The ‘Classic Trio’
3.1 Application Forms
3.2 References
3.3 Interviews
4. Alternative Methods
4.1 Personality Questionnaires
4.2 General Mental Ability and Aptitude Tests
4.3 Biodata Inventories
4.4 Work Sample Tests
4.5 Peer Assessments
4.6 Assessment Centres
5. Job Analysis Data and the Choice of Selection Methods
6. Resistance to the Introduction of Alternative Selection Methods
Two simple facts force any organisation to select its employees carefully. Firstly, people differ widely in their abilities, knowledge, interests and personality. Secondly, the jobs provided by the organisation vary in their demands (Robertson and Smith 1986). Choosing the ‘right’ person for a job thus becomes a crucial factor in ensuring an effective workforce and a competitive advantage.
The objective of selection processes is to find the most capable and suitable candidate, i.e., the candidate who is most likely to deliver the best performance on the job. To achieve this objective, a wide range of selection methods has been developed. Yet despite this variety, many organisations, if not the majority, stick to the “classic trio” of selection (Cook 1993: 15) and rely on application forms, references and unstructured interviews only. Certainly, the ‘classic trio’ has some undeniable advantages that make it attractive to many organisations. At the same time, however, it is criticised for a number of considerable shortcomings and for low efficiency compared to other selection methods.
In the following, the advantages and disadvantages of the ‘classic trio’ will be briefly sketched. The main criteria guiding the analysis will be the reliability, validity, practicality, generality, fairness and costs of the selection methods. Afterwards, some alternative selection methods will be reviewed and their competitive advantages over the ‘classic trio’ outlined.
We will analyse how job analysis data can help organisations to choose appropriate selection methods. Finally, a number of possible reasons for resistance to the implementation of alternative selection methods will be considered and an approach to overcome this resistance will be briefly sketched.
2. Assessing Selection Methods
Although the popularity of the ‘classic trio’ as organisations’ first-choice selection procedure seems unbroken, Cook predicts the future demise of these “old-fashioned, inefficient methods” (1993: 26). He maintains that the recruitment procedure composed successively of application forms, free-form references and unstructured interviews is simply less efficient than alternative selection methods. To understand this criticism, we will look at the reliability, validity, practicality, generality, fairness, and costs of the three classical steps of recruitment.
The reliability of a selection method refers to its “consistency of measurement” (Arvey 1979: 26). The underlying question is: do we get the same results when we measure the same thing twice? If so, the reliability of the test method is high. If the results of the two measurements vary considerably, the test reliability is low.
A high validity, on the other hand, assures that the selection test actually measures what it is supposed to measure, and that it predicts what it claims to predict. Various types of validity can be found in the literature, of which the most important are “face validity”, “content validity”, and “criterion validity” (Cook 1993: 197-199).
Face validity indicates the extent to which a test appears to non-experts as plausible and related to the job (Arvey 1979: 34). Content validity reflects the direct correspondence between the test’s content and the specific job tasks and requirements. A test method is content valid “if the items on the test directly reflect observed behavior skills and knowledge considered essential for adequate job performance” (Arvey 1979: 34). The correspondence between test and job content need not be obvious to the non-expert.
The degree of criterion validity provides information about the test’s capacity to predict certain aspects of the candidate’s future performance. It is therefore sometimes also called “predictive validity” (Cook 1993: 199). Most validity studies in the HRM literature are based on criterion validity.
Looking at the fairness of a selection test, one is concerned with the question of whether the test is biased and therefore possibly causes adverse impact on certain groups of applicants.
The practicality of a selection method mainly depends on the time and effort necessary for carrying it out. A crucial practicality criterion may also be the extent of resistance to test methods. In general, the practicality is high when the test can easily be introduced (Cook 1993: 257).
The generality of a selection method indicates the extent of its applicability. It gives information about how many different jobs and groups of employees the test can be used for (Cook 1993: 253).
Finally, cost is an important criterion for the choice of selection methods. The costs of tests may correlate directly with their practicality and generality, but that is not necessarily the case.
3. The ‘Classic Trio’
In the following, the three selection methods of the ‘classic trio’ are assessed by the criteria outlined in the preceding chapter.
3.1 Application Forms
The first method of selection within the classical three-step recruitment procedure is the application form. On the basis of application forms a first shortlist is drawn up and a number of applicants, often the majority, are rejected. The practicality, generality and costs of this procedure are obviously very favourable. But its reliability and validity are highly questionable, though no systematic studies are at hand yet. A survey of British application forms identified only a few universal questions, which indicates that the standardisation of application forms is hardly developed. Moreover, reviews of application forms revealed questions that were either biased, and could create adverse impact on minorities, or were not job-related at all (Cook 1993: 15-16). Apart from dubious reliability and validity, the transparency of selection processes based on application forms is often weak, as the selection criteria are not made explicit.
3.2 References
Applicants traditionally have to submit two or three references or, as they are often called, letters of recommendation. Usually, the form and content of references are free, i.e., the referee can write whatever they feel like writing about the applicant. Other forms of references are structured by questions, checklists or ratings.
The low reliability of references is indicated in a number of studies (Cook 1993: 73). Referees do not always agree on the candidate’s personality and abilities, or they stress completely different aspects. Systematic errors are inherent even in standardised ratings (Cook 1993: 76). The halo effect describes the phenomenon that the different rating dimensions are not independent but influence each other. Additionally, referees show a tendency to avoid poor ratings, using only the middle points of the scales but not the extremes.
However, the popularity of references can be explained by their high practicality, their high generality, and their low costs (Cook 1993: 258). Nevertheless, serious doubts about their reliability and validity remain. “Most research … finds references are unreliable, and have little or no validity” (Cook 1993: 83). Hunter and Hunter (1984: 90) estimate the mean validity of references at 0.26, whereas Robertson and Smith (1989: 93) present validity results between 0.17 and 0.26.
 Further types of validity mentioned by Cook are “faith validity”, “construct validity”, “factorial validity”, and “synthetic validity” (1993: 196-204).
 Among them are Hunter and Hunter (1984) and Schmitt et al. (1984), which are referred to below. In this essay, all figures on validity refer to criterion validity.