UNIT 5: LEARNING OUTCOMES On completion of this unit students will be able to:
Discuss the different types and styles of selection interviews
Describe the various objectives of selection interviews
Discuss the reliability, validity, fairness and effectiveness of interviews
Describe some of the theoretical and practical problems associated with the interview as a selection tool
Describe a number of ways in which the interview may be improved as a selection tool
5.1. INTRODUCTION TO INTERVIEWS AS A SELECTION TOOL This unit will examine the properties of one of the most frequently used tools in the selection process: the interview. The unit begins by describing the diversity of practice that characterises selection interviewing and highlights the importance of the various features of the selection interview in determining its reliability, validity and fairness. The unit will also examine how interviews can be best utilised as a selection tool.
Even when elaborate selection processes (such as assessment centres) are used, interviews are usually included as one of a battery of assessment tools. The interview is flexible and practical and can be used at various stages throughout the selection process. One important feature of the selection interview examined in this unit is that it can provide an opportunity for the candidate to form an impression about their potential employer.
There is huge diversity in the design and execution of selection interviews. Interviews vary in terms of their structure, duration, the style of questioning employed, the way candidate responses are evaluated, and a host of other dimensions related to interview content and the collection of data. Some interviews are designed to include the interviewer as a participant observer, while others are more objective and closely resemble a 'verbal' psychometric test.
The way an interview is designed and executed has a huge impact upon its reliability, validity and fairness. Structured interviews with highly job related content produce some of the highest reliability and validity coefficients. Of course, reliability, validity and fairness can also be affected by a variety of biases and the limitations associated with using human judges to record and score behaviour. However, many of these problems can be addressed if the interview process is properly designed and carried out. The unit closes by examining best practice in selection interviewing, summarising the key strengths of interview as a selection tool.
A quick task: This task is designed to help you to understand the importance of considering the candidate perspective when examining selection interviews. Think of your own experience of being interviewed for a job (just pick your most recent interview if you have had more than one). What questions were you asked? How do you think your answers to these questions related to your potential to be good at the job you applied for? Did you think the interview was fair or unfair (and why)? Did you begin to form an impression of the company from your experiences of the interview - if so, was that impression a good one or a bad one?
5.2. THE USE OF SELECTION INTERVIEWS Those of us who have been a candidate in a selection process will have a good understanding of the nature of the selection interview. However, it is useful to begin this unit with some basic definitions of the selection interview. Three are offered below:
a face to face interaction between two or more individuals with a motive or a purpose
a conversation with a purpose
a procedure designed to predict future performance based on applicants’ oral response to oral enquiries
The interview is virtually ubiquitous in selection with surveys showing that over 90% of UK and US organisations use interviews for management selection. In larger organisations, interviews are often one of a battery of selection tools used in the selection process. However, for small and medium sized enterprises, the interview plays a major part in the selection process, with perhaps biodata (collected via an application form) being the only other information collected and used in the decision-making process. This means that any occupational psychologist who becomes involved in selection needs to have a strong understanding of the selection interview.
The popularity of the interview is largely because of its flexibility as a selection tool. Interviews can be used to examine candidates' knowledge, skills, abilities and attitudes. In other words it can be used to assess many different competencies.
An interview can also be used in a number of ways in the selection process. In recruitment it provides a mutual preview allowing the employer to collect data about a potential employee, but at the same time allowing candidates to gather data about a potential future employer. It can also be used as part of the negotiation process where issues such as training and development needs and salary requirements can be discussed. The interview is often crucial to ensuring that selection is a two-way process.
Interviews are perceived to be relatively cheap to conduct and many managers feel that they have the knowledge and skills to conduct an interview. The widespread use of interviews inevitably results in a great deal of diversity in practice. Not all interviews are the same, and the way they are designed and carried out can have a major impact on their reliability, validity and fairness.
A quick task: Here are four qualities that one could assess during a selection interview: numerical ability, experience of selling, conscientiousness and interpersonal skills. What questions could you ask to gather data on each? What might compromise the reliability of this data? Is there anything you could do in the design and execution of the interview to enhance reliability?
5.3. THE DIVERSITY OF SELECTION INTERVIEWING In order to describe the diversity of selection interviewing we will examine four features of the selection interview: interview structure, interview questions, number of interviewers and the purpose of the interview.
5.3.1. Interview structure When you are reading about selection interviews one of the most important issues you will encounter is that of interview structure. Most selection interviews are relatively unstructured, and it is highly likely that you have encountered this type of interview. Unstructured interviews are not carefully planned or tightly controlled (e.g. different candidates may be asked different questions), and the link between the questions and job performance is not always clear. This can cause major problems. If nothing else, a lack of structure compromises the link between the content of the interview and the job analysis. It also means that different candidates are being assessed in different ways, compromising the standardisation of the assessment.
Structured interviews are systematic (often because they are developed from the results of a job analysis), controlled and have a logical and consistent order of questions. This structure drives the management and execution of the interview and has a major impact on the data collected. All candidates are required to answer the same questions delivered in the same way.
Structuring an interview is one way of forging strong links between the results of a job analysis and the interview content. The competencies (knowledge, skills, abilities, attitudes and other qualities) identified in the job analysis can be probed through bespoke interview questions. Of course, not all competencies can be measured in interviews, but many can be and this makes interviews an extremely flexible and useful part of the selection process.
5.3.2. Interview questions Pulakos and Schmitt (1995) examined two types of structured interview questions that are commonly used to measure competencies: experience-based situational questions (sometimes referred to as the patterned behaviour description interview) and job-related situational questions (sometimes referred to as situational interviews).
Experience based situational questions focus on situations that have happened in the past that have relevance to the future demands of the job role. An example of such a question is:
"Describe a time when you were faced with completing an important, but boring task. How did you deal with this situation?" The candidate's answer would then been recorded and examined for negative and positive behavioural indicators of the competencies being assessed by that question. So if the question was examining a competency 'drive and determination' the candidate's answer would be examined for positive indicators such as developed ways of maintaining interest in the task or developed strategies for maintaining concentration. A negative indicator in a candidate's answer could include loses interest in routine tasks. Job relevant situational questions require the candidate to apply their qualities to dealing with situation that might occur within the job role. These can be past-orientated (experience-based), for example:
"Part of this job involves dealing with dissatisfied customers. Can you tell me about a time when you have dealt effectively with a dissatisfied customer. What did you do to manage the situation effectively? " Or, these questions can be future orientated, for example:
"You are the personnel officer in a manufacturing plant and the heating system is not working properly. The temperature has dropped below the legal minimum, and shop floor staff are threatening to walk out any minute. Production is already behind. What do you do?" Pulakos and Schmitt (1995) found that experience based questions may be the better predictors of future performance (although future-orientated job-related questions showed modest predictive validity). Reviewing the literature, Salgado (1999) confirmed that in terms of predictive validity there appeared to be a substantial advantage in using questions that were experience based and required the candidate to describe their behaviour in job-related situations (validity co-efficient of around .5 could be achieved).
It is important to remember that structure is not a dichotomy, it is a continuum. There is often a degree of flexibility in structured interviews (e.g. a part of the interview where a candidate asks questions of the interviewer(s)). Therefore, you will often encounter terms such as 'highly structured' or 'highly unstructured' when you are reading about interviews. The main thing to note for now is that the degree of structure has significant implications for the effectiveness of the interview as a selection tool: we will return to discuss this issue in some depth later in this unit.
5.3.3. Number of interviewers Some interviews are carried out by a single interviewer, others by a panel of several interviewers. There are advantages and disadvantages to both approaches. With a single interviewer there is a chance that their biases have a major impact on the process, and they may experience cognitive overload as a result of conducting the interview and collecting data simultaneously.
The panel interview is frequently justified by claims that it reduces biased decision making. However, Bayne and Fletcher (1983) found that interview panels do not make significantly better decisions than individual interviewers. One reason for this is that a board has to come to an agreed decision, and the decision-making process introduces the variable of a group dynamic into the situation. The power and politics prevalent within such a group may mean that one powerful individual on an interview panel exerts an influence on the selection decision almost equivalent to the influence they would have if conducting the interview alone. Taking one possible scenario, if the chair of the interview panel is a very senior person within the organisation, then the power within the panel will not be equally balanced.
A significant advantage of using more than one interviewer is that the cognitive load placed on each interviewer is less. This means that one interviewer can take notes while the other manages the questioning process.
5.3.4. Interview purpose Interviews can also occupy different positions in the selection process. An interview may occur before more in-depth assessments (such as an assessment centre) or it may occur at the very end of the selection process as a final decision-making tool (e.g. after an assessment centre). This is important because interviews are often used to explore, in more depth, data gathered from other selection methods (e.g. to discuss the findings from a personality questionnaire).
Interviews can also be guided by different approaches to psychology. Some view interviews from a social perception perspective. From this perspective the interview is seen as a subjective experience for all parties involved. The interviewer is not seen as a detached observer of candidate performance, but rather as a participant observer whose thinking and behaviour shapes the process and its outcomes. This perspective on interviewing focuses on the formation of mutual expectations or the start of a psychological contract between the candidate and the interviewer (i.e. the employer's representative). To meet these requirements a low degree of structure is often needed.
However, most of the research into selection interviews is grounded in the psychometric approach. This approach sees the interview as akin to a psychometric instrument, or test, with a detached observer obtaining and interpreting a sample of candidates' behaviour. This approach views the interview as an objective measure, with rational decision-making being based on evidence and performance 'scores'. To meet these requirements a high degree of structure is required in the interview.
This distinction between the social perception and psychometric approach is important because it sets the research agenda. Most researchers have focused on examining how the interview can be improved as a psychometric measure. There has been less research on the impact of the interview from a social perception perspective. At this stage it is important to be aware that the interview can serve either purpose, or both. Unfortunately many organisations fail to understand this distinction, leading to interviewing practice that has poorly defined and executed objectives.
5.4. RELIABILITY, FAIRNESS AND VALIDITY OF INTERVIEWS In this section we deal with the complexities of examining the properties of interviews. This requires us to delve in more detail into the theory of why interviews can be used to predict occupational performance and to consider how they should be designed and executed to maximise their effectiveness.
5.4.1 The reliability and fairness of interview data When examining the reliability of interview data we are concerned in particular with inter-rater reliability (different interviewers giving similar ratings when observing the same performance) and intra-rater reliability (the same interviewer giving similar ratings when observing the same performance on different occasions).
It will not surprise you that as interview structure decreases so reliability also drops. In a very authoritative review of a huge amount of data on selection interviews Conway, Jako and Goodman (1995) found that problems with reliability were commonplace in selection interviews. Achieving reliability is challenging because each interview is unique in some way. This variation can be because there are differences between interviewers in terms of the questions asked, the data collected and the way that the data is interpreted. This is not necessarily a deliberate effort to distort the process on the part of the interviewer but rather due to the interactive nature of the interview and the various biases and limits that impact on human decision-making.
The meta-analysis by Conway et al. (1995) showed that these problems were minimised through a number of interventions. One-to-one interviews with standardised questions appeared to have the highest reliability. When interviewers were trained in the interview process and about how to avoid biases reliability also tended to be higher. Finally, using a mathematical approach (i.e. summing the different competency scores) to arrive at an overall score for interviewee performance tended to be more reliable than relying upon the discretion of interviewers to arrive at an overall score. The latter approach requires each interviewer to make their own judgments about the relative importance of various ratings, thus compromising the standardisation of the interview process.
Research also shows that in terms of adverse impact, interviews give fairer outcomes than many other widely used selection tools, including psychometric tests of ability and intelligence (Huffcutt & Roth, 1998; Moscoso, 2000). However, the various human biases described in Unit 2 are very relevant in selection interviewing. Interviewee physical appearance, accent, and the candidate's and interviewer's experience can all have a significant impact upon the outcome of an interview (Arvey & Campion, 1982). In the next section of this Unit the impact of these factors on the validity of interviews is examined and some best practice recommendations for reducing the impact of these biases are described.
Interviewer ratings are significantly influenced by both verbal and non-verbal behaviour (Burnett & Motowidlo, 1998). This raises the possibility that impression management by the candidate can have a significant impact on interviewer decision-making (Arvey & Campion, 1982). Good eye contact, positive body language and physical attractiveness can lead to higher scores in interviews, but generally only for candidates who provide good answers to the questions posed (Rasmussen, 1984; Burnett & Motowidlo, 1998). In terms of impression management, self-promotion of past and future behaviour by the interviewee (rather than ingratiation) tends to lead to higher interviewer evaluations (Stevens & Kristof, 1995). However, there is some debate about whether these effects are biases. Positive body language and an ability to manage the impression one gives to others may be viewed as a skill that also has a significant impact on job performance. For example, an employee dealing with an irate customer often needs to use impression management: the employee may be feeling angry but needs to maintain calm, friendly and professional behaviour.
There is surprisingly little qualitative research on the links between what candidates actually say and the ratings given by interviewers. Silvester (1997) showed that attribution theory could be used to help explain the ratings given by interviewers. Interviewers tended to give higher scores when candidates made internal, stable and controllable attributions about their performance.
For example, if in an interview you were asked how you had coped with a significant problem at work, you might say that something you did caused the problem (at least in part), that your role in the problem was brought about by something stable (e.g. your personality) rather than by the situation, and that you felt you could do something to solve the problem. This would be an internal, stable and controllable attribution.
A quick task: Read this extract from a selection interview. The candidate has been asked to describe a time when they have experienced success in a team work environment and to describe their role in that situation. They say:
"Well, to be honest we were just lucky to come together at the right time. I think that sometimes I work really well in a team, but it really depends whether the people around me are similar to me and whether or not they like me. I don't think it's really possible to change other people's views or opinions to get them to work better together with each other." What attributions are this candidate making (internal or external; stable or unstable; controllable or uncontrollable)? Is this answer likely to get a high score from the interviewer? Why?
5.4.2 The validity of interview data As we have already seen interviews are a very flexible method of assessment. In an interview we could be assessing knowledge, skills, abilities, personality, motivation, and so on. All of these different constructs can have a different relationship with job performance. This means that any examination of the validity of interviews needs to take these differences into account. This is why meta-analyses have been an important source of information about the factors that influence the validity of interviews.
Large meta-analyses tend to reveal modest validity coefficients for selection interviews. Reilly and Chao (1982) found an average coefficient of .19 with a variety of criteria. Weisner and Cronshaw (1988) found a slightly higher coefficient of .26 with supervisor ratings of performance. Many of these average coefficients are for a range of different types of interviews.
When one delves deeper into the impact of the design and execution of interviews on validity coefficients a number of important findings emerge. In one of the most substantial meta-analyses of interviews, McDaniel, Whetzel, Schmidt and Maurer (1994) reviewed 245 different validity coefficients from studies of interviews. They found that validity was highest when:
The interviewers used situational and job-related questions (interview content)
The interview was highly structured and carried out by one person (interview execution). Salgado's (1999) review reports that highly structured interviews have an average validity coefficient of around .5, whereas those with little structure have coefficients of around .2
Job performance measures (rather than tenure) were the criteria for the validation study (validation criteria)
These findings are logical. If the interview is specifically designed to examine job-related competencies in an organised and methodical way then there is a better chance that it will predict future performance than if it is conducted in a haphazard fashion.
Campion, Palmer and Campion (1997) provided a more detailed analysis of the determinants of the predictive validity of interviews and concluded that predictive validity was improved by certain design characteristics.
A quick task: Think back to your own experiences of being a candidate in selection interviews. How many of those interviews followed the recommendations that have emerged from the research you have read so far in this section of the unit? What were the main similarities and differences?
Campion et al. (1997) also found that the way data was collected and evaluated had a significant impact on the validity of the interview. Higher validities tended to be obtained when:
Each answer given by the candidate was rated separately and on multiple rating scales (e.g. different rating scales for interpersonal skills, drive and determination, technical knowledge etc. rather than a single overall rating for the answer to the question)
Interviewers took detailed notes of candidate performance and used rating scales that had clearly defined rating points (i.e. anchors such as "displays many of the positive behaviour indicators of this competency" or "displays mostly negative behavioural indicators of the competency")
Interviewers used information about the links between interview performance and job performance in their decision-making
Overall evaluations of candidates were determined by summing the scores obtained in the interview rather than allowing interviewers to determine the overall rating using their own individual rationale
Interviewers were provided with extensive training in all aspects of the interviewing process
As you can see, these measures are designed to restrict the impact of human biases or decision-making heuristics on the outcome of the interviews. Many are measures designed to improve the reliability of ratings and as we know reliability is a necessary condition for validity.
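The 'mechanical' combination of ratings recommended above (rating each answer separately on multiple competency scales, then summing, rather than forming a holistic judgement) can be sketched as follows. The questions, competency names and 1-5 ratings are invented for illustration:

```python
# Each answer is rated on several behaviourally anchored 1-5 scales;
# the overall evaluation is the sum of all ratings, removing the
# interviewer's discretion over how to weight the competencies.
ratings = {
    "Q1": {"interpersonal skills": 4, "drive": 3},
    "Q2": {"interpersonal skills": 5, "technical knowledge": 4},
    "Q3": {"drive": 4, "technical knowledge": 3},
}

def overall_score(ratings: dict) -> int:
    """Sum every competency rating across all questions."""
    return sum(score
               for answer in ratings.values()
               for score in answer.values())

print(overall_score(ratings))  # 23
```

Because every candidate is scored on the same scales and the combination rule is fixed in advance, two candidates with identical ratings necessarily receive identical overall scores, which is precisely the standardisation that discretionary overall judgements compromise.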
The issue of incremental validity is perhaps less important for selection interviews than it is for other methods of assessment. Given that an interview forms the bulk of many selection processes, the fact that it captures data on individual differences that are also captured by other selection methods is perhaps desirable. For example, if the interview captures good data on personality then might there be an argument that the omission of a personality questionnaire from the selection process is less of a problem?
Research on the incremental validity of interviews yields fairly unsurprising results. In terms of predicting job performance, there is a lack of incremental validity over intelligence tests (Mayfield, 1964; Schmidt & Hunter, 1998). It appears that interview performance is significantly related to, but not the same as, intelligence. In Reading 1.1 (p.456) Robertson and Smith (2001) present a useful, succinct discussion of the construct validity of interviews. Performance in unstructured interviews tends to rely more upon social skills and personality, while cognitive ability has more of a role to play in determining performance in highly structured interviews. There is also evidence that interview performance is related to performance on more elaborate and complex selection methods such as assessment centres (Dayan, Fox & Kasten, 2008). Therefore, typical selection interviews tend to have broad construct validity: this is perhaps reassuring as they tend to dominate many selection procedures.
5.5. INTERVIEWEE DECISION-MAKING In considering the psychometric properties of the interview there is always a danger of ignoring the candidate perspective. Taking the social exchange perspective, Herriot (1993) presented an excellent analysis of the importance of the selection interview in the initial stages of the employment contract. Structured interviews that use situational questions help to provide the candidate with a realistic job preview (RJP). Research has shown that RJPs lead to lower levels of early turnover in organisations (Phillips, 1998). But this type of interview restricts the opportunities for candidates to ask questions. In contrast, unstructured interviews allow more time and 'space' for role negotiation.
The social exchange perspective shows that the interview is not just about collecting reliable and valid information about the candidate. It can be an important source of career information for the employee. It provides prospective employees with important information about present employees. For example, if the interviewer is pleasant, personable and knowledgeable then the candidate may form the impression that many of the company's current employees share similar characteristics. The interview is also important to the perceived fairness of the process: both candidates and those making selection decisions have come to expect an interview as integral to the selection process. This is particularly the case for situational, job-related interviews that are viewed very favourably by candidates (Hausknecht, Day & Thomas, 2004). If they are not interviewed or if the interview is poorly designed and executed, candidates may feel that they have been denied an important opportunity to show their qualities.
5.6. BEST PRACTICE IN SELECTION INTERVIEWING This unit has examined many aspects of the selection interview from a variety of different perspectives. This can lead to a rather confusing picture from a practitioner perspective. In this section the implications of the research will be summarised as a series of recommendations for best practice.
In terms of interview design it is essential that the interview questions are based on a thorough job analysis. The results of the job analysis should also be used to construct the rating and scoring processes for the interview itself. To secure good psychometric properties for the interview, interviewers should use clearly defined rating scales and rating criteria. Ratings should be based upon evidence collected during the interview (usually written notes of candidate performance). Overall, the information gathering process should be standardised (Dipboye, 1997).
Pulakos, Schmitt, Whitney and Smith (1996) found that interviewers’ individual validity coefficients varied from -0.10 to 0.65. While these results may be affected by a small sample size (25 interviews per person), they illustrate the importance of picking the right person to be an interviewer.
In order to conduct the interview effectively, interviewers should be trained to be aware of the biases that can impact upon the observation, recording and evaluation of information about candidates. Huffcutt and Woehr (1998) found that training such as this can significantly improve interviewing. This training should include clear guidance and training on effective questioning, listening and observation. Training can also be used to make interviewers aware of the negative impact that biases and prejudices can have on selection decisions.
A key to controlling an interview lies in gathering and retaining pertinent information. A candidate may transmit quite a lot of information, and retaining it for use in decision-making requires a good short-term memory and practice on the part of the interviewer. Most, if not all, interviewers will forget at least some of the information they have obtained in an interview, even if they are tested immediately after the interview (Arvey & Campion, 1982). The longer the time lapse between the interview and recall, the worse the memory. Some interviewers do not take notes on the basis that it is distracting for candidates. It is better, however, to have notes in order to maximise accuracy and objectivity. While a selection interview is a social interaction, from the organisation's perspective it is an information-gathering exercise. The more systematic the manner in which this information is collected, the better for both the organisation, in terms of employing the right person for the job, and the candidate, in terms of fairness and not ending up in a job to which they are not suited. It is important that the interviewer is skilled at picking up all of what the candidate projects, both verbal and non-verbal. In the contemporary context, all data gathered on a person should be available for scrutiny, according to the Data Protection Act (1998). Therefore it is essential to keep a set of well ordered and clear interview notes available. However, Middendorf and Macan (2002) found that the act of note-taking, in the employment selection context, may be more important for interviewer memory and legal reasons than for improving the decisions made by interviewers.
An unequivocal finding from the research on interviews is that interviews should be structured. This brings consistency to the process but also 'forces' job-relatedness into the questioning. Structure also ensures that control over questioning remains with the interviewer. Control devices that can be legitimately used by interviewers include:
summaries that allow clarification and confirmation and avoid awkward silences;
using a variety of questioning techniques, such as open and closed questions, as appropriate.
Highly structured interviews can be restrictive and some authors (e.g. Dipboye, 1997) have identified the usefulness of semi-structured interviews. Multimodal interviews (Schuler & Funke, 1989) are designed to offer the best of both the psychometric and social exchange approaches to interviewing. These interviews are divided into distinct stages, or phases, that have different objectives. Some parts of the interview are designed to help build rapport, some stages offer the candidate a chance to ask questions, other sections offer a realistic job preview, while other parts of the interview are tightly controlled, containing job-related situational questions. There is an onus of responsibility on the interviewer(s) to conduct the interview in an ethical and responsible way. This should include affording the candidates adequate time and opportunity to raise any issues that are important to them. Of course, the different objectives of different parts of the interview need to be explained to the candidate.
This multimodal approach can work well if information from the different elements of the interview is prevented from leaking into other parts of the process (e.g. candidates' questions about pay and conditions should not be allowed to bias the evaluation of responses to the structured job-related questions).
As with any selection method, it is best not to rely on interviews alone. At the very least, biodata can provide important additional information about a candidate's suitability.
5.7. SUMMARY As we have seen in this unit, well-designed and well-executed interviews might not have the highest validity. However, because they are a relatively inexpensive and flexible selection technique, they can have good utility (i.e. be cost-effective).
Naturally, the reliance on humans to manage the interview process and evaluate the candidate raises the possibility of bias. However, these problems can be managed through careful design of the interview process and comprehensive interviewer training. Interviews can show good reliability and validity, particularly if they are structured and contain job-related questions.
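The reliability referred to here can be made numerically concrete. Inter-rater reliability, one of the indices examined in the meta-analytic literature cited below (e.g. Conway, Jako & Goodman, 1995), is commonly expressed as the correlation between the ratings two interviewers independently assign to the same candidates. The ratings in this sketch are invented for illustration.

```python
# Inter-rater reliability as a Pearson correlation between two interviewers'
# independent ratings of the same candidates (the ratings are invented examples).
from statistics import mean

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length rating lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

interviewer_a = [4, 3, 5, 2, 4]   # ratings given to five candidates
interviewer_b = [4, 2, 5, 3, 4]   # a second interviewer's independent ratings
print(round(pearson_r(interviewer_a, interviewer_b), 2))  # 0.81
```

A coefficient near 1.0 indicates that the two interviewers rank candidates in much the same order; structure and job-related questions raise this figure because they leave less room for idiosyncratic judgement.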
There is an important public-relations element to selection interviews. They are consistent with the expectations of both current and prospective employees. Interviews provide an effective means of securing person-organisation fit through facilitating a two-way decision-making process (Herriot, 1993).
The interview is also extremely flexible, as interview performance can be indicative of personality, intelligence, motivation, social skills and so on. It can also be used to assess both signs and samples of performance. However, as you read the remainder of this module, you may wish to consider that there are other, more reliable and valid measures of these constructs; the usefulness of the interview is therefore often determined by issues of cost and practicality.
5.8. READINGS McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599-616.
This is a classic review of the properties of selection interviews. As with all meta-analyses, the statistics can be complex and you should not worry too much about these. The introduction and conclusion give some excellent insights into how the design and execution of interviews determines their effectiveness.
Lievens, F., & Peeters, H. (2008). Interviewers' sensitivity to impression management tactics in structured interviews. European Journal of Psychological Assessment, 24, 174-180.
This paper provides a good discussion (and controlled test) of some of the potential ‘human’ biases in selection interviews.
5.9. SOURCES CITED IN THE TEXT Arvey, R., & Campion, J. (1982). The employment interview: a summary and review of current research. Personnel Psychology, 35, 281-322.
Bayne, R., & Fletcher, C. (1983). Selecting the selectors. Personnel Management, 15, 42-44.
Burnett, J. R., & Motowidlo, S. J. (1998). Relations between different sources of information in the structured selection interview. Personnel Psychology, 51, 963-983.
Campion, M., Palmer, D., & Campion, J. (1997). A review of structure in the selection interview. Personnel Psychology, 50, 655-702.
Conway, J. M., Jako, R. A., & Goodman, D. (1995). A meta-analysis of interrater and internal consistency reliability of selection interviews. Journal of Applied Psychology, 80, 565-579.
Dayan, K., Fox, S., & Kasten, R. (2008). The preliminary employment interview as a predictor of assessment center outcomes. Journal of Selection and Assessment, 16, 102-111.
Dipboye, R. L. (1997). Structured selection interviews: Why do they work? Why are they underutilized?. In N. Anderson & P. Herriot (Eds.), International handbook of selection and assessment (pp. 455-473). Chichester: John Wiley & Sons.
Fletcher, C. (1992). Ethical issues in the selection interview. Journal of Business Ethics, 11, 361-367.
Hausknecht, J.P., Day, D. V., & Thomas, S. C. (2004). Applicant reactions to selection procedures: An updated model and meta-analysis. Personnel Psychology, 57, 639-683.
Herriot, P. (1993). Interviewing. In P. Warr (Ed.), Psychology at work (pp. 139-159). London: Penguin.
Huffcutt, A., & Roth, P. (1998). Racial group differences in employment interview evaluation. Journal of Applied Psychology, 83, 179-189.
Huffcutt, A. J., & Woehr, D. J. (1998). Further analysis of employment interview validity: A quantitative evaluation of interview-related structure methods. Journal of Organizational Behavior, 20, 549-560.
McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599-616.
Middendorf, C. H., & Macan, T. H. (2002). Note-taking in the employment interview: Effects on recall and judgments. Journal of Applied Psychology, 87, 293-303.
Moscoso, S. (2000). Selection interviews: A review of validity evidence, adverse impact and applicant reactions. International Journal of Selection and Assessment, 8, 237-247.
Phillips, J. M. (1998). Effects of realistic job previews on multiple organizational outcomes: A meta-analysis. Academy of Management Journal, 41, 673-690.
Pulakos, E., Schmitt, N., Whitney, D., & Smith, M. (1996). Individual differences in interviewer ratings: The impact of standardisation, consensus discussion and sampling error on the validity of the structured interview. Personnel Psychology, 49, 85-102.
Pulakos, E., & Schmitt, N. (1995). Experience-based and situational interview questions: Studies of validity. Personnel Psychology, 48, 289-308.
Rasmussen, K. G. (1984). Non-verbal behavior, verbal-behavior, resume credentials, and selection interview outcomes. Journal of Applied Psychology, 69, 551-556.
Reilly, R. R., & Chao, G. T. (1982). Validity and fairness of some alternative employee selection procedures. Personnel Psychology, 35, 1-62.
Salgado, J. F. (1999). Personnel selection methods. In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology. New York: Wiley.
Silvester, J. (1997). Spoken attributions and candidate success in graduate recruitment interviews. Journal of Occupational and Organizational Psychology, 70, 67-74.
Stevens, C. K., & Kristof, A. L. (1995). Making the right impression: A field study of applicant impression management during job interviews. Journal of Applied Psychology, 80, 587-606.
Wiesner, W. H., & Cronshaw, S. F. (1988). A meta-analytic investigation of the impact of interview format and degree of structure on the validity of the employment interview. Journal of Occupational Psychology, 61, 275-290.