Effective utilization of post-assessment center data of organizations in India.

Author: Gupta, Seeta

This paper examines how Indian organizations utilize data generated during Assessment Centers (ACs) for writing summary reports on participants, giving feedback on performance at the center, suggesting development actions, and evaluating and validating the center. Twenty senior HR professionals, consultants and practitioners from 20 Indian and multinational corporations were interviewed. Results highlight that post-AC data are utilized with varying degrees of effectiveness depending on the long- or short-term orientation of the organization. In the absence of detailed validation procedures, organizations need to build incremental validity at every stage of conducting an AC. Selection of consultants for conducting ACs has to be done with care and caution, because most Western models need to be culturally tested and then adapted to suit local requirements.

Introduction

The Assessment Center is a popular assessment tool used for many purposes, from identifying high-potential managers to building and developing leaders in an organization. Gaugler & Thornton (1989) describe an assessment center as "a process where job analysis and competency modeling are typically used to study the performance domain of target jobs. Results of these analyses identify the dimensions to be assessed and content of assessment exercises. Multiple assessors observe overt behavior in exercises simulating important job situations. Ratings of dimensions and overall performance relate to a variety of criteria, including measures of comparable constructs and job performance". Although Assessment Centers are expensive, they offer many advantages. According to Joiner (2004), the return on investment (ROI) of conducting an AC can be very high. ACs are generally predictive of performance (Lievens & Thornton, 2005; Thornton & Rupp, 2006), and they are legally defensible if they are developed following professional guidelines (Guidelines and Ethical Considerations for Assessment Center Operations, 2009). Byham (1971) gives eleven necessary steps in the design and conduct of an AC, which closely resemble the recommendations in the Professional Guidelines: (1) determining program objectives, (2) defining the dimensions to be assessed, (3) selecting exercises that bring out the dimensions, (4) designing the assessor training and assessment center program, (5) announcing the program, informing participants and assessors, and handling administrative details, (6) training assessors, (7) conducting the assessment center, (8) writing summary reports on participants, (9) providing feedback to participants, (10) evaluating the center and (11) setting up procedures to validate the center against a criterion of job success.
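To make the ROI logic referred to above concrete, the standard formula with a hypothetical worked example is shown below (the figures are purely illustrative and are not Joiner's):

```latex
% Hypothetical worked example of AC return on investment (illustrative figures only):
\[
\mathrm{ROI} \;=\; \frac{\text{Benefits} - \text{Costs}}{\text{Costs}} \times 100\%
\qquad \text{e.g.}\quad
\frac{300{,}000 - 100{,}000}{100{,}000} \times 100\% \;=\; 200\%
\]
```

On this logic, an AC whose improved selection and development decisions generate benefits worth three times its cost returns 200% on the investment.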

An Assessment Center starts with defining the objectives and ends with validation against a criterion of job success. The first six steps are important preparatory (pre-AC) steps, step 7 is the conduct of the center itself, and the four post-AC steps (8, 9, 10 and 11) are critical to taking the findings to a logical conclusion and to utilizing the data thereafter. If organizations do not handle these last four steps efficiently, the whole process becomes meaningless. Hence it is important to understand how organizations utilize post-AC data.

Alexander (1979), in a study of 65 organizations, found that they mostly focused on immediate feedback processes rather than on long-term utilization of assessment center results. Issues concerning the subsequent utilization of results are followed up by fewer organizations: only 34 to 57 percent of organizations carry out long-term processes such as 1) providing evaluations to higher management, 2) coaching of employees by the immediate supervisor, 3) discussing the assessee's training and development needs with the supervisor, 4) discussing career plans in the feedback process, 5) reassessing, at a later date, employees who performed poorly, 6) telling assessees whether they have potential for advancement, 7) initiating developmental plans, 8) having assessment center staff monitor the development of assessees and 9) making the evaluation a part of the employee's personnel file.

Vloeberghs and Berghman (2003) emphasize that procedures and strategies should be laid out to follow up on, and utilize, the results obtained from Assessment Development Centers (ADCs). Gupta (2010), in a study on the relevance of assessment centers in competency assessment, concluded that ACs are essential for assessing competencies despite their high costs, and that the benefits of conducting ACs far outweigh those of not conducting them. Follow-up action on the outcomes and developmental plans emerging from ACs is critical; if it is not carried out, the entire process becomes redundant. Byham (1987) suggests that the role of any AC needs to be clearly defined with respect to follow-up procedures and to using results effectively. Similarly, Bender (1973) recommends that an early decision be made on how the results of ACs will be utilized in the organization.

The importance of having follow-up procedures to effectively utilize results has been emphasized by many researchers, and according to the Guidelines, all eleven steps are important for a legally defensible AC. However, there are noteworthy gaps between what is prescribed and what is practiced. Research has repeatedly shown that, globally, the last two steps (i.e. evaluation and validation) are largely ignored. The Guidelines specify that in 'writing summary reports on participants' (step 8), statistical integration of data should happen among assessors; Spychalski, Quinones, Gaugler and Pohley (1997), however, report that only 14% of American ACs use statistical analysis in the assessors' conference, as compared to judgmental decisions. Similarly, there may be gaps in giving 'feedback' (step 9). Existing research was reviewed to understand what each post-AC step entails and how organizations can handle each step to effectively utilize AC data.

Summary Reports Writing in ACs

In writing summary reports on participants, a number of assessors discuss the multiple independent evaluations made during an AC to generate a report on each participant. There is controversy over which combination of 'discussion by consensus' and/or 'statistical integration' leads to the highest correlations between the overall assessment rating (OAR) and performance criteria (Gaugler & Thornton, 1989).
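As a minimal sketch of what 'statistical integration' can mean in practice, the snippet below averages assessor ratings per dimension and combines them into an OAR. The ratings, the four dimensions and the equal weighting scheme are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical ratings: 3 assessors x 4 dimensions for one participant,
# each on a 1-5 scale (illustrative data only).
ratings = np.array([
    [4, 3, 5, 4],   # assessor 1
    [3, 3, 4, 4],   # assessor 2
    [4, 2, 4, 5],   # assessor 3
])

# Statistical integration: average across assessors per dimension,
# then weight the dimensions into an Overall Assessment Rating (OAR).
dimension_scores = ratings.mean(axis=0)        # per-dimension mean rating
weights = np.array([0.25, 0.25, 0.25, 0.25])   # equal weights (an assumption)
oar = float(dimension_scores @ weights)

print("Dimension scores:", dimension_scores.round(2))
print("OAR:", round(oar, 2))
```

Under consensus integration, by contrast, the assessors would discuss discrepant ratings and agree on each dimension score judgmentally before the OAR is formed; the validation question is which approach yields OARs that correlate more strongly with later performance criteria.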

In the report-writing stage, assessors have a very critical role, and errors can creep in and affect overall validity. Researchers have explored a number of ways to reduce assessor error (Lievens, 1998): providing training to assessors, using alternative rating strategies, reducing assessors' cognitive load by reducing the number of dimensions, and providing checklists to record behavior. Melchers, Kleinmann and Prinz (2010) also found a detrimental effect on ratings when assessors had to rate multiple individuals simultaneously. Byham (1978) advocates certification of assessors in order to ensure a legally defensible AC.

The content of assessor training has been found to affect predictive and construct validity (Krause & Thornton, 2009). Research shows that the length of training matters less (Gaugler et al., 1987) than its quality (Lievens, 2002). Schleicher, Day, Mayes & Riggio (2002) report that 'frame-of-reference training' has a strong positive impact on inter-rater reliability, discriminant validity and rating accuracy. Kleinmann et al. (1996) found that disclosing the dimensions to participants improved rating accuracy.
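Inter-rater reliability, the index that frame-of-reference training is reported to improve, can be quantified in several ways; a minimal sketch using a simple Pearson correlation between two assessors' ratings is shown below (the data are hypothetical, and in practice an intraclass correlation across all assessors would be more common):

```python
import numpy as np

# Hypothetical ratings by two assessors of the same 8 participants
# on one dimension (1-5 scale); illustrative data only.
assessor_a = np.array([4, 3, 5, 2, 4, 3, 5, 4])
assessor_b = np.array([4, 2, 5, 3, 4, 3, 4, 4])

# Pearson correlation as a rough index of inter-rater reliability:
# values near 1 indicate the assessors rank participants consistently.
r = np.corrcoef(assessor_a, assessor_b)[0, 1]
print(f"Inter-rater correlation: {r:.2f}")
```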

Feedback in ACs

Feedback to participants on the overall summary of findings is the second post-AC step, and many variables can affect the effectiveness of this process. Woo et al. (2008) found that assessor feedback can be threatening to participants, as there is a positive relation between favorable feedback and behavioral engagement; they also concluded that the accuracy of assessor ratings and the perception of a 'due process' are critical factors. Lievens (2008) suggests applying 'trait activation theory' when giving feedback. Abraham et al. (2006) posit that negative feedback needs to be given in a constructive manner, and Bell and Arthur (2008) found that the 'extraversion' and 'agreeableness' of participants need to be considered to increase feedback acceptance. The timing of feedback is also very important: Thornton & Rupp (2006) found that maximum learning takes place when feedback is given soon after an AC is over. Krause and Thornton (2009) found that only fifty percent of the organizations studied had an evaluation procedure in place, and even there documentation was missing.

Validity of ACs

An Assessment Center needs to be "valid", which means it must measure what it purports to measure.
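In criterion-related terms, a center's validity is commonly indexed by the correlation between OARs and a later measure of job performance. A minimal sketch is given below; the participant data are entirely hypothetical:

```python
import numpy as np

# Hypothetical data for 10 participants: OAR from the AC and a
# job-performance rating collected later (illustrative only).
oar = np.array([3.2, 4.1, 2.8, 3.9, 4.5, 3.0, 3.6, 4.2, 2.5, 3.8])
job_performance = np.array([3.0, 4.3, 2.6, 3.5, 4.6, 3.2, 3.4, 4.0, 2.8, 3.7])

# Criterion-related (predictive) validity coefficient: the correlation
# between assessment ratings and the job-success criterion.
validity = np.corrcoef(oar, job_performance)[0, 1]
print(f"Predictive validity (r): {validity:.2f}")
```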
