From reactions to return on investment: a study on training evaluation practices.

Author: Srimannarayana, M.

Introduction

The importance of employee training has been increasing significantly in various parts of the world. Organizations in Europe, the United States, and Asia spend billions each year on employee training (Cascio & Boudreau, 2008). However, what the organization gains from its investment in training is a matter of concern for management. A 2006 study in the US by Accenture revealed that only 3% of CEOs were satisfied with their corporate training function (Hall, 2008). In an increasingly competitive environment, there is pressure on the training function to measure its effectiveness. This paper attempts to explore training evaluation practices in India.

Literature Review

Training evaluation is the systematic collection of descriptive and judgmental information necessary to make effective training decisions related to the selection, adoption, value and modification of various training activities (Goldstein & Ford, 2007). It involves both formative and summative evaluation (Tharenou et al., 2007). Formative evaluation involves evaluating training during its design and development (Brown & Gerhardt, 2002). Summative evaluation refers to an evaluation conducted to determine the extent to which the objectives of a training program have been achieved. The focus of training evaluation research and practice is predominantly on summative evaluation (Brown & Gerhardt, 2002).

There are three stages in the evolution of training evaluation. The first is a practice-oriented, atheoretical stage, represented by the Kirkpatrick four-level framework and ranging from the late 1950s to the late 1980s. The second is a process-driven operational stage, represented by the ROI wave spanning from the late 1980s to the early 2000s. The present stage is a research-oriented, comprehensive one. At each stage, multiple frameworks have suggested different models and levels of training evaluation (Wang & Spitzer, 2005).

Evaluation Frameworks: The CIRO (Context, Input, Reaction and Outcome) approach developed by Warr, Bird and Rackham (1970) appears to be the first framework of training evaluation. Context evaluation refers to obtaining and using data about the present operational context to determine training needs and objectives. Input evaluation refers to assessing the various resources available and their deployment for training. Reaction evaluation refers to assessing the participants' reactions to the program. Outcome evaluation is concerned with assessing the results obtained from the program. The model thus incorporates both formative and summative evaluation. However, it does not indicate how measurement takes place (Tzeng et al., 2007). Stufflebeam et al. (1971) proposed the CIPP (Context, Input, Process and Product) model of evaluation. The four types of evaluation in this model are derived from four basic types of decisions made in education: planning decisions, structuring decisions, implementing decisions and recycling decisions. It is an effective, efficient, comprehensive and balanced evaluation model (Galvin, 1983) and shares many features of the CIRO model (Roark et al., 2006). Both models cover formative as well as summative evaluation of training. However, the CIPP model assumes rational decision making and ignores the diversity of interests and the multiple interpretations of the agents involved (Bennett, 1997). Hamblin (1974) developed another model of training evaluation consisting of five levels: reactions, learning, job behavior, functioning and ultimate value.

Similar to this model, Kirkpatrick (1976) proposed a four-level model of training evaluation, which is the most popular among academicians and practitioners. It classifies training outcomes into four levels: reactions, learning, behavior and results. Reaction evaluation assesses the participants' satisfaction with the program. Learning evaluation is concerned with the extent to which the participants have learned the knowledge, skills and abilities taught in the program. Behavior evaluation refers to the extent to which the knowledge, skills and abilities learned are transferred to job performance. Results evaluation is concerned with monitoring the organizational outcomes produced by the participants. According to this framework, higher-level outcomes should not be measured unless positive changes have occurred in lower-level outcomes. There are criticisms of the hierarchical nature of this model (Alliger & Janak, 1989; Alliger et al., 2002; Bates, 2004). There is limited evidence to support causal relations between the levels of the model; it leads to an excessively simplified method of assessing training effectiveness; and it neglects the evaluation needs of the other stakeholders involved in the training process (Guerci et al., 2010). The framework also devalues the evaluation of societal impact and of the usefulness and availability of organizational resources (Kaufman & Keller, 1994). To address these issues, Kaufman & Keller (1994) proposed a five-level framework of training evaluation, adding 'enabling' and 'societal outcomes' to Kirkpatrick's model. Having identified flaws in the Kirkpatrick model, Holton (1996) proposed an evaluation model that hypothesized three outcome levels: learning, individual performance and organizational results. According to Holton (1996), these levels are influenced by primary factors (such as ability, motivation and environmental influences) and secondary factors (for example, those that affect motivation to learn). Kirwan and Birchall (2006) pointed out that this model solely "describes a sequence of influence on outcomes occurring in a single learning experience and does not demonstrate any feedback loops"; nor does it indicate any interaction between factors of the same type.

Phillips (1995, 1997) added a fifth level, return on investment (ROI), to the four levels of evaluation developed by Kirkpatrick. However, isolating the effects of training is a major challenge in this model. To address the issues and concerns with existing training evaluation models, Brinkerhoff (2003) proposed the Success Case Method (SCM) for evaluating training programs. It is a process for evaluating the business effect of training and the extent to which training is aligned with and fulfills organizational strategy. It assesses the effect of training by deliberately looking for the very best results that training is producing. When such instances are found, they are carefully and objectively analyzed, seeking hard, corroborated evidence that irrefutably documents the application and results of the training. Further, there must be adequate evidence that it was the application of the training that led to the valued outcome; if this cannot be verified, the instance does not qualify as a success case (Brinkerhoff, 2005). The main disadvantage of SCM is that it requires some degree of judgment about what trainers identify as critical success factors on the job (Casey, 2006). Dessinger and Moseley (2006) developed the Dessinger-Moseley Full-Scope Evaluation Model (SEM), which aims to integrate formative, summative, confirmative and meta-evaluation. It helps formulate judgments about the worth of any performance improvement intervention. However, as the authors themselves point out, evaluation using this model is time consuming and requires long-term support from the organization and all the stakeholders.
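
To illustrate the fifth level, Phillips' approach converts program benefits into monetary values and compares them with fully loaded program costs. A minimal sketch of the standard ROI calculation follows; the figures used are hypothetical illustrations, not data from any study cited here:

ROI (%) = (Net Program Benefits / Program Costs) x 100, where Net Program Benefits = Program Benefits - Program Costs.

For example, a program costing $100,000 whose monetized benefits are estimated at $150,000 would yield an ROI of (150,000 - 100,000) / 100,000 x 100 = 50%. The difficulty noted above lies in the benefits term: the evaluator must isolate the portion of the monetary gain attributable to the training rather than to other influences.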

Training Evaluation Practices: A few studies are available on training evaluation practices in different countries. ASTD's 1999 report (Bassi & Van Buren, 1999) stated that the 'leading edge' companies evaluated 81% of their programs at the reaction level; 40% of programs were considered for learning evaluation; 11% were evaluated at the behavioral level; and 6% of programs were taken up for results-level evaluation. A study conducted by Blanchard, Thacker and Way (2000) in Canada revealed that organizations conducted reaction and learning evaluations of about two-thirds of both management and non-management training programs. However, more than half of the organizations did not measure their training at the on-the-job application and business results levels. Yadapadithaya (2001) found a similar pattern with respect to training evaluation in India. Al-Athari and Zairi (2002) identified that, in Kuwait, the most common level of evaluation for both the government and private sectors was reaction-level evaluation.

Pulichino's (2007) study found that 84.5% of the sampled professionals reported conducting reaction-level evaluation and 56.1% conducted learning-level evaluation. However, only 19.9% of the surveyed professionals reported that their organizations always or frequently assessed job behavior, and only 13.7% always or frequently assessed business results. Bersin's (2008) study, conducted in North America, found that most organizations focus only on measuring standard course operations; a very small number routinely measure return on investment, business impact or job impact. An ASTD report (2009) identified that as many as 91.6% of professionals mentioned that they conduct reaction assessment, and 80.8% stated that they gather data about learning. However, only 54.6% and 36.9% of them mentioned evaluating behavior and results, respectively.

Srimannarayana (2010) found that all 30 Indian organizations studied collect feedback from program participants to conduct reaction-level evaluation. With regard to learning-level evaluation, 46.67% of the organizations collect information. As far as changes in behavior are concerned, 30% of the organizations make an attempt. With respect to business results-level evaluation, only one of the 30 organizations collects information for this purpose, using client satisfaction scores; the same organization attempts to calculate the return on investment of some of its training programs. The Saks and...
