What do MBA students think of teacher evaluations?

Author: Bhattacherjee, Debashish

Student evaluations of teaching effectiveness (SETE) have been extensively studied in the literature, with the central debates revolving around the validity and reliability of the instruments used. Scant attention, however, has been paid to student perceptions of SETEs and their usefulness. This paper presents preliminary results from a survey of MBA students at two Indian business schools where anonymous student evaluations of teachers are mandatory after course completion and are used as one of the key determinants in faculty promotion and tenure decisions. Most students felt that because they do not take SETEs very seriously, the reliability of these evaluations is questionable.


Student Evaluations of Teaching Effectiveness (SETEs) are one of the many tools employed by academic institutions the world over to evaluate the performance of faculty members. Typically, SETEs consist of standardized questionnaires administered at the end of each course. The ratings thus obtained are used at times to decide the performance pay of faculty members and, in the long run, to make tenure related decisions. Apart from these ratings, academic institutions usually also consider other aspects such as research output and/or administrative work, for pay and tenure related decisions.

We now know that in the long run, output-based pay incentivizes 'good' workers to stay with a firm and 'bad' workers to leave (Lazear, 1997). In other words, output-based pay enables the manager to evaluate a worker's performance strictly. However, output is not always easy to measure and often involves substantial monitoring costs. Input-based pay, on the other hand, which typically rewards the number of hours put in by the worker, is easier to implement but does not always help the manager evaluate the worker. For faculty members, implementing SETEs is an attempt to evaluate the worker in an input-based pay regime. Much of the literature on SETEs has focused on their reliability and validity, and on the opinions of faculty members and administrators. Students, among the most important stakeholders in the process, have been relatively ignored. The objective of this paper is to document student perceptions of this process and to test for differences in perceptions among different sets of students.


SETEs have been extensively studied ever since they were introduced in the 1930s. Several studies have explored the validity and reliability of the survey instruments used, as well as the ability of students to evaluate faculty members objectively. An extensive review concluded that student ratings are largely valid and reliable and can provide valuable inputs to all the stakeholders involved (Marsh, 1987). While the central questions regarding the validity and reliability of SETEs were answered by the late 1980s, their acceptability was still in doubt, and reliability/validity studies on SETEs continue to be published to this day. Some have even argued that research on SETEs over the past fifty years has been largely driven by the urge to prove or disprove their validity and reliability, without emphasizing how to avoid misusing or misinterpreting them (Theall & Franklin, 2001).

Valsan et al. (2008) argued that SETEs merely perform a legitimizing function for management, as they are ill-equipped to capture and evaluate any of the outcomes that an academic establishment aspires to achieve. They assert that, when used for academic purposes, SETEs only lead to collusion between students and faculty, resulting in negative externalities such as grade inflation, dilution of academic rigor, and, at times, damage to the careers of capable and qualified teachers.

A meta-validation model approach to understanding the reliability and validity of SETEs suggested that there is strong evidence in support of criterion validity but inadequate evidence in support of content- or construct-related validity (Onwuegbuzie et al., 2007). Subsequent studies have used the meta-validation model to review the extant literature on SETEs. To elaborate, criterion validity measures whether a given set of variables captures a behavior that can also be corroborated by an existing or a future instrument. Content validity is the extent to which a measure or set of variables covers a given construct (in our case, teaching ability). Construct validity, on the other hand, is the ability of the measure to demonstrate relationships among variables as would be anticipated.

An extensive survey of research in the field of SETE studies after the year 2000 by Spooren et al. (2013) uses the lens of validity studies to classify and evaluate the progress made in the field. The authors, after considering a very large number of papers published in the preceding decade, largely support the conclusions of Onwuegbuzie et al. (2007). There is still no consensus on the questions that should constitute SETEs, even though many standardized questionnaires are employed by universities the world over, primarily because there is no consensus on the construct of an 'ideal teacher'.

Several scholars have raised concerns, and provided evidence, about the lack of discriminant and divergent validity (i.e., whether variables that are supposed to be unrelated are indeed unrelated). Some of these concerns involve the gender, race, and personality of the instructor unfairly influencing the ratings. The most common concern, however, is about expected grades: several empirical studies have shown that students give much higher ratings to faculty members who grade leniently. A study based on SETE scores from 2,600 students across three semesters at American University shows that ratings were heavily influenced by expected grades and by the gender and race of the instructor (Langbein, 1994). Another study by the same author, based on SETE scores across four years at the same university, found that faculty members and students are engaged in a socially destructive game of grade inflation (Langbein, 2008).

Studies have also shown that the bias in student ratings can vary with factors such as the level of the class being taught, interest in the subject matter before joining the class, the size of the class, the rigor of the course, and the personality of the instructor (Al Issa & Sulieman, 2007). These factors also shape student attitudes towards the process of end-of-course evaluations.

Turning to student perceptions of SETEs, the pervasive feeling is that SETEs are not really taken into consideration in determining the performance pay, bonus, or tenure of faculty members, and that even if they were, this is not sufficient motivation for students to participate seriously in the evaluation process. Chen and Hoshower (2003) use the expectancy theory framework to understand students' pre- and post-participation attitudes. Student participation in the process is said to be determined by the degree of their faith that the administration will actually use SETEs in deciding faculty pay or tenure.

Several studies have compared student and faculty perceptions of the process to understand the conflicts. For example, Sojka et al. (2002), comparing the responses of faculty and students at a midsized American university, found that students were much less likely to agree that faculty graded leniently for better ratings, or that faculty careers were tied to the ratings. Similar results were reported by Mukherji et al. (2008) and Balain et al. (2010). In both studies, students were less inclined than faculty to believe that faculty may grade leniently for the sake of better ratings, and more inclined than faculty to believe that good teachers were indeed rewarded with better ratings.

Two studies have focused exclusively on student perceptions: Marlin (1987) and Spencer et al. (2002). Both confirmed that students are interested in the process of SETEs but are uncertain about how their respective administrations use them. With a database of over 12,000 students across disciplines and stages, Spencer et al. (2002) concluded that one of the most important ways to make the process successful is to convince students that their opinions do matter. Students' expectations of the process were dominated by the need to be heard by faculty members at various stages of the course, and for faculty to genuinely consider the feedback before teaching the course again. A similar conclusion was reached by Marlin (1987), who showed that if the administration intended to use the...
