IDEAblog

Addressing Faculty Fears of Student Retaliation in IDEA Online

August 26, 2013

Guest blog by Michael Stankey 
Texas Woman's University 

A common concern among faculty members at TWU using IDEA Online is that students who have stopped coming to class, or who expect to receive poor grades, will still be able to complete online course evaluations and will be more likely to do so in order to retaliate against their instructors.

Scholarly research on this issue suggests just the opposite: students with higher grades are more likely to complete online course evaluations, and students with lower grades are less likely to participate.  For example, Thorpe found, for a small sample of large undergraduate math, computer science, and statistics courses at Drexel University, that the response rate of “A” students (55.0%) was 2.5 times that of “F” students (22.0%).1  In a more comprehensive study, Adams found similar results across all undergraduate courses at North Carolina State University, with “A” students responding at a rate of 58.1% compared to a 21.7% response rate among “F” students.2

Despite these findings, many faculty members persist in their fears, perhaps due to the limited distribution of such research, but more likely because there’s so much at stake.  Student evaluations of instruction often play an important role in assessing annual faculty performance and informing tenure and promotion decisions.  Faculty concerns appear to be more acute with respect to online course evaluations.  They “resent allowing students who have stopped coming to evaluate the class,” they believe that “the majority of students who participate are the disgruntled ones,” and they think that “the online opportunity to evaluate may get a disproportionate response from students who are angry about the course and likely stopped attending class deciding to settle for a low grade rather than learn the material,” to quote a few from emails I’ve received. 

The “Manage Respondents” feature of IDEA Online, which tracks whether students have responded and allows reminder emails to be sent to non-respondents, provides a unique opportunity to test the retaliation theory.  As seen in the sample screenshot below, a student’s name, email address, and ID number (intentionally blurred here for confidentiality purposes) are associated with whether or not she has completed the survey.  Of course, the nature of each student’s response is not shown, to ensure anonymity, but it’s easy to tell who has completed the survey.  Back on campus, it’s also easy to determine the grade earned by each student by reviewing course grade rosters.  By joining the response data to the grade data, using student ID numbers as the common link, we can compare the grade distribution of all students with the grade distribution of students who completed evaluations.  If there’s a “disproportionate response” among low-grade earners, it will reveal itself in the comparison.
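To make the comparison concrete, here is a minimal pandas sketch of the join described above.  The file names and column names (student_id, responded, grade) are assumptions for illustration only; the actual “Manage Respondents” export and campus grade roster formats will differ.

```python
import pandas as pd

# Hypothetical exports; real column names and formats will vary by campus.
responses = pd.read_csv("manage_respondents.csv")  # columns: student_id, responded (True/False)
grades = pd.read_csv("grade_roster.csv")           # columns: student_id, grade (A-F)

# Join the response data to the grade data using student ID as the common link.
merged = grades.merge(responses[["student_id", "responded"]], on="student_id", how="left")
merged["responded"] = merged["responded"].fillna(False).astype(bool)

# Grade distribution of all students vs. grade distribution of survey respondents.
all_dist = merged["grade"].value_counts(normalize=True).sort_index()
resp_dist = merged.loc[merged["responded"], "grade"].value_counts(normalize=True).sort_index()

comparison = pd.DataFrame({"all_students": all_dist, "respondents": resp_dist}).fillna(0)
print(comparison.round(3))
```

If low-grade earners really did respond disproportionately, the “respondents” column would exceed the “all_students” column in the D and F rows.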

For a study I presented at the most recent IDEA User Group Meeting in Nashville, “Manage Respondents” rosters were obtained from the IDEA Center for all 2,033 undergraduate and graduate courses surveyed at TWU during the Fall 2011 semester.  These response rosters were merged with the corresponding TWU course grade rosters obtained from the TWU Office of Institutional Research and Data Management.  The resulting data set facilitated grade distribution comparisons by course level, by college, by subject, by instructor, and for any combination of these grouping variables.  The data also made it possible to test individual faculty members’ theories and help them better understand the response characteristics of their students.

The response profiles of TWU students were similar to those reported above, with “A” students being more likely to respond and “F” students being less likely to respond, but the analysis was extended to include all levels, all colleges, all subjects, and all instructors in an interactive Excel pivot table.  For instance, the response rate of “A” students was 47.9% overall, 48.1% for undergraduate lower-level, 47.0% for the college of arts and sciences, and 42.6% for chemistry courses.  By contrast, the response rate of “F” students was 12.6% overall, 14.3% for undergraduate lower-level, 13.0% for arts and sciences, and 17.0% for chemistry.  For the fearful professor quoted above who was concerned about a disproportionate response among low-grade earners, the response rate for “A” students was 63.6% and the response rate for “F” students was 1.7%. 
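A pivot-table breakdown like the one described above can also be reproduced outside Excel.  The sketch below builds on the hypothetical merged DataFrame from the earlier example and assumes it carries an additional grouping column (here, “college”); the exact numbers would of course depend on your own data.

```python
# Response rate by grade, overall (the mean of a True/False column is a proportion).
overall = merged.groupby("grade")["responded"].mean().mul(100).round(1)
print(overall)

# Pivot-table style breakdown: response rate by grade within each college.
# Assumes a "college" column was carried along in the merge; "level",
# "subject", or "instructor" could be substituted or combined the same way.
merged["responded_num"] = merged["responded"].astype(int)
by_college = pd.pivot_table(
    merged,
    values="responded_num",
    index="college",
    columns="grade",
    aggfunc="mean",
)
print((by_college * 100).round(1))
```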

To better illustrate this phenomenon, the data from the Thorpe and Adams studies, along with the data from TWU, are presented below as grade distributions.  In each graph, the gold bars represent the grade distribution of all students completing courses, while the dark red bars represent the grade distribution of students completing evaluation surveys in those courses.  Figure 1 depicts the feared “disproportionate response” in which students who receive lower grades are more likely to complete surveys.  In this hypothetical case, the survey response distribution is shifted toward the low-grade end of the scale, with 55.0% of the survey respondents being students who received Ds and Fs compared with the 24.0% of all students who received Ds and Fs, making the lower-grade students more than twice as likely to complete surveys.  This relationship is not found in the actual data, however.  In all cases shown (and in all cases I’ve investigated), the survey response distribution is shifted toward the high-grade end of the scale, with the red bars exceeding the gold in the “A” category and falling short of the gold in the “D” and “F” categories.  For the fearful professor, 71.1% of the course surveys were completed by students earning As or Bs, while only 10.5% of the course surveys were completed by students earning Ds or Fs, even though 31.2% of all students received Ds or Fs in the class.  Was there a disproportionate response?  Yes, but in the direction opposite to the one anticipated.  The instructor’s high-grade earners were 6.8 times as likely to respond as the low-grade earners.
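The “times as likely” comparisons above are simply ratios of group response rates, which can be computed directly from the same hypothetical merged DataFrame:

```python
# Response rates for high-grade (A/B) and low-grade (D/F) earners, and their ratio.
# Assumes both groups contain at least one student, so the division is defined.
high = merged[merged["grade"].isin(["A", "B"])]["responded"].mean()
low = merged[merged["grade"].isin(["D", "F"])]["responded"].mean()
print(f"A/B response rate: {high:.1%}")
print(f"D/F response rate: {low:.1%}")
print(f"High-grade earners were {high / low:.1f} times as likely to respond.")
```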

Although it would be shortsighted to conclude that high-grade earners can’t be disgruntled (e.g., a B student who felt she deserved an A, an A student who felt like the class was a waste of time, etc.) or that low-grade earners won’t offer negative feedback, it does appear that faculty fears of a disproportionately high response among low-grade earners in IDEA Online have little empirical support.

1Stephen W. Thorpe, “Online Student Evaluation of Instruction: An Investigation of Non-Response Bias,” paper presented at the 42nd Annual Forum of the Association for Institutional Research, Toronto, Canada (June 2002).

2Meredith Jane Dean Adams, “No Evaluation Left Behind: Nonresponse in Online Course Evaluations,” doctoral dissertation, Educational Research and Analysis, North Carolina State University (2010).
