This online resource provides guidance to faculty reviewing course evaluations and deciding which feedback to act upon.
Course evaluations have a long history in higher education and often play an integral role in tenure and promotion decisions for tenure-track faculty, as well as in hiring decisions for faculty with temporary appointments. They provide student perspectives on course satisfaction and are generally not comprehensive assessments of student learning or engagement. Course evaluation scores have been found to correlate moderately with a student’s expected grade in a course rather than their actual grade (Centra & Creech, 1976). Also of note, courses that students perceive to be either too easy or too demanding tend to receive lower evaluation scores (Centra, 2003).
Course evaluation scores can also be influenced by instructor attributes and interactions with other variables (Reid, 2010; MacNell et al., 2014). Despite such limitations, student feedback obtained through course evaluations can be meaningful; still, discernment is needed before relying solely on evaluations to make large-scale changes to a course (Linse, 2017). Ideally, feedback from course evaluations should be considered alongside classroom observations of teaching and instructor reflections to capture a wider range of perspectives.
Below are several recommendations for making meaning of student feedback from closed-ended (e.g., multiple-choice or Likert-scale) and open-ended questions on course evaluations.
Numerical data can show a faculty member where a course stands, from the students’ perspective, on a particular item. A critique of closed-ended questions is that they can reduce student perceptions of a course to numerical values without providing sufficient context, which is one reason to also give students opportunities to respond to open-ended questions. The mean score (or, at Lafayette, the interpolated median) for a particular item on the course evaluation form is often the focal point of analysis; however, the distribution of scores can be even more informative. Scores distributed mostly within the lower range, or spread evenly across the scale, often signal problems in the classroom that should be addressed (Linse, 2017). Scores distributed mostly in the upper range typically do not indicate a major problem with the course on that item from the student perspective.
Below are a few examples.
Consider student responses to the item, “I learned a great deal in this course.” A hypothetical distribution of course evaluation scores is strongly agree = 5%, agree = 10%, somewhat agree = 15%, disagree = 60%, strongly disagree = 10%. Most scores fall within the lower range of the scale. In this scenario, recommended next steps include examining students’ narrative comments and the instructor’s own reflections to understand the underlying reasons for students’ low perceptions of how much they learned. For example, perhaps students did not perceive that they had met the course learning outcomes, or much of the information presented did not appear new or useful. If reasons can be identified, consider changing course content and building students’ awareness of how course activities help them meet the learning outcomes. Administering a mid-semester evaluation the next time the course is offered, both to obtain general feedback and to determine whether students still have a hard time seeing what they have learned, can help assess whether additional changes are needed.
In a second example for the same item, the scores are strongly agree = 60%, agree = 30%, somewhat agree = 9%, disagree = 0%, strongly disagree = 1%. These scores fall mostly in the upper range and do not signal a major problem with this item from the students’ perspective; in other words, most students felt they learned a great deal in the course. The 1% who strongly disagreed appear to be outliers; their responses should be acknowledged but can be treated as rare relative to the other scores.
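The distribution analysis in the two examples above can be made concrete with a short sketch. The function below computes an interpolated median from grouped Likert counts using the standard grouped-data formula with a category width of 1; the exact formula used by Lafayette’s evaluation system may differ, and the counts shown are the hypothetical percentages from the second example, not real data.

```python
def interpolated_median(counts):
    """Interpolated median for ordinal (Likert) data grouped by category.

    counts: dict mapping a category value (e.g., 1 = strongly disagree
    through 5 = strongly agree) to its response count. Uses the standard
    grouped-data formula, treating each category as an interval of width 1
    centered on its value.
    """
    n = sum(counts.values())
    cumulative = 0
    for value in sorted(counts):
        f = counts[value]
        if cumulative + f >= n / 2:
            # Lower boundary of the median category is value - 0.5;
            # interpolate within it by the fraction of cases needed
            # to reach the midpoint of the data.
            return (value - 0.5) + (n / 2 - cumulative) / f
        cumulative += f
    raise ValueError("counts must contain at least one response")

# Hypothetical percentages from the second example in the text.
counts = {5: 60, 4: 30, 3: 9, 2: 0, 1: 1}
print(round(interpolated_median(counts), 2))  # 4.67
```

Note that the interpolated median sits near the top of the scale here, consistent with the text’s reading that most students responded in the upper range; the first example’s distribution yields roughly 2.17, reflecting its concentration in the lower range.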
Questions inviting narrative comments can provide more context and constructive feedback, complementing student responses to the closed-ended questions. One major challenge for the instructor when reviewing open-ended responses is knowing which comments to act upon. Below are steps, drawn from two different sources, for analyzing written comments constructively; they can help minimize the human tendency to focus heavily on outlier negative comments, make meaning of the information, and aid deliberations over what to alter in a course.
As a hypothetical example for the question, “How did the various components of the course contribute to your learning?” the major themes are: “lectures were helpful and informative,” “homework was not helpful,” “readings did not seem relevant,” and “all course materials were great.” Some themes contradict one another; however, a number of students commented on the utility of the homework and reading assignments, warranting a careful review of the course learning objectives and the degree to which the homework and reading activities align with them. The instructor can perform this review, possibly in consultation with a trusted colleague in the discipline. If alignment is lacking, course revision is recommended.
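Tallying coded themes can help keep a single harsh comment in proportion. Below is a minimal sketch, assuming the instructor has already assigned one theme label per comment; the labels and counts are hypothetical, loosely mirroring the example above.

```python
from collections import Counter

# Hypothetical theme codes, one per student comment (illustrative only).
coded_comments = (
    ["lectures helpful"] * 14
    + ["homework not helpful"] * 9
    + ["readings not relevant"] * 7
    + ["all materials great"] * 5
    + ["disliked everything"] * 1  # an outlier negative comment
)

# Counting themes shows which concerns are widespread versus rare,
# countering the tendency to fixate on one strongly negative comment.
for theme, count in Counter(coded_comments).most_common():
    print(f"{theme}: {count}")
```

In this sketch, the homework and readings concerns together outnumber the lone “disliked everything” comment, supporting the text’s advice to act on recurring themes rather than outliers.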
Given the emotion that course evaluations can evoke, reviewing feedback with a trusted colleague or requesting a confidential consultation from CITLS to work through this process can be helpful. If changes are subsequently made to the course based on feedback, letting the students know in a future course what alterations were made can demonstrate that student responses to course evaluations are taken seriously.
In general, end-of-semester evaluations can provide useful feedback on a course to guide change and foster even more productive learning environments for students. Other sources of feedback include classroom observations, instructor reflections after class sessions, mid-course evaluations, and CITLS staff.
Centra, J. A. (2003). Will teachers receive higher student evaluations by giving higher grades and less course work? Research in Higher Education, 44, 495–518. https://doi.org/10.1023/A:1025492407752
Centra, J. A., & Creech, F. R. (1976). The relationship between student, teachers, and course characteristics and student ratings of teacher effectiveness (Project Report 76-1). Educational Testing Service, Princeton, NJ.
Linse, A. R. (2017). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94–106.
MacNell, L., Driscoll, A., & Hunt, A. N. (2014). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291–303. https://doi.org/10.1007/s10755-014-9313-4
Reid, L. D. (2010). The role of perceived race and gender in the evaluation of college teaching on RateMyProfessors.com. Journal of Diversity in Higher Education, 3(3), 137–152.
Zakrajsek, T. (June 2019). Analyzing student end of course written comments. The Scholarly Teacher. Retrieved from: https://www.scholarlyteacher.com/post/analyzing-student-end-of-course-written-comments