…while carrying in mind these 3 Considerations,
2 Action Steps, and 1 “Be Sure Not to Do This” Reminder
“Paul is condescending in feedback on assignments. Has poor written communication skills. Speaks way too softly in class. In-class activities often poorly explained and have unclear purpose. Very poor time management. Class usually ran long only due to bad organization. Those who can do, those that can’t teach. Does not work so well when what you can’t do is teach!”
“Paul was great. He was accessible in person and electronically more than almost any other teacher I have had. Paul provided detailed thought and feedback that was valuable and constructive.”
These two student comments were for the same class. So, if you were reading these in your own set of feedback comments, you would probably be wondering: How can I begin to make sense of these contrasts? What should I do?
Here are a few things to consider (using the 3-2-1 format, an active learning strategy for student learning, as well as an organisational tool for this post):
3 Things to Consider:
- A recent TILT post pointed to the varying opinions about student ratings of teaching, which an IDEA paper summary of the research summarises in this way:
“There are probably more studies of student ratings than of all of the other data used to evaluate college teaching combined. Although one can find individual studies that support almost any conclusion, for many variables there are enough studies to discern trends. In general, student ratings tend to be statistically reliable, valid, and relatively free from bias or the need for control, perhaps more so than any other data used for faculty evaluation.”
Importantly, even the most reliable, valid student rating instrument must be only one of multiple sources of data gathered about teaching – whether regarding an individual’s classroom or an entire curriculum. The student ratings instrument must be used in combination with multiple sources of information if the goal is to make a judgment about teaching effectiveness, or to discern ways teaching works well or could work better.
More important still, student ratings must be interpreted. We should not confuse a source of data with the evaluators who use it – in combination with other kinds of information – to make judgments about an instructor’s teaching effectiveness.
- Wildly contradictory ratings, especially in written comments (as above), do not mean that you should dismiss them. Indeed, as Karron Lewis observes, many instructors find the qualitative comments to be more helpful and informative than the scaled/quantitative ratings. Lewis’s article “Making Sense of Student Written Comments” recommends sorting the comments according to the numerical scores reported on the same forms; in this way, you will be able to see more clearly which students are satisfied or dissatisfied. You will then have more context for interpreting the comments: for instance, knowing that satisfied students share some of the same concerns as dissatisfied students (for example, a number of them write that the content was covered too quickly) may help you to consider what changes you’d like to make when you teach again.
- One of the principles in How Learning Works: Seven Research-Based Principles for Smart Teaching (Ambrose et al.) observes that “Students’ motivation determines, directs, and sustains what they do to learn.” Student motivation has an emotional component: how students feel about the subject will affect their learning. Because of that emotional aspect, it is not surprising that student ratings of teaching elicit strong feelings from both students and instructors.
2 Things To Do:
- Meet with someone to talk about your ratings. Researchers conducting a meta-analysis of student ratings feedback (Penny and Coe) and consultants analysing professional practice (Stanford Newsletter) agree that interacting with appropriately trained teaching peers and teaching consultants is vital: “the most successful consultation may result when teachers have opportunities to interact with and draw upon the knowledge and experience of their more knowledgeable colleagues.” Whether working one-to-one or in small groups with an experienced peer or consultant, it is important to talk with others as part of the “development of a collaborative learning culture in which there is sharing and openness about teaching, as is consistent with current reform efforts in higher education” (both quotes, Penny and Coe).
- Schedule a time in your syllabus to gather midterm feedback. As these articles note (z.umn.edu/sgidarticle), gathering midterm feedback, then discussing the analysed data with students, can lead to improved ratings. (And Center for Educational Innovation consultants can help individuals or departments learn to incorporate the Student Feedback through Consensus process into teaching plans as a means of gathering learning- and teaching-related data from students’ perspectives while they’re in the midst of learning.)
1 Thing Not To Do:
- Ignore them. After all, they are a source of information about your teaching. But what about the really mean ones, like the one above? Yes, you can ignore those, since they often don’t provide much insight into your teaching. There’s not much to do about a comment like: “Those who can do, those that can’t teach. Does not work so well when what you can’t do is teach!” Sometimes, a comment like that says more about that particular student than about your teaching.
3-2-1 format: can be used as part of preparing for and/or reporting out of an in-class small- or large-group discussion; more about these – and other uses – at http://www1.umn.edu/ohr/teachlearn/tutorials/active/strategies/index.html#321
Penny, Angela R., and Robert Coe. “Effectiveness of Consultation on Student Ratings Feedback: A meta-analysis.” Review of Educational Research 74.2 (2004): 215-253. http://rer.sagepub.com/content/74/2/215.full.pdf
Stanford University Newsletter on Teaching. “Using Student Evaluations to Improve Teaching.” http://web.stanford.edu/dept/CTL/Newsletter/student_evaluations.pdf