Tag Archives: evaluation

Should Students Have the Power to Change Course Structure? 


Submitted by Katrina Markowicz

Article Reference

McDonnell, G. P., & Dodd, M. D. (2017). Should students have the power to change course structure? Teaching of Psychology, 44(2), 91-99.

Article DOI

10.1177/0098628317692604

Summary of Article

Purpose: Does giving students the power to change small details of a course through four course evaluations improve course performance and course satisfaction, compared to the previous semester's class where this wasn't implemented?

Method:
– Perception class with 73 undergraduates
– Students completed four course feedback forms (CFFs)
– CFF 1 had two sections: 1) impressions of the course and the instructor, and 2) how much the student wanted to change certain aspects of the course (which were brainstormed together).
– CFFs 2 and 3 had three sections: 1) impressions of the course and the instructor, 2) additional changes they wanted to see in the course, and 3) whether the changes improved their course satisfaction (yes/no).
– CFF 4 asked about the effectiveness of and satisfaction with the course changes, in addition to impressions of the course and semester.
– Other measures: performance data across three tests and an end-of-semester evaluation
– The instructor shared the results of the CFFs with the students

Main Results:
– Changes made were perceived as effective and improved quality of class
– Students rated the instructor higher in the CFF semester than in the non-CFF semester
– Students in the CFF semester performed better on average than students in the non-CFF semester

Author’s conclusions: Midsemester feedback should provide students with an opportunity to change the course, as doing so improves the student learning environment.

Discussion Questions

  1. Given the results of the study and your own attitudes towards mid-semester feedback (MSF), would you incorporate MSF into your teaching practice? What are the benefits of MSF? Do you see any potential downsides (e.g., teaching self-efficacy)? If so, how would you protect yourself from these?
  2. The article focused on how to use feedback to make changes to the course: How comfortable are you with making course changes mid-semester? What types of activities or lecturing approaches would you change? What would you be unwilling to change (i.e., what is too much of an ask)? How would you go about collecting this data and ensuring you make the appropriate changes?
  3. The Forsyth (2016) reading for this week argues that student feedback is reliable and valid, and that student ratings do not change based on whether the feedback is collected at the end of the semester or mid-semester (given that there are no course changes). Since course changes did occur in this study, do you think the subsequent three evaluations are reliable and valid, free of rater bias (e.g., halo effects; monitoring effects)? What could have been added to the study methods to confirm or rule out rater bias, if there was any?

What’s in a name: Exposing gender bias in student ratings of teaching


Submitted by Ruben Martinez

Article Reference

MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40, 291-303.

Article DOI

10.1007/s10755-014-9313-4

Summary of Article

Student course evaluations are considered integral indicators that follow faculty throughout their teaching careers. This article explores the extent to which students differ in rating course evaluations for male and female instructors of an online course. Instructor gender was falsified throughout the course. After the course was completed, students completed evaluations consisting of 15 Likert-scale questions, including questions about professionalism, knowledge, respect, and warmth. One additional (cool) aspect of this study is that the authors performed comparisons not only on perceived gender (based on the falsified information) but also on actual gender, providing a nice comparison point. The authors found no difference in ratings by actual gender but did find significant differences by perceived gender, such that “male” instructors received higher (better) scores on all items (seven of which were significant differences). This study helps to delineate the relation between gender and evaluation in university settings, suggesting that differences in evaluation are a result of bias rather than actual differences in teaching style or characteristics by gender.

Discussion Questions

  1. What other biases could be looked at and studied with this paradigm?
  2. What, if anything, can be done to address this problem?
  3. As gender identity comes into the limelight, what role will this bias/effect have on gender non-conforming, transgender, and non-binary instructors? And what can be done to support these instructors?