Category Archives: Article Summary

Evaluating: Assessing and Enhancing Teaching Quality

Submitted by Fan Zhang

Article Reference

Beran, T. N., & Rokosh, J. L. (2009). The consequential validity of student ratings: What do instructors really think? Alberta Journal of Educational Research, 55(4).

Felton, J., Koper, P. T., Mitchell, J., & Stinson, M. (2008). Attractiveness, easiness and other issues: Student evaluations of professors on Ratemyprofessors.com. Assessment & Evaluation in Higher Education, 33(1), 45-61.

Summary of Article

Summary 1
The Consequential Validity of Student Ratings: What do Instructors Really Think?
The purpose of this study was to examine the consequential validity of student ratings of instruction (SRI), according to instructors at a major Canadian university. Results indicate that most instructors reported concerns about the SRI.
The problems:
Poor design of the instrument (70%)
The survey produced a limited amount of useful information. Although some items target specific areas of instruction, they may not be precise enough for instructors to determine how to improve those areas.
Many instructors indicated that the items were general and not applicable to their style of teaching or course design.
Procedural difficulties (56%)
Many instructors find that the SRI is administered too frequently, resulting in “student rating fatigue.”
Myth-based issues (31%)
Some instructors consider the SRI to be an unfair measure, dismissing it as a “popularity contest,” or believe that giving out higher grades will result in better SRI scores.
Ratings are biased (29%)
Many instructors believe students’ evaluations are biased by several factors, including course difficulty, instructor popularity, grading leniency, prior student interest, and class size, although research has consistently shown that most such background characteristics have a negligible effect on student evaluations.
Negative effect on instructors/instruction (11%)
A number of instructors reported feeling that the student rating procedure leads instructors to lower their standards to avoid receiving low ratings.

This study revealed the importance of consistency between what instructors consider to be quality teaching and the measures used to assess it.

Summary 2
Attractiveness, easiness and other issues: student evaluations of professors on Ratemyprofessors.com

Ratemyprofessors.com is a website with the motto ‘Where the students do the grading.’ It is not affiliated with any institution of higher education or accrediting agency. Since 1999, it has received nearly six million postings rating more than 750,000 instructors at more than 6000 schools.
At the time of this study, students could voluntarily rate their professors on the website for easiness, helpfulness, clarity, overall quality, and hotness. It is worth noting that today only two categories are highlighted on the website: level of difficulty and overall quality.
This study included data from 6852 professors at 369 institutions in the United States and Canada. The authors found a significant positive correlation between Quality and Easiness (r = 0.62), and professors with high Easiness scores usually received student comments regarding a light workload and high grades. Based on these findings, the authors argue that these self-selected evaluations on Ratemyprofessors.com cast considerable doubt on the usefulness of in-class student opinion surveys for examining the quality and effectiveness of teaching.

Discussion Questions

  1. Do you think SRI can be an accurate representation of the quality of a course?
  2. What changes would you make so that the SRI can be more helpful to the growth of an educator?
  3. Do you think websites like “rate my professor” influence how professors teach today?

The effectiveness of online and blended learning: A meta-analysis of the empirical literature

Submitted by Polina Beloborodova

Article Reference

Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record, 115(3), 1–47.

Summary of Article

Background:
Earlier research on various forms of distance learning concluded that these technologies do not differ significantly from regular classroom instruction in terms of learning outcomes. However, today the increased capabilities of web-based applications and collaboration technologies and the rise of blended learning models combining web-based and face-to-face classroom instruction have raised expectations for the effectiveness of online learning.

Purpose:
This meta-analysis was designed to produce a statistical synthesis of studies contrasting learning outcomes for either fully online or blended learning conditions with those of face-to-face classroom instruction.

Participants:
The types of learners in the meta-analysis studies were about evenly split between students in college or earlier years of education and learners in graduate programs or professional training. The average learner age in a study ranged from 13 to 44.

Conditions:
The meta-analysis was conducted on 50 effects found in 45 studies contrasting a fully or partially online condition with a fully face-to-face instructional condition. Length of instruction varied across studies and exceeded one month in the majority of them.

Research Design:
The meta-analysis corpus consisted of (1) experimental studies using random assignment and (2) quasi-experiments with statistical control for preexisting group differences. An effect size was calculated or estimated for each contrast, and average effect sizes were computed for fully online learning and for blended learning. A coding scheme was applied to classify each study in terms of a set of conditions, practices, and methodological variables.

Results:
The meta-analysis found that, on average, students in online learning conditions performed modestly better than those receiving face-to-face instruction. The advantage over face-to-face classes was significant in those studies contrasting blended learning with traditional face-to-face instruction but not in those studies contrasting purely online with face-to-face conditions.

Conclusions & Recommendations:
Studies using blended learning also tended to involve additional learning time, instructional resources, and course elements that encourage interactions among learners. This confounding leaves open the possibility that one or all of these other practice variables contributed to the particularly positive outcomes for blended learning. Further research and development on different blended learning models is warranted. Experimental research testing design principles for blending online and face-to-face instruction for different kinds of learners is needed. The meta-analysis findings do not support simply putting an existing course online, but they do support redesigning instruction to incorporate additional learning opportunities online while retaining elements of face-to-face instruction.

Discussion Questions

  1. In your opinion, what will higher education look like in 2050? To what degree will it migrate online? What teaching methods and technologies will we use?
  2. What will be the role of the instructor? What skills should we develop now to fulfill this role in the future?
  3. Coming back to the present, which of VCU’s technology resources would you like to try in your teaching? How would you use them?

Interactive lecturing: review article & pilot study

Submitted by Polina Beloborodova

Article Reference

Gülpinar, M. A., & Yeğen, B. Ç. (2005). Interactive lecturing for meaningful learning in large groups. Medical Teacher, 27, 590-594.
Steinert, Y., & Snell, L. S. (1999). Interactive lecturing: Strategies for increasing participation in large group presentations. Medical Teacher, 21, 37-42.

Summary of Article

(1) Summary of Steinert and Snell (1999): review of interactive lecturing techniques
Interactive lecturing is a set of techniques aimed at increasing participation of the audience in the lecture:
• Presenter <=> students
• Students <=> material / content
• Students <=> students
New role of teacher: instructor -> facilitator / coach
Why is it good for learning?
• Active involvement
• Increased attention & motivation
• ‘Higher’ level of thinking (analysis, synthesis, application, problem solving, etc.)
• Feedback to teacher & students
• Increased teacher and student satisfaction
Why don’t teachers use it?
• Fear of losing control and of not covering all the material
• Contextual factors (content, physical setting, time constraints, audience)
Techniques:
1. Breaking the class into smaller groups
2. Questioning the audience:
– straightforward questions
– brainstorming
– rhetorical questions
– surveying the class
– quizzes & short answers
3. Using audience responses
4. Using cases & examples
5. Written materials (handouts)
6. Debates, reaction panels, & guests
7. Simulations & role plays
8. Multimedia (video, audio, etc.)
How to get interactive?
• Take risks & overcome fears
• Prepare & practice
• Set clear objectives, cut down on material (less is more)
• Prepare students to get involved
• Be flexible, but not too flexible

(2) Summary of Gülpinar and Yeğen (2005): pilot study on interactive lecturing
Aim: to test a ‘structured integrated interactive’ two-hour block lecture
Objectives:
• effects of prior knowledge on learning & evaluation of the lecture
• effects of a well-structured advance organizer on learning & evaluation
• impact of clinical integration on the comprehension of basic sciences
Lecture outline:
• Using the same template across the lecture: (1) for gradually adding details, (2) for introducing associated pathologies
• Interactive task every 10-15 min
• Using clinical cases with structured evaluation charts
Measures:
1. Pretest: evaluation of prior knowledge (pre-lecture test)
2. Posttest: problem solving skills (performance on cases, evaluated by instructor)
3. Lecture evaluation questionnaire
Sample: 93 students at a large Turkish university
Results:
• Evaluation: 92% successful, mostly positive comments
• Interactivity: 43.9% evaluated as interactive, 35.7% as partially interactive
• Issues: content wasn’t limited, fast pace
• 90% showed acceptable performance on evaluating cases (problem solving)
• Significant correlation (r = .2) between pre-lecture test scores and case scores in one of the two lecture topics
Conclusions:
• Interactive lecturing facilitates more meaningful learning in large groups
• Higher order thinking and development of problem solving skills can be achieved to some extent with interactive lecturing
• Prior knowledge is important for learning processes and learning outcomes

Discussion Questions

  1. Which interactive lecturing techniques would work best for the course that you would like to teach in the future? Provide a few examples.

    Summary of discussion:
    – Working in small groups
    – Asking students to repeat what the instructor said a while ago
    – Asking questions
    – Using technology for surveys

  2. Which interactive lecturing techniques would work better for younger audiences? Which ones would be better for older audiences?

    Summary of discussion:
    – It’s not about the choice of techniques, but their adaptation to various audiences (e.g., organizing group work in a more structured way for undergraduate students and a less structured way for graduate students)
    – Other factors to consider: institutional setting (university vs. community college), familiarity with interactive teaching

  3. What are possible negative consequences of interactive lecturing?

    Summary of discussion:
    – Too much interactivity can lead to losses in material covered and can be annoying for the audience
    – Technology has to be checked before the lecture
    – Students may disclose too personal information
    – Lecture may go out of control (e.g. students may start discussing irrelevant topics in groups)
    – Students may give wrong answers and examples

Feeding forward from summative assessment: The Essay Feedback Checklist as a learning tool

Submitted by Samantha Mladen

Article Reference

Wakefield, C., Adie, J., Pitt, E., & Owens, T. (2014). Feeding forward from summative assessment: The Essay Feedback Checklist as a learning tool. Assessment & Evaluation in Higher Education, 39(2), 253-262.

Summary of Article

Aim: Investigate the use of the Essay Feedback Checklist (EFC) as a strategy to provide feedback to students that improves future performance on other forms of assessment

Method: 104 second-year undergraduate sport studies students were recruited and randomized to a feedback-as-usual condition or an experimental condition receiving feedback via the Essay Feedback Checklist on a 2,500-word essay. The EFC requires students to rate their own performance prior to submission of their assignment. The same checklist is then used by assessors, and any significant discrepancies in scores are explained in additional feedback comments to the student. Students can also request additional feedback on specific domains. Randomization condition was assessed as a predictor of performance on a future assignment in the same subject area but of a different format (knowledge test). Four students volunteered to take part in a subsequent focus group about the EFC process.

Results:
Repeated measures ANOVA demonstrated a significant group × assessment effect: students who received standard feedback decreased in score from 49.29 ± 12.06 to 44.00 ± 15.08, while students receiving EFC feedback increased from 50.11 ± 11.51 to 56.85 ± 17.74 on the subsequent assignment. Qualitative feedback revealed themes of advantages and disadvantages of the EFC, method of self-assessment, and perceived usefulness for future assessments. Students enjoyed the individualized nature of the feedback, especially that assessors responded to the types of feedback specifically requested by students. Some students felt that the EFC hurt their morale, especially when they disagreed with scores given by assessors or did not understand the terminology used by assessors or the rubric. A primary benefit was improvement in students’ learning, including taking time to correct their assignment before turning it in and adjusting for future assessments.

Discussion: The EFC demonstrated success in “feeding forward” learning. Students appreciated many aspects of the procedure but also raised some concerns, including morale and trust between students and assessors. These challenges offer opportunities for future improvements to the EFC.

Discussion Questions

  1. What is your goal in providing feedback to students? How does this influence what form of feedback you offer?
  2. How could the EFC be implemented in courses that don't use essays? How could the principles of the EFC be adapted for other types of assignments?
  3. Focus group participants in this study indicated that the form negatively impacted their morale. How could this be avoided, while still engaging students in the feedback process?

Online Academic Integrity

Submitted by Samantha Mladen

Article Reference

Mastin, D., Peszka, J., & Lilly, D. (2009). Online academic integrity. Teaching of Psychology, 36(3), 174-178.

Article DOI

10.1080/00986280902739768

Summary of Article

Aim: Investigate whether academic pledges reduce the level of cheating in online assignments, and assess the overall rate of cheating in online assignments

Method: 439 introductory psychology students were recruited over three semesters. Participants were told that they would be completing an online motor task and that their earned extra credit would increase based on their score on the motor task. Participants were told that the computer program would not track their correct responses, and thus they were asked to self-report their correct response total.
Students were randomized to three experimental conditions: no pledge, check-mark pledge, and written honor pledge.

Results: 361 participants (82.2%) accurately reported their performance, 16 (3.6%) underreported their performance, and 62 (14.1%) overreported their performance.
Mean magnitude of cheating in the entire sample was 0.39/10 points (SD = 1.52), but the mean among those who cheated was 3.08/10 points of over-reporting (SD = 2.75).
The honor pledge condition had no effect on the rate of cheating in the overall sample or in the cheating subset. Participants were 2.06 times more likely to cheat at the end of the semester than at the beginning, but time of semester did not affect the magnitude of cheating. When the magnitude of overreporting required to be considered cheating was increased (2 points out of 10 instead of 1 point out of 10), the percentage of students cheating dropped to 8.0%. This change was made to account for the presence of 13 students underreporting by 1 point; the authors allowed a larger margin of error in reporting before labeling over-reporting as cheating.

Discussion: Research is still needed to determine the actual rates of cheating on online assignments, and effective strategies to reduce rates of cheating. These efforts are becoming more important as more institutions and professors institute online courses and assessment methods. Though the external validity is not perfect, since most online exams are not self-report, the lack of significance of the honor pledge condition is troubling.

Discussion Questions

  1. Would you include a version of an honor pledge in your courses? Has your opinion changed as a result of reading this study?
  2. Forsyth cites a study claiming that more students say they would cheat online than the rates at which they actually do. What does this tell us about the way students approach online learning? How can we use this information to try to prevent cheating?
  3. Online learning brings with it tremendous opportunities to increase collaboration and team-based learning among students. How would you balance this opportunity with the reality that it may make cheating easier for students?

TED Ed – Lessons and Series

Submitted by Margaret Kneuer

Article Reference

https://ed.ted.com/

Summary of Article

For the technology activity, I decided to explore TED-Ed Lessons and Series for video content related to psychology topics, specifically for social or relationship psychology courses. I found the TED-Ed website to have more organized features for video lesson planning than YouTube because of the added features for creating or using lessons. After creating an account, I had the ability to design lessons that supplement the curriculum in different psychology courses or to use the premade lessons provided. I selected 11 videos between 5 and 20 minutes in length, and the TED-Ed website allows me to save and edit lessons for students. I also had the chance to crop a video for students, if needed, and to provide an overview with additional multiple choice or short answer questions to check for understanding. The website has an option for instructors to require student log-in to make sure students watch and answer the questions. There is also a section of discussion prompts for students, which could help facilitate classroom discussion about the different topics. I have the option to add or remove videos, or to adjust them depending on the course, to better support student learning in the future. This technological tool is designed to help instructors complete lesson plans quickly and in an organized fashion. As a final collaborative project, I could also assign a partner video project, modeled after the videos used in the course, in which students create their own version of a TED Talk, an educational video, or a response to a video and upload it. Overall, I found the TED-Ed lesson planning feature to be beneficial for organizing videos and related questions for students, which would supplement the readings assigned for a specific course in the future.

Discussion Questions

  1. 1. Dan Gilbert (2004) “The Surprising Science of Happiness”
    https://www.ted.com/talks/dan_gilbert_asks_why_are_we_happy
    Decade Follow Up Article on 2004 TED Talk
    https://blog.ted.com/ten-years-later-dan-gilbert-on-life-after-the-surprising-science-of-happiness/
    2. Robb Willer (2017) “How to Have Better Political Conversations”
    https://www.ted.com/talks/robb_willer_how_to_have_better_political_conversations
    3. Amy Cuddy (2012) “Your Body Language May Shape Who You Are”
    https://www.ted.com/talks/amy_cuddy_your_body_language_shapes_who_you_are
    Decade Follow Up Article on 2012 TED Talk
    https://ideas.ted.com/inside-the-debate-about-power-posing-a-q-a-with-amy-cuddy/
    4. Philip Zimbardo (2008) “The Psychology of Evil”
    https://www.ted.com/talks/philip_zimbardo_on_the_psychology_of_evil
    5. Freeman Hrabowski (2013) “Four Pillars of College Success in Science”
    https://www.ted.com/talks/freeman_hrabowski_4_pillars_of_college_success_in_science
    6. Martin Seligman (2008) “The New Era of Positive Psychology”
    https://www.youtube.com/watch?v=9FBxfd7DL3E
    7. Robert Sternberg (2014) “Successful Intelligence”
    https://www.youtube.com/watch?v=ow05B4bjGWQ
    8. Walter Mischel (2015) “The Marshmallow Test”
    https://www.youtube.com/watch?v=XcmrCLL7Rtw
    9. Elizabeth Loftus (2013) “How Reliable is Your Memory?”
    https://www.youtube.com/watch?v=PB2OegI6wvI
    10. Barbara Fredrickson (2011) “Positive Emotions Open our Mind”
    https://www.youtube.com/watch?v=Z7dFDHzV36g
    11. Robert Cialdini (2012) “Science of Persuasion”
    https://www.youtube.com/watch?v=cFdCzN7RYbw
  2. (Option to change the videos selected)
  3. (Refer to the TED-Ed website to create your own lesson plan)

Should Students Have the Power to Change Course Structure? 

Submitted by Katrina Markowicz

Article Reference

McDonnell, G. P., & Dodd, M. D. (2017). Should students have the power to change course structure? Teaching of Psychology, 44(2), 91-99.

Article DOI

10.1177/0098628317692604

Summary of Article

Purpose: Does making small changes to a course in response to four course evaluations improve course performance and course satisfaction, compared to the previous semester’s class, where this wasn’t implemented?

Method:
– Perception class with 73 undergraduates
– Students completed four course feedback forms (CFFs)
CFF 1 had two sections: 1) impressions of the course and the instructor, and 2) how much the student wanted to change certain aspects of the course (which were brainstormed together).
CFFs 2-3 had three sections: 1) impressions of the course and the instructor, 2) additional changes they wanted to see in the course, and 3) whether the changes improved their course satisfaction (yes/no).
CFF 4 also asked about the effectiveness of and satisfaction with the course changes, in addition to impressions of the course and semester.
– Other measures: performance data across three tests and an end-of-semester evaluation
– The instructor shared the results of the CFFs with the students

Main Results:
– Changes made were perceived as effective and improved quality of class
– Students rated the instructor higher in the CFF semester than in the non-CFF semester
– Students in the CFF semester performed better on average than students in the non-CFF semester

Author’s conclusions: Midsemester feedback should provide students an opportunity to change the course, as it improves the student learning environment.

Discussion Questions

  1. Given the results of the study and your own attitudes towards mid-semester feedback (MSF), would you incorporate MSF into your teaching practice? What are the benefits of MSF? Do you see any potential downsides (e.g., teaching self-efficacy)? If so, how would you protect yourself from these?
  2. Much of the article focused on how to use feedback to make changes to the course: How comfortable are you with making course changes mid-semester? What types of activities or lecturing approaches would you change? What would you be unwilling to change (i.e., what is too much of an ask)? How would you go about collecting this data and ensuring you made the appropriate changes?
  3. The Forsyth (2016) reading for this week asserts that student feedback is reliable and valid, and that student ratings do not change based on whether the feedback is end-of-semester or mid-semester (given that there are no course changes). Since course changes did occur in this study, do you think that the subsequent three evaluations are reliable and valid without rater bias (i.e., halo effects; monitoring effects)? What could have been done in the study methods to confirm or rule out rater bias, if there was any?

Team-Based Learning Improves Course Outcomes in Introductory Psychology

Submitted by Katrina Markowicz

Article Reference

Travis, L. L., Hudson, N. W., Henricks-Lepp, G. M., Street, W. S., & Weidenbenner, J. (2016). Team-based learning improves course outcomes in introductory psychology. Teaching of Psychology, 43(2), 99-107.

Article DOI

10.1177/0098628316636274

Summary of Article

Purpose:
-To compare two styles of teaching, team-based learning and lecturing, and examine the influence of these teaching styles on student satisfaction and exam performance.

Method:
-Introductory psychology classes (1,130 undergraduate students, 29 sections, 15 graduate instructors)
-14 out of 15 instructors randomly assigned to condition (team-based learning or lecture)
-Team-based learning (TBL) condition: 12 class sessions dedicated to completing a TBL module. Each module included out-of-class preparation (reading 10 pages of the textbook); an in-class quiz taken individually; the same quiz taken as a team, with feedback provided; and team application activities
-Lecture condition: instructors were not allowed to implement team-based learning quizzes or activities and taught primarily via lecture
-Each condition completed the same midterm and final (both multiple choice), and a course evaluation survey at two time points (mid-semester and end-semester)
-Other measures included: perceptions of TBL, preference for TBL over lecture, positivity towards TBL, and involvement in TBL.

Main Results:
-Students in TBL condition performed moderately better on both exams than students in lecture condition.
-Results seemed to show that these gains were related to content covered in TBL modules.
-There were no differences in course satisfaction between groups.
-TBL students expressed positive attitudes towards the activities but preferred lecture-style learning over TBL.

Author’s conclusions: TBL is an effective method of learning that does not negatively impact course satisfaction.

Discussion Questions

  1. The two outcome variables for this study were student satisfaction and exam performance. Using Bloom’s taxonomy, specifically the cognitive domain (i.e., knowledge, comprehension, application, analysis, synthesis, evaluation), what other outcomes would be important to assess if this study were to be replicated? What types of assignments could be integrated into the team-based learning approach to foster learning in these other domains?
  2. Would you use the team-based learning approach in your own teaching and how (e.g., to supplement lectures)? Did you like it? What are the pros and cons?
  3. How do you feel about the findings about the increase in exam scores for the team-based learning students, but the preference for lectures? How does this change or not change your view on what types of activities to incorporate in with your future lectures? How does the information presented in other readings from this week supplement your viewpoint?

Taking the Testing Effect Beyond the College Freshman: Benefits for Lifelong Learning

Submitted by Jeremy Barsell

Article Reference

Meyer, A. D., & Logan, J. M. (2013). Taking the testing effect beyond the college freshman: Benefits for lifelong learning. Psychology and Aging, 28, 142-147.

Summary of Article

Using testing as a learning tool has been well-documented in young populations. Testing as a learning strategy has been linked to improvements in long-term memory and recall. Meyer and Logan (2013) investigate whether these testing effects apply beyond college students. Comparing three groups (university students aged 18-25, young community adults aged 18-25, and middle-aged to older community adults aged 55-65), the authors found evidence of increased learning due to testing. All three groups participated in a study phase covering four study topics, followed by a distractor phase of multiplication problems, then a recognition test phase for two of the study topics and a restudy phase for the other two. Participants then went through another distractor phase before taking a final cued-recall test. Findings suggested that testing significantly improved learning and that there were few differences between the young adult and older community groups. Implications include the use of testing beyond the academic setting, especially in the context of careers or jobs.

Discussion Questions

  1. Testing is heavily associated with being in school. Based on the results of this study, testing can be an effective tool for learning. As such, how would we apply this beyond academia? Would this be effective for promoting lifelong learners? In what other contexts could testing be effective?
  2. Other research has shown that testing itself can be full of bias. For example, there is evidence that the SAT is biased against racial minorities and for those with lower SES. If testing is truly better for learning, how can we reconcile these biases with testing?
  3. Are there other ways to demonstrate learning besides testing students? The authors of this article would suggest that testing is both an effective learning and studying tool. How can we promote positive attitudes towards test-taking, and should we do this? Or are there better ways to “test” learning?

Moral identities, social anxiety, and academic dishonesty among American college students

Submitted by Hadley Rahrig

Article Reference

Wowra, S. A. (2007). Moral identities, social anxiety, and academic dishonesty among American college students. Ethics & Behavior, 17, 303-321.

Article DOI

10.1080/10508420701519312

Summary of Article

Instructors can employ methods known to minimize cheating behavior; however, little is known about the motivations that influence students to cheat. Wowra (2007) posits that academic integrity is guided by two factors: (a) identification with moral principles and (b) sensitivity to social evaluation. Seventy undergraduate students enrolled in an introductory psychology course completed the Integrity Scale, and students scoring in the lower tercile (expedient students) and upper tercile (principled students) were recruited for further evaluation. Participants completed self-report measures of social phobia and antisocial behavior (subscales measuring lying, stealing, cheating, broken promises, and aggression). As expected, students who reported greater relative centrality of moral identity reported fewer instances of academic dishonesty. Results demonstrated a weak but significant positive correlation between cheating behavior and social anxiety. Interestingly, students in the expedient group reported four times the number of social phobia symptoms relative to the principled student cohort.

Discussion Questions

  1. The social anxiety hypothesis of academic cheating (Wowra, 2007) states that cheating often occurs when students’ motivation to appear academically competent outweighs their personal integrity. From your experience with students, do you think social factors of academic performance influence cheating behaviors?
  2. Wowra (2007) provides evidence from experiments in which participants often cheated on bogus tests to avoid appearing less than average. As an instructor, how can you minimize low-achieving students’ preoccupation with appearing less than average?
  3. What classroom factors do you think influence justification of ethical violations in those with peripheral moral identity?