Stitching The Diss*: Researching Social Media Use in Higher Education Classrooms

As part of my dissertation, I need to discuss how social media is currently being used (and researched) in higher education settings.  I had hoped to search “connected learning” and higher education and be done with it, but as we can tell from this post, “connected learning” wasn’t going to cut the mustard for finding all the articles in the higher ed world.

So, given that my research question revolves around the assessment of the social learning processes that occur on social media platforms, why not search for:

“Social Media” AND “Higher Education OR Universities”?

Once again, I used ERIC, Academic Search Complete and Education Research Complete (both from the EBSCO aggregator).

I found a total of 311 articles.  Once I removed:
(1) Repeats
(2) University use for recruitment, admissions, or communication with the community at large
(3) Non-academic use by students
(4) Non-instructional use by faculty
(5) Literature reviews, expert opinions, and theoretical position papers

I was left with 64 articles that described or studied class-related use of social media in graduate and undergraduate settings.
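The screening steps above (dedupe, then apply exclusion criteria 2–5) can be sketched as a small script. This is a minimal illustration of the workflow, not the actual process used; the field names, exclusion labels, and sample records are hypothetical, since EBSCO exports were screened by hand.

```python
# Minimal sketch of the article-screening workflow described above.
# Field names and exclusion labels are hypothetical illustrations,
# not the actual data exported from ERIC or EBSCO.

EXCLUDE_REASONS = {
    "recruitment/admissions/community",   # criterion (2)
    "non-academic student use",           # criterion (3)
    "non-instructional faculty use",      # criterion (4)
    "review/opinion/position paper",      # criterion (5)
}

def screen(records):
    """Deduplicate by normalized title, then drop excluded records."""
    seen_titles = set()
    kept = []
    for rec in records:
        title = rec["title"].strip().lower()
        if title in seen_titles:
            continue  # criterion (1): repeats
        seen_titles.add(title)
        if rec.get("reason") in EXCLUDE_REASONS:
            continue  # criteria (2)-(5): out of scope
        kept.append(rec)
    return kept

sample = [
    {"title": "Twitter in the Lecture Hall"},
    {"title": "Twitter in the Lecture Hall"},  # duplicate hit
    {"title": "Facebook for Admissions",
     "reason": "recruitment/admissions/community"},
]
print(len(screen(sample)))  # only the first record survives screening
```

In practice the "reason" tags would come from a manual read of each abstract; the point is only that the 311 → 64 reduction is a reproducible pipeline of one dedupe pass plus four exclusion filters.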

Here is what’s being studied and called “social media” in the educational research literature: 

“Other” includes: podcasts (3), mobile devices such as iPods or smartphones (3), Pinterest (1), Yammer (1), and homegrown, closed private social networking sites (4).

More useful information: 

  • The vast majority of these are individual case studies describing how an instructor integrated social media platforms into their classroom. 
  • Most describe digitally-enhanced face-to-face classes, although fully online courses are represented in small numbers.
  • Many of the studies came from business, music, library, or general education contexts with some health sciences, pre-service teachers, and humanities sprinkled in.   
  • Almost all studies focus on instructor perceptions of the technology, student satisfaction, or student perceptions of the value added by social media.
  • This information was captured through end-of-class surveys, interviews, and focus groups.
  • Most of the remaining were phenomenological studies meant to capture the student experience through qualitative content analysis of digital artifacts. 
  • Findings were almost ubiquitously positive. Some of the more interesting insights include:
    • Evans, C. (2014). Twitter for teaching: Can social media be used to enhance the process of learning? British Journal of Educational Technology, 45(5), 902-915. – This was a virtual ethnographic observation and qualitative/quantitative content analysis of a class in which Twitter was incorporated for the purpose of building a learning community. It found that the amount of Twitter use was positively correlated with student engagement but not with class attendance. Course-related tweets were entirely separate from any relationship-building between tutors and students. Students used Twitter well for structured course-related activities but failed to use any of the interactive mechanisms available (mentioning, replying…).
    • Lin et al. (2013) – Students enjoy lurking and consuming tweets, but do not care to actively participate. Conclusion: scaffolding and education are needed to encourage students to participate.
    • Novakovich and Long (2013) suggest that even though the quality of student feedback on blogs wasn’t great, the act of providing it increased engagement and time on task. 
    • Pohl et al. (2012). Enhancing the Digital Backchannel Backstage on the Basis of a Formative User Study. International Journal of Emerging Technologies in Learning, 7(1), 33-41. – Suggests that the use of Twitter during large lecture-hall classes increases students’ skills in asking critical or higher-level questions. 
    • Jacquemin et al. (2014) – Twitter was perceived by students and faculty as too obtuse for formal discussions, but it provided a wonderful hub for linking course information to news, the community, and the real world.

Caveats:

I know for a fact that there’s a lot of juicy information under the keywords “blogging,” “microblogging,” and “Twitter,” so, no, I do not think I’ve captured everything with this search. However, there is some interesting information here. Until next time…

*What is “Stitching The Diss?” I’m enrolled in a doctoral seminar that’s meant to help us complete the first three chapters of our dissertation.  The instructor, Paul Gerber, has suggested that the literature review is best contemplated in little nuggets (he said segments, but I work at #VCUALTLab, so I say nuggets) lest we get overwhelmed and confused by the task at hand.  Ok.  In an attempt to stay pleasantly overwhelmed and only as confused as can trigger creative productivity, I’ve opted to blog about each search I do so that I might stay engaged and increase my time on task.  After I blog about it, I “stitch” or “connect” it all together.  Stitching the diss. In other words, I’m hoping to learn. 

Stitching the Diss: Research on Assessment in Higher Ed Online Discussion Forums

I’m sure this will become a mere paragraph in my dissertation, but for now I think it has use in its long form for the #ccresearch team as we orient ourselves to the literature. All of these articles are organized in my personal Zotero. Bear with me – I will make that public as soon as I can and link it (somehow) to this blog so you have the entire references. Until then, if there are specific references you want, comment and I will pull them over (this may get me to make my Zotero public sooner rather than later…). The examples given are just examples – not all articles are represented here, but all of the categories/themes are.

What types of learning processes and products are currently being captured in online, discussion-based higher education settings? And how are they being captured?

I searched ERIC, Academic Search Complete, and Education Research Complete with the search terms “online discussion” AND “Universities OR Higher Education” AND “Grading OR Assessment” to find a total of 159 articles. When duplicates and irrelevant articles were removed, 87 remained.
The studies represented could be divided into those that:

  • Aim to design assessment tools for online discussion groups.
    • (Wei-Ying, Chen, & Hu, 2013)
    • (Tomkinson & Hutt, 2012)
    • (Whatley & Ahmad, 2007)
    • (Wu & Chen, 2008)
    • (McConatha, Praul, & Lynch, 2008)
    • (Miniaoui & Kaur, 2014)
  • Aim to evaluate online discussion groups as a format for learning. (In other words, the assessment tool is not the focus of the article.)
    • (Kemm & Dantas, 2007)
    • (Fox & Medhekar, 2010)
    • (Anderson, Mitchell, & Osgood, 2008)
The assessment tools described in these studies are meant to measure the following indicators:

  • Content Acquisition
    • (Burrow, Evdorides, Hallam, & Freer-Hewish, 2005)
    • (Carroll, Booth, Papaioannou, Sutton, & Wong, 2009)
    • (Kemm & Dantas, 2007)
    • (Kerr et al., 2013)
    • (Miers et al., 2007)
    • (Nevgi, Virtanen, & Niemi, 2006)
    • (Porter, Pitterle, & Hayney, 2014)
    • (Shephard, Warburton, Maier, & Warren, 2006)
    • (Whatley & Ahmad, 2007)
    • (El-Mowafy, Kuhn, & Snow, 2013)
    • (Fox & Medhekar, 2010)
    • (Mafa & Gudhlanga, 2012)
    • (Lai, 2012)
    • (Choudhury & Gouldsborough, 2012)
    • (Moni, Moni, Poronnik, & Lluka, 2007)
  • Discussion Process
    • (Caballe, Daradoumis, Xhafa, & Juan, 2011) *qualitative analysis*
    • (Luebeck & Bice, 2005) *qualitative analysis*
    • (Moni, Moni, Poronnik, & Lluka, 2007) *qualitative analysis*
    • (Rovai, 2007) *rubric-driven qualitative analysis*
    • (Choudhury & Gouldsborough, 2012) *quantitative analysis* for participation
    • (McNamara & Burton, 2009) *quantitative analysis* for critical thinking skills
  • Emotional State of the Student Participants
    • (Hughes, Ventura, & Dando, 2007)
  • Critical Thinking
    • (Matheson, Wilkinson, & Gilhooly, 2012)
    • (Wei-Ying, Chen, & Hu, 2013)
    • (McNamara & Burton, 2009)
The assessment tools were controlled by the following people:

  • Self-assessment
    • (Damico & Quay, 2006)
    • (Matheson, Wilkinson, & Gilhooly, 2012)
    • (De Wever, Van Keer, Schellens, & Valcke, 2009)
  • Peer-assessment
    • (Davies, 2009)
    • (McLuckie & Topping, 2004)
    • (El-Mowafy, Kuhn, & Snow, 2013)
  • Combinations of self- and peer-assessment
    • (Beres & Turcsani-Szabo, 2010)
    • (Biblia, 2010)
  • Instructor-driven (the majority)
    • (Kerr et al., 2013)
    • (Miers et al., 2007)
    • (Porter, Pitterle, & Hayney, 2014)
    • (Rooij, 2009)
    • (Shephard, Warburton, Maier, & Warren, 2006)
    • (Smith et al., 2013)
The assessment tools consisted of these formats:

  • Content Test or Quiz
    • (Burrow, Evdorides, Hallam, & Freer-Hewish, 2005)
    • (Carroll, Booth, Papaioannou, Sutton, & Wong, 2009)
    • (Miers et al., 2007)
    • (Porter, Pitterle, & Hayney, 2014)
    • (Shephard, Warburton, Maier, & Warren, 2006)
    • (Smith et al., 2013)
    • (Hardy et al., 2014) *student-generated multiple choice questions*
    • (Nevgi, Virtanen, & Niemi, 2006) *student-generated multiple choice questions*
  • Reflective Essays
    • (Damico & Quay, 2006)
    • (Estus, 2010)
  • Rubrics
    • (Rooij, 2009)
    • (Wei-Ying, Chen, & Hu, 2013)
    • (Whatley & Ahmad, 2007)
    • (Wyss, Freedman, & Siebert, 2014)
    • (Lai, 2012)
    • (Anderson, Mitchell, & Osgood, 2008)
    • (Rovai, 2007)
  • Authentic Performance Assessment
    • (Smith et al., 2013) – instructors observed students performing online interviews
  • Blog Scraping
    • (Chen, 2014)
Final thoughts: 
In 2004, one author claimed, “We make no claim here that assessing e-learning is really radically different from assessing learning – the same principles apply” (MacDonald, 2004, p. 224).

However, by 2013, Liburd and Christensen wrote: “Web 2.0 is a radically different way of understanding and co-creating knowledge and learning, which has a range of implications.” The literature is shifting to suggest that e-learning requires radically different assessment practices (Dalelio, 2013) because assessment is a fundamental driver of how and what students learn (Khare & Lam, 2008; Ross et al., 2006). In other words, “the purpose, criteria, and intended outcomes of assessment must be established” (Gayton & McEwen, 2007).

Unfortunately, there is a feeling in the literature that “criteria for assessing discussion skills remain unclear” (Lai, 2012). This is very clearly a gap.