Farewell Assignment


Evaluating the Method – Digital Ethnography

Ethnographic Research in a Cyber Era

  1. Ronald Hallett and Kristen Barber argue that researchers need to find ways to incorporate the digital data that colors and shapes much of real-life experience into existing ethnographic practices. Their argument notes that the traditional understanding of a “natural habitat” must be modified to include an “online habitat.”
  2. Hallett and Barber define digital ethnography as a blend of cyber ethnography and traditional ethnography, highlighting the necessity for researchers to consider online spaces in their work. They focused on Facebook, Yelp, and corporate websites, following participants across multiple platforms for a fuller perspective. A key strength of digital ethnography is that it yields a catalogue of interactions.
  3. There are always ethical boundaries to consider when it comes to participants’ privacy. While researchers might get to see unfiltered narratives, many people carefully curate who can see what they post or adjust their privacy settings, which can prevent researchers from seeing the whole picture. Additionally, there can be a disparity between a person’s true demeanor and presentation and what they publish on the web. Finally, with the increase of newsworthy headlines about digital pasts hurting public figures, people are more cognizant of their digital footprints and more hesitant to share their full thoughts online.

Uncovering longitudinal life narratives: scrolling back on Facebook

  1. Robards and Lincoln aim to explore the role of sustained social media use in longitudinal research, particularly looking at how stories about “growing up” are told and archived over the course of years. They argue that reviewing Facebook data with a participant as a co-analyst adds to the quality of qualitative longitudinal research (QLR), and they discuss limitations, ethical issues, and how to include these texts in research.
  2. Robards and Lincoln identify digital ethnography as a tool for uncovering participants’ previous life-changing moments while gathering direct quotes from participants. They use digital tracing, treating posts from participants’ social media profiles as “texts” and engaging the participants in discussion as they review the material together in order to provide context.
  3. The researchers here did not use direct quotes, in order to protect participant identity. There was also considerable variability in the quality of what participants added to the QLR depending on in-person dynamics, and participants were able to influence how the texts were interpreted. Additionally, how did they create equity between participants who limited the content researchers had access to and those who did not? What bias could have been created here?

The studies both conducted digital ethnographies and explored social media usage, particularly Facebook. However, Robards and Lincoln reviewed texts with participants, while Hallett and Barber took a more removed approach to initial data gathering (although they still conducted interviews).

Face-to-face interviews and interaction add to qualitative research, but Robards and Lincoln did not get the same in-depth understanding of each participant.


Evaluating the Method: Twitter Data

  1. Summarize briefly each study: what are the researchers trying to accomplish?  Where do they get their data? What are their findings? What are they arguing?

Pond’s Riots and Twitter Data seeks a better understanding of connective action theory by looking at internet activism and what drives people to action. The data was collected from the hashtags #RiotCleanUp and #OperationCupOfTea, and the study found that the two movements did not incite the same level of action. Notably, Pond argues that mobilization occurs only through larger influence networks and deeper discourse beyond a hashtag movement.

In a similar vein, Barisione’s #RefugeesWelcome article addresses the influence social media platforms have on enhancing collective action among users. Barisione analyses the Twitter hashtag #RefugeesWelcome as a case of a “digital movement of opinion” (DMO), conducting metadata analysis to understand the origins of the movement. They argue that the concept of a DMO is a useful heuristic for digital research and media participation.

  1. How are they using Twitter data? What methods are they employing? What is their methodological approach to coding? (inductive? deductive?)

The main methods in Pond’s Riots and Twitter Data were content coding and close textual reading of discourse to assess and analyze the influence of the different hashtags. Because they pulled tweets to analyze and looked for common patterns, they employed an inductive approach to gathering data. Their method to identify, analyze, and compare had three parts:

  1. establishing an overview of the discourse and determining the relevant hashtags during a given timeframe;
  2. finding a way of differentiating between these hashtags that supports a critical analysis of their relative influence and accounts for the interactive dynamics of the Twitter platform; and
  3. establishing a mechanism for interpreting this influence in terms of connective action.

Barisione’s look at #RefugeesWelcome also took an inductive approach, triangulating quantitative methods across existing Twitter networks and content. The methodologies used were analysis of the single hashtag, Twitter network analysis, and metadata analysis.

  1. What are the strengths and weaknesses of the methods that they are using?  What do they capture well? What do you think they are missing out on? If you were to conduct this study, would you do anything differently?  What and why or why not?

Pond’s Riots and Twitter Data found strength in the number of hashtags: seven hashtags allow for multiple pieces of representative data on similar topics without casting a painfully large and broad net. Unfortunately, the researcher did not get very large samples for each hashtag; looking at only 1,000 tweets can miss out on a lot of useful data points.

Barisione’s #RefugeesWelcome benefits from strength in numbers, with over 1,000,000 tweets. This could be a double-edged sword, though, as it means hours of combing through content. They might have been able to narrow the dataset by filtering on other factors or hashtags to reduce the volume needed for this research.


Discuss & Assess 3: Evaluating the Method

  1. Summarize briefly each study: what are the researchers trying to accomplish?  Where do they get their data? What are their findings? What are they arguing?

Longo’s Keeping It in “the Family” aims to analyze the juxtaposition of a gender-neutral US naturalization process with a gender-biased migration process by examining content on an online immigration forum. Longo argues that these online forums police immigration requests based on gendered ideals, sexual standards, and issues like fertility and desirability.

In Speaking ‘unspeakable things,’ Keller looks into the use of digital media platforms for feminist dialogue and processing: what experiences women take to the internet, how women use digital media to document and process those experiences, and why women are motivated to use digital platforms in this way. Keller gathers data from Twitter, from Twitter users directly, and from an anti-harassment website. Keller argues that documenting and processing these experiences through digital media allows women to better define their personal boundaries in ways not previously available.

  1. How are they using content analysis? What are they coding for? What is their methodological approach to coding? (inductive? deductive?)

As Longo does not conduct interviews or gather new primary data, she uses content analysis via data scraping. Her approach involved using Python to collect and clean over 48,000 posts and search for the key term “red flag,” in order to understand the “what, how, and why” of forum users’ evaluations as they weighed in on individual immigration requests. Longo also spent time observing the forum as a bystander to gain insight into its social dynamics, terminology, and themes prior to conducting her analysis.
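Longo’s actual script is not reproduced in the article, but the general technique of cleaning scraped posts and filtering them on a key term can be sketched in a few lines of Python. Everything below (the sample posts, function names, and cleaning steps) is a hypothetical illustration, not her code:

```python
# Hypothetical sketch of keyword filtering in the style of Longo's
# "red flag" search; her real script and dataset are not shown here.
import re

def clean(text):
    """Lowercase and collapse whitespace so variants of the phrase match."""
    return re.sub(r"\s+", " ", text).strip().lower()

def find_keyword_posts(posts, keyword="red flag"):
    """Return only the posts whose cleaned text contains the keyword."""
    return [p for p in posts if keyword in clean(p)]

# Invented example posts for demonstration only.
posts = [
    "They married after two weeks -- RED FLAG in my opinion.",
    "Congrats on the visa approval!",
    "Age gap plus no shared language? Red   flag.",
]

matches = find_keyword_posts(posts)
print(len(matches))  # 2
```

Simple substring matching like this is fast over tens of thousands of posts, but it illustrates the limitation noted below: the script finds the phrase, not its context, so surrounding discussion still has to be read and coded by hand.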

Keller performs content analysis on data from the Twitter hashtag #BeenRapedNeverReported and posts from the site Hollaback! Additionally, interviews were conducted with Twitter users vocal about the problems of rape culture. Because theory and explanation emerged from the data rather than preceding it, the approach to collection was inductive.

  1. What are the strengths and weaknesses of the methods that they are using?  What do they capture well? What do you think they are missing out on? If you were to conduct this study, would you do anything differently?  What and why or why not?

Among the strengths of Longo’s Keeping It in “the Family”: the data was easily obtained, accessible, and ethically collected, as she was removed from interacting with the forum users. Collection was efficient and cost-effective. The Python script was effective at finding the “red flag” keyword but might leave room for missing context. Similarly, one downside of content analysis is that the results might not be easily replicated without Longo’s explanation. Longo noted another flaw: the data collected excluded LGBT users. Additionally, some people might show the same gender-biased behavior outside of immigration issues, but the data cannot show whether individual users hold more systemic biases beyond marriage-based immigration cases.

Keller’s Speaking ‘unspeakable things’ finds strength in data collection done entirely online and through different media. It explored the difference between interactions with a hashtag and posts on a site whose environment is dedicated to anti-harassment. It was unobtrusive, as the posters largely used pseudonyms and the researcher did not have to interact with anyone. However, only three channels of discussion were examined; a large swath of data relating to sexual harassment and abuse was left undiscovered by this method. Additionally, the volume of collection was likely time-consuming.


Rachel B. Personal Introduction – SOC 676

Hello! I’m hoping these work as intended. I can sometimes be a bit verbose, my apologies! I never want anything I say or mean to be misconstrued across digital channels (because that never happens!).

Rambling: part 1
Rambling: part 2. Featuring a bad still/freeze frame.