Is Ethical Digital Research POSSIBLE!?!?!?!?!?!

Jannie Phienboupha

What does ethical digital research mean to you?

Ethical digital research, to me, contains the same core components as detailed by the IRB for traditional research: minimizing the risk of harm, obtaining informed consent, protecting anonymity and confidentiality, avoiding dangerous/harmful practices, and providing the right to withdraw. However, I believe the playing field for digital research is much more vast and complex. Ethical digital research must be accompanied by the cooperation of corporations and companies, like social media platforms. It should also encompass how data is extracted, and exactly through what means or “tunnel” it is extracted, with the informed consent of participants.

Given your knowledge of the IRB, do you think that they ensure ethical digital research as defined by you? Why or why not?

Given my knowledge of the IRB, I currently do not think they ensure ethical digital research. I say this because the IRB is not all-encompassing of the digital world. There are many ways in which ethical procedure can be compromised in the digital world, and the IRB can only be applied to digital platforms that already have structures in place that work along the lines of the IRB. I imagine it would be difficult to do a study on human subjects on a platform where data is sold to advertising companies without consumers even knowing they are being duped.

Is it even possible to protect human subjects in digital research? Is there a point in digital research, particularly when examining ‘big data’, where we can truly say that human subjects aren’t affected? If so, what is that threshold?

At this point in time, I do not believe it is possible to protect human subjects in digital research. As the article “Ethics as Methods: Doing Ethics in the Era of Big Data Research” points out, a huge data mining firm was able to access the personal information of 50 million Facebook users without them knowing! Furthermore, it is difficult to pinpoint in digital spaces where the accountability and responsibility for ethics should have begun. Maybe the threshold lies in analyzing only the digital platforms themselves, like how much revenue they make and how they make it, while excluding anything that involves the actual users. Still, I stand by my opinion that ethical digital research currently does not have the ability to protect human subjects from being affected, but I will say it may become possible, as seen in the TED Talk. With the ethical practices outlined in the video, like transparency, simple and easy-to-understand terms of service, and personal empowerment for users, consumers, and employees, ethical digital research may be obtainable. Ultimately, with all this said, I am a strong believer that digital platforms have a responsibility to be transparent with users, and data researchers should be ethically inclined not to collect data without users’ knowledge and consent.
(Sidenote: this reminded me of a tweet I saw about how someone had to get a background check for their job, and the report was a 300+ page PDF printout of every tweet they had ever liked with the word “fuck” in it.

This is an example of how the background check company and Twitter were not… so ethical.)

What are some things that you can do as a digital sociologist to protect the human subjects in your own research projects?

Some things I could do as a digital sociologist to protect my human subjects include, of course, instituting the tenets of transparency, simplicity and ease of use, and personal empowerment for users, consumers, and employees, to create the trust and relationship needed to carry out research. I would also try to employ what the aforementioned article suggests: 1) develop a heuristic ethical decision-making technique that is both practical and less abstract, and 2) seek inspiration from colleagues who have dealt with sensitive topics and high-risk research situations. In other words, don’t go into it blindly, and be prepared.

6 thoughts on “Is Ethical Digital Research POSSIBLE!?!?!?!?!?!”

  1. Hi, Jannie! I agree with your thoughts. I like how you included the Twitter example of a background check that seemed really unethical and invasive. Making techniques that are practical and less abstract, as well as seeking help or inspiration from colleagues, is a great step toward making digital research more ethical!

  2. Jannie,
    Thanks for the reflection. I like your description of how digital data differs; it can indeed be much more complex than other forms of research, with some exclusive considerations. It is true that it can be difficult for the IRB to be a major force in protecting research participants, most notably due to the digital nature and volume of the data, and the lack of regulations and protections provided by the entities that create the platforms on which participants generate such large data. I like that you included points beyond the common ways in which a researcher can protect participants: transparency and personal empowerment for participants, in addition to established standards already in place like the IRB. Also, I think it is important to consider the ethical considerations of researchers: if research is conducted with ethics as a major focus, less emphasis would need to be placed on making sure participants were protected, and big data would be similar to other forms of research.

  3. I like how you pointed out that digital research must be accompanied by companies’ cooperation. I think this is the main issue when it comes to ethics in digital data research. If companies like social media platforms were more transparent about what data of their users is being shared, where it’s going, and what it’s being used for, I think people would be less paranoid about data mining.

    I like the example of the Twitter background check you used; it shows how data mining can be misinterpreted by employers and how that data can be misinterpreted by researchers.

    I also agree with you that digital platforms have a responsibility to be transparent with their users. I think it’s unethical to collect data on people online without their knowledge, unless it’s purely data-driven and doesn’t identify the user personally.

    • Hi Eric, thank you for your comment! I like how you pointed out the Twitter example shows how data mining can be misinterpreted. I couldn’t find the right words to describe how it was unethical, so thank you!

  4. Yes, Cambridge Analytica collected that information with the help of Facebook, and it wasn’t even illegal to do so. The terms of service and privacy policies pretty much negate anyone’s access to informed consent. Most people just clicked yes or agreed, and didn’t realize that quizzes asking which dinosaur they would be or which romantic movie best fit their personality were forms of targeted data mining.
