What does ethical digital research mean to you?
Ethical digital research, to me, contains the same core components detailed by the IRB for traditional research: minimizing the risk of harm, obtaining informed consent, protecting anonymity and confidentiality, avoiding dangerous or harmful practices, and providing the right to withdraw. However, I believe the playing field for digital research is far more vast and complex. Ethical digital research requires the cooperation of corporations and companies, like social media platforms. It should also encompass how data is extracted and exactly through what means, or through which “tunnel,” it is being collected, with the informed consent of participants.
Given your knowledge of the IRB, do you think that they ensure ethical digital research as defined by you? Why or why not?
Given my knowledge of the IRB, I currently do not think it ensures ethical digital research, because its oversight is not all-encompassing of the digital world. There are many ways in which ethical procedure can be compromised online, and the IRB can only be applied to digital platforms that already have structures in place that work along the lines of the IRB. I imagine it would be difficult to conduct a study on human subjects on a platform where data is sold to advertising companies without consumers even knowing they are being duped.
Is it even possible to protect human subjects in digital research? Is there a point in digital research, particularly when examining ‘big data’, where we can truly say that human subjects aren’t affected? If so, what is that threshold?
At this point in time, I do not believe it is possible to protect human subjects in digital research. As the Ethics as Methods: Doing Ethics in the Era of Big Data Research article points out, a huge data-mining firm was able to access the personal information of 50 million Facebook users without their knowledge! Furthermore, accountability and responsibility for where the ethics should have begun are difficult to pinpoint in digital spaces. Maybe the threshold lies in analyzing only the digital platforms themselves, like how much revenue they make and how they make it, while excluding anything that involves the actual users. Still, I stand by my opinion that ethical digital research currently does not have the ability to protect human subjects from being affected, though I will say it may become possible, as seen in the TED Talk. With the ethical practices outlined in the video, like transparency, simple and easy-to-understand terms of service, and personal empowerment for users, consumers, and employees, ethical digital research may be attainable. Ultimately, with all this said, I am a strong believer that digital platforms have a responsibility to be transparent with users, and that data researchers should be ethically obligated not to collect data without users’ knowledge and consent.
(Side note: this reminded me of a tweet I saw about someone who had to get a background check for their job, and the report was a 300+ page PDF printout of every tweet they had ever liked containing the word “fuck.” This is an example of how the background check company and Twitter were not so ethical.)
What are some things that you can do as a digital sociologist to protect the human subjects in your own research projects?
Some things I could do as a digital sociologist to protect my human subjects include, of course, instituting the tenets of transparency, simplicity and ease of use, and personal empowerment for users, consumers, and employees, to build the trust and relationships needed to carry out research. I would also try to employ what the aforementioned article suggests: 1) develop a heuristic ethical decision-making technique that is both practical and less abstract, and 2) seek inspiration from colleagues who have dealt with sensitive topics and high-risk research situations. In other words, don’t go into it blindly, and be prepared.