Ethics of Researching Connected Learning (Pt. 1)

The summer semester, it seems, is officially over. What did I do with it? Well, I wrote the Case for Connected Learning, the white paper that strives not to be a white paper, for Virginia Commonwealth University's Academic Learning Transformation Lab (ALT Lab). As I described in a previous post, working on that project meant a lot to me.

So what’s next?

As part of the ALT Lab's initiative to study, design, and promote assessments for connected learning, I will be investigating potential applications of social network analysis in technology-enhanced learning environments. Social network analysis in this context is not new, but there is plenty of space for further investigation. Before getting too messy with data, however, I thought a quick dive into internet research ethics was warranted.
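To make that concrete, here is a minimal sketch of the kind of analysis I have in mind, using the networkx library in Python. The reply data and student names are invented for illustration; a real study would draw interactions from a discussion forum, blog comments, or Twitter, which is exactly why the ethics questions below come first.

```python
# A minimal sketch of social network analysis on course interaction data.
# The reply data below is invented for illustration only.
import networkx as nx

# Each tuple means "the first student replied to the second" in a forum thread.
replies = [
    ("alice", "bob"), ("bob", "alice"), ("carol", "alice"),
    ("dave", "alice"), ("carol", "bob"), ("dave", "carol"),
]

G = nx.DiGraph()
G.add_edges_from(replies)

# In-degree centrality: who attracts the most responses in the network?
centrality = nx.in_degree_centrality(G)
for student, score in sorted(centrality.items(), key=lambda item: -item[1]):
    print(f"{student}: {score:.2f}")

# Density gives a rough sense of how interconnected the class is overall.
print("network density:", round(nx.density(G), 2))
```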

As many scholars over the last twenty years have suggested, the internet challenges everything we thought we knew about research ethics.

Some of that challenge relates to the contiguity of the internet as a social phenomenon, a research and data-collection tool, and a location for ethnographic (and other) research. When we start using the internet as all three simultaneously, things get blurry (Esposito, 2012).

The Association of Internet Researchers (AoIR) recently updated its recommendations for conducting ethical internet research via a working committee paper: Ethical Decision-Making and Internet Research. The report's authors identify tension around three ethical areas that overlap and deserve far more explanation than I can give here. Reading Kanuka and Anderson (2007) and Esposito (2012) helped me fill in the blanks, at least at a superficial level. I'll go through the three areas briefly, as befits my status as a research ethics newbie.

  • Defining participants as "Human Subjects." This refers to the currently accepted model of research ethics, developed for the needs and consequences of biomedical research. The debate around the wisdom of fitting biomedical research constructs onto social science research has been going on forever, but it has a particular zing in the context of internet research. Are people who participate in internet culture actually subjects, as in biomedical research? Or should they be treated more like authors who are knowingly putting their work into the public domain? Such distinctions directly affect things like "opting in" versus "opting out" consent, ownership, and the assumption of anonymity as the preferred default.
  • Defining public versus private. Kanuka and Anderson (2007) do a good job of explaining this concept. Are online communications publicly private, privately public, or semi-private? (1) People may operate in public but maintain strong perceptions of privacy (although Walther, 2002, objects). (2) People may expect proper contextualization of their data, or limitations on who or what uses their information. They might not imagine that a search engine could pick something up. I am reminded of an article on search engine optimization I read in the Chronicle of Higher Education several months ago. A faculty member was complaining that a webcrawler had picked up a comment he had posted on a blog, and, when reported out of context, the comment made him sound like a racist. At the time of the Chronicle article, the "racist" blog comment was the first thing to pop up when his name was Googled. It's true, I actually checked it out. I can understand why he was upset. (3) With the web capturing every bit of data out there, a permanent record now exists of what was once fleeting. The above example applies, as does the "Right to be Forgotten" ruling against Google in the European Union earlier this year.

  • Defining data versus persons. Personally, I think this bullet could have been named more effectively, possibly something with "ownership" in it. I'd never thought of privacy this way, but Kanuka and Anderson (2007) define privacy as participants' right to control the access of others to information about them. Privacy is about control and ownership. Who owns the data once it's out there? The author? The curator? No one? And is an email or G-chat or interview via Skype soooo different from old Twitter feeds, blogs, and blog comments?

I mentioned contiguity as one challenge related to internet research. As the AoIR "areas of tension" make clear, another challenge to internet research ethics relates to personhood. Is an avatar a person? Virtual interactions often transcend the person who creates them. And speaking of transcendence,

Since connected learning experiences transcend the line between formal and informal education, how do we study it effectively and ethically? 

From the five or six articles I read, I got the impression that we all need to slow down and really think about internet research, and then demand that our IRBs do the same.

It all comes back to Aristotle and his concept of phronesis: rather than issuing one-size-fits-all pronouncements, ethical decision making is best approached by applying practical judgement and paying attention to the specific context. But how do we balance the necessity of teleological (consequence-based, contextualized) approaches to ethics with the safety of deontological (rule-based, good for stable constructs) approaches (Kanuka and Anderson, 2007)?

And then all this talk made me think of this:

So what does that mean for the unseasoned graduate student researcher who is trying to study assessment in connected learning environments?

It means she has a headache now. Nevertheless, she will continue her struggles with internet research ethics next week, with a brief review of the ethics of studying microblogging.

Summarizing “Research Questions and (Better) Learning Analytics”

Today I joined Audrey Watters of Hack Education, Andrew Sliwinski of the Mozilla Foundation, Justin Reich of HarvardX and the Berkman Center, and Vanessa Gennarelli of P2PU for their lunchtime discussion of Research Questions and (Better) Learning Analytics. A lot of the discussion was applicable to educational assessment and social learning environments in general, sprinkled with commentary on learning analytics, ed tech, and MOOCs specifically. It was more synthesis and less discovery for me, but it was validating to hear "experts" say many of the things I thought I was reading in their blogs and articles. Even so, it was a reminder of just how tricky educational assessment really is. I've embedded the video below, but just in case you don't have an hour, here are some of the salient points I wrote down between bites of edamame-and-basil patties and broccoli (tasty, vegan, and ridiculously high in vitamins A and C, calcium, and fiber, as all lunches should be).

  • The entities represented in this talk operate with the underlying assumption that learning is social: People learn better when they learn together. These organizations need to be able to test that hypothesis, and learning analytics may be one way to do that.
  • Instructional design and learning experiences are shaped by how we measure learning and evaluate programs. The problem is that we tend to measure what is easily measured, which is almost never what we really should be measuring. For example, it's very difficult to measure disposition but very easy to measure recall. So which do you think we measure, ALL THE TIME?
  • Also consider the impact of platform: if an instructor is using a platform that spits out learning analytics, and those analytics become the basis of assessment, what does that do to student choice? If the platform can't measure participation in the form of a SoundCloud clip or a YouTube video, I guess this means the student can't use those?
  • Measurements of student engagement are often used as proxies for measurements of learning – and there is some data to support that conflation.  Maybe that’s ok.
  • We are still learning how to measure engagement.
  • The current educational environment, along with society and government, assumes that the answers are always in the data: if we have enough data, we will have the answer. But is learning really a science, or is that just an assumption? (As an aside, there seemed to be consensus that a killer pearl probably does exist somewhere in the really big, scary ocean that is big data.)
  • Not all analytics are the same. Google Analytics and learning analytics are only one word apart but are very different.
  • Don't conflate ed tech with AV tech. The design should drive the tech, not the other way around. Unfortunately, educational research is starting to go the same way: the "easy data" is starting to drive research designs.
  • “We build and teach what we can measure.”  (That’s uncomfortable to think about, isn’t it?)
  • One major point of the project-based learning movement is to break down disciplinary silos, for example by using origami to teach geometry.
  • But one problem with project-based learning is that its assessment is based on engagement metrics, a feedback loop that tends to optimize what is already being done rather than asking whether we should be building something that looks entirely different.
  • We need to be careful about how we present learning analytics to the individuals in classes. What do you do with the information that you have only a 38% chance of finishing a class? That information can impact people very differently. (For a sense of where such a number comes from, see the sketch after this list.)
  • “It’s just SO not romantic – the quantified self.” –Vanessa Gennarelli.  Best quote of the day.
  • ETHICS.  It’s the last twenty minutes of the video.  Educators have done a horrible job of explaining to students how their data is already being used to impact institutional decisions – how many times they eat in the dining hall, which buildings they enter after hours, how many classes they registered for last semester. 
  • Users need to know what information is being collected and how that data might eventually be of value to them. For example, in medical contexts, people allow huge amounts of personal data to be collected; they do so because they have an underlying assumption that providing that data will ultimately lead to a more accurate diagnosis or better health in the future. Educators don't have enough information right now to be able to make the same claims, but they need to get there.
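As promised above, here is a toy sketch of where a number like "a 38% chance of finishing" might come from: a logistic regression over simple engagement features. Everything here (the features, the data, the model choice) is invented for illustration; the panelists did not describe any particular model.

```python
# A toy sketch of a course-completion predictor: logistic regression over
# engagement features. All data is invented; real systems use richer signals.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per student: [logins per week, forum posts, videos watched]
X = np.array([
    [5, 3, 10], [1, 0, 2], [4, 2, 8], [0, 0, 1],
    [6, 5, 12], [2, 1, 3], [3, 2, 6], [1, 1, 2],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = completed, 0 = dropped out

model = LogisticRegression().fit(X, y)

# Estimated completion probability for a new (hypothetical) student.
new_student = np.array([[2, 1, 4]])
prob = model.predict_proba(new_student)[0, 1]
print(f"Estimated chance of finishing: {prob:.0%}")
```

The point of the bullet stands regardless of the model: the hard part is not computing the probability but deciding how, and whether, to show it to the student.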



[youtube http://www.youtube.com/watch?v=5LtpOZT0z7E]