Evidence of My Connected Learning Obsession

Seriously, my obsession with connected learning has got to be annoying some people by now, and I haven’t even defended my prospectus yet.  Here is some evidence of my obsession, as well as documentation of Reason #3: Why it’s fun to work with Jon Becker.

Connected Assessment for Connected Learning

This is a (slightly) more polished version of my previous post.

[slideshare id=42696471&w=425&h=355&style=border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;&sc=no]

My dissertation research is focused on creating an assessment protocol for connected learning courses in higher education environments. My work begins with the premise that connected learning is an inherently participatory act. Connected learning emerged in the context of the digital age, which, according to Henry Jenkins and the MacArthur Foundation, supports a participatory culture. This speaks to the fact that people don’t just consume information on the Internet; they curate, annotate, critique, like, mash up, comment on, tweet, link to, remix, repurpose, and otherwise transform digital information. In other words, they participate in how they and others understand the world. Since connected learning empowers students to be impactful and intentional citizens of this digital world, it makes sense that significant student engagement and participation are required for connected learning to occur.

The idea of learning as participatory is not a new concept; many have called learning an active process. In the modern era there were Dewey and Montessori, then Lindeman, and later Lave and Wenger. In the 1960s, Jerome Bruner suggested that the work of learning is an active dialogue: one of constructing and reconstructing hypotheses. Freire and Mezirow said similar things. In short, there is little revolution in suggesting that active participation and social interaction are acts of deeper, meaningful learning. However, assessing participation, which is generally considered a transient action rather than a tangible product, can be difficult. And because we tend to build what we can assess, the difficulty of assessing participation can limit the emphasis we place on it within the classroom.

The assessment literature suggests that we attempt to assess or document learning for three reasons:
• To evaluate curriculum and instructional practice;
• To provide formative feedback for students and instructors; and
• To assess student performance for the purpose of making decisions around student advancement or promotion.

Therefore, as I work to create an assessment protocol for connected learning, I am hoping to create something that can serve all three purposes by:
• Generating potentially comparable and generalizable metrics
• Providing real-time data useful for instructor, peer, or self-assessment
• Offering a more sophisticated and elegant approach to assessing classroom participation

 I find Salomon’s (1993) work on collaborative knowledge creation useful because he describes three forms of participation that are particularly relevant to connected learning environments: contributing pieces to the whole; acting as nodes or connectors within a network; and interpreting shared meaning. But how do we begin to think about these forms of participation in terms that can be operationalized for the purpose of assessment? A deep reading of the connected learning literature suggests that participation – or contribution, connection, and interpretation – might be more fully and concretely defined through the constructs of group cognition, associative trailblazing, and creative capacity. 

Group Cognition. 
Group cognition is the ability to locate and then move oneself around within class discourse and the course content. In the context of class discourse, students with high levels of group cognition are able to make their points efficiently and effectively, moving across digital platforms and using a variety of media formats as needed. In the context of course content, these students understand how all the pieces of the course curriculum fit together. This enables what Carl Rogers called the “freedom to learn” where students individualize their navigation through a body of information to suit their own interests and needs. 
As a concept, group cognition borrows from many established constructs, such as Donald Schön’s reflection-in-action and reflection-on-action. But in practice, it closely resembles Doug Engelbart’s concept of bootstrapping the collective IQ, in which individuals focus on high-level issues of process to reframe inquiry in a way that significantly advances individual and group positions. Group cognition directly relates to quantity of contribution – for example, the number of posts and tweets a student contributes, or their degree of network centrality in a social network analysis. But it also relates to how students choose to contribute – for example, their fluency of movement across digital platforms, which can be captured in ratios of their activity as well as in basic descriptive statistics.
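These quantity-of-contribution and fluency-of-movement measures are straightforward to compute once platform data are exported. Below is a minimal Python sketch using only the standard library; the student names, the interaction log, and the reply network are all invented for illustration, and a real protocol would pull these records from platform exports or APIs.

```python
from collections import Counter

# Hypothetical interaction log exported from course platforms:
# (student, platform, action) tuples. All names are invented.
log = [
    ("amy", "twitter", "tweet"), ("amy", "blog", "post"),
    ("ben", "twitter", "tweet"), ("ben", "twitter", "reply"),
    ("amy", "blog", "comment"), ("cal", "blog", "post"),
]

# Quantity of contribution: raw activity counts per student.
counts = Counter(student for student, _, _ in log)

def platform_ratios(log, student):
    """Fluency of movement: share of a student's activity on each platform."""
    platforms = [platform for s, platform, _ in log if s == student]
    return {p: n / len(platforms) for p, n in Counter(platforms).items()}

def degree_centrality(graph):
    """Normalized degree centrality: degree / (n - 1)."""
    n = len(graph)
    return {v: len(neighbors) / (n - 1) for v, neighbors in graph.items()}

# Hypothetical undirected reply network among the same students.
replies = {"amy": {"ben", "cal"}, "ben": {"amy"}, "cal": {"amy"}}
```

In practice a library such as NetworkX would compute the centrality measures, but the point is that every one of these metrics falls out of data the platforms already record.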
Associative Trailblazing.
Associative trailblazing is about making visible the connections between concepts and between concepts and people. The term arises from the work of Vannevar Bush, who first described the associative trail in his 1945 essay “As We May Think.” He was deeply concerned that traditional indexing systems, which store information on one library shelf or in one disciplinary journal, create a barrier to the right people gaining access to the right information at the right time. He argued that information storage should resemble the way we think, in what he called associative trails. We all think in associative trails, but there is power in making them explicit; Doug Engelbart described this method, the one I call associative trailblazing, as one in which you “externalize your thoughts in the concept structures that are meaningful outside; moving around flexibly, manipulating them and viewing them.”
Connected Learning encourages students to participate by documenting the connections they make between different types of information in different contexts. The use of hyperlinks is extremely helpful in this process because, as Tim Berners-Lee said, hyperlinks can “point to anything be it personal, local or global, be it draft or highly polished.” The purpose of associative trailblazing is to make connections visible. It is often accomplished through the practices of replying, mentioning, retweeting, and linking which are all acts captured by the digital platforms upon which connected learning takes place. Betweenness centrality in a social network analysis documents the practice of bridging people and subgroups within a community. The analysis of the content of hyperlinks is also possible. 
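Betweenness centrality can be computed directly from the reply-and-mention graph those platforms capture. The sketch below implements Brandes’ standard algorithm for small unweighted, undirected graphs in plain Python (a library such as NetworkX would normally be used instead); the five-student network is hypothetical, with one student deliberately positioned as the bridge between two subgroups.

```python
from collections import deque

def betweenness_centrality(graph):
    """Brandes' algorithm for an unweighted, undirected graph
    given as {node: iterable of neighbors}."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack, pred = [], {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1
        dist = {v: -1 for v in graph}; dist[s] = 0
        queue = deque([s])
        while queue:                      # BFS from s, counting shortest paths
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                      # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: c / 2 for v, c in bc.items()}  # undirected: each pair counted twice

# Hypothetical class network: dana bridges two subgroups.
net = {"amy": ["ben", "dana"], "ben": ["amy", "dana"],
       "dana": ["amy", "ben", "eve", "flo"],
       "eve": ["dana", "flo"], "flo": ["dana", "eve"]}
bc = betweenness_centrality(net)
```

Here the bridging student scores high while everyone else scores zero, which is exactly the community-bridging behavior the metric is meant to surface.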
Creative Capacity.
Creative capacity transcends the ability to acquire course content and involves the process of interpretation in two ways. First, creative capacity is the ability to transform aggregated information, or stored, static data, into a repurposed, remixed, or otherwise adapted knowledge product. Students must synthesize, or find meaning in, all of the information they previously connected. Jerome Bruner described it as going beyond the information given. He wrote: “One of the greatest triumphs of learning (and of teaching) is to get things organised in your head in a way that permits you to know more than you ‘ought’ to. And this takes reflection, brooding about what it is that you know. The enemy of reflection is the breakneck pace.” Second, students must interpret, or create meaning around, the collaborative knowledge product for their individual context. This relates to transfer of knowledge and the building of personal experience, as well as personal reflection. Creative capacity is a form of interpretation that typically results in a tangible product that can be assessed by an instructor, peers, or the student herself. Keyword and content analysis may also play a role in assessing creative capacity.
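As a first pass at that kind of keyword analysis, course-relevant terms can simply be counted in a student’s knowledge product. A toy sketch, in which both the keyword set and the sample post are hypothetical:

```python
import re
from collections import Counter

# Hypothetical set of course keywords an instructor might track.
COURSE_KEYWORDS = {"participation", "network", "assessment", "learning"}

def keyword_profile(text):
    """Count how often each course keyword appears in a student product."""
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    return {kw: freq[kw] for kw in COURSE_KEYWORDS}

post = ("Connected learning reframes assessment: participation in the "
        "network is itself evidence of learning.")
profile = keyword_profile(post)
```

Real content analysis is of course richer than term counting, but even this crude profile scales automatically and can be compared across students and over time.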
In assessing connected learning, we must find a way to capture these qualities of participation, but the challenge does not stop there. Davies (2010) argues that digital-age assessments should also be flexible – appropriate across disciplines and educational formats and adaptable for instructor-, peer-, and self-assessment, with an emphasis on peer- and self-assessment. They should also be scalable, or appropriate for all class sizes. This last criterion of scalability triggers the question: “How do we blend qualitative and quantitative, automated and non-automated data collection into practical as well as meaningful forms of student assessment?” Which brings me to the third assumption of my study: if we are to create meaningful, flexible, scalable assessments of connected learning, we will need to take advantage of the digital environments in which connected learning takes place.
In other words, we need connected assessments for connected learning.
Connected Assessments. 

Based on preliminary research using participant data from two connected learning courses, I have proposed a framework for connected learning assessment. It involves capturing contribution, connection, and interpretation – the 3 forms of participation – through the digital actions associated with connected learning. These acts are automatically captured by the social media learning platforms routinely used in connected learning environments, such as WordPress and Twitter. 
What you are looking at here (Slide 10) is part of an assessment blueprint, aligning common connected learning activities with the form of participation and the proposed units of assessment. These operationalizations span several forms of analyses, and we’ll get into that in just a minute. You can also see that there is a place for assessing the content of student work as well as students’ digital practices. And I’m experimenting with a variety of ways to get at content analysis through truly meaningful yet scalable means.
Here (Slide 11) I’ve shifted over several columns in my blueprint to show a variety of ways in which these data might be analyzed for the purpose of student assessment, as well as the instruments that I can use to perform the analyses. Briefly, KBDex is a novel open-source software package that combines keyword-driven content analysis with social network analysis, allowing instructors to look longitudinally at relationships between keywords, and between keywords and specific students, within the context of blog posts. There are documented cases of its use in formative as well as summative assessment performed by instructors and students. I’m currently working with the software developers in Japan to see if KBDex is a good fit for my project.
And so really I’m at the beginning of my work, but the next steps are fairly clear: (1) confirm that these units of assessment are as easily accessible as I think they are – I’ve already done some preliminary work to suggest that they are; (2) confirm that these analyses provide meaningful information about student performance, which I’ll do by triangulating my results with a traditional content analysis of a portion of the same data; and (3) streamline the analysis process so that it can be performed in the context of formative student assessment. I am confident that this can be done because of what has been documented related to the use of content and discourse analysis and social network analysis in Japan, the Netherlands, Australia, and the UK.

Stitching the Diss: Understanding the Assessment of Participation in Connected Learning

If you are interested in reading this, consider reading the update with slides.

By way of introduction, I am working to create an assessment protocol for connected learning in higher education environments.  

The assessment literature suggests that we document learning processes for three reasons: (1) to evaluate curriculum and instructional practice; (2) to provide formative feedback for students, instructors, and the learning community in its entirety; and (3) to assess student performance for the purpose of making decisions around student advancement or promotion. And so, as I work to create an assessment protocol (first a “tool,” then a “toolkit,” now a “protocol”), I am hoping to create something that can serve all purposes by:   

    • Generating potentially comparable and generalizable metrics.*
    • Providing real-time data useful for instructor, peer, or self-assessment.
    • Offering a more sophisticated and elegant approach to assessing classroom participation.

    I focus on participation for this post because connected learning is an inherently participatory act. People who engage in connected learning curate, annotate, critique, comment on, tweet out, link to, mash up, remix, repurpose, imitate, support, criticize, and transform information as they learn. In describing the distributed cognition and collective discourse of connected learning environments, Salomon (1993) characterizes active student participation in three ways: as contributing to the whole; as nodes (connectors) in the network; and as interpreters of shared meaning. 

    When done well, these forms of participation – contributing, connecting, and interpreting – have unique, assessable qualities, which can be described as group cognition, associative trailmaking, and creative capacity. The following post introduces these qualities. I recognize that nothing I am writing is new or revolutionary; rather, it is a synthesis of previously described knowledge found across disciplines such as cognitive psychology, information technology, and educational research.  

    Understanding Participation

    Group cognition: Contributing to the whole. Group cognition is the individual’s understanding of his or her current position and role within networked communities, contexts, or worlds (Akkerman et al., 2007). Reflective practice lies at the heart of group cognition, helping students become mindful and (hopefully) responsible participants in the group. However, group cognition enables more than locating and naming; it activates the individual’s ability to intentionally move from one place to another within the learning community and the curriculum (Campbell, 2014). Group cognition emphasizes individual effort, and agency, within a social stream of activity. Doug Engelbart called this “bootstrapping”: the ability to change one’s position using existing resources to boost collective IQ. I call it understanding the big picture for the purpose of increasing individual and group impact.
    Associative trails: Connecting within the network. In a global network that invites (requires) participation, innovation is more widely distributed across new places and actors (David & Foray, 2002). The amount of knowledge generated in this environment is unfathomable and unprecedented. Even before the digital age, Vannevar Bush (1945) was deeply concerned about how knowledge would be organized so that the right individual might discover it at the right time. In his seminal article “As We May Think,” Bush argues that indexing, by which information is stored on one library shelf or in one journal (or, anachronistically, in one computer folder), is the major cause of our inability to synthesize information into knowledge across disciplines. He described an alternative form of information storage: associative trails. Developing associative trails involves the explicit practice of linking information sources based on an individual’s thought process rather than on artificial disciplines, media, or information formats. The effective use of tags and hyperlinks facilitates access to information from any number of directions simultaneously and, as Tim Berners-Lee suggests, links can “point to anything, be it personal, local, or global, be it draft or highly polished.” Students in connected learning environments learn to make their associative trails explicit, connecting information across disciplines and spheres of learning (personal, peer, and academic) and fostering the intellectual flexibility required in this era of distributed cognition.
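Because associative trails are made explicit through links, the first analytic step is simply to harvest the hyperlinks from a student’s posts. A minimal sketch using Python’s standard-library HTML parser; the post HTML and the URLs in it are invented for illustration:

```python
from html.parser import HTMLParser

class TrailExtractor(HTMLParser):
    """Collect the hyperlinks (the associative trail) in a post's HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Hypothetical blog-post fragment.
post_html = ('<p>Bush\'s <a href="https://example.org/memex">memex</a> '
             'anticipated <a href="https://example.org/hypertext">hypertext</a>.</p>')
extractor = TrailExtractor()
extractor.feed(post_html)
```

The extracted URLs can then feed the link-content analysis and betweenness measures described elsewhere in the post, turning a transient act of connecting into an assessable record.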
    Creative capacity: Interpreters of shared meaning. Scholars have written that innovation is a dominant activity in our society, as evidenced by the fact that the economic status of a nation now has less to do with its natural resources and more to do with its ability to augment productivity (David & Foray, 2002). Creative capacity is the ability to innovate: to transform aggregated information (static, stored data sets) into a repurposed, remixed, or otherwise adapted knowledge product appropriate for the task or situation at hand. The ability to create knowledge transcends course content acquisition and is a practice better suited to our continuously shifting reality (Paavola et al., 2004). Jerome Bruner described the capacity for knowledge creation as “going beyond the information given.” It is, he writes:

    …One of the few untarnishable joys of life. One of the great triumphs of learning (and of teaching) is to get things organised in your head in a way that permits you to know more than you “ought” to. And this takes reflection, brooding about what it is that you know. The enemy of reflection is the breakneck pace – the thousand pictures.

     Even the revised Bloom’s taxonomy replaces “evaluation” with “create” at the top of the pyramid (Anderson et al., 2001). Students in connected learning environments use the information they connect to take the leap toward creation; we need evidence of a growing capacity to learn, think, and innovate.

    Assessing Participation

    It has been decided (thus far only by me) that my assessment protocol must not only be pedagogically aligned but also flexible – flexible across disciplines, educational contexts, and instructor/peer/self use – and scalable. Moreover, a connected assessment protocol, like connected learning, should use the affordances of the web. Why? Because we are operating under Doug Engelbart’s principle of bootstrapping the collective IQ: we must solve this difficult task by using all of the resources available to us, not just thinking about how to build a better standardized test.
    And so I have mapped out a framework, which is a starting place in this emergent assessment design. 
    Simply put, mapping learning activities to participation principles, measurements, and tools:

    • Establishing and Maintaining a Personal Learning Network (PLN)
      Measurements: #Posts, Comments; Tools: Network Degree Centrality
    • Curating and Critiquing Data and Data Sources
      Participation Principle: Connecting or coordinating people and concepts over space, time, and spheres of learning
      Measurements: Ratio of Posts, Comments, Tweets; Classmate Mentions; #Retweets, Mentions, Replies, Links; Links (Content)
      Tools: Network Betweenness Centrality; KBDex (Excel)
    • Transforming data into new products
      Measurement/Tool: Product Assessment
    • Sharing new product with PLN
      Participation Principle: Contributor, Connector
      Measurements: Classmate Mentions; #Mentions, Replies, Links; Links (Content)
      Tools: Network Degree Centrality; Network Betweenness Centrality
    And so, once I get started on my dissertation, my tasks will be to (1) confirm that these analyses can be done; (2) confirm that they provide results approaching the meaningfulness of a full content analysis of the student work; (3) streamline the process so that it is flexible, scalable, and totally doable for the faculty. 

    Ok, that’s it for now.  This is very much in the early thinking stages, so all feedback is needed and very welcome.


    Akkerman, S., Van den Bossche, P., Admiraal, W., Gijselaers, W., Segers, M., Simons, R. J., & Kirschner, P. (2007). Reconsidering group cognition: From conceptual confusion to a boundary area between cognitive and socio-cultural perspectives? Educational Research Review, 2(1), 39-63.
    Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., … & Wittrock, M. C. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives (abridged ed.). White Plains, NY: Longman.
    Bush, V. (1945). As we may think. The Atlantic Monthly, 176(1), 101-108.
    Campbell, G. (2014, May 26). Permission to wonder. Retrieved from http://www.gardnercampbell.net/blog1/?p=2285
    David, P. A., & Foray, D. (2002). An introduction to the economy of the knowledge society. International Social Science Journal, 54(171), 9-23.
    Paavola, S., Lipponen, L., & Hakkarainen, K. (2004). Models of innovative knowledge communities and three metaphors of learning. Review of Educational Research, 74(4), 557-576.
    Salomon, G. (Ed.). (1993). Distributed cognitions: Psychological and educational considerations. Cambridge, UK: Cambridge University Press.

    *For those of you who are cringing, please understand that I come at this with mixed epistemological and methodological sensitivity.  I’m not a positivist.