Research Limitations

As much as I love digital annotation as a teaching, communication, and assessment tool, I want to be very explicit about my stance on the annotation-centric assessment tools I created through my dissertation research. Here are some limitations pulled (more or less) directly from the full dissertation text.

The limitations of this study can and should be framed in two ways: limitations of the assessment strategies outlined in chapter four, and limitations of the study design, methodology, and implementation. Regarding the assessment strategies, it is important to note that what is presented (the rubrics, dashboards, and approaches to data visualization) are only prototypes. Not only do they require additional piloting and adjustment, but they are purposefully generalized. Classroom assessments must be adapted to the priorities of specific students and faculty in a specific course environment. The rubrics, objectives, criteria, and dashboards described throughout this study are meant to be remixed, edited, and adjusted so they might be seamlessly integrated into the course design and context.

Furthermore, the assessment strategies described are not intended to be a comprehensive approach to assessing student learning in VCU connected courses. Rather, they should be considered part of a larger assessment system, one that takes into account the need to document learning in terms of product and process, individual and social learning, and the different learning objectives and goals associated with each course. Courses have multiple learning objectives that reflect the need for students to develop disciplinary expertise, professional skills, and intellectual dispositions; it only stands to reason that course instructors would need different strategies for assessing student progress toward each desired outcome.

Finally, even as this study attempts to develop automated processes for the quantification of student connections, it is not the intention of the researcher to suggest that all student assessment should follow a model of automated quantification and counting. As stated above, student learning should be assessed through a system of approaches. The purpose of automating some aspects of assessment is to free the instructor for the more meaningful, more qualitative, and more human aspects of teaching and learning, including assessment.

From the position of traditional educational research, the limitations of this study are numerous and diverse. From a post-positivist perspective, the sampling procedures and the heterogeneity in participants, participation, and course implementation are all worrisome. From a constructivist perspective, the lack of student voice, the limited attempts at triangulation, and the researcher-generated judgments about student activities are also worrisome. As discussed in chapter one, this study is admittedly messy, its findings overtly impermanent, and its design representative of work done in a time of rapid change and highly variable conditions. Furthermore, the purpose of this study was to develop alternative assessment strategies, which is an inherently evaluative and judgmental process. The limitations of the study as seen from either research paradigm should be considered not so much limitations as opportunities for improvement in future studies.

One limitation of this study was its failure to include student input in the development of the blogging and tweeting typologies. "Communicative impact" is important because students must be able to express their connections so that others can comprehend them (for reasons outlined by Harel and Papert, 1991). However, "communicative intent," the student's motivation behind making the connection, is much more indicative of the learning that has occurred. This limitation also points to a next step for future research: students should be interviewed or surveyed about their thought processes before, during, and after digital annotation use.