The Importance of Peer Feedback in Online Education

Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., et al. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12(2), 412-433.

The importance of feedback to students in education is well established, yet Ertmer and colleagues (2007) note that how feedback impacts the online learner has been little studied. In their article, "Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study," Ertmer et al. (2007) sought to investigate the impact peer feedback has on the overall quality of student posts and on students' perceptions of the value of both giving and receiving feedback as part of an online class.

To frame the need for their study, Ertmer et al. (2007) presented a literature review. Feedback, they note, helps learners evaluate their knowledge and can assist them in revising their viewpoints when presented with new information. To do this, good feedback, according to the authors, should help to "clarify what a good performance is" for the learner, assist the learner in developing the capacity for "self-reflection and assessment," help the learner gain "high quality information" about their learning, focus faculty-student interactions on learning, increase learner motivation and self-esteem, help the learner "close the gap" between current performance and final performance goals, and give the educator information for improving the quality of teaching (Ertmer et al., 2007, pp. 413-414). With respect to online education, Ertmer and colleagues (2007) emphasized that instructor feedback can act "as a catalyst for student learning" and argued that, to be most effective, online feedback should be timely, specific, and consistent, and can extend from formative to summative formats (p. 414). However, providing feedback of this quality can exceed the time and effort available to the faculty member.

To offset the increased workload that good online feedback requires, Ertmer and colleagues (2007) proposed incorporating peer feedback into their instructional design. The advantage, they argued, is that peer feedback could increase the timeliness of feedback while helping students become part of a community of learners. Studies summarized by Ertmer et al. (2007) indicate that giving and receiving feedback increases collaborative meaning construction, raises the overall quality of discussions, and can give learners greater "understanding and appreciation for their peers' experiences and perspectives" (Ertmer et al., 2007, p. 415). This, Ertmer et al. (2007) argue, can increase student motivation and satisfaction with a course and foster learner autonomy. However, the authors also noted that the literature identifies several drawbacks to peer feedback, including increased student anxiety about participating, students' inexperience in providing quality feedback, and generally negative perceptions of the overall quality of peer feedback.

Based on this information, Ertmer et al. (2007) developed three research questions (p. 416):

  1. What is the impact of peer feedback on the quality of students’ postings in an online environment? Can the quality of discourse/learning be maintained and/or increased through the use of peer feedback?
  2. What are students’ perceptions of the value of receiving peer feedback? How do these perceptions compare to the perceived value of receiving instructor feedback?
  3. What are students’ perceptions of the value of giving peer feedback?

To answer these questions, they utilized a case study of peer feedback in a graduate-level course. Within this course, which ran a single semester, 15 students were required to post to weekly discussion questions and comment on the post of one classmate. For the first six weeks, only the course instructors provided feedback to each student. This feedback consisted of a numerical score corresponding to the level of Bloom's taxonomy demonstrated in the post, along with overall comments on the quality of the posts made. After week six, students were asked to provide feedback on two peers' postings for each discussion, using the same system the faculty member had demonstrated earlier in the class. This feedback was moderated and collected by the faculty member, anonymized, and given to the posting student, usually within two weeks of the original post deadline. The authors gathered qualitative and quantitative data from interviews with participants, scored ratings of the students' weekly discussions, and pre- and post-surveys. The pre-survey occurred at the end of the faculty feedback period and the post-survey following the end of the peer feedback period. It is also worth noting that the authors opted not to use the students' peer feedback scores in their analysis because they felt there were inconsistencies in scoring; instead, two of the project researchers scored all the student posts, and these scores were used to address the first research question.

In evaluating the impact of peer feedback on the quality of student postings, Ertmer et al. (2007) compared the average scores on postings during the instructor feedback period (M = 1.31) to those during the peer feedback period (M = 1.33). The data showed no significant difference in the quality of postings. Quality neither improved nor declined between the two feedback formats, suggesting that "peer feedback may be effective in maintaining quality…once a particular level of quality has been reached" (Ertmer et al., 2007, pp. 421-422). These results were then compared to qualitative data from interviews conducted during the peer feedback period. Student interview responses, the authors argue, showed that students felt they used the peer feedback to improve their writing. In their discussion, the authors suggest that the lack of change in quality may be attributed to several factors, including variability in posting quality between required and optional posts (all of which were included in the tabulations), use of a limited post-scoring system (two levels only), and discussion questions that were not intentionally designed to elicit higher-level analyses.
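For readers who want to reproduce this kind of comparison in their own courses, a minimal sketch is shown below. It assumes independent samples of per-post quality scores and an independent-samples t-test; the article reports only the period means, so the scores and the specific test here are illustrative assumptions, not the authors' analysis.

    # Illustrative only (not the authors' analysis): compare mean posting-quality
    # scores between the instructor-feedback and peer-feedback periods.
    # The score lists below are invented placeholders; the study's raw data are
    # not reported in the article.
    from scipy import stats

    instructor_period_scores = [1, 1, 2, 1, 2, 1, 1, 2, 1, 1]  # hypothetical per-post scores
    peer_period_scores = [1, 2, 1, 1, 2, 1, 2, 1, 1, 2]        # hypothetical per-post scores

    t_stat, p_value = stats.ttest_ind(instructor_period_scores, peer_period_scores)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # A p-value above the chosen alpha (e.g., 0.05) would mirror the authors'
    # finding of no significant difference between the two periods.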

The evaluation of students' perceptions of the value of receiving peer versus instructor feedback, as well as the value of giving feedback, drew on data from the pre- and post-surveys and the interview responses. Overall, the authors noted that student surveys showed a significant increase in the perceived importance of feedback from the start of the course to the end, but that feedback from the instructor was perceived as more important than peer feedback at both the beginning and end of the course. Interview responses indicated to the authors that this may stem from student perceptions of the overall quality of peer feedback and the belief that bias may be present in peer feedback. Despite this, students in the study still felt that peer feedback was valuable, which is also reflected in the fact that students valued giving and receiving feedback as part of their learning process. In discussing the greater perceived value of instructor feedback, the authors noted that this matches what has been seen in other studies, but that factors such as the delay of up to two weeks in receiving peer feedback (due to the instructor's intermediate processing) may have led students to perceive it as less timely and therefore of less value. In addition, concern about the impact on grades (since peer feedback was incorporated into peers' grades for the discussion boards) may have increased the anxiety associated with providing and receiving peer feedback.

In reflecting on the limitations of their study, the authors acknowledge the issues presented by a small study of relatively short duration and suggest that future work should include more training for students on both the benefits of peer feedback and how to rate their peers' work effectively. From this, the authors conclude that feedback in general is valuable in online courses and that, based on the interviews, students valued and learned from peer feedback even if they perceived instructor feedback as more important. They wrapped up their paper by providing some general recommendations faculty could consider when trying to implement peer feedback in their online courses.

In examining this work, the use of both qualitative and quantitative data to address three specific research questions was well situated. The fact that this is an exploratory study suggests the authors are seeking to improve on the general development of assessing feedback. Given this, there were several aspects of the study that were problematic and that I think could be better addressed in future studies. First, in considering the impact of feedback, the fact that feedback was delayed in reaching the recipient (by up to two weeks) makes one question how much the scoring of posts actually measured the impact of the peer feedback students received (since it was delayed) versus students' general progress in developing their own self-regulating abilities after having received faculty feedback for the first six weeks of the term. Secondly, in reflecting on the discussion question design, the authors noted that issues of construction may have limited how much students were prompted to approach the higher levels of Bloom's taxonomy in their responses, since many of the questions "were not particularly conducive to high-level responses" (Ertmer et al., 2007, p. 426). I would have liked to see a breakdown of the data across the different discussions to see whether a pattern is being lost because of some underperforming questions. In addition, when reflecting on the use of feedback and the scoring system, the authors assumed familiarity with Bloom's taxonomy given their study population; but given that several students indicated problems in using the system to score posts, there may be underlying population variations affecting their data. Overall, I would be interested in seeing whether an expanded version of this study, addressing the limitations noted by the authors and above, could be applied to a larger population of undergraduates to see if these patterns hold true for lower-level students.

 

New Literacies: Risks, Rewards, and Responsibilities

“To be literate tomorrow will be defined by even newer technologies that have yet to appear and even newer discourses and social practices that will be created to meet future needs. Thus, when we speak of new literacies we mean that literacy is not just new today; it becomes new every day of our lives” (Leu & Forzani, 2012, p. 78).

New literacies are the “ways in which meaning-making practices are evolving under contemporary conditions that include, but are in no way limited to, technological changes associated with the rise and proliferation of digital electronics” (Knobel & Lankshear, 2014, p. 97). The concept involves examining how, through the use of digital technology, today's learner can come to identify, understand, interpret, create, and communicate knowledge in novel and often unconventional ways. While incorporating new literacies allows educators to meet students where they are, to engage and enliven learning through relevance to learners' interests, to restructure the power dynamics of learning, and to extend learning beyond the classroom, engaging with new literacies is often a daunting undertaking for the educator.

In her article, Hagood (2012) highlights the processes by which teachers were introduced to and implemented new literacies in their classrooms. Working with a group of nine middle school teachers during bi-monthly meetings over the course of a year, the author provided them with a three-phase process for introducing new literacies: an introduction phase to learn about new literacies, an exploration phase covering the skills and tools necessary for new literacies, and a design-and-implement phase. The output was an inquiry-based project incorporating new literacies that the educators could use in their classes. Using the participants' reflections on this process, Hagood (2012) outlined their takeaways for implementing new literacies in ways that lessen push-back, increase interest in participation, and raise overall teacher satisfaction with incorporating new literacies. These included starting small by implementing new literacies through pre-existing assignments, trying new literacies to facilitate learning when traditional avenues fail, and expecting to fail and retry as part of developing their skills as educators with new literacies. Hagood (2012) noted that while many of the participants recognized that students were well ahead in their connectedness to digital technology, this was not the motivator for their implementation of new literacies. Rather, many participants felt invigorated by what they saw their students were capable of producing, by their students' increased engagement, by their own personal growth, and by a renewed enjoyment of teaching through new literacies. In addition, the educators felt they developed a collaborative network that not only pushed them to stay on task but also made them feel more invested in sharing what they had learned, thereby reiterating the connectedness to context and people that comes with new literacies.

While this article lacks any quantifiable data on how implementing new literacies affected student and teacher motivation or student success in these classes, the incorporation of the teachers' voices in reflecting on what resulted carries great weight in thinking about how the introduction of new literacies must be transformed into workable practices for the educator. This was a single small group in a single school during a single training year, and Hagood (2012) presents no follow-up or check-in on how these teachers fared in their use of new literacies in the following years. Have they expanded their incorporation of new literacies beyond the one inquiry-based project, and how did they do so? Or did they limit themselves to the one project, change projects, or abandon new literacies altogether? What obstacles arose over time that affected how they developed their skills and their overall implementation of new literacies? What did their students think of these new literacies? These are questions the article does not address but that are of interest when thinking about how to help educators explore and adopt new literacies.

In thinking about research, the questions above bear greater examination. It would be interesting to expand upon this work by examining the best processes for implementing new literacies, using outcomes such as motivation, efficacy, self-directedness, and overall success for both student and teacher.

Hagood, M. C. (2012). Risks, rewards, and responsibilities of using new literacies in middle grades. Voices from the Middle, 19(4).

Leu, D. J., & Forzani, E. (2012). New literacies in a Web 2.0, 3.0, 4.0, …∞ world. Research in the Schools, 19(1), 75-81.

Knobel, M., & Lankshear, C. (2014). Studying new literacies. Journal of Adolescent & Adult Literacy, 57(9), 97-101.


Unpacking TPACK…

Gómez, M. (2015). When circles collide: Unpacking TPACK instruction in an eighth-grade social studies classroom. Computers in the Schools, 32(3/4), 278-299.

Coming into teaching from a graduate program in anthropology, where the concern was not how to teach but how to research, the idea of evaluating the knowledge needed to teach effectively, much less to teach with technology, is novel to this author. Thus, while the overall importance of Mishra and Koehler's (2006) work on Technological Pedagogical Content Knowledge (TPCK) for understanding the practice of teaching with technology is evident, the actual process of implementation within class design was difficult to visualize. To clarify how Mishra and Koehler's model is applied and implemented within course design, Gómez (2015) illustrated the application of TPACK through a case study of a single eighth-grade teacher and two social studies classrooms. Using data collected through classroom observations, formal and informal interviews, and analysis of the artifacts produced, Gómez used a constant comparative approach to organize the data along themes related to the intersections of TPACK: technology knowledge (TK), content knowledge (CK), pedagogical knowledge (PK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), pedagogical content knowledge (PCK), and technological pedagogical content knowledge (TPCK). He then examined when and how these intersected within the framework of the class. Interestingly, when interviewed, the teacher offered that he designs his class not with TPACK in mind but as a way to reach his desired goal, teaching students to think historically, and that technology is only a tool that helps him engage students by shaping lessons to meet that goal.

Overall, this is only a single case study, so aspects of design and implementation are bound to vary by teacher, school, and students. The selection of this class and teacher was not random; rather, the teacher was recommended to the researcher as someone who uses technology regularly in the classroom. In addition, the school was a K-12 private school with one-to-one technology, so this scenario presents a degree of technological access and affordances that may not be available to all teachers and schools. Gómez recognizes these limitations and appropriately makes no broad generalizations from these observations and interviews.

Despite this, the article offers one example of how TPACK might be implemented in course design. Based on what he observed, Gómez (2015) acknowledges that this case example breaks down the idea that the components of TPACK must intersect concurrently. Rather, he notes that "TPACK no longer becomes the intersection of these three types of knowledge, but rather it becomes the layered combination of these three types of knowledge" (p. 295). In addition, Gómez (2015) highlights how teachers may approach TPACK very differently in implementation, as the teacher of the eighth-grade classes studied indicated that "teaching effectively with technology (TPACK) begins with an understanding of what he wants his students to learn" (p. 296). The teacher therefore frames TPACK within a framework of what he wants students to know. Gómez suggests that this may be a common way teachers implement TPACK and that, therefore, "understanding the role students play in making decisions about using technology in instruction" should be given more consideration within TPACK design (p. 296).

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017-1054.

Promoting Student Engagement in Videos Through Quizzing

Cummins, S., Beresford, A. R., & Rice, A. (2016). Investigating engagement with in-video quiz questions in a programming course. IEEE Transactions on Learning Technologies, 9(1), 57-66.

The use of videos to supplement or replace lectures that were previously delivered face-to-face is standard in many online courses. However, these videos often encourage passivity on the part of the learner. Other than watching and taking notes, there may be little to challenge the video-watching learner to transform the information into retained knowledge, to self-assess whether they understand the content, and to demonstrate their ability to apply what they have learned to novel situations. Since engagement with videos is often the first step toward learning, Cummins, Beresford, and Rice (2016) tested whether students can become actively engaged with video materials through the use of in-video quizzes. They had two research questions: a) "how do students engage with quiz questions embedded within video content" and b) "what impact do in-video quiz questions have on student behavior" (p. 60).

Utilizing an Interactive Lecture Video Platform (ILVP) that they developed and open sourced, the researchers collected real-time student interactions with 18 different videos developed as part of a flipped classroom for programmers. Within each video, multiple-choice and text-answer questions were embedded and automatically graded by the system. Playback stopped automatically at each question, and students were required to answer. Correct answers automatically resumed playback, while students had the option of retrying incorrect answers or moving ahead. Correct responses were discussed immediately after each quiz question when playback resumed. The questions targeted the Remember, Understand, Apply, and Analyse levels of Bloom's revised taxonomy. In addition to the interaction data, the researchers administered anonymous questionnaires to collect student thoughts on the technology and on the behaviors they observed, and they also evaluated student engagement relative to question complexity. The degree of student engagement was measured as the number of students answering the quiz questions relative to the number of students accessing the video.
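As a rough illustration of that engagement metric (this is my own sketch, not the authors' code; the log format and field names are invented), the per-video engagement rate could be computed from interaction records along these lines:

    # Illustrative sketch only: the event log below is a made-up stand-in for the
    # interaction data the ILVP records; it is not the platform's actual data model.
    from collections import defaultdict

    # Each record: (video_id, student_id, event), where event is "view" or "answer".
    interaction_log = [
        ("video01", "s1", "view"), ("video01", "s1", "answer"),
        ("video01", "s2", "view"),
        ("video02", "s1", "view"), ("video02", "s1", "answer"),
        ("video02", "s2", "view"), ("video02", "s2", "answer"),
    ]

    viewers = defaultdict(set)    # students who accessed each video
    answerers = defaultdict(set)  # students who answered at least one quiz question
    for video_id, student_id, event in interaction_log:
        if event == "view":
            viewers[video_id].add(student_id)
        elif event == "answer":
            answerers[video_id].add(student_id)

    # Engagement rate per video: answering students divided by viewing students.
    for video_id in sorted(viewers):
        rate = len(answerers[video_id]) / len(viewers[video_id])
        print(f"{video_id}: engagement = {rate:.0%}")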

According to Cummins et al. (2016), students were likely to engage with the videos through the quizzes, but question style, question difficulty, and the overall number of questions in a video affected the likelihood of engagement. In addition, student behaviors varied in how often and in what ways this engagement took place. Some students viewed videos in their entirety, while others skipped through them to areas they felt were relevant; others employed a combination of these techniques. The authors suggest that, based both on the observed interactions and on questionnaire responses, four patterns of motivation are present during student engagement with the videos: completionism (completing everything because it exists), challenge-seeking (engaging only with questions they felt challenged by), feedback (verifying understanding of the material), and revision (reviewing the material repeatedly). Interestingly, the researchers noted that students' recollections of their engagement differed in some cases from their actual recorded behavior, but the authors suggest this may show that students were answering the questions not in the context of the quiz but in other contexts not recorded by the system. Given the evidence of students' selectivity in responding to questions based on these motivations, the authors suggest that a diverse approach to question design within videos will offer something for all learners.

While this study makes no attempt to assess the actual impact on learners' performance and retention (due to the type of class and the assessment designs within it relative to the program), it does show that in-video quizzes may offer an effective way to promote student engagement with video-based materials. It is unfortunate that the authors did not build an assessment structure into this research design so as to collect some measure of learning. However, the platform they utilized is available to anyone (https://github.com/ucam-cl-dtg/ILVP-prolog), and other integrated video-quizzing systems (e.g., TechSmith Relay), combined with keystroke and eye-movement recording technology, could capture similar information, which opens up the ability to further test how in-video quizzing impacts student performance and retention.

In terms of further research, one could envision a series of studies using a similar process to examine in-video quizzing in greater depth, not only for data on how it specifically impacts engagement, learning, and retention but also for how these may vary with video purpose, length, context, and the knowledge level of the questions. As Schwartz and Hartman (2007) noted, design variations across video genres may depend on learning outcomes, so assessing whether this engagement exists only for lecture-based videos or transfers to other genres is intriguing. As Cummins et al. (2016) explain, students "engaged less with the Understand questions in favour of other questions" (p. 62), which suggests that students were actively selecting what they engaged with based on what they felt was most useful to them. Thus, further investigation of how to design more engaging and learner-centered questions would be useful for knowledge retention. In addition, since the videos were intended to replace lecture sessions and ranged in length from 5 minutes 59 seconds to 29 minutes 6 seconds, understanding how length impacts engagement would help determine whether there is a point at which student motivation, and thus learning, wavers. While the authors do address some specifics as to where drop-offs in engagement occurred relative to specific questions, they do not offer a breakdown of engagement versus the relative length of the video, and they admit that the number of questions varied between videos (three had no questions at all) and that there was no connection between the number of questions and video length. Knowing more about the connections between in-video quizzing and student learning, as well as the variables that affect this process, could help to better assess the overall impact of in-video quizzing and allow us to optimize in-video quizzes to promote student engagement, performance, and retention.

Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 349-366). Mahwah, NJ: Lawrence Erlbaum Associates.