Promoting Student Engagement in Videos Through Quizzing

Cummins, S., Beresford, A. R., & Rice, A. (2016). Investigating engagement with in-video quiz questions in a programming course. IEEE Transactions on Learning Technologies, 9(1), 57-66.

The use of videos to supplement or replace lectures previously delivered face-to-face is standard in many online courses. However, these videos often encourage passivity on the part of the learner. Other than watching and taking notes, there may be little to challenge the video-watching learner to transform the information into retained knowledge, to self-assess whether or not they understand the content, and to demonstrate their ability to apply what they have learned to novel situations. Since engagement with videos is often the first step towards learning, Cummins, Beresford, and Rice (2016) tested whether students can become actively engaged with video materials through the use of in-video quizzes. They had two research questions: a) “how do students engage with quiz questions embedded within video content” and b) “what impact do in-video quiz questions have on student behavior” (p. 60).

Utilizing an Interactive Lecture Video Platform (ILVP) they developed and open sourced, the researchers collected real-time student interactions with 18 different videos developed as part of a flipped classroom for programmers. Within each video, multiple-choice and text-answer questions were embedded and automatically graded by the system. Playback stopped automatically at each question and students were required to answer. Correct answers automatically resumed playback, while students had the option of retrying incorrect answers or moving ahead. Correct responses were discussed immediately after each quiz question when playback resumed. The questions targeted the Remember, Understand, Apply, and Analyse levels of Bloom’s revised taxonomy. In addition to the interaction data, the researchers administered anonymous questionnaires to collect student thoughts on the technology and on the behaviors they observed in themselves, and also evaluated student engagement by question complexity. Degree of student engagement was measured as the number of students answering each quiz question relative to the number of students accessing the video.
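To make the interaction model concrete, the sketch below illustrates the behavior described above: playback pauses at each embedded question, resumes after a correct answer or an explicit skip, and per-question engagement is computed as answering students over accessing students. This is a hypothetical TypeScript illustration under an assumed player interface (the names VideoPlayer, attachQuiz, ask, and allowSkip are my own), not the authors’ ILVP implementation.

```typescript
// Minimal sketch (not the authors' ILVP code) of in-video quizzing behavior.
interface QuizQuestion {
  timeSeconds: number;                       // point in the video where playback pauses
  prompt: string;
  isCorrect: (answer: string) => boolean;    // automatic grading of the response
}

interface VideoPlayer {                      // assumed, simplified player interface
  currentTime(): number;
  pause(): void;
  play(): void;
  onTimeUpdate(handler: () => void): void;
}

function attachQuiz(
  player: VideoPlayer,
  questions: QuizQuestion[],
  ask: (q: QuizQuestion) => Promise<string>, // collect a response from the learner
  allowSkip: () => Promise<boolean>,         // learner chooses to move ahead after a wrong answer
): void {
  const answered = new Set<QuizQuestion>();
  player.onTimeUpdate(async () => {
    const next = questions.find(
      (q) => !answered.has(q) && player.currentTime() >= q.timeSeconds,
    );
    if (!next) return;
    player.pause();                          // playback stops at the question
    let done = false;
    while (!done) {
      const answer = await ask(next);
      if (next.isCorrect(answer)) done = true;     // correct answers resume playback
      else done = await allowSkip();               // otherwise retry or move ahead
    }
    answered.add(next);
    player.play();
  });
}

// Engagement for a question, as described in the paper: students answering it
// relative to students who accessed the video.
const engagement = (answering: number, accessing: number): number =>
  accessing === 0 ? 0 : answering / accessing;
```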

Cummins et al. (2016) found that students were likely to engage with the video through the quizzes, but that question style, question difficulty, and the overall number of questions in a video affected the likelihood of engagement. In addition, student behaviors varied in how often and in what ways this engagement took place. Some students viewed videos in their entirety while others skipped through them to areas they felt were relevant; others employed a combination of these techniques. The authors suggest, based both on the observed interactions and on questionnaire responses, that four patterns of motivation are present during student engagement with the video: completionism (complete everything because it exists), challenge-seeking (engage only with questions they felt challenged by), feedback (verify understanding of material), and revision (review materials repeatedly). Interestingly, the researchers noted that students’ recollections of their engagement differed in some cases from their actual recorded behavior, but the authors suggest this may indicate that students are answering questions not within the quiz itself but in other contexts not recorded by the system. Given the evidence of student selectivity in responding to questions based on motivation, the authors suggest that a diverse approach to question design within videos will offer something for all learners.

While this study makes no attempt to assess the actual impact on learner performance and retention (due to the type of class and the assessment designs within it relative to the program), it does show that in-video quizzes may offer an effective way to promote student engagement with video-based materials. It is unfortunate the authors did not build an assessment structure into this research design so as to collect some measure of learning. However, the platform they utilized is available to anyone (https://github.com/ucam-cl-dtg/ILVP-prolog), and other integrated video-quizzing systems are available (e.g., TechSmith Relay) which, when combined with keystroke and eye-movement recording technology, could capture similar information. This opens up the ability to further test how in-video quizzing impacts student performance and retention.

In terms of further research, one could envision a series of studies using similar processes to examine in-video quizzing in greater depth, not only for data on how it specifically impacts engagement, learning, and retention, but also how these may vary with factors such as video purpose, length, context, and the knowledge level of the questions. As Schwartz and Hartman (2007) noted, design variations across video genres may depend on learning outcomes, so assessing whether this engagement exists only for lecture-based videos or transfers to other genres is intriguing. As Cummins et al. (2016) explain, students “engaged less with the Understand questions in favour of other questions” (p. 62), which suggests that students were actively selecting what they engaged with based on what they felt was most useful to them. Thus, further investigation of how to design more engaging and learner-centered questions would be useful for knowledge retention. In addition, since the videos were intended to replace lectures and ranged in length from 5 minutes and 59 seconds to 29 minutes and 6 seconds, understanding how length impacts engagement would help determine whether there is a point at which student motivation, and thus learning, wavers. While the authors do address some specifics as to where drop-offs in engagement occurred relative to specific questions, they do not offer a breakdown of engagement versus the length of the video, and they acknowledge that the number of questions varied between videos (three had no questions at all) and that there was no connection between the number of questions and video length. Knowing more about the connections between in-video quizzing and student learning, as well as the variables which impact this process, could help to better assess the overall impact of in-video quizzing and allow us to optimize in-video quizzes to promote student engagement, performance, and retention.

Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 349-366). Mahwah, NJ: Lawrence Erlbaum Associates.

Video Podcasts and Education

Kay, R. H. (2012). Exploring the use of video podcasts in education: A comprehensive review of the literature. Computers in Human Behavior, 28, 820-831.

While the use of podcasts in education is growing, the literature supporting their effectiveness for learning is far from conclusive. Kay (2012) offers an overview of the literature on the use of podcasts in education a) to understand the ways in which podcasts have been used, b) to identify the overall benefits and challenges of using video podcasts, and c) to outline areas of research design which could enhance evaluations of their effectiveness for learning. Utilizing keywords such as “podcasts, vodcasts, video podcasts, video streaming, webcasts, and online videos” (p. 822), Kay searched for articles published in peer-reviewed journals. Through this she identified 53 studies published between 2009 and 2011 to analyze. Since the vast majority of these focused on undergraduates in specific fields, Kay presents this as a review of “the attitudes, behaviors and learning outcomes of undergraduate students studying science, technology, arts and health” (p. 823). Within this context, Kay (2012) shows there is considerable diversity in how podcasts are used, how they are structured, and how they are tied into learning. She notes that podcasts generally fall into four categories (lecture-based, enhanced, supplementary, and worked examples), vary in length and segmentation, are designed for differing pedagogical approaches (passive viewing, problem solving, and applied production), and have differing levels of focus (from narrow, specific skills to broader, higher-order cognitive concepts). Because of the variability in research design, purpose, and analysis methods, Kay (2012) approached this not as a meta-analysis but as a broad comparison of the benefits of and challenges presented by using video podcasts.

In comparing the benefits and challenges, Kay (2012) reports that while most studies show clear benefits, some are less conclusive. In examining the benefits, Kay finds that students in these studies access podcasts primarily in the evenings and on weekends, primarily on home computers rather than mobile devices (though this varies by the type of video), employ different styles of viewing, and tie their access to a desire to improve knowledge (often ahead of an exam or class). This suggests that students value the flexibility and freedom afforded them through podcasts to learn anywhere and in ways that suit their learning patterns. Overall, student attitudes toward podcasts are positive in many of the studies; however, some showed a student preference for lectures over podcasts, which limited students’ desire to access them. Many studies noted that students felt podcasts gave them a sense of control over their learning, motivated them to learn through relevance and attention, and helped them improve their understanding and performance. In considering performance, some of the studies showed improvement in test scores over traditional approaches while others showed no improvement. In addition, while some studies reported that educators and students believed specific skills were developed, such as team building, technology use, and teaching skills, the processes by which these developed were not described. Finally, some studies indicate that technical problems and lack of awareness can make podcasts inaccessible to some students, and several studies showed that students who regularly accessed podcasts attended class less often.

In reflecting on these diverse outcomes, Kay argues that the conflicting findings on benefits and challenges are connected to research design. Kay (2012) argues that issues of podcast description, sample selection and description, and data collection need to be addressed “in order to establish the reliability and validity of results, compare and contrast results from different studies, and address some of the more difficult questions such as under what conditions and with whom are video podcasts most effective” (p. 826). She argues that understanding more about the variation in length, structure, and purpose of podcasts can help differentiate and better compare study data. Furthermore, Kay calls for more diverse populations (e.g., K-12) and better demographic descriptions within studies so that findings can be compared across different contexts. Finally, she contends that an overall lack of examination of quantitative data and low-quality descriptions of qualitative data techniques undermine the data being collected: “It is difficult to have confidence in the results reported, if the measures used are not reliable and valid or the process of qualitative data analysis and evaluation is not well articulated” (p. 827). From these three issues, Kay recommends greater depth in the design, description, and data collection of video podcasting research.

While the literature review offers a general overview of the patterns the author observed in the studies collected, there are questions about the data collection process, as the author is unclear a) as to why three prior literature reviews were included as part of the analysis and b) as to whether the patterns she discusses are drawn only from papers with undergraduate populations (as intimated by her statement quoted above) or from all the samples she collected. The author also used only articles published in peer-reviewed journals and included no conference papers; it is unclear what difference including these other sources would have made to the data.

Overall, the most critical information she provides from this study is that there is no unifying research design underlying the studies on video podcasts, and this results in a diverse set of studies without consensus on the effective use of podcasts in education and with little practical guidance on how to implement video podcasts effectively. The importance of research design in creating a comparable body of data cannot be overstated and is something which should be considered in all good educational technology research. Unfortunately, while Kay identifies problems in how the various studies she examined were coded and how data were collected and analyzed, she does not address the underlying research design issues in much detail when considering areas of further research. While this is not to lessen the issues she does bring up for future research, the need for better research design is evident and given few specifics by Kay. One would have liked a more specific vision from her on this issue, since greater consideration of the underlying issues of research design, with regard to describing and categorizing video podcasts, sampling strategies, and developing methods of both qualitative and quantitative analysis, is needed.