Inviting Video Games to the Educational Table

Gee, J. (2008). Learning and games. In K. Salen (Ed.), The ecology of games: Connecting youth, games, and learning (pp. 21–40). The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning. Cambridge, MA: The MIT Press.

According to Gee (2008), “good video games recruit good learning,” but it all rests on good design (p. 21). This is because well-designed video games provide the learner with experiences that meet conditions which “recruit learning as a form of pleasure and mastery” (p. 21). These conditions include providing an experience that is goal-structured and requires interpretation in pursuit of those goals, that provides immediate feedback, and that offers the opportunity to apply the prior knowledge and experiences of self and others towards success in meeting those goals. When done in such a way, Gee (2008) argues, the learner’s experiences are “organized in memory in such a way that they can draw on those experiences as from a data bank” (p. 22). As Gee (2008) presented, these conditions, coupled with the social identity building that good game design incorporates, help “learners understand and make sense of their experience in certain ways. It helps them understand the nature and purpose of the goals, interpretations, practices, explanations, debriefing, and feedback that are integral to learning” (p. 23). These conditions are the key to good game design, as they provide several key elements that play into learning science. First, they create a “situated learning matrix”: the set of goals and norms which require the player to “master a certain set of skills, facts, principles, and procedures” and to utilize the tools and technologies available within the game, including other players and non-player characters, who represent a community of practice in which the learner is self-situating (Gee, 2008, p. 25). This combination of game (the game design) and Game (the social setting), as Gee (2008) explained, provides the learner with a foundation for good learning, since “learning is situated in experience but goal driven, identity-focused experience” (p. 26).
In addition, many well-designed games incorporate models and modeling, which “simplify complex phenomena in order to make those phenomena easier to deal with” (Gee, 2008, p. 37). Many good games also enhance learning through an emphasis on distributed intelligence, collaboration, and cross-functional teams, which create “a sense of production and ownership,” situate meanings and terms within motivating experiences at the time they are needed, and provide an emotional attachment for the player (which aids in memory retention) while keeping frustration levels low to prevent players from pulling away (Gee, 2008, p. 37). As Gee (2008) pointed out, “the language of learning is one important way in which to talk about video games, and video games are one important way in which to talk about learning. Learning theory and game design may, in the future, enhance each other” (p. 37).
In breaking down the connections to learning that can be present within well-designed video games, Gee (2008) not only outlined the structures on which good educational games should be built but also constructively addressed common arguments against using video games. Recognizing the assets well-designed games can bring to the educational table is important since, more often than not, the skills and content learned in games are learner-centered and content-connected but are “usually not recognized as such unless they fall into a real-world domain” (Gee, 2008, p. 27). This is likely why the discussion of the role of video games within education is necessary. As Gee (2008) commented,

“any learning experience has some content, that is, some facts, principles, information, and skills that need to be mastered. So the question immediately arises as to how this content ought to be taught? Should it be the main focus of the learning and taught quite directly? Or should the content be subordinated to something else and taught via that ‘something else’? Schools usually opt for the former approach, games for the latter. Modern learning theory suggests the game approach is the better one” (p. 24).

Video Games as Digital Literacy

Steinkuehler, C. (2010). Digital literacies: Video games and digital literacies. Journal of Adolescent & Adult Literacy, 54(1), 61-63.

In reflecting on whether educators are selling video games short when it comes to learning, Steinkuehler (2010) offered the anecdotal case of “Julio,” an eighth-grade student. Julio spent a significant amount of his free time involved in video game culture, designing and writing about gaming. However, he read three grade levels below where he should have been and was often disinterested and disengaged at school. Even when presented with game-related readings, he did not excel. But when given a choice in reading, he selected a twelfth-grade text that appealed to his interests and managed to succeed despite the obstacles this reading presented him. Steinkuehler (2010) argued that it was the act of letting him choose something that appealed to his interests that increased his self-correction rate and thus gave him the persistence to overcome and meet the challenge. Steinkuehler (2010) opined that “video games are a legitimate medium of expression. They recruit important digital literacy practices” (p. 63), and as such they may offer an outlet for students, particularly disengaged males, to engage in learning that may otherwise go unmet through traditional structures.

The efforts the author highlighted Julio engaging in (writing, reading, and researching for gaming) certainly suggest that video games may offer a way to bridge new and traditional literacies, as Gee (2008) suggests. However, this is but a single example, and alone it offers very little tangible data on which to rest any firm conclusions about the importance of video gaming in education. It does, though, offer the notion of considering how video games present as new literacies which can open doors for the expression of meaning and ideas, particularly for those who may feel marginalized within traditional curriculum plans and by those who consider video games a “waste of time.”

The qualitative analysis approach to investigating how students view and experience the use of gaming in education is especially appealing given this case of Julio. Would he have seen that his outside activities were translatable into educational acumen? Would his teacher or parents? There is too little in this small single case study to say much, but it does give one ideas.

Gee, J. (2008). Learning and games. In K. Salen (Ed.), The ecology of games: Connecting youth, games, and learning (pp. 21–40). The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning. Cambridge, MA: The MIT Press.


Twitter and New Literacy

Greenhow, C., & Gleason, B. (2012). Twitteracy: Tweeting as a new literacy practice. The Educational Forum, 76(4), 464-478.

Just how useful can social media be in promoting learning? According to Greenhow and Gleason (2012), microblogging through technologies such as Twitter opens up the opportunity for students to connect to “the kinds of new literacies increasingly advocated in the educational reform literature” (p. 467). New literacy is a “dynamic, situationally specific, multimodal, and socially mediated practice that both shapes and is shaped by digital technologies” (Greenhow & Gleason, 2012, p. 467). As such, it allows meaning and learning to stretch into both formal and informal interactions and to be responsive to the relationships that develop within these settings, such that authorship is neither singular nor static but is constantly created, re-created, and expressed through new combinations of text, images, sound, motion, and color. To examine how microblogging through social media such as Twitter connects to learning and new literacy, the authors conducted a literature search of journal articles to answer questions such as:

  • How do young people use Twitter in formal and informal learning settings, and with what results?
  • Can tweeting be considered a new literacy practice?
  • How do tweeting practices align with standards-based literacy curricula?

The authors found that studies show “Twitter use in higher education may facilitate increased student engagement with course content and increased student-to-student or student–instructor interactions—potentially leading to stronger positive relationships that improve learning and to the design of richer experiential or authentic learning experiences” (Greenhow & Gleason, 2012, p. 470). However, at the time of their research, few studies had examined the use of Twitter as a new literacy practice. Looking to research on literacy practices and social media, Greenhow and Gleason (2012) suggested that “youth-initiated virtual spaces,” such as fan-fiction sites, Facebook, and MySpace, “allow young people to perform new social acts not previously possible” and demonstrate new literacy practices (p. 471). Tweets, Greenhow and Gleason (2012) argued, offer similar themes and opportunities since they:

  • are “multimodal, dynamically updating, situationally specific, and socially mediated” (p. 472)
  • reflect the “unique combinations of text, images, sound, and color that characterize teens’ self-expressions on social network sites,” as “individual tweets and retweets typically comprise a multiplicity of modes” (p. 472)
  • develop into “constantly evolving, co-constructed” conversations that require the participant to understand the situational context of the conversation and its conventions in order to participate (p. 472)
  • show “a use of language and other modes of meaning” that is “tied to their relevance to the users’ personal, social, cultural, historical, or economic lives” (p. 472)

As a result, Greenhow and Gleason (2012) argued that, when considering curricula, tweeting creates “opportunities for their development of standard language proficiencies” and can “encourage the development of 21st century skills, such as information literacy skills” (pp. 473-474). However, further research is important to addressing how best to treat tweeting as a new literacy within traditional educational practices. Due to the paucity of research, the authors recommended more large-scale and in-depth studies of how students in varying subgroups use Twitter, as well as research specifically focused on:

  • tweeting practices and “the potential learning opportunities that exist across school and non-school settings” (p. 474)
  • how learners frame and come to view their experiences and place within the Twitter community
  • developing pedagogy for analyzing social media communications to understand socio-cultural connections
  • how teachers are incorporating social media into secondary and higher education

Given the generally negative perceptions many parents and districts hold of student use of social media, along with the hurdles of “authority, control, content management (e.g., managing what is shared, received, tagged, and remixed), security, and copyright,” Greenhow and Gleason (2012) cautioned that such research will likely focus on higher education until there is “an accumulation of evidence that suggests that the benefits of social media integration in learning environments outweigh the costs” (p. 475).

As Greenhow and Gleason’s (2012) literature review suggested, there can be a lag between when a technology is introduced, when it becomes used in education, and when research strategies are targeted towards understanding its placement and performance in promoting learning among various student populations. At the time of their research, the authors were only able to locate 15 studies which met their broader search criteria of social media and new literacy, and only six that specifically discussed microblogging. In a more recent literature study, Tang and Hew (2017) found 51 papers specifically examining microblogging and/or Twitter that were published between 2006 and 2015. While microblogging platforms such as Twiducate have been offered to make microblogging more K-12 friendly, whether the use of Twitter has reached its full potential is less certain. Tang and Hew (2017) suggest that Twitter and similar technologies are most often used for assessment and communication, and that more professional development is needed to make faculty more adept at using and designing learning activities through Twitter, as well as at training students to use Twitter effectively and to lessen the distractions social media presents to them. As Tang and Hew (2017) remarked, still more research is needed “in how different students experience Twitter and are engaged by it” (p. 112).

Tang, Y., & Hew, K. F. (2017). Using Twitter for education: Beneficial or simply a waste of time? Computers & Education, 106, 97-118.


ARLEs and VRLEs: Horizons for Learning

Today’s classroom is so much larger than four walls and a white/chalk board. The opportunity for educators to take their students to new worlds, or to help them see new aspects within everyday landscapes, is vast. Virtual reality learning environments (VRLEs) are 3-D immersive experiences that can be accessed through the desktop or through more specialized hardware such as goggles. Augmented reality learning environments (ARLEs) combine virtual objects (2-D and 3-D) with the actual environment of the user in real time. The two are often seen as occupying different points along the reality-to-virtuality continuum. Each presents new opportunities and challenges for educators.

In examining virtual reality environments, Dalgarno and Lee (2010) pinpointed representational fidelity and learner interaction as the key characteristics of VRLEs which, through interaction with the learner, allow for the “construction of identity, sense of presence and co-presence” within the virtual space (p. 14). This creates the sense of immersion that can be so impactful to the learner. Representational fidelity relates to the environment of interaction; its critical aspects include how realistically the environment is displayed, how smoothly view changes and motions are handled, how consistently objects behave within the environment, whether there is spatial audio, whether there is tactile, force, and kinaesthetic feedback, and how the user is represented through the construction of an avatar. Learner interaction is how the user interacts with and is displayed within the environment; its aspects include embodied action, verbal and non-verbal communication, object interaction, and control of the environment. These characteristics come together to create the ways in which VRLEs can potentially impact learning, and Dalgarno and Lee (2010) outlined five affordances that VRLEs facilitate: spatial knowledge representation, experiential learning, engagement, contextualized learning, and collaborative learning. However, Dalgarno and Lee (2010) suggested that more meaningful research is necessary in order to assess how to use 3-D VRLEs in “pedagogically sound ways” (p. 23). They offered several recommendations for research to consider, including studying the basic assumptions held about VRLEs and linking their characteristics to the affordances outlined. They also argued that research needs to be done to establish guidelines and best practices for VRLE implementation.
They also appropriately recommended that this not be done through comparisons of 2-D to 3-D environments, as these would be “contrived examples in inauthentic settings” (Dalgarno & Lee, 2010, p. 25). Given that this was more a call to arms than a general “what can VRLEs do for you” presentation, it is not surprising that there is little discussion of the challenges of implementing and using VRLEs in education. As Salmon duly noted in her five-stage model for scaffolding learners into multi-player virtual realities, there is a need to recognize and structure for the challenges that VRLEs present (Salmon et al., 2010). This requires recognition of the technological and educator interventions needed to support the learner.

When it comes to comparing VRLEs and ARLEs, Dunleavy et al. (2009) offered that, when considering affordances, ARLEs may provide greater representational fidelity due to their natural overlay onto the real world, which incorporates more of the feel, sights, and smells of the experience. In addition, the ability to talk face-to-face as well as virtually may allow for easier collaboration between users. However, the authors noted that within VRLEs each action by a user is “captured and time-stamped by the interface: where they go, what they hear and say, what data they collect or access,” which allows for greater visualization of “every aspect of the learning experience for formative and summative assessment” (Dunleavy et al., 2009, p. 22). When considering the limitations of ARLEs, the authors recognized the hardware and software issues involved in implementing ARLEs and noted that, much like VRLEs, ARLEs require logistical support and lesson management during activities. In addition, Dunleavy et al. (2009) found that students expressed cognitive overload due to both the newness of the experience and confusion about what was to be done. They recommended that significant modeling, facilitating, and scaffolding accompany the use of ARLEs.

Given my interest in using both AR and VR learning experiences in my courses as a means of providing realistic training opportunities that would otherwise be limited, the affordances and issues outlined by these authors offer a sense of the potentials and problems that exist. However, since neither article offered much in terms of direct assessment of the specific impacts AR and VR have on student outcomes, social connection, and motivation, or linked those impacts to specific aspects of design and implementation, these articles represent only a starting point.

Dalgarno, B., & Lee, M. J. W. (2010). What are the learning affordances of 3-D virtual environments? British Journal of Educational Technology, 41(1), 10–32.

Dunleavy, M., Dede, C., & Mitchell, R. (2009). Affordances and Limitations of Immersive Participatory Augmented Reality Simulations for Teaching and Learning. Journal of Science Education and Technology, 18(1), 7-22.

Salmon, G., Nie, M., & Edirisingha, P. (2010). Developing a five-stage model of learning in Second Life. Educational Research, 52(2), 169-182.

New Literacies: Risks, Rewards, and Responsibilities

“To be literate tomorrow will be defined by even newer technologies that have yet to appear and even newer discourses and social practices that will be created to meet future needs. Thus, when we speak of new literacies we mean that literacy is not just new today; it becomes new every day of our lives” (Leu & Forzani, 2012, p. 78).

New literacies are the “ways in which meaning-making practices are evolving under contemporary conditions that include, but are in no way limited to, technological changes associated with the rise and proliferation of digital electronics” (Knobel & Lankshear, 2014, p. 97). The study of new literacies examines how, through the use of digital technology, today’s learner can come to identify, understand, interpret, create, and communicate knowledge in novel and often unconventional ways. While incorporating new literacies allows the educator to meet students where they are, to engage and enliven learning through the learner’s interests, to restructure the power dynamics of learning, and to extend learning beyond the classroom, engaging with new literacies is often a daunting undertaking for the educator. In her article, Hagood (2012) highlighted the processes by which teachers were introduced to new literacies and implemented them in their classrooms. Working with a group of nine middle school teachers during bi-monthly meetings over the course of a year, the author guided them through a three-phase process for introducing new literacies: an introduction phase to learn about new literacies, an exploration phase covering the skills and tools necessary for new literacies, and a design-and-implementation phase. The output was an inquiry-based project incorporating new literacies that the educators could use in their classes. Using the participants’ reflections on this process, Hagood (2012) outlined their takeaways for implementing new literacies so as to lessen push-back, increase interest in participation, and increase overall teacher satisfaction with incorporating new literacies.
These included starting small by implementing new literacies through pre-existing assignments, trialing new literacies to facilitate learning when traditional avenues fail, and expecting to fail and retry as part of the process of developing their skills with new literacies. Hagood (2012) noted that while many of the participants recognized that students were well ahead of them in their connectedness to digital technology, this was not the motivator for their implementation of new literacies. Rather, many of the participants felt invigorated by what they saw their students were capable of producing, by the increased engagement of their students, by their own personal growth, and by their renewed enjoyment of teaching through new literacies. In addition, the educators felt that they developed a collaborative network which not only pushed them to stay on task but also made them feel more invested in sharing what they had learned, thereby reiterating the connectedness to context and people that comes with new literacy.

While this article lacks any quantifiable data regarding how implementing digital literacy affected student and teacher motivation or student success within these classes, the incorporation of the teachers’ voices in reflecting on what resulted carries great weight in thinking about how the introduction of new literacies must be transformed into workable practices for the educator. This was a single small group in a single school during a single training year, and Hagood (2012) presents no follow-up to see how these teachers fared in their use of new literacies in the following years. Did they expand their incorporation of new literacies beyond the one inquiry-based project, and if so, how? Or did they limit themselves to the one project, change projects, or abandon new literacies altogether? What obstacles arose over time that affected how they developed their skills and their overall implementation of new literacies? And what did their students think of these new literacies? These are questions the article does not address but that are of interest when thinking about how to aid educators in exploring and adopting new literacies.

In thinking about research, the above questions bear greater examination. It would be interesting to expand upon this work by examining the best processes for implementing new literacies, looking at outcomes such as motivation, efficacy, self-directedness, and overall success for both student and teacher.

Hagood, M. C. (2012). Risks, rewards, and responsibilities of using new literacies in the middle grades. Voices from the Middle, 19(4).

Leu, D. J., & Forzani, E. (2012). New literacies in a Web 2.0, 3.0, 4.0, …∞ world. Research in the Schools, 19(1), 75-81.

Knobel, M., & Lankshear, C. (2014). Studying new literacies. Journal of Adolescent & Adult Literacy, 57(9), 1-5.

Digital Games, Design and Learning: A Meta-Analysis

Clark, D. B., Tanner-Smith, E. E., & Killingsworth, S. S. (2016). Digital games, design, and learning: A systematic review and meta-analysis. Review of Educational Research, 86(1), 79-122.

In this article, Clark, Tanner-Smith, and Killingsworth (2016) offer a refined and expanded evaluation of research on digital games and learning. To ground their study, the authors summarize three prior meta-analyses of digital games; from these studies and their findings, the authors developed two core hypotheses about how digital games impact learning to test in their meta-analysis. These core hypotheses were further examined for what the authors term moderator conditions, from which the authors developed sub-hypotheses for each core hypothesis to test as well. Utilizing databases spanning “Engineering, Computer Science, Medicine, Natural Sciences, and Social Sciences,” the authors sought research published between 2000 and 2012 to identify studies that examined digital games in K-16 settings, addressed “cognitive, intrapersonal and interpersonal learning outcomes” (p. 82), and either compared digital games against non-game conditions or utilized a value-added approach (something the prior meta-analyses ignored) comparing standard and enhanced versions of the same game. In addition, studies were required to meet a set of criteria that included specifics on game design, participant parameters, and pre- and post-testing data that could be used to assess change in outcomes. Overall, the authors identified 69 studies which met the parameters outlined in their research procedures. From this population they discerned the following significant patterns:

  1. In studies of game versus non-game conditions in media comparisons, students in digital game conditions demonstrated significantly better outcomes overall relative to students in non-game comparison conditions (p. 94). This was significant for both cognitive and intrapersonal outcomes; the number of studies with interpersonal outcomes was too small to establish statistical significance (p. 95).
  2. In studies comparing standard and enhanced game versions through value-added comparisons, students playing enhanced games showed “significant positive outcomes” relative to standard versions (p. 98). While there were too few studies of most specific features for cross-comparison, the one feature of enhanced scaffolding (personalized, adaptive play) was present in enough studies and showed a significant overall effect (p. 99).
  3. In examining game conditions overall, games which allowed the learner multiple play sessions performed better against non-game conditions than those limited to a single play session. Game duration (time played) seemed to have no effect on overall outcomes (p. 99). These results did not vary even when the visual aspects of the games were taken into account.
  4. Despite what was seen in previous meta-analyses, there was no difference in outcomes between games paired with additional non-game instruction and those without it (p. 99).
  5. There were significant differences among player configurations within games. Overall, single-player games had the most significant impact on learning outcomes relative to group game structures, and these outcomes were higher in single-player games with no formal collaboration or competition (p. 100). However, games with collaborative team competition had significantly larger effects on learning outcomes when compared to competitive single-player games.
  6. Games with greater engagement of the player in actions within the game had greater impact than those with only a small variety of on-screen actions that did not change much over the course of play.
  7. With regard to visual and narrative qualities, both simple and more complex game designs showed effectiveness in learning outcomes, but overall, schematic games (symbolic or text-based) were more effective than cartoon-like or realistic games.

In reflecting on their findings, the authors recognized some limitations arising from both their search parameters and their methodological breakdowns for analysis. They encourage further examination of studies which fell outside their range (for example, simulation games), as well as greater examination of the subtleties of the individual studies included within their analysis, before any larger generalizations are made as to the specifics of best practices for game design.

Perhaps the most interesting aspect of this study is not the outcomes it presents for future study (even though these are great food for thought about intentional game design for educational purposes) but the proposition it makes that educational technology researchers should “shift emphasis from proof-of-concept studies (‘can games support learning?’) and media comparison analyses (‘are games better or worse than other media for learning?’) to cognitive-consequences and value-added studies exploring how theoretically driven design decisions can influence situated learning outcomes for the broad diversity of learners within and beyond our classrooms” (p. 116).

Online learning as online participation

Hrastinski, S. (2009). A theory of online learning as online participation. Computers & Education, 52(1), 78–82.

In this article, Hrastinski (2009) presents the argument that online participation is a critical and often undervalued aspect of online learning, and that models which relegate it to a solely social aspect of learning ignore its larger contributions to how students connect to materials and to each other in the online environment. In support of his ideas, Hrastinski (2009) offers an overview of the literature on online participation, which highlights that online learning is “best accomplished when learners participate and collaborate” (p. 79) and that this translates into better learning outcomes when measured by “perceived learning, grades, tests and quality of performances and assignments” (p. 79). To evaluate online participation, Hrastinski (2009) presents a conceptualization of online participation as more than just counting how often a student contributes to a conversation; rather, he frames online participation as “a process of learning by taking part and maintaining relations with others. It is a complex process comprising doing, communicating, thinking, feeling and belonging which occurs both online and offline” (p. 80). Reflecting on the work of others, Hrastinski (2009) offers the view that participation creates community, which in turn supports collaboration and the construction of knowledge-building communities that foster learning between individuals and the group at large. Learning through participation requires physical tools for structuring that participation and psychological tools to help the learner engage with the materials, which suggests examining aspects of motivation to learn when designing materials directed towards participation. He argues this means we should look at participation through more than just counting how much someone talks or writes, instead developing activities which require engagement with others across a variety of learning modes.

The recognition of participation as a critical component of online learning, and the idea that students may demonstrate online participation through more than just discussion boards, are both good to see. However, Hrastinski (2009) offers little in terms of concrete examples demonstrating how he sees this theory of online participation playing out across these different learning modes. While he may have omitted examples to prevent a formulaic approach to online participation, including examples or fuller descriptions of how faculty might construct both the physical and psychological tools of online participation would have helped those less familiar with these ideas visualize the growing ways they can approach structuring online engagement.

As I have a deep interest in examining the ways in which community and culture are structured through online classes and the impacts this has on learning, I found this article both interesting and encouraging for research avenues. In particular, the rethinking he proposes of how we see online participation being constructed is encouraging, and I would like to see whether faculty and students view this idea of “what is participation” similarly or differently, and the connection these perceptions have to how they both approach online learning and how they evaluate it.


Unpacking TPACK…

Gómez, M. (2015). When Circles Collide: Unpacking TPACK Instruction in an Eighth-Grade Social Studies Classroom. Computers in the Schools, 32(3/4), 278–299.

Coming into teaching from a graduate program in anthropology, where the concern was not how to teach but how to research, this author finds the idea of evaluating the knowledge needed to teach effectively, much less to teach with technology, novel. Thus, while the overall importance of Mishra and Koehler's (2006) work on Technological Pedagogical Content Knowledge (TPCK) for understanding the practice of teaching with technology is evident to this author, the actual process of implementation within class design was difficult to visualize. To clarify how Mishra and Koehler's model is applied and implemented within course design, Gómez (2015) illustrated the application of TPACK through a case study of a single 8th grade teacher and two social studies classrooms. Using data collected through classroom observations, formal and informal interviews, and the analysis of artifacts produced, Gómez used a constant comparative approach to organize the data along themes related to the intersections of TPACK: technology knowledge (TK), content knowledge (CK), pedagogical knowledge (PK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), pedagogical content knowledge (PCK), and technological pedagogical content knowledge (TPCK), and examined when and how these intersected within the framework of the class. Interestingly, when interviewed, the teacher offered that he was designing his class not with TPACK in mind but rather as a way to reach his desired goal – to teach students to think historically – and that technology is only a tool that helps him engage students by shaping lessons in ways that meet this goal.

Overall, this is only a single case study, so aspects of design and implementation are bound to vary by teacher, school, and students. The selection of this class and teacher was not random; rather, the teacher was recommended to the researcher as someone who uses technology regularly in the classroom. In addition, the school was a K-12 private school with one-to-one technology, so this scenario presents a great degree of technological access and affordances which may not be available to all teachers and schools. Gómez recognizes these limitations and appropriately makes no generalizations from these observations and interviews that should be broadly applied.

Despite this, the article offers one example of how TPACK might be implemented in course design. Based on what he observed, Gómez (2015) acknowledges that this case breaks down the idea that the components of TPACK must intersect concurrently. Rather, he notes, “TPACK no longer becomes the intersection of these three types of knowledge, but rather it becomes the layered combination of these three types of knowledge” (p. 295). In addition, Gómez (2015) highlights how teachers may approach TPACK very differently in implementation, as the teacher of the 8th grade classes studied indicated that “teaching effectively with technology (TPACK) begins with an understanding of what he wants his students to learn” (p. 296). He therefore frames TPACK within a framework of what he wants students to know. Gómez suggests that this may be a common way teachers implement TPACK and that, therefore, “understanding the role students play in making decisions about using technology in instruction” should be considered more within TPACK design (p. 296).

Mishra, P., & Koehler, M. J. (2006). Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge. Teachers College Record, 108(6), 1017–1054.

Promoting Student Engagement in Videos Through Quizzing

Cummins, S., Beresford, A. R., & Rice, A. (2016). Investigating Engagement with In-Video Quiz Questions in a Programming Course. IEEE Transactions on Learning Technologies, 9(1), 57–66.

The use of videos to supplement or replace lectures previously delivered face-to-face is standard in many online courses. However, these videos often encourage passivity on the part of the learner. Other than watching and taking notes, there may be little to challenge the video-watching learner to transform the information into retained knowledge, to self-assess whether they understand the content, or to demonstrate their ability to apply what they have learned to novel situations. Since engagement with videos is often the first step toward learning, Cummins, Beresford, and Rice (2016) tested whether students can become actively engaged with video materials through the use of in-video quizzes. They had two research questions: a) “how do students engage with quiz questions embedded within video content” and b) “what impact do in-video quiz questions have on student behavior” (p. 60).

Utilizing an Interactive Lecture Video Platform (ILVP) they developed and open sourced, the researchers collected real-time student interactions with 18 different videos developed as part of a flipped classroom for programmers. Within each video, multiple choice and text-answer questions were embedded and automatically graded by the system. Video play stopped automatically at each question, and students were required to answer. Correct answers automatically resumed playback, while students had the option of retrying incorrect ones or moving ahead. Correct responses were discussed immediately after each quiz question once playback resumed. The questions targeted the Remember, Understand, Apply, and Analyse levels of Bloom's revised taxonomy. In addition to the interaction data, the researchers administered anonymous questionnaires to collect student thoughts on the technology and on behaviors they observed, and also evaluated student engagement based on question complexity. Degree of student engagement was measured as the number of students answering the quiz questions relative to the number of students accessing the video.
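The engagement measure described above is a simple ratio. As a minimal sketch of that arithmetic (the counts and question labels here are hypothetical illustrations, not data from the ILVP system):

```python
# Illustrative sketch of the engagement measure described above:
# engagement = students answering a quiz question / students accessing the video.
# All counts and question labels below are hypothetical.

def engagement_rate(answered: int, accessed: int) -> float:
    """Fraction of video viewers who answered a given in-video question."""
    if accessed == 0:
        return 0.0
    return answered / accessed

# Hypothetical per-question answer counts for one video accessed by 120 students
question_answers = {"q1": 96, "q2": 84, "q3": 60}
viewers = 120

rates = {q: engagement_rate(n, viewers) for q, n in question_answers.items()}
print(rates)  # {'q1': 0.8, 'q2': 0.7, 'q3': 0.5}
```

A drop in this ratio across successive questions in a video is the kind of signal the authors use to discuss where engagement falls off.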

According to Cummins et al. (2016), students were likely to engage with the video through the quiz, but question style, question difficulty, and the overall number of questions in a video affected the likelihood of engagement. In addition, student behaviors varied in how often and in what ways this engagement took place. Some students viewed videos in their entirety, while others skipped through them to areas they felt were relevant; still others employed a combination of these techniques. The authors suggest, based both on the observed interactions and on questionnaire responses, that four patterns of motivation are present during student engagement with the video: completionism (complete everything because it exists), challenge-seeking (engage only with questions they felt challenged by), feedback (verify understanding of material), and revision (review materials repeatedly). Interestingly, the researchers noted that student recollection of their engagement differed in some cases from actual recorded behavior, but the authors suggest this may show that students are answering questions not within the context of the quiz but in other contexts not recorded by the system. Given the evidence of student selectivity in responding to questions based on motivations, the authors suggest a diverse approach to question design within videos will offer something for all learners.

While this study makes no attempt to assess the actual impact on learner performance and retention (due to the type of class and the assessment designs within it relative to the program), it does show that in-video quizzes may offer an effective way to promote student engagement with video-based materials. It is unfortunate the authors did not build an assessment structure into this research design so as to collect some measure of learning. However, the platform they utilized is available to anyone (https://github.com/ucam-cl-dtg/ILVP-prolog), and other systems of integrated video quizzing are available (e.g., TechSmith Relay) which, when combined with keystroke and eye-movement recording technology, could capture similar information; this opens up the ability to further test how in-video quizzing impacts student performance and retention.

In terms of further research, one could envision a series of studies using a similar process to examine in-video quizzing in greater depth, not only for data on how it specifically impacts engagement, learning, and retention, but also for how these may vary with video purpose, length, context, and the knowledge level of the questions. As Schwartz and Hartman (2007) noted, design variations across video genres may depend on learning outcomes, so assessing whether this engagement exists only for lecture-based videos or transfers to other genres is intriguing. As Cummins et al. (2016) explain, students “engaged less with the Understand questions in favour of other questions” (p. 62), which suggests that students were actively selecting what they engaged with based on what they felt was most useful to them. Thus further investigation of how to design more engaging and learner-centered questions would be useful for knowledge retention. In addition, since the videos were sessions replacing lectures and ranged in length from 5 minutes and 59 seconds to 29 minutes and 6 seconds, understanding how length impacts engagement would help determine whether there is a point at which student motivation, and thus learning, wavers. While the authors do address some specifics as to where drop-offs in engagement occurred relative to particular questions, they do not offer a breakdown of engagement versus video length, and they acknowledge that the number of questions varied between videos (three had no questions at all) and that there was no relationship between the number of questions and video length. Knowing more about the connections between in-video quizzing and student learning, as well as the variables which shape this process, could help to better assess the overall impact of in-video quizzing and allow us to optimize in-video quizzes to promote student engagement, performance, and retention.

Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 349–366). Mahwah, NJ: Lawrence Erlbaum Associates.

Video Podcasts and Education

Kay, R. H. (2012). Exploring the use of video podcasts in education: A comprehensive review of the literature. Computers in Human Behavior, 28, 820–831.

While the use of podcasts in education is growing, the literature supporting their effectiveness for learning is far from conclusive. Kay (2012) offers an overview of the literature on the use of podcasts in education a) to understand the ways in which podcasts have been used, b) to identify the overall benefits of and challenges to using video podcasts, and c) to outline areas of research design which could enhance evaluations of their effectiveness for learning. Utilizing keywords such as “podcasts, vodcasts, video podcasts, video streaming, webcasts, and online videos” (p. 822), Kay searched for articles published in peer-reviewed journals. Through this she identified 53 studies published between 2009 and 2011 to analyze. Since the vast majority of these focused on undergraduates in specific fields, Kay presents this as a review of “the attitudes, behaviors and learning outcomes of undergraduate students studying science, technology, arts and health” (p. 823). Within this context, Kay (2012) shows there is considerable diversity in how podcasts are used, how they are structured, and how they are tied into learning. She notes that podcasts generally fall into four categories (lecture-based, enhanced, supplementary, and worked examples), vary in length and segmentation, are designed for differing pedagogical approaches (passive viewing, problem solving, and applied production), and have differing levels of focus (from narrowly targeting specific skills to broadly addressing higher cognitive concepts). Because of the variability in research design, purpose, and analysis methods, Kay (2012) approached this not as a meta-analysis but as a broad comparison of the benefits from and challenges presented by using video podcasts.

In comparing the benefits and challenges, Kay (2012) notes that while most studies show substantial benefits, some are less conclusive. In examining the benefits, Kay finds that students in these studies access podcasts primarily in the evenings and on weekends, primarily on home computers rather than mobile devices (though this varies by the type of video), employ different styles of viewing, and tie their access to a desire to improve knowledge (often ahead of an exam or class). This suggests that students embrace the flexibility and freedom afforded by podcasts to learn anywhere and in ways conducive to their learning patterns. Overall, student attitudes toward podcasts are positive in many of the studies. However, some showed a student preference for lectures over podcasts, which limited students' desire to access them. Many studies noted that students felt podcasts gave them a sense of control over their learning, motivated them to learn through relevancy and attention, and helped them improve their understanding and performance. In considering performance, some of the studies showed improvement over traditional approaches with regard to test scores while others showed no improvement. In addition, while some studies reported that educators and students believed specific skills improved, such as team building, technology usage, and teaching skills, the processes by which these gains occur were not shared. Finally, some studies indicated that technical problems with podcasts and lack of awareness can make podcasts inaccessible to some students, and several studies showed that students who regularly accessed podcasts attended class less often.

In reflecting on these diverse outcomes, Kay argues that the conflicting evidence on benefits and challenges is connected to research design. Kay (2012) argues that issues of podcast description, sample selection and description, and data collection need to be addressed “in order to establish the reliability and validity of results, compare and contrast results from different studies, and address some of the more difficult questions such as under what conditions and with whom are video podcasts most effective” (p. 826). She argues that understanding more about the variation in length, structure, and purpose of podcasts can help to differentiate and better compare study data. Furthermore, Kay calls for more diverse populations (e.g., K-12) and better demographic descriptions within studies so as to remove limits on the ability to compare findings across contexts. Finally, she contends that an overall lack of examination of quantitative data and low-quality descriptions of qualitative data techniques undermine the data being collected: “It is difficult to have confidence in the results reported, if the measures used are not reliable and valid or the process of qualitative data analysis and evaluation is not well articulated” (p. 827). From these three issues, Kay recommends greater depth in the design, descriptions, and data collection of video podcasting research.

While the literature review offers a general overview of the patterns the author observed in the studies collected, there are questions about the data collection process, as the author is unclear a) as to why three prior literature reviews were included as part of the analysis, and b) as to whether the patterns she discusses come only from those papers with undergraduate populations (as intimated by her statement quoted above) or from all of the studies she collected. The author also used only articles published in peer-reviewed journals and included no conference papers; it is unclear how the data would have differed had these other sources been included.

Overall, the most critical finding she provides from this study is that there is no unifying research design underlying the studies on video podcasts, and this results in a diverse set of studies without consensus on the effective use of podcasts in education and with little applicability toward how to effectively implement video podcasts. The importance of research design in creating a comparable body of data cannot be overstated and is something which should be considered in all good educational technology research. Unfortunately, while Kay notes the issues present in how the various studies she examined code, collect, and analyze data, she does not much address these underlying research design issues when considering areas of further research. While this is not to lessen the issues she does raise for future research, the need for better research design is evident yet given few specifics by Kay. One would have liked a more specific vision from her on this issue, since greater consideration is needed of the underlying issues of research design with regard to describing and categorizing video podcasts, sampling strategies, and developing methods of both qualitative and quantitative analysis.