Inviting Video Games to the Educational Table

Gee, J. P. (2008). Learning and games. In K. Salen (Ed.), The ecology of games: Connecting youth, games, and learning (pp. 21-40). The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning. Cambridge, MA: The MIT Press.

According to Gee (2008), “good video games recruit good learning,” but it all rests on good design (p. 21). This is because well-designed video games provide the learner with experiences that meet conditions which “recruit learning as a form of pleasure and mastery” (p. 21). These conditions include providing an experience that is goal-structured and requires interpretation in working toward those goals, that provides immediate feedback, and that offers the opportunity to apply the prior knowledge and experiences of self and others toward success in meeting those goals. When these conditions are met, Gee (2008) argues, the learner’s experiences are “organized in memory in such a way that they can draw on those experiences as from a data bank” (p. 22). As Gee (2008) presented, these conditions, coupled with the social identity building that good game design incorporates, help “learners understand and make sense of their experience in certain ways. It helps them understand the nature and purpose of the goals, interpretations, practices, explanations, debriefing, and feedback that are integral to learning” (p. 23). These conditions are the key to good game design because they provide several elements central to learning science. First, they create a “situated learning matrix”: the set of goals and norms that require the player to “master a certain set of skills, facts, principles, and procedures” and to utilize the tools and technologies available within the game, including other players and non-player characters who represent a community of practice in which the learner is self-situating (Gee, 2008, p. 25). This combination of game (the game design) and Game (the social setting), as Gee (2008) explained, provides the learner with a foundation for good learning, since “learning is situated in experience but goal driven, identity-focused experience” (p. 26). In addition, many well-designed games incorporate models and modeling, which “simplify complex phenomena in order to make those phenomena easier to deal with” (Gee, 2008, p. 37). Many good games also enhance learning through an emphasis on distributed intelligence, collaboration, and cross-functional teams, which create “a sense of production and ownership,” situate meanings and terms within motivating experiences at the moment they are needed, and provide an emotional attachment for the player (which aids memory retention) while keeping frustration levels low enough to prevent the player from pulling away (Gee, 2008, p. 37). As Gee (2008) pointed out, “the language of learning is one important way in which to talk about video games, and video games are one important way in which to talk about learning. Learning theory and game design may, in the future, enhance each other” (p. 37).
In breaking down the connections to learning that can be present within well-designed video games, Gee (2008) has not only outlined the structures through which good educational games should be built but has also constructively addressed common arguments against using video games. Recognizing the assets well-designed games can bring to the educational table is important since, more often than not, the skills and content learned in games are learner-centered and content-connected but are “usually not recognized as such unless they fall into a real-world domain” (Gee, 2008, p. 27). This is likely why the discussion of the role of video games within education is necessary. As Gee (2008) commented,

“any learning experience has some content, that is, some facts, principles, information, and skills that need to be mastered. So the question immediately arises as to how this content ought to be taught? Should it be the main focus of the learning and taught quite directly? Or should the content be subordinated to something else and taught via that “something else”? Schools usually opt for the former approach, games for the latter. Modern learning theory suggests the game approach is the better one” (p. 24)


Video Games as Digital Literacy

Steinkuehler, C. (2010). Digital literacies: Video games and digital literacies. Journal of Adolescent & Adult Literacy, 54(1), 61-63.

In reflecting on whether educators are selling video games short when it comes to learning, Steinkuehler (2010) offered the anecdotal case of “Julio,” an 8th-grade student. Julio spent a significant amount of his free time involved in video game culture, designing and writing about gaming. However, he read three grade levels below where he should have been and was often disinterested in and disengaged from school. Even when presented with game-related readings, he still did not excel. But when given a choice in reading, he selected a 12th-grade text that appealed to his interests and managed to succeed despite the obstacles this reading presented him. Steinkuehler (2010) argued it was the act of giving him the choice to select something that appealed to his interests that increased his self-correction rate and thus gave him the persistence to overcome and meet the challenge. Steinkuehler (2010) opined that “video games are a legitimate medium of expression. They recruit important digital literacy practices” (p. 63), and as such they may offer an outlet for students, particularly disengaged males, to engage in learning that might otherwise go unmet through traditional structures.

The efforts the author highlighted Julio engaging in (writing, reading, and researching related to gaming) certainly suggest that video games may offer a way to bridge new and traditional literacies, as Gee (2008) suggests. However, this is but a single example, and alone it offers very little tangible data on which to rest any firm conclusions about the importance of video gaming in education. It does, though, offer the notion of considering how video games present as new literacies which can open doors for the expression of meaning and ideas, particularly for those who may feel marginalized within traditional curriculum plans and by those who consider video games a “waste of time.”

A qualitative approach to investigating how students view and experience the use of gaming in education is especially appealing given the case of Julio. Would he have seen that his outside activities were translatable into educational acumen? Would his teacher or parents? There is too little in this small single case study to say much, but it does give one ideas.

Gee, J. P. (2008). Learning and games. In K. Salen (Ed.), The ecology of games: Connecting youth, games, and learning (pp. 21-40). The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning. Cambridge, MA: The MIT Press.


Twitter and New Literacy

Greenhow, C., & Gleason, B. (2012). Twitteracy: Tweeting as a new literacy practice. The Educational Forum, 76(4), 464-478.

Just how useful can social media be in promoting learning? According to Greenhow and Gleason (2012), microblogging, through technologies such as Twitter, opens up the opportunity for students to connect to “the kinds of new literacies increasingly advocated in the educational reform literature” (p. 467). New literacy is a “dynamic, situationally specific, multimodal, and socially mediated practice that both shapes and is shaped by digital technologies” (Greenhow & Gleason, 2012, p. 467). As such, it allows meaning and learning to stretch across both formal and informal interactions and to be responsive to the relationships that develop within these settings, such that authorship is neither singular nor static but is constantly being created, re-created, and expressed through new means of combining text, images, sound, motion, and color. To examine how microblogging through social media such as Twitter connects to learning and new literacy, the authors conducted a literature search of journal articles to answer questions such as:

  • How do young people use Twitter in formal and informal learning settings, and with what results?
  • Can tweeting be considered a new literacy practice?
  • How do tweeting practices align with standards-based literacy curricula?

In their review, the authors found that “Twitter use in higher education may facilitate increased student engagement with course content and increased student-to-student or student–instructor interactions—potentially leading to stronger positive relationships that improve learning and to the design of richer experiential or authentic learning experiences” (Greenhow & Gleason, 2012, p. 470). However, at the time of their research, few studies had examined the use of Twitter as a new literacy practice. Looking to research on literacy practices and social media, Greenhow and Gleason (2012) suggested that “youth-initiated virtual spaces,” such as fan-fiction sites, Facebook, and MySpace, “allow young people to perform new social acts not previously possible” and demonstrate new literacy practices (p. 471). Tweets, Greenhow and Gleason (2012) argued, offer similar themes and opportunities since they:

  • are “multimodal, dynamically updating, situationally specific, and socially mediated” (p. 472)
  • comprise “a multiplicity of modes,” like the “unique combinations of text, images, sound, and color that characterize teens’ self-expressions on social network sites” (p. 472)
  • develop into “constantly evolving, co-constructed” conversations that require the participant to understand the situational context of the conversation and the conventions within it in order to participate (p. 472)
  • show “a use of language and other modes of meaning” that is “tied to their relevance to the users’ personal, social, cultural, historical, or economic lives” (p. 472)

As a result, Greenhow and Gleason (2012) argued that, when considering curricula, tweeting creates “opportunities for their development of standard language proficiencies” and can “encourage the development of 21st century skills, such as information literacy skills” (pp. 473-474). However, further research is important for addressing how best to treat this as a new literacy within traditional educational practices. Due to the paucity of research, the authors recommended more large-scale and in-depth studies of how students of varying subgroups use Twitter, as well as research specifically focused on:

  • tweeting practices and “the potential learning opportunities that exist across school and non-school settings” (p. 474)
  • how learners frame and come to view their experiences and place within the Twitter community
  • developing pedagogy for analyzing social media communications to understand socio-cultural connections
  • how teachers are incorporating social media into secondary and higher education

Given the generally negative perceptions many parents and districts have of student use of social media, along with the hurdles of “authority, control, content management (e.g., managing what is shared, received, tagged, and remixed), security, and copyright,” Greenhow and Gleason (2012) cautioned that such research will likely focus on higher education until there is “an accumulation of evidence that suggests that the benefits of social media integration in learning environments outweigh the costs” (p. 475).

As Greenhow and Gleason’s (2012) literature review suggested, there can be a lag between when a technology is introduced, when it becomes used in education, and when research strategies are targeted toward understanding its placement and performance in promoting learning among various student populations. At the time of their research, the authors were able to locate only 15 studies that met their broader search criteria of social media and new literacy, and only 6 that specifically discussed microblogging. In a more recent literature review, Tang and Hew (2017) found 51 papers specifically examining microblogging and/or Twitter that were published between 2006 and 2015. While microblogging platforms such as Twiducate have been offered to make microblogging more K-12 friendly, whether the use of Twitter has reached its full potential is less certain. Tang and Hew (2017) suggest that Twitter and similar technologies are most often used for assessment and communication, and that more professional development is needed to make faculty more adept at using Twitter and designing learning activities through it, as well as at training students to use Twitter effectively and to lessen the distractions social media presents to them. As Tang and Hew (2017) remarked, still more research is needed “in how different students experience Twitter and are engaged by it” (p. 112).

Tang, Y., & Hew, K. F. (2017). Using Twitter for education: Beneficial or simply a waste of time? Computers & Education, 106, 97-118.


ARLEs & VRLEs: Horizons for Learning

Today’s classroom is so much larger than four walls and a whiteboard or chalkboard. The opportunity for educators to take their students to new worlds, or to help them see new aspects within everyday landscapes, is vast. Virtual reality learning environments (VRLEs) are 3-D immersive experiences that can be accessed through a desktop or through more specialized hardware such as goggles. Augmented reality learning environments (ARLEs) combine virtual objects (2-D and 3-D) with the actual environment of the user in real time. The two are often seen as occupying different points along the reality-to-virtuality continuum. Each presents new opportunities and challenges for educators.

In examining virtual reality learning environments, Dalgarno and Lee (2010) pinpointed representational fidelity and learner interaction as the key characteristics of VRLEs which, through interaction with the learner, allow for the “construction of identity, sense of presence and co-presence” within the virtual space (Dalgarno & Lee, 2010, p. 14). This creates the sense of immersion that can be impactful for the learner. Representational fidelity relates to how the environment is represented: critical aspects include how realistically the environment is displayed, how smoothly view changes and object motions are rendered, how consistently objects behave within the environment, whether there is spatial audio, whether there is tactile, force, and kinaesthetic feedback, and how the user is represented through the construction of an avatar. Learner interaction concerns how the user interacts with and is displayed within the environment; aspects of this include embodied actions, verbal and non-verbal communication, object interactions, and control of the environment. These key aspects come together to create the ways in which VRLEs can potentially impact learning. Dalgarno and Lee (2010) outlined five affordances that VRLEs facilitate: spatial knowledge representation, experiential learning, engagement, contextualized learning, and collaborative learning. However, Dalgarno and Lee (2010) suggested that in order to assess how to use 3-D VRLEs in “pedagogically sound ways,” more meaningful research is necessary (p. 23). They offered several recommendations for research, including studying the basic assumptions held about VRLEs and linking their characteristics to the affordances they outlined. They also argued that research needs to be done to establish guidelines and best practices for VRLE implementation, and they appropriately recommended that this not be done through comparisons of 2-D to 3-D environments, as these would be “contrived examples in inauthentic settings” (Dalgarno & Lee, 2010, p. 25). Given that this was more a call to arms than a general “what can VRLEs do for you” presentation, it is not surprising that there is little discussion of the challenges of implementing and using VRLEs in education. As Salmon duly noted in her five-stage model for scaffolding learners into multi-user virtual environments, there is a need to recognize and structure for the challenges that VRLEs present (Salmon et al., 2010). This requires recognition of the technological and educator interventions needed to support the learner.

When it comes to comparing VRLEs and ARLEs, Dunleavy et al. (2009) offered that, when considering affordances, ARLEs may provide greater representational fidelity due to their natural overlay onto the real world, which brings the feel, sights, and smells of the actual environment into the experience. In addition, the ability to talk face-to-face as well as virtually may allow for easier collaboration between users. However, the authors noted that within VRLEs each action by a user is “captured and time-stamped by the interface: where they go, what they hear and say, what data they collect or access,” which allows for greater visualization of “every aspect of the learning experience for formative and summative assessment” (Dunleavy et al., 2009, p. 22). When considering the limitations of ARLEs, the authors recognized the need to consider hardware and software issues in implementing ARLEs and that, much like VRLEs, there is a need for logistical support and lesson management during activities that utilize ARLEs. In addition, Dunleavy et al. (2009) found that students expressed cognitive overload due both to the newness of the experience and to confusion about what was to be done. They recommended that significant modeling, facilitating, and scaffolding is needed when using ARLEs.

Given my interest in using both AR and VR learning experiences in my courses as a means of providing realistic training opportunities that would otherwise be limited, the affordances and issues outlined by these authors offer consideration of the potentials and problems that exist. However, since neither article offered much in terms of direct assessment of the specific impacts AR and VR have on student outcomes, social connection, and motivation, and neither links these to specific aspects of design and implementation, these articles represent only a starting point.

Dalgarno, B., & Lee, M. J. W. (2010). What are the learning affordances of 3-D virtual environments? British Journal of Educational Technology, 41(1), 10-32.

Dunleavy, M., Dede, C., & Mitchell, R. (2009). Affordances and limitations of immersive participatory augmented reality simulations for teaching and learning. Journal of Science Education and Technology, 18(1), 7-22.

Salmon, G., Nie, M., & Edirisingha, P. (2010). Developing a five-stage model of learning in Second Life. Educational Research, 52(2), 169-182.


Unpacking TPACK…

Gómez, M. (2015). When circles collide: Unpacking TPACK instruction in an eighth-grade social studies classroom. Computers in the Schools, 32(3/4), 278-299.

Coming into teaching from a graduate program in anthropology, where the concern was not how to teach but how to research, the idea of evaluating the knowledge needed to teach effectively, much less to teach with technology, is novel to this author. Thus, while the overall importance of Mishra and Koehler’s (2006) work on Technological Pedagogical Content Knowledge (TPCK) for understanding the practice of teaching with technology is evident, the actual process of implementation within class design was difficult to visualize. To clarify how Mishra and Koehler’s model is applied and implemented within course design, Gómez (2015) illustrated applying TPACK in a case study of a single 8th-grade teacher and two social studies classrooms. Using data collected through classroom observations, formal and informal interviews, and the analysis of artifacts produced, Gómez used a constant comparative approach to organize the data along themes related to the intersections of TPACK: technology knowledge (TK), content knowledge (CK), pedagogical knowledge (PK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), pedagogical content knowledge (PCK), and technological pedagogical content knowledge (TPACK), examining when and how these intersected within the framework of the class. Interestingly, when interviewed, the teacher offered that he was designing his class not with TPACK in mind but as a way to reach his desired goal, teaching students to think historically, and that technology is only a tool that helps him engage students in doing this by shaping lessons in ways that meet that goal.
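
To make the coding step easier to picture, the sketch below shows one hypothetical way observation excerpts tagged with the seven TPACK constructs could be tallied to see which intersections appear most often. The episodes, codes, and counts are invented for illustration; this is not Gómez’s (2015) instrument, data, or analysis procedure.

```python
# Hypothetical sketch: tallying classroom episodes coded with TPACK constructs.
# The episodes and their codes below are invented for illustration only.

from collections import Counter

TPACK_CODES = {"TK", "CK", "PK", "TCK", "TPK", "PCK", "TPACK"}

# Each observed episode is tagged with one or more TPACK codes by the researcher.
episodes = [
    {"note": "Teacher demonstrates timeline software",     "codes": {"TK", "TPK"}},
    {"note": "Lecture on causes of the Civil War",         "codes": {"CK", "PCK"}},
    {"note": "Students analyze primary sources in a wiki", "codes": {"TCK", "TPACK"}},
    {"note": "Discussion scaffolded with a polling app",   "codes": {"TPK", "TPACK"}},
]

# Count how often each construct appears across the coded observations.
tally = Counter(code for ep in episodes for code in ep["codes"] if code in TPACK_CODES)

for code, count in tally.most_common():
    print(f"{code}: {count}")
```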

Overall, this is only a single case study, so aspects of design and implementation are bound to vary by teacher, school, and students. The selection of this class and teacher was not random; rather, the teacher was recommended to the researcher as someone who uses technology regularly in the classroom. In addition, the school was a K-12 private school with one-to-one technology, and thus this scenario presents a degree of technological access and affordance that may not be available to all teachers and schools. Gómez recognizes these limitations and appropriately makes no generalizations from these observations and interviews that should be broadly applied.

Despite this, the article offers one example of how TPACK might be implemented in course design. Based on what he observed, Gómez (2015) acknowledges that this case breaks down the idea that the components of TPACK must intersect concurrently. Rather, he notes, “TPACK no longer becomes the intersection of these three types of knowledge, but rather it becomes the layered combination of these three types of knowledge” (p. 295). In addition, Gómez (2015) highlights how teachers may approach TPACK very differently in implementation, as the teacher of the 8th-grade classes studied indicated that “teaching effectively with technology (TPACK) begins with an understanding of what he wants his students to learn” (p. 296). Therefore, he frames TPACK within a framework of what he wants students to know. Gómez presents this as a potentially common way that teachers implement TPACK, and therefore “understanding the role students play in making decisions about using technology in instruction” should be considered more within the TPACK design (p. 296).

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017-1054.

Promoting Student Engagement in Videos Through Quizzing

Cummins, S., Beresford, A. R., & Rice, A. (2016). Investigating engagement with in-video quiz questions in a programming course. IEEE Transactions on Learning Technologies, 9(1), 57-66.

The use of videos to supplement or replace lectures that were previously delivered face-to-face is standard in many online courses. However, these videos often encourage passivity on the part of the learner. Other than watching and taking notes, there may be little to challenge the video-watching learner to transform the information into retained knowledge, to self-assess whether they understand the content, or to demonstrate their ability to apply what they have learned to novel situations. Since engagement with videos is often the first step toward learning, Cummins, Beresford, and Rice (2016) tested whether students can become actively engaged with video materials through the use of in-video quizzes. They had two research questions: a) “how do students engage with quiz questions embedded within video content” and b) “what impact do in-video quiz questions have on student behavior” (p. 60).

Utilizing an Interactive Lecture Video Platform (ILVP) they developed and open sourced, the researchers collected real-time student interactions with 18 different videos developed as part of a flipped classroom for programmers. Within each video, multiple-choice and text-answer questions were embedded and automatically graded by the system. Video playback stopped automatically at each question, and students were required to answer. Correct answers automatically resumed playback, while students had the option of retrying incorrect answers or moving ahead. Correct responses were discussed immediately after each quiz question when playback resumed. The questions were pitched at the Remember, Understand, Apply, and Analyse levels of Bloom’s revised taxonomy. In addition to the interaction data, the researchers administered anonymous questionnaires to collect student thoughts on the technology and on the behaviors they observed, and they also evaluated student engagement relative to question complexity. Degree of student engagement was measured by the number of students answering the quiz questions relative to the number of students accessing the video.
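
The code below is only a hypothetical sketch of the two mechanics described above (quiz-gated playback and an engagement rate computed as answering students over accessing students); it is not the authors’ ILVP implementation, and all class names, fields, and example numbers are invented.

```python
# Hypothetical sketch of in-video quiz gating and an engagement-rate measure.
# Not the authors' ILVP code; names and numbers are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Question:
    timestamp: float        # seconds into the video where playback pauses
    prompt: str
    correct_answer: str

@dataclass
class VideoSession:
    questions: list
    answered: set = field(default_factory=set)   # indices of answered questions

    def on_reach(self, q_index: int, response: str) -> bool:
        """Pause at the question; a correct answer resumes playback.
        An incorrect answer returns False so the caller can retry or skip."""
        question = self.questions[q_index]
        if response.strip().lower() == question.correct_answer.lower():
            self.answered.add(q_index)
            return True
        return False

def engagement_rate(students_answering: int, students_accessing: int) -> float:
    """Engagement for one question: answering students / accessing students."""
    return students_answering / students_accessing if students_accessing else 0.0

# Example with invented numbers: 41 of 58 students who opened the video answered.
print(f"Engagement: {engagement_rate(41, 58):.0%}")
```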

According to Cummins et al. (2016), students were likely to engage with the video through the quizzes, but question style, question difficulty, and the overall number of questions in a video impacted the likelihood of engagement. In addition, student behaviors varied in how often and in what ways this engagement took place. Some students viewed videos in their entirety, while others skipped through them to areas they felt were relevant; others employed a combination of these techniques. The authors suggest that, based both on the observed interactions and on questionnaire responses, four patterns of motivation are present during student engagement with the video: completionism (complete everything because it exists), challenge-seeking (engage only with questions they felt challenged by), feedback (verify understanding of the material), and revision (review materials repeatedly). Interestingly, the researchers noted that student recollections of their engagement differed in some cases from the actual recorded behavior, but the authors suggest this may show that students are not answering the question in the context of the quiz but are doing so within other contexts not recorded by the system. Given the evidence of student selectivity in responding to questions based on motivations, the authors suggest a diverse approach to question design within videos will offer something for all learners.

While this study makes no attempt to assess the actual impact on learner performance and retention (due to the type of class and the assessment design within it relative to the program), it does show that in-video quizzes may offer an effective way to promote student engagement with video-based materials. It is unfortunate the authors did not build an assessment structure into this research design so as to collect some measure of learning. However, given that the platform they utilized is available to anyone (https://github.com/ucam-cl-dtg/ILVP-prolog) and that other systems of integrated video quizzing are available (e.g., TechSmith Relay) which, when combined with keystroke and eye-movement recording technology, could capture similar information, it is possible to further test how in-video quizzing impacts student performance and retention.

In terms of further research, one could envision a series of studies using similar processes to examine in-video quizzing in greater depth, not only for data on how it specifically impacts engagement, learning, and retention, but also on how these may be affected by variables such as video purpose, length, context, and the knowledge level of the questions. As Schwartz and Hartman (2007) noted, design variations across video genres may depend on the intended learning outcomes, so assessing whether this engagement exists only for lecture-based videos or transfers to other genres is intriguing. As Cummins et al. (2016) explain, students “engaged less with the Understand questions in favour of other questions” (p. 62), which suggests that students were actively selecting what to engage with based on what they felt was most useful to them. Thus, further investigation of how to design more engaging and learner-centered questions would be useful for knowledge retention. In addition, since the videos were meant to replace lecture sessions and ranged in length from 5 minutes and 59 seconds to 29 minutes and 6 seconds, understanding how length impacts engagement would help determine whether there is a point at which student motivation, and thus learning, wavers. While the authors do address some specifics as to where drop-offs in engagement occurred relative to specific questions, they do not offer a breakdown of engagement versus the relative length of the video, and they acknowledge that the number of questions varied between videos (three had no questions at all) and that there was no relationship between the number of questions and video length. Knowing more about the connections between in-video quizzing and student learning, as well as the variables that shape this process, could help to better assess the overall impact of in-video quizzing and allow us to optimize in-video quizzes to promote student engagement, performance, and retention.

Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 349-366). Mahwah, NJ: Lawrence Erlbaum Associates.

Video Podcasts and Education

Kay, R. H. (2012). Exploring the use of video podcasts in education: A comprehensive review of the literature. Computers in Human Behavior, 28(3), 820-831.

While the use of video podcasts in education is growing, the literature supporting their effectiveness for learning is far from conclusive. Kay (2012) offers an overview of the literature on the use of video podcasts in education a) to understand the ways in which video podcasts have been used, b) to identify the overall benefits and challenges of using video podcasts, and c) to outline areas of research design that could enhance evaluations of their effectiveness for learning. Utilizing keywords such as “podcasts, vodcasts, video podcasts, video streaming, webcasts, and online videos” (p. 822), Kay searched for articles published in peer-reviewed journals. Through this she identified 53 studies published between 2009 and 2011 to analyze. Since the vast majority of these focused on undergraduates in specific fields, Kay presents this as a review of “the attitudes, behaviors and learning outcomes of undergraduate students studying science, technology, arts and health” (p. 823). Within this context, Kay (2012) shows there is a great deal of diversity in how video podcasts are used, how they are structured, and how they are tied to learning. She notes that video podcasts generally fall into four categories (lecture-based, enhanced, supplementary, and worked examples), vary in length and segmentation, are designed for differing pedagogical approaches (passive viewing, problem solving, and applied production), and have differing levels of focus (from narrowly targeting specific skills to broadly addressing higher cognitive concepts). Because of the variability in research design, purpose, and analysis methods, Kay (2012) approached this not from a meta-analysis perspective but from a broad comparison perspective with regard to the benefits of and challenges in using video podcasts.

In comparing the benefits and challenges, Kay (2012) presents that while most studies show clear benefits, some are less conclusive. In examining the benefits, Kay finds that students in these studies access video podcasts primarily in the evenings and on weekends, primarily on home computers rather than mobile devices (though this varies by the type of video), utilize different styles of viewing, and tie access to a desire to improve knowledge (often ahead of an exam or class). This suggests that students value the flexibility and freedom afforded them through podcasts to learn anywhere and in ways that are conducive to their learning patterns. Overall, student attitudes toward podcasts are positive in many of the studies. However, some showed a student preference for lectures over podcasts, which limited students’ desire to access them. Many studies noted that students felt podcasts gave them a sense of control over their learning, motivated them to learn through relevance and attention, and helped them improve their understanding and performance. In considering performance, some of the studies showed improvement over traditional approaches with regard to test scores, while others showed no improvement. In addition, while some studies reported that educators and students believed specific skills such as team building, technology usage, and teaching skills were developed, the processes by which this occurs were not shared. Finally, some studies indicate that technical problems with podcasts and a lack of awareness can make podcasts inaccessible to some students, and several studies showed that students who regularly accessed podcasts attended class less often.

In reflecting on these diverse outcomes, Kay presents that the conflict evident in understanding the benefits and challenges is connected to research design. Kay (2012) argues that issues of podcast description, sample selection and description, and data collection need to be addressed “in order to establish the reliability and validity of results, compare and contrast results from different studies, and address some of the more difficult questions such as under what conditions and with whom are video podcasts most effective” (p. 826). She argues that understanding more about the variation in length, structure, and purpose of podcasts can help differentiate and better compare study data. Furthermore, Kay asks for more diverse populations (e.g., K-12) and better demographic descriptions within studies so as to remove limits on the ability to compare findings across different contexts. Finally, she presents that an overall lack of examination of quantitative data and low-quality descriptions of qualitative data techniques undermine the data being collected: “It is difficult to have confidence in the results reported, if the measures used are not reliable and valid or the process of qualitative data analysis and evaluation is not well articulated” (p. 827). From these three issues, Kay recommends overall greater depth in the design, descriptions, and data collection of video podcasting research.

While the literature review offers a general overview of the patterns the author witnessed in the studies collected, there are questions about the data collection process, as the author is unclear as to a) why three prior literature reviews were included as part of the analysis and b) whether the patterns she discusses come only from those papers with undergraduate populations (as is intimated by her statement quoted above) or from all of the studies she collected. The author also used only articles published in peer-reviewed journals and included no conference papers; it is unclear what difference in the data would have resulted from including these other sources.

Overall, the most critical information she provides from this study is that there is no unifying research design underlying the studies on video podcasts, and this results in a diverse set of studies without consensus on the effective use of podcasts in education and with little applicability to how to effectively implement video podcasts. The importance of research design in creating a comparative body of data cannot be overstated and is something that should be considered in all good educational technology research. Unfortunately, while Kay notes the issues in how various studies are coded and how data is collected and analyzed in the studies she examined, she does not address the underlying research design issues much when thinking about areas of further research. While this is not to lessen the issues she does bring up for future research, the need for better research design is evident and given few specifics by Kay. One would have liked a more specific vision from her on this issue, since greater consideration of the underlying issues of research design with regard to describing and categorizing video podcasts, sampling strategies, and developing methods of both qualitative and quantitative analysis is needed.


Designing Effective Qualitative Research

Hoepfl, M. C. (1997). Choosing qualitative research: A primer for technology education researchers. Journal of Technology Education, 9(1), 47-63.

According to Hoepfl (1997), research in technology education has largely relied on quantitative research, possibly due to the field’s own limitations in knowledge of and skill in qualitative research design. Desiring to increase the implementation of qualitatively designed research, Hoepfl offers a “primer” on the purpose, processes, and practice of qualitative research. Presenting qualitative research as expanding knowledge beyond what quantitative research can achieve, Hoepfl (1997) sees it as having three critical purposes. First, it can help us understand issues about which little is known. Second, it can offer new insight on what we already know. Third, qualitative research can more easily convey the depth of data beyond what quantitative research can. In addition, since qualitative data is often presented in ways similar to how people experience their world, she offers that it finds greater resonance with the reader. With regard to the processes of qualitative research, Hoepfl (1997) notes that, due to its nature, qualitative research design requires different considerations, as the “particular design of a qualitative study depends on the purpose of the inquiry, what information will be most useful, and what information will have the most credibility” (p. 50). This leads to flexibility, not finality, of research strategy before data collection and a de-emphasis on confidence in data resting solely on random sampling strategies and numbers. This flexibility in design strategy means a great deal of thought must go into how to best situate data collection, with the recognition that events in the field may require adjustments of design if some questions fail or new patterns emerge. In terms of strategies, the author describes purposeful sampling options and discusses how maximum variation sampling may lead both to depth of description and to sensitivity for recognizing emergent patterns. She also outlines some of the various forms of data available in qualitative research and the stages of data analysis. In doing this, Hoepfl (1997) recognizes that qualitative data is much more difficult to collect and analyze than quantitative data and that the research may require numerous cyclical movements through the various stages of collection and analysis. Importantly, she addresses the practices of the researcher and reviewer in considering authority and trustworthiness in qualitative research by examining issues of credibility, transferability, dependability, and confirmability.
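
To make the sampling logic concrete, the sketch below shows one way maximum variation sampling could be operationalized: greedily selecting the cases most dissimilar from those already chosen. The case attributes, the dissimilarity measure, and the greedy procedure are illustrative assumptions on my part, not a procedure prescribed by Hoepfl (1997).

```python
# Hypothetical sketch of maximum variation (purposeful) sampling:
# greedily pick cases that differ most from those already selected.

def dissimilarity(a: dict, b: dict) -> int:
    """Count how many attributes differ between two cases."""
    return sum(1 for key in a if a[key] != b[key])

def max_variation_sample(cases: list, k: int) -> list:
    """Select k cases that maximize variation across their attributes."""
    selected = [cases[0]]                      # seed with any case
    while len(selected) < k:
        # pick the case whose minimum distance to the selected set is largest
        candidate = max(
            (c for c in cases if c not in selected),
            key=lambda c: min(dissimilarity(c, s) for s in selected),
        )
        selected.append(candidate)
    return selected

# Illustrative case pool with invented attributes.
pool = [
    {"setting": "urban",    "grade": "8",  "tech_access": "1:1 laptops"},
    {"setting": "rural",    "grade": "8",  "tech_access": "shared lab"},
    {"setting": "suburban", "grade": "12", "tech_access": "BYOD"},
    {"setting": "urban",    "grade": "12", "tech_access": "shared lab"},
]
print(max_variation_sample(pool, 3))
```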

In examining Hoepfl’s work, the article offers a quality start toward understanding the strengths and struggles of qualitative research. Hoepfl correctly argues that increasing acceptance of qualitative research within technology education rests on the ability of the researcher to address questions of authority and trustworthiness, which are more easily (albeit possibly erroneously) accepted in quantitative research. However, there are other aspects inherent in qualitative research to which she gives almost no treatment at all. These include consideration of how relationships become built and defined between subjects and researcher and the impacts these can have on subject behavior. Hoepfl (1997) mentions these relationships and the risk of altering participant behavior, noting that these are effects “the researcher must be aware of, and work to minimize” (p. 53), but she offers no process either for recognizing when this occurs within the data or for how to actually go about minimizing it. When it comes to the ethics of human subject interaction, Hoepfl (1997) notes that “the researcher must consider the legal and ethical responsibilities associated with naturalistic observation” (p. 53), but earlier offered that limiting knowledge of the researcher’s identity and purpose, or even hiding them, may be appropriate. This is a problematic statement given informed consent guidelines, and it highlights a key piece of information missing from this primer: how to consider human subject research ethics within qualitative research design. Since Hoepfl is offering a general guide to qualitative research, and since IRBs and the primacy of informed consent guidelines were established in 1974 by the National Research Act, one would have expected at least some consideration of those guidelines, a mention of informed consent, or at least a discussion of how to handle the sensitive data that may come with qualitative data collection.

In reflecting on the applicability of Hoepfl’s work to my research interests, the emphasis on what qualitative research can bring to the educational technology table is enlightening, as I had not recognized how new an approach this was for education, given that it was something of a staple of my anthropological training. Of particular interest was Hoepfl’s discussion of maximum variation sampling. She cites Patton in saying:

“The maximum variation sampling strategy turns that apparent weakness into a strength by applying the following logic: Any common patterns that emerge from great variation are of particular interest and value in capturing the core experiences and central, shared aspects or impacts of a program” (Hoepfl, 1997, p. 52)

This statement and her discussion of trustworthiness connected to a recent article I read on generalizing in educational research written by Ercikan and Roth (2014). In particular, the authors discuss the reliance on quantitative research for its supposed ability to be generalized, but then break down this assumption to argue that qualitative data actually has more applicability since, if properly designed, it can create essentialist generalizations. These are:

“the result of a systematic interrogation of “the particular case by constituting it as a ‘particular instance of the possible’… in order to extract general or invariant properties…. In this approach, every case is taken as expressing the underlying law or laws; the approach intends to identify invariants in phenomena that, on the surface, look like they have little or nothing in common” (p. 10).

Thus, by looking at the “central, shared aspects” noted by Hoepfl through maximum variation sampling and discerning the essential aspects which underlie the patterns, qualitative research could “identify the work and processes that produce phenomena.” Once such a generalization is established, its testability comes from examining it against any other case study. If issues of population heterogeneity are also considered within the design of the qualitative data collection, the authors argue that the ability to generalize from data is potentially greater with qualitative research.

Additional References

Ercikan, K., & Roth, W.-M. (2014). Limits of generalizing in education research: Why criteria for research generalization should include population heterogeneity and uses of knowledge claims. Teachers College Record, 116(5), 1-28.

Using A Learning Ecology Perspective

Barron, B. (2006). Interest and self-sustained learning as catalysts of development: A learning ecology perspective. Human Development, 49, 193-224.

Not all learning is done in school. While such a statement may seem obvious, Barron (2006) notes that studies of learning often focus specifically on formal settings of learning (schools and labs) and in doing so miss the big picture of how a learner will co-opt and connect various resources, social networks, activities, and interactions to create a landscape where their learning takes place. Using a learning ecology framework, the author seeks to understand how exactly a learner may go about learning by examining the multiple contexts and resources available to them. A learning ecology is “the set of contexts found in physical and virtual spaces that provide opportunities for learning” (Barron, 2006). By understanding how the learner negotiates the landscape for learning that surrounds them, the author believes educators can think more broadly about ways to connect in-class and outside learning. Using qualitative interviews with students and their families as the focal point of her research, Barron (2006) focuses on creating “portraits of learning about technology” (p. 202) to better understand how interest is found and then self-sustained across several contexts. Through this work she demonstrates that there is no single means by which a student may develop interest and maintain learning, but that common themes are prevalent. Among her case studies, Barron (2006) outlines five modes of self-initiated learning: finding text-based resources to gain knowledge, building knowledge networks for mentoring and opportunities, creating interactive activities to promote self-learning, seeking out structured learning through classes and workshops, and exploring media to learn about and find examples of interests. By examining the interplay of these various strategies, Barron (2006) demonstrates how the learner was an active participant in constructing their own learning landscape, such that “learning was distributed across activities and resources” (p. 218). Because of this, Barron (2006) argues that researchers should consider “the interconnections and complex relations between formal learning experiences provided by schools and the informal learning experiences that students encounter in contexts outside of school” (p. 217).

To me, the strengths of Barron’s work come from three areas. First, by using the foundation of learning ecology as “a dynamic entity,” shaped by a variety of interconnected interactions and interfaces, she centers the discussion of learning on the learner and on how they are an active agent using interest to seek out new sources and applications for knowledge. Second, by emphasizing that what a student accesses outside of school may be as critical, if not more so, to fostering their own learning, Barron suggests that the science of learning needs to consider how to take in and study these other contexts alongside what is done in formal educational settings. Third, by approaching this from an interview perspective, Barron demonstrates how qualitative data enables a deeper understanding of how and why learning can occur. Such a methodology is time- and analysis-intensive and does limit what the researcher can accomplish. In Barron’s case, she presents only three case studies for analysis, and it would be interesting and beneficial to see how these same five modes of self-initiated learning are present throughout the larger set of in-depth interviews she conducted and whether specific variations are present based on different population demographics.

For me, this work is extremely interesting for how it connects to what I understand as I enter the field of education from the field of anthropology. In anthropology, the marrying of qualitative and quantitative data has always been considered necessary to better understand human endeavors, including how we learn and what impacts that learning. In anthropology, the human is not only a receptor of culture but an active participant in the transformation of that culture, and thus their agency is a given. Finally, the examination of the interconnections of contexts and the interplay between them mirrors the integrative nature by which humans operate in their world. Thus the learning ecology perspective, married to a qualitative data collection technique, seems to hold great potential for deeper exploration of how learning occurs and what impacts technology can have on that process.


A Consequence of Design – Considering Social Inequality in Educational Technology Research

Tawfik, A. A., Reeves, T., & Stich, A. (2016). Intended and unintended consequences of educational technology on social inequality. TechTrends: Linking Research & Practice to Improve Learning, 60(6), 598-605.

Technology has often been considered a potential route for addressing inequalities of access and quality within education. However, Tawfik et al. (2016) consider such a perspective premature. In examining the educational system, the authors argue that the significant inequalities present among populations, based on socio-economic status, location, race, and ethnicity, have not been addressed much in educational technology research, and that in cases where they have been considered, differences in outcomes are exhibited by these populations. Noting that students’ racial, ethnic, and socioeconomic backgrounds influence educational attainment and achievement, Tawfik et al. (2016) examine how this inequality, with regard to technology, is present in some form at all levels of education: from early education (through access to media and apps associated with learning), to the construction of college applications, to in-class and online learning, and through lifelong education. Their review of the literature shows that inequality is evidenced not only in issues of access, interpretation, and application of technology by students but also in teachers’ access to technology and to the professional development related to learning technologies. The authors conclude that, while there is evidence of the success of educational technologies in addressing gaps, there is also evidence that they can exacerbate them, inadvertently increasing educational inequality. Tawfik and colleagues (2016) argue that greater consideration of the consequences of educational technology on societal inequalities needs to be part of the design, development, and implementation of educational technology research.

In reflection, Tawfik et al. (2016) offer a broad examination of the intended and unintended consequences of educational technology and provide food for thought on what good educational technology research needs to consider. While not a complete review of all literature as it relates to technology and educational inequality, the authors supplement their points with cited examples to support their conclusions. They see a failing within research design and appropriately make a reasoned argument that greater reflection on issues of social inequality, as it relates to educational technology among both students and educators, needs to occur. Perhaps because the piece is intended to spur the conversation rather than guide it, their recommendations for specific ways to implement this within research design are not forthcoming, leaving one to wonder just how they see this greater consideration of social inequality being brought into educational research.

In consideration of research design, the argument can be made that, for research to have application and to impact policy, understanding the population structures under which an assessment is done, and to which it can be applied, is critical to the generalizability of the results. If educational technology has the ability both to lessen and to widen gaps in educational achievement in ways often not predicted in research design, then aspects of socio-economic status, race, and ethnicity should be examined. This is especially true if one wants to move toward strategizing implementation, since proper determination of situational generalizability is necessary. Moreover, given that educational technology can influence the potential outcomes of groups with regard to attainment and achievement, reflection on its role in both decreasing and increasing educational inequality is essential to a critical understanding of what we do in this field.