Unpacking TPACK…

Gómez, M. (2015). When Circles Collide: Unpacking TPACK Instruction in an Eighth-Grade Social Studies Classroom. Computers in the Schools, 32(3/4), 278–299.

Coming into teaching from a graduate program in anthropology, where the concern was not how to teach but how to research, this author finds the idea of evaluating the knowledge needed to teach effectively, much less to teach with technology, novel. Thus, while the overall importance of Mishra and Koehler's (2006) work on Technological Pedagogical Content Knowledge (TPCK) for understanding the practice of teaching with technology is evident to this author, the actual process of implementation within class design was difficult to visualize. To clarify how Mishra and Koehler's model is applied and implemented within course design, Gómez (2015) illustrated the application of TPACK through a case study of a single eighth-grade teacher and two social studies classrooms. Using data collected through classroom observations, formal and informal interviews, and the analysis of artifacts produced, Gómez used a constant comparative approach to organize the data along themes related to the intersections of TPACK: technology knowledge (TK), content knowledge (CK), pedagogical knowledge (PK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), pedagogical content knowledge (PCK), and technological pedagogical content knowledge (TPCK). He then examined when and how these intersected within the framework of the class. Interestingly, when interviewed, the teacher of the class offered that he was designing his class not with TPACK in mind but rather as a way to reach his desired goal, to teach students to think historically, and that technology is only a tool that helps him engage students in doing this by helping him shape lessons to meet that goal.

Overall, this is only a single case study, so aspects of design and implementation are bound to vary by teacher, school, and students. The selection of this class and teacher was not random; rather, the teacher was recommended to the researcher as someone who uses technology regularly in the classroom. In addition, the school was a K–12 private school with one-to-one technology, so this scenario presents a great degree of technological access and affordance that may not be available to all teachers and schools. Gómez recognizes these limitations and appropriately makes no broad generalizations from these observations and interviews.

Despite this, the article does offer one example of how TPACK might be implemented in course design. Based on what he observed, Gómez (2015) acknowledges that this case breaks down the idea that the components of TPACK must intersect concurrently. Rather, he notes, "TPACK no longer becomes the intersection of these three types of knowledge, but rather it becomes the layered combination of these three types of knowledge" (p. 295). In addition, Gómez (2015) highlights how teachers may approach TPACK very differently in implementation, as the teacher of the eighth-grade classes studied indicated that "teaching effectively with technology (TPACK) begins with an understanding of what he wants his students to learn" (p. 296). He therefore frames TPACK within a framework of what he wants students to know. Gómez suggests that this may be a common way for teachers to implement TPACK and that, therefore, "understanding the role students play in making decisions about using technology in instruction" should be considered more within TPACK design (p. 296).

Mishra, P., & Koehler, M. J. (2006). Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge. Teachers College Record, 108(6), 1017–1054.

Promoting Student Engagement in Videos Through Quizzing

Cummins, S., Beresford, A. R., & Rice, A. (2016). Investigating Engagement with In-Video Quiz Questions in a Programming Course. IEEE Transactions on Learning Technologies, 9(1), 57–66.

The use of videos to supplement or replace lectures previously given face-to-face is standard in many online courses. However, these videos often encourage passivity on the part of the learner. Other than watching and taking notes, there may be little to challenge the video-watching learner to transform the information into retained knowledge, to self-assess whether they understand the content, or to demonstrate their ability to apply what they have learned to novel situations. Since engagement with videos is often the first step towards learning, Cummins, Beresford, and Rice (2016) tested whether students can become actively engaged with video materials through the use of in-video quizzes. They had two research questions: a) "how do students engage with quiz questions embedded within video content" and b) "what impact do in-video quiz questions have on student behavior" (p. 60).

Utilizing an Interactive Lecture Video Platform (ILVP) they developed and open sourced, the researchers collected real-time student interactions with 18 different videos developed as part of a flipped classroom for programmers. Within each video, multiple-choice and text-answer questions were embedded and automatically graded by the system. Playback stopped automatically at each question, and students were required to answer. Correct answers automatically resumed playback, while students had the option of retrying incorrect answers or moving ahead. Correct responses were discussed immediately after each quiz question when playback resumed. The questions targeted the Remember, Understand, Apply, and Analyse levels of Bloom's revised taxonomy. In addition to the interaction data, the researchers administered anonymous questionnaires to collect student thoughts on the technology and on behaviors they observed, and they also evaluated student engagement by question complexity. The degree of student engagement was measured as the number of students answering the quiz questions relative to the number of students accessing the video.
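Since the engagement measure is simply the share of a video's viewers who answer a given question, it can be sketched in a few lines of Python. This is a hypothetical illustration with invented names, not the authors' ILVP code:

```python
# Hypothetical sketch of the engagement measure described by Cummins et al.
# (2016): students answering a quiz question relative to students who
# accessed the video.

def engagement_rate(answered: set, viewed: set) -> float:
    """Fraction of a video's viewers who answered one quiz question."""
    if not viewed:
        return 0.0
    # Only count answerers who actually accessed the video.
    return len(answered & viewed) / len(viewed)

# Invented interaction logs for illustration.
viewers = {"s1", "s2", "s3", "s4"}
answered_q1 = {"s1", "s2", "s3"}

print(engagement_rate(answered_q1, viewers))  # 0.75
```

Computed per question, such a ratio would also expose the drop-offs at specific questions that the authors discuss.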

Cummins et al. (2016) found that students were likely to engage with the video through the quizzes, but that question style, question difficulty, and the overall number of questions in a video affected the likelihood of engagement. In addition, student behaviors varied in how often and in what ways this engagement took place. Some students viewed videos in their entirety, while others skipped through them to areas they felt were relevant; others employed a combination of these techniques. The authors suggest, based both on the observed interactions and on questionnaire responses, that four patterns of motivation are present during student engagement with the video: completionism (complete everything because it exists), challenge-seeking (engage only with questions they felt challenged by), feedback (verify understanding of material), and revision (review materials repeatedly). Interestingly, the researchers noted that student recollection of their engagement differed in some cases from actual recorded behavior, but the authors suggest this may show that students are not answering questions in the context of the quiz but within other contexts not recorded by the system. Given the evidence of student selectivity in responding to questions based on motivation, the authors suggest that a diverse approach to question design within videos will offer something for all learners.

While this study makes no attempt to assess the actual impact on learner performance and retention (due to the type of class and the assessment designs within it relative to the program), it does show that in-video quizzes may offer an effective way to promote student engagement with video-based materials. It is unfortunate the authors did not build an assessment structure into this research design so as to collect some measure of learning. However, the platform they utilized is available to anyone (https://github.com/ucam-cl-dtg/ILVP-prolog), and other integrated video quizzing systems are available (e.g., TechSmith Relay) which, combined with keystroke and eye-movement recording technology, could capture similar information. This opens up the ability to further test how in-video quizzing affects student performance and retention.

In terms of further research, one could envision a series of studies using a similar process to examine in-video quizzing in greater depth, not only for data on how it specifically impacts engagement, learning, and retention, but also for how these may vary with video purpose, length, context, and the knowledge level of the questions. As Schwartz and Hartman (2007) noted, design variations across video genres may depend on learning outcomes, so assessing whether this engagement exists only for lecture-based videos or transfers to other genres is intriguing. As Cummins et al. (2016) explain, students "engaged less with the Understand questions in favour of other questions" (p. 62), which suggests that students were actively selecting what they engaged with based on what they felt was most useful to them. Thus, further investigation of how to design more engaging and learner-centered questions would be useful for knowledge retention. In addition, since the videos were meant to replace lecture sessions and ranged in length from 5 minutes and 59 seconds to 29 minutes and 6 seconds, understanding how length impacts engagement would help determine whether there is a point at which student motivation, and thus learning, wavers. While the authors address some specifics as to where drop-offs in engagement occurred relative to specific questions, they do not offer a breakdown of engagement versus the length of the video, and they admit that the number of questions varied between videos (three had no questions at all) and that there was no connection between the number of questions and video length. Knowing more about the connections between in-video quizzing and student learning, as well as the variables that impact this process, could help to better assess the overall impact of in-video quizzing and allow us to optimize in-video quizzes to promote student engagement, performance, and retention.

Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 349–366). Mahwah, NJ: Lawrence Erlbaum Associates.

Video Podcasts and Education

Kay, R. H. (2012). Exploring the use of video podcasts in education: A comprehensive review of the literature. Computers in Human Behavior, 28, 820–831.

While the use of podcasts in education is growing, the literature supporting their effectiveness for learning is far from conclusive. Kay (2012) offers an overview of the literature on the use of podcasts in education a) to understand the ways in which podcasts have been used, b) to identify the overall benefits of and challenges to using video podcasts, and c) to outline areas of research design that could enhance evaluations of their effectiveness for learning. Utilizing keywords such as "podcasts, vodcasts, video podcasts, video streaming, webcasts, and online videos" (p. 822), Kay searched for articles published in peer-reviewed journals. Through this she identified 53 studies published between 2009 and 2011 to analyze. Since the vast majority of these focused on undergraduates in specific fields, Kay presents this as a review of "the attitudes, behaviors and learning outcomes of undergraduate students studying science, technology, arts and health" (p. 823). Within this context, Kay (2012) shows there is great diversity in how podcasts are used, how they are structured, and how they are tied into learning. She notes that podcasts generally fall into four categories (lecture-based, enhanced, supplementary, and worked examples), vary in length and segmentation, are designed for differing pedagogical approaches (passive viewing, problem solving, and applied production), and have differing levels of focus (from narrow, specific skills to broader, higher cognitive concepts). Because of the variability in research design, purpose, and analysis methods, Kay (2012) approached this not as a meta-analysis but as a broad comparison of the benefits gained from and the challenges presented in using video podcasts.

In comparing the benefits and challenges, Kay (2012) finds that while most studies show great benefits, some are less conclusive. In examining the benefits, Kay finds that students in these studies access podcasts primarily in the evenings and on weekends, primarily on home computers rather than mobile devices (though this varies by the type of video), utilize different styles of viewing, and tie access to a desire to improve knowledge (often ahead of an exam or class). This suggests that students value the flexibility and freedom afforded them through podcasts to learn anywhere and in ways conducive to their learning patterns. Overall, student attitudes toward podcasts are positive in many of the studies. However, some showed a student preference for lectures over podcasts, which limited students' desire to access them. Many studies noted that students felt podcasts gave them a sense of control over their learning, motivated them to learn through relevance and attention, and helped them improve their understanding and performance. In considering performance, some studies showed improvement over traditional approaches with regard to test scores, while others showed no improvement. In addition, while some studies showed that educators and students believed podcasts built specific skills such as team building, technology usage, and teaching skills, the processes by which these develop were not shared. Finally, some studies indicate that technical problems with podcasts and a lack of awareness can make podcasts inaccessible to some students, and several studies showed that students who regularly accessed podcasts attended class less often.

In reflecting on these diverse outcomes, Kay presents the conflict evident in understanding the benefits and challenges as connected to research design. Kay (2012) argues that issues of podcast description, sample selection and description, and data collection need to be addressed "in order to establish the reliability and validity of results, compare and contrast results from different studies, and address some of the more difficult questions such as under what conditions and with whom are video podcasts most effective" (p. 826). She argues that understanding more about the variation in length, structure, and purpose of podcasts can help to differentiate and better compare study data. Furthermore, Kay asks for more diverse populations (K–12) and better demographic descriptions within studies so as to remove limits on the ability to compare findings across different contexts. Finally, she presents that an overall lack of examination of quantitative data and low-quality descriptions of qualitative data techniques undermine the data being collected: "It is difficult to have confidence in the results reported, if the measures used are not reliable and valid or the process of qualitative data analysis and evaluation is not well articulated" (p. 827). From these three issues, Kay recommends an overall greater depth in the design, description, and data collection of video podcasting research.

While the literature review offers a general overview of the patterns the author witnessed in the studies collected, there are questions about the data collection process, as the author is unclear a) as to why three prior literature reviews were included as part of the analysis, and b) as to whether the patterns she discusses come only from those papers with undergraduate populations (as intimated by her statement quoted above) or from all the studies she collected. The author also used only articles published in peer-reviewed journals and included no conference papers; it is unclear what difference including these other sources would have made.

Overall, the most critical information she provides from this study is the fact that there is no unifying research design underlying the studies on video podcasts, which results in a diverse set of studies without consensus on the effective use of podcasts in education and with little applicability to how to effectively implement video podcasts. The importance of research design in creating a comparative body of data cannot be overstated and is something that should be considered in all good educational technology research. Unfortunately, while Kay denotes the issues present in how various studies code, collect, and analyze data, she does not address the underlying research design issues much when considering areas of further research. While this does not lessen the issues she does raise for future research, the need for better research design is evident and given few specifics by Kay. One would have liked a more specific vision from her on this issue, since greater consideration of the underlying issues of research design, with regard to describing and categorizing video podcasts, sampling strategies, and developing methods of both qualitative and quantitative analysis, is needed.

 

Intentional Design for On Screen Reading

Walsh, G. (2016). Screen and Paper Reading Research – A Literature Review. Australian Academic & Research Libraries, 47(3), 160–173.

As more students move into online courses and more faculty consider incorporating open educational resources (OER) into their courses, the impact of screen reading and learning material design on reading comprehension and overall learning is an essential consideration. Walsh (2016), desiring to help academic librarians gain knowledge on issues of online reading, examines the research of the last six years on reading comprehension and the screen-versus-paper debate. Overall, Walsh found no consistency in research design among the studies she examined, making cross-comparison difficult. However, she concludes that "most studies find little differences between the print and screen reading for comprehension" (p. 169). But, she notes, most were not focused on scholarly readings, and those that were "concluded that participants gain better understanding of the content when reading from paper" (p. 169).

Overall, this article offers a synthesis of recent scholarly literature (2010–2016) located in information management databases. While the author does not specify the exact search parameters used, nor whether parameters were used to eliminate any studies from consideration, the article offers a brief glance at some of the literature on this subject from an information management perspective. Had the author opened this research to databases in learning, education, and educational technology, additional research might have been found. Despite this limited search, the information in this article, when synthesized, highlights several aspects of screen reading that should be considered within educational technology.

In her article, Walsh (2016) notes that, when considering reading and comprehension, neuroscience research suggests that deep reading is necessary for "furthering comprehension, deductive reasoning, critical thought and insight" (p. 162), but that there is variation between the areas of the brain stimulated by print reading and those stimulated by screen reading. This variation may indicate some impingement upon the screen reader's "ability to reflect, absorb and recall information as effectively as information in the paper form" (p. 162) and may encourage more shallow or skim reading. While not specifically addressed by Walsh, this information suggests that educators who rely on screen-based reading to help students gain material knowledge for their course may need to develop activities that promote deeper reading. This is not something students learn early on, given the predominance of paper-assigned materials in early education. At the same time, this may not be a skill that can be developed with something as simple as a set of questions to answer after reading. Kuiper et al. (2005) found that, when examining how students searched the Internet, how the teacher structured the task impacted how the student approached the content. In the case of screen reading, well-structured tasks (to borrow from Kuiper et al.) may support only a seek-and-find strategy and not necessarily support the student's ability to creatively and critically comprehend and synthesize the materials.

Walsh's review also shows that a text's format, intention, and length can impact how much the student learns from screen reading. Walsh (2016) notes that even though students read from screens for entertainment, when it comes to academic documents, students prefer to print a document rather than read it on screen. This preference relates not only to the "high level of concentration and text comprehension" necessary but also to the fact that academic reading requires the reader to interact with the document by annotating, highlighting, and bookmarking passages for reference (p. 163). Walsh's research suggests that students do not perceive themselves as able to accomplish as much with screen reading of academic documents as with print reading. This perception is critical since, even though many students within the studies indicated interest in screen reading, they doubted their own ability to be competent with it. This perception of competence could undermine students' interest in engaging fully with the reading. Thus, while Walsh does not say so explicitly, the article suggests that an educator who assigns screen-based academic reading may need to offer readers more guidance on how to engage with the reading (through digital annotation, tagging, and bookmarking) and more encouragement to build self-confidence in their abilities. In addition, Walsh (2016) highlights research showing very little difference in performance between screen readers and print readers for shorter content, but that for longer, more complex materials, learning and information retrieval can suffer when reading from a screen. Furthermore, texts that were less data- and fact-based, less visual, and demanded more cognitive reasoning were easier to read on paper than on screen.
These two points suggest that a simple transformation of printed text to a digital format for screen reading, a common practice among educators and journals alike, may not be sufficient for materials to be comprehended as easily as in print. Rather, utilizing technology to optimize the reading experience through visuals, textual divisions, and structured hypertext may benefit comprehension of longer, more complex materials.

Finally, Walsh presents research outlining how platform characteristics of design, user interaction, and navigation can impact comprehension. The research Walsh presents suggests that platform structures not only create technical frustrations but may limit the level of engagement the student can have with the reading or increase the distractions they experience. Not all readings are equally optimized for learning for all students on all platforms. This recommends that the educator give careful consideration to platform tools (to navigate, annotate, and explore), overall student familiarity with a platform and its usability, and the ability of educator and student to turn hypertext and pop-ups on and off when selecting digital materials.

These points, taken together, suggest that educators need a more thoughtful approach to the incorporation of digital reading materials in their courses and that students may be better served by educators approaching on-screen reading with more intentional design than is currently in use.

Additional References

Kuiper, E., Volman, M., & Terwel, J. (2005). The Web as an information resource in K–12 education: Strategies for supporting students in searching and processing information. Review of Educational Research, 75, 285–328.

 

Designing Effective Qualitative Research

Hoepfl, M. C. (1997). Choosing qualitative research: A primer for technology education researchers. Journal of Technology Education, 9, 47–63.

According to Hoepfl (1997), research in technology education has largely relied on quantitative methods, possibly due to the field's own limitations in knowledge of and skill with qualitative research design. Desiring to increase the use of qualitatively designed research, Hoepfl offers a "primer" on the purpose, processes, and practice of qualitative research. Presenting qualitative research as expanding knowledge beyond what quantitative research can achieve, Hoepfl (1997) sees it as having three critical purposes. First, it can help us understand issues about which little is known. Second, it can offer new insight into what we already know. Third, qualitative research can more easily convey the depth of data than quantitative research can. In addition, since qualitative data is often presented in ways similar to how people experience their world, she offers that it finds greater resonance with the reader. With regard to process, Hoepfl (1997) notes that qualitative research design requires different considerations, as the "particular design of a qualitative study depends on the purpose of the inquiry, what information will be most useful, and what information will have the most credibility" (p. 50). This leads to flexibility, not finality, of research strategy before data collection and a de-emphasis on data confidence as solely the result of random sampling strategies and numbers. This flexibility in design strategy means a great deal of thought must go into how best to situate data collection, with the recognition that events in the field may require adjustments of design if some questions fail or new patterns emerge. In terms of strategies, the author offers purposeful sampling options and discusses how maximum variation sampling may lead both to depth of description and to sensitivity for recognizing emergent patterns.
She also outlines some of the various forms of data available in qualitative research and the stages of data analysis. In doing this, Hoepfl (1997) recognizes that qualitative data is much more difficult to collect and analyze than quantitative data and that the research may often require numerous cyclical movements through the various stages of collection and analysis. Importantly, she addresses the practices of the researcher and reviewer in considering authority and trustworthiness in qualitative research by examining issues of credibility, transferability, dependability, and confirmability.

Hoepfl's work offers a quality start to understanding the strengths and struggles of qualitative research. She correctly argues that qualitative research's increasing acceptance within technology education rests on the researcher's ability to address the questions of authority and trustworthiness that are more easily (albeit possibly erroneously) accepted in quantitative research. However, there are other aspects inherent in qualitative research to which she gives almost no treatment. These include consideration of how relationships become built and defined between subjects and researcher and the impacts these can have on subject behavior. Hoepfl (1997) mentions these relationships and the risk of altering participant behavior, noting that "the researcher must be aware of, and work to minimize" them (p. 53), but she offers no process for recognizing when this occurs within the data nor for how to actually go about minimizing it. When it comes to the ethics of human subject interaction, Hoepfl (1997) notes that "the researcher must consider the legal and ethical responsibilities associated with naturalistic observation" (p. 53) but earlier offered that limiting or even hiding knowledge of the researcher's identity and purpose may be appropriate. This is problematic given informed consent guidelines, and it points to a key piece of information missing from this primer: how to consider human subject research ethics within qualitative research design. Since Hoepfl is offering a general guide to qualitative research, and since IRBs and the primacy of informed consent were established in 1974 by the National Research Act, one would have expected at least some consideration of those guidelines, a mention of informed consent, or at least a discussion of how to handle the sensitive data that may come with qualitative data collection.

In reflecting on the applicability of Hoepfl's work to my research interests, the emphasis on what qualitative research can bring to the educational technology table is enlightening, as I had not recognized how new an approach this was to education; it was something of a staple of my anthropological education. Of particular interest was Hoepfl's discussion of maximum variation sampling. She cites Patton in saying

"The maximum variation sampling strategy turns that apparent weakness into a strength by applying the following logic: Any common patterns that emerge from great variation are of particular interest and value in capturing the core experiences and central, shared aspects or impacts of a program" (Hoepfl, 1997, p. 52)

This statement and her discussion of trustworthiness connected to a recent article I read on generalizing in educational research by Ercikan and Roth (2014). In particular, the authors discuss the reliance on quantitative research for its supposed ability to be generalized but then break down this assumption to argue that qualitative data actually has more applicability since, if properly designed, it can create essentialist generalizations. These are:

"the result of a systematic interrogation of "the particular case by constituting it as a 'particular instance of the possible'… in order to extract general or invariant properties…. In this approach, every case is taken as expressing the underlying law or laws; the approach intends to identify invariants in phenomena that, on the surface, look like they have little or nothing in common" (p. 10).

Thus, by looking at the "central, shared aspects" denoted by Hoepfl through maximum variation sampling and discerning the essential aspects underlying the patterns, qualitative research could "identify the work and processes that produce phenomena." Once this is established, the generalization can be tested by examining it against any other case study. If issues of population heterogeneity are also considered within the design of qualitative data collection, the authors argue, the ability to generalize from data is potentially greater with qualitative research.

Additional References

Ercikan, K., & Roth, W.-M. (2014). Limits of Generalizing in Education Research: Why Criteria for Research Generalization Should Include Population Heterogeneity and Uses of Knowledge Claims. Teachers College Record, 116(5), 1-28.

Using A Learning Ecology Perspective

Barron, B. (2006). Interest and self-sustained learning as catalysts of development: A learning ecology perspective. Human Development, 49, 193-224.

Not all learning is done in school. While such a statement may seem obvious, Barron (2006) notes that studies of learning often focus specifically on formal settings (schools and labs) and in doing so miss the bigger picture of how a learner will co-opt and connect various resources, social networks, activities, and interactions to create a landscape where their learning takes place. Using a learning ecology framework, the author seeks to understand how a learner goes about learning by examining the multiple contexts and resources available to them. A learning ecology is “the set of contexts found in physical and virtual spaces that provide opportunities for learning” (Barron, 2006). By understanding how the learner negotiates the landscape for learning that surrounds them, the author believes educators can think more broadly about ways to connect in-class and outside learning. Using qualitative interviews with students and their families as the focal point of her research, Barron (2006) focuses on creating “portraits of learning about technology” (p. 202) to better understand how interest is found and then self-sustained across several contexts. Through this work she demonstrates that there is no single means by which a student may develop interest and maintain learning, but that common themes are prevalent. Among her case studies, Barron (2006) outlines five modes of self-initiated learning: finding text-based resources to gain knowledge, building knowledge networks for mentoring and opportunities, creating interactive activities to promote self-learning, seeking out structured learning through classes and workshops, and exploring media to learn and find examples of interests.
By examining the interplay of these various strategies, Barron (2006) demonstrates how the learner was an active participant in constructing their own learning landscape, such that “learning was distributed across activities and resources” (p. 218). Because of this, Barron (2006) argues that researchers should consider “the interconnections and complex relations between formal learning experiences provided by schools and the informal learning experienced that students encounter in contexts outside of school” (p. 217).

To me, the strengths of Barron's work come from three areas. First, by treating a learning ecology as “a dynamic entity” shaped by a variety of interconnected interactions and interfaces, she centers the discussion of learning on the learner and how they are an active agent using interest to seek out new sources and applications for knowledge. Second, by emphasizing that what a student accesses outside of school may be as critical, if not more so, to fostering their own learning, Barron suggests that the science of learning needs to consider how to take in and study these other contexts alongside what is done in formal educational settings. Third, by approaching this from an interview perspective, Barron demonstrates how qualitative data enables a deeper understanding of how and why learning can occur. Such a methodology is time- and analysis-intensive and does limit what the researcher can accomplish. In Barron's case, she presents only three case studies for analysis; it would be interesting and beneficial to see how the same five modes of self-initiated learning are present throughout the larger set of in-depth interviews she conducted, and whether specific variations appear across different population demographics.

For me, this work is extremely interesting for how it connects to what I understand as I enter the field of education from the field of anthropology. In anthropology, the marrying of qualitative and quantitative data has always been considered necessary to better understand human endeavors, including how we learn and what impacts that learning. In anthropology, the human is not only a receptor of culture but an active participant in the transformation of that culture, and thus their agency is a given. Finally, the examination of the interconnections of contexts and the interplay between them mirrors the integrative way in which humans operate in their world. Thus the learning ecology perspective, married to a qualitative data collection technique, seems to hold great potential for deeper exploration of how learning occurs and what impact technology can have in that process.

 

A Consequence of Design – Considering Social Inequality in Educational Technology Research

Tawfik, A. A., Reeves, T., & Stich, A. (2016). Intended and Unintended Consequences of Educational Technology on Social Inequality. TechTrends: Linking Research & Practice to Improve Learning, 60(6), 598-605.

Technology has often been considered a potential route for addressing inequalities of access and quality within education. However, Tawfik et al. (2016) consider such a perspective premature. In examining the educational system, the authors argue that the significant inequalities present among populations, based on socio-economic status, location, race, and ethnicity, have received little attention in educational technology research, and that where they have been considered, differences in outcomes across these populations are evident. Noting that students' racial, ethnic, and socioeconomic backgrounds influence educational attainment and achievement, Tawfik et al. (2016) examine how this inequality with regard to technology is present in some form at all levels of education, from early education (through access to media and apps associated with learning), to the construction of college applications, to in-class and online learning, and through lifelong education. Their review of the literature shows that inequality is evidenced not only in students' access to, interpretation of, and application of technology, but also in teachers' access to technology and to the professional development related to learning technologies. The authors conclude that, while there is evidence of educational technologies successfully addressing gaps, there is also evidence that they can exacerbate them, inadvertently increasing educational inequality. Tawfik and colleagues (2016) argue that the consequences of educational technology for societal inequalities need to be given greater consideration in the design, development, and implementation of educational technology research.

In reflection, Tawfik et al. (2016) offer a broad examination of the intended and unintended consequences of educational technology, and food for thought on what good educational technology research needs to consider. While not a complete review of all literature relating technology to educational inequality, the authors support their points with cited examples. They see a failing within research design and make a reasoned argument that issues of social inequality, as they relate to educational technology among both students and educators, deserve greater reflection. Perhaps intended to spur the conversation rather than guide it, the article does not offer recommendations for specific ways to implement this within research design, making one wonder just how the authors see this greater consideration of social inequality being brought into educational research.

In considering research design, the argument can be made that, for research to have application and impact policy, understanding the population structures under which an assessment is done, and to which it can be applied, is critical to the generalizability of the results. If educational technology has the ability both to lessen and to widen gaps in educational achievement in ways often not predicted in research design, then aspects of socio-economic status, race, and ethnicity should be examined. This is especially true if one wants to move toward strategizing implementation, since proper determination of situational generalizability is necessary. Moreover, given that educational technology can influence group outcomes with regard to attainment and achievement, reflection on its role in both decreasing and increasing educational inequality is essential to a critical understanding of what we do in this field.

 

Rethinking Schools for the Future

Collins, A., & Halverson, R. (2009). Rethinking Education in the Age of Technology: The Digital Revolution and the Schools. New York, NY: Teachers College Press.

In this promotional article for their book, Rethinking Education in the Age of Technology: The Digital Revolution and the Schools (2009), Collins and Halverson outline their argument for why education is changing in light of the pervasive influence of technology, and how the increasing incompatibility between schools and the use of technology will necessitate a change in how society conceives of education and the role of schools in the learning process. Through an examination of how the Industrial Revolution shifted educational structures, the authors explain how the Digital Revolution offers another significant turning point in the educational landscape. By positioning education as a lifelong learning opportunity no longer restricted to the classroom, Collins and Halverson (2009) outline how “schooling era” learning is structured antithetically to, and at something of a disadvantage against, the opportunities provided through technology-driven learning, leading them to ask “whether our current schools will be able to adapt and incorporate the new power of technology-driven learning for the next generation of public schooling” (p. 2). Failing to do so, they reflect, could exacerbate unequal access based on socio-economic status, with the wealthy investing in new technology-based educational models while those who cannot rely on learning systems incompatible with the society of the future. Reflecting on the rise of home-schooling, workplace training, distance education, adult education, learning centers, educational media, and computer-based learning systems, the authors present the case that the new system has already begun to form but lacks a cohesive vision to bring it all together.

Written as a precursor to the book of the same title and for general consumption, the article was not meant to lay out the complete pedagogical foundations of the authors' ideas but to whet the reader's appetite for the forthcoming book. Despite this purpose, it offers a clear outline of Collins and Halverson's ideas for why schooling should change in the face of the rise of technology-driven learning, and it manages to be very thought-provoking. Far from idealistically optimistic, the authors offer a list, albeit a short one, of the potential issues and gains such a shift could present to the educational landscape. These concerns, while only shallowly addressed in this article, indicate the authors face the changing landscape of education with less-than-rose-colored glasses. By recognizing the societal foundations which underlie the educational systems of today, as well as the connections of education to other facets of our culture, the authors reflect that any shift will have far-reaching consequences. As a promotional article, it is rather sparse in supporting data or citations to address what falls directly under their moniker of “technology-driven learning,” and whether they consider it all equally effective in building the knowledge they outline for tomorrow's world. It also lacks specifics on how to build this cohesive vision of tomorrow's educational system. Such shortcomings are expected given the article's purpose and length, however, and are presumably addressed in the full book.

Despite being only a “teaser” for the main book, this future-thinking article has piqued this researcher's interest in the book itself. With a deep interest in the intersection of cultures and technology, I am particularly interested in examining how and in what ways aspects of culture and society change with shifts in technology. While I am often skeptical of “predictions” of society's future, as these often fail to adequately examine or even consider the complexity with which social systems operate and how cultural change is instituted and propagated, the dose of measured pragmatism evident in the authors' consideration of the risks and gains society faces gives me hope that they have thought deeply about how and why societal institutions shift. As such, I look forward to critically exploring their work further.

Knowledge is power. Information is liberating. Education is the premise of progress, in every society, in every family— Kofi Annan
