The Role of Technology in Personalized Learning

Izmestiev, D. (2012). Personalized learning: A new ICT-enabled education approach. UNESCO Institute for Information Technologies in Education Policy Brief, March 2012. Available at http://iite.unesco.org/pics/publications/en/files/3214716.pdf

In reflecting on the future of education, Izmestiev's policy brief for the United Nations Educational, Scientific and Cultural Organization (UNESCO) Institute for Information Technologies in Education (IITE) offered that learning will become more personalized. Lamenting "the 'one size fits all', full-time classroom-based model" (Izmestiev, 2012, p. 1), Izmestiev (2012) argued that the future of education requires "a new education paradigm characterized by greater flexibility and choice options for each individual student" (p. 1) whereby the educator matches "what is taught and how it is taught with the needs of each individual learner" (p. 1). As Izmestiev (2012) noted, such an idea is not new, but its implementation has been hampered by the issues educators face in adopting such approaches, including managing workload while still meeting the needs of a diverse body of learners. However, the growth of information technology, digital educational resources, and digital content delivery systems now offers educators greater opportunity to reach this ideal of personalized learning.

But what exactly is personalized learning and what role can technology play? Reflecting on policies within the U.K., California, Calgary, and British Columbia, Izmestiev (2012) offered that "personalized learning is a methodology, according to which teaching and learning are focused on the needs and abilities of individual learners within classroom groups supervised by the teacher" (p. 3). There are five components to personalized learning as outlined by David Miliband, former United Kingdom Minister of State for School Standards. These include:

  • An emphasis on assessing individual learner strengths and weaknesses as well as their interests and needs through "a range of assessment techniques, with an emphasis on formative assessment that engages the learner" (Izmestiev, 2012, p. 4)
  • Use of effective teaching and learning approaches which allow for and emphasize the self-directed learner
  • Offering the learner the ability to engage in "the selection of curriculum content as well as in the development of individually tailored learning program" but "with clear pathways through the system" (Izmestiev, 2012, p. 4)
  • Class organization focused on student progress such that school resources and design are redirected towards meeting that focus
  • Connection of learning outside the classroom through community partnerships and socially engaging activities

In reflecting on these components, Izmestiev (2012) offered that "information and communication technologies (ICTs) and digital content development tools" have made personalized learning more attainable (p. 5). The author offered that learning management systems are now used to collect assessment data in a managed workflow through a variety of assessment forms. Newer technologies mean the learner is now offered the ability to move at their own pace along a guided pathway, using system-based recommendations or adjustments in learning strategies and content to meet individual student needs, while still being encouraged to progress towards specified learning goals. As Izmestiev (2012) commented, "using Web 2.0 tools and social networks, learners can interact with each other beyond the classroom," broadening where, when, and with whom learners can engage in meaningful, goal-directed learning activities. Within the personalized learning paradigm, the author offered that the teacher's role shifts "from instruction to mentoring, advising and consulting," which necessitates refocusing professional development and teacher training (Izmestiev, 2012, p. 7). However, he also cautioned that there are risks to personalized learning when it is poorly implemented. These include the potential for decreased teacher-student and student-student interactions as well as decreased teacher engagement within the learning process in favor of more technology-augmented learning. Well-intentioned implementation coupled with teacher professional development can address these risks in the author's view.

In reflecting on this, one of my interests within educational technology is exploring personalized learning and the key affordances within its design that could make for effective and engaged learning for the individual learner; the issues of design and implementation which impact both learners and institutions moving towards personalized learning; and whether personalized learning is an effective means of increasing collaborative learning experiences for groups of learners. Digital platforms specifically designed for personalized, active, and adaptive learning experiences are already being utilized in schools to assist student learning of key concepts as well as hands-on skill training, and many textbook publishers are implementing these with their books. However, as Pane et al. (2017) noted, while preliminary data show some potential for personalized learning to positively impact the learner in terms of performance and motivation, "the field lacks evidence on which practices are most effective or what policies must be in place to maximize the benefits" and more research is needed (Pane et al., 2017, p. 7).

Additional readings for Week #15

Chen, C. M. (2008). Intelligent Web-Based Learning System with Personalized Learning Path Guidance. Computers & Education, 51(2), 787-814.

Huang, Y.-M., Liang, T.-H., Su, Y.-N., & Chen, N.-S. (2012). Empowering Personalized Learning with an Interactive E-Book Learning System for Elementary School Students. Educational Technology Research and Development, 60(4), 703-722.

Hwang, G.-J., Sung, H.-Y., Hung, C.-M., Huang, I., & Tsai, C.-C. (2012). Development of a Personalized Educational Computer Game Based on Students' Learning Styles. Educational Technology Research and Development, 60(4), 623-638.

Kerr, P. (2016). Adaptive learning. ELT Journal, 70(1), 88–93.

Pane, J. F., Steiner, E. D., Baird, M. D., Hamilton, L. S., & Pane, J. D. (2017). How Does Personalized Learning Affect Student Achievement? Santa Monica, CA: RAND Corporation.

Shaw, C., Larson, R., & Sibdari, S. (2014). An Asynchronous, Personalized Learning Platform―Guided Learning Pathways (GLP). Creative Education, 5, 1189-1204.

CRR#2: Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study

As part of the educational technology program, the following is offered as a critical reflection responding to specific questions about the following article:

Ertmer, P., Richardson, J., Belland, B., Camin, D., Connolly, P., Coulthard, G., Lei, K., & Mong, C. (2007). Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study. Journal of Computer-Mediated Communication, 12(2), 412-433.

1. Identify the clarity with which this article states a specific problem to be explored.

According to Maxwell (2005), a research problem "identifies something that is going on in the world, something in itself that is problematic or that has consequences that are problematic" (p. 40). In the article, "Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study," Ertmer et al. (2007) presented that this exploratory study was created to address the research gap in understanding how feedback impacts learning, particularly regarding how peer feedback construction impacts higher levels of thinking and learning. How problematic this gap was to the authors was presented in the introduction. The authors noted that online discussions are critical to "collaborative meaning-making" and that, to be effective, discussions need to "progress to include both reflection and critical thinking" (Ertmer et al., 2007, p. 413). But, as they noted by citing Black (2005), there is little evidence these interactions develop much beyond the basics of "sharing and comparing information" (Ertmer et al., 2007, p. 413). To help move online discussions into deeper levels of thinking, the authors proposed that peer feedback "specifically related to the quality of their postings" can assist students in developing their learning towards these deeper levels (Ertmer et al., 2007, p. 413).

In reflecting on the clarity with which they proposed this research problem, the most specific statement appeared within the Purpose of Study section, but their initial idea and reasoning are presented within the introduction and supported throughout the literature review. There was a clear flow between these sections as well as to the content of the abstract. If anything were to be improved, it would only be a minor adjustment within the last paragraph of the introduction, wherein the authors could conclude with a firmer research problem statement which mirrors that in their Purpose of Study. Such a statement could encapsulate the literature review specifically towards framing the foundational principles underlying the study. Such changes, however, are not specifically required, as their research problem and reasoning were presented with well-developed clarity and whetted the audience's appetite for further reading, a task specific to the introduction of any paper.

2. Comment on the need for this study and its educational significance as it relates to this problem.

The role of feedback is to increase student motivation and performance by connecting the student to their learning in meaningful and cognitively significant ways. This feedback can come from a variety of sources, some of which may be more effective than others in meeting both student and faculty expectations. This, Ertmer et al. (2007) noted, is particularly vital for online students, where expectations about feedback can often impact retention. The authors advised that meeting students' expectations regarding feedback requires "a significant amount of time and effort" on the part of the instructor (Ertmer et al., 2007, p. 414). While they offer no research regarding the impact of feedback on instructor workload, from this reviewer's perspective (and that of her online colleagues), providing personalized, timely, and constructive feedback in online asynchronous discussions often requires the instructor to be online continuously, be active in the conversations, and offer detailed feedback for further improvement, which is time-consuming. In proposing the use of peer feedback to address this workload issue, Ertmer et al. (2007) referenced four studies which suggest that students benefit from both giving and receiving peer feedback within traditional classroom settings but that these benefits have yet to be fully determined within the online environment. With the increasing number of online courses and programs being offered, knowing what works in online education is a particularly salient issue even eleven years after the publication of this paper. One could surmise that at the time this was written, the use of online discussions and the technology supporting them were relatively new within online education. Therefore, assessing how effective peer feedback is towards promoting higher-order thinking would have been just as educationally significant then.

3. Comment on whether the problem is "researchable." That is, can it be investigated through the collection and analysis of data?

The point of this exploratory study was to examine "student perceptions of the value of giving and receiving peer feedback" with a goal of determining whether this feedback impacted the quality of the discussion postings (Ertmer et al., 2007, p. 416). To address this, Ertmer et al. (2007) framed three specific research questions that were investigated through the collection and analysis of data. Their first research question asked what the impact of peer feedback would be on the quality of online conversations. This is testable by establishing a research design which would allow for the collection of data on the quality of student postings over time when they are given peer feedback. There could potentially be several ways to measure quality as well as ways of creating systems of peer feedback, and there would be a need to address other variables, such as timeliness and format, which could impact this process.

In their second research question, Ertmer et al. (2007) asked how students perceive "the value of receiving peer feedback" and how this compares to their perceptions of instructor feedback (Ertmer et al., 2007, p. 416). The third research question considered the "students' perceptions on the value of giving peer feedback" (Ertmer et al., 2007, p. 416). These questions would be testable by establishing a research design which could collect pre-feedback perceptions of students' valuations of both peer and instructor feedback. This could be done using surveys and/or interviews. Then, after receiving both peer and instructor feedback, students could reflect on their experiences with these two forms of feedback through surveys and/or interviews. Since issues of ordering, timeliness, and quality of feedback, along with overall motivation and past experiences, may impact these perceptions, this design would need to address these variables.

4. Critique the author’s conceptual framework.

A conceptual framework, as proposed by Maxwell (2005), is the basic model of what a researcher plans to study "and of what is going on with these things and why" so as to create the foundation of a tentative theory which can "inform the rest of your design" (p. 39). Primarily within their introduction and literature review, Ertmer et al. (2007) outlined a conceptual framework that ties discussion, higher-order thinking, feedback, and perceptions together and examines them within an exploratory case study framework.

The authors started from the vantage point that there is a consensus among faculty and students that student discussions are "where the real learning take place" (p. 412). Citing Black (2005) and Lang (2005), Ertmer et al. (2007) shared that discussions create learning opportunities as they engage the student in a "dialogical process that leads to increasingly sound, well grounded, and valid understandings of a topic or issue" and "have the potential to motivate student inquiry and to create a learning context in which collaborative meaning-making occurs" (Ertmer et al., 2007, p. 413). But given this, they returned to Black (2005) in reflecting that there is little evidence that "the critical level of learning desired" is a natural outcome of student discussions (Ertmer et al., 2007, p. 413).

In suggesting more is needed to promote higher-order thinking, the authors looked to feedback as a means of providing this stimulus. Citing Higgins et al. (2002), Ertmer et al. (2007) offered that "feedback that is meaningful, of high quality, and timely helps students become cognitively engaged in the content under study as well as in the learning environment in which they are studying" (p. 413). The authors commented that feedback is critical within the online environment. Referencing Ko and Rossen (2001), the authors noted that "students in online courses are more likely to disconnect from material or environment than students in face-to-face courses" when there is a lack of feedback (Ertmer et al., 2007, p. 414). Furthermore, Ertmer et al. (2007) indicated that student perceptions of feedback are significant, as Schwartz and White's research indicated that "students expect feedback to be 1) prompt, timely and thorough; 2) ongoing formative (about online discussions) and summative (about grades); 3) constructive, supportive and substantive; 4) specific, objective and individual; and 5) consistent" (Ertmer et al., 2007, p. 414). However, in looking to Dunlap (2005), the authors surmised that the ability to provide this level of feedback is problematic for the instructor's workload.

Consequently, the authors offered peer feedback as an alternative since, as Corgan et al. (2004) noted, peer feedback "offers a number of distinct advantages including the timeliness of feedback, providing new learning opportunities for both givers and receivers of feedback, humanizing the environment and building community" (Ertmer et al., 2007, pp. 414-415). However, the use of peer feedback is not without issue. In citing Palloff and Pratt, the authors shared that "the ability to give meaningful feedback which helps others think about the work they have produced is not a naturally acquired skill" (Ertmer et al., 2007, p. 415). This, coupled with "overcoming anxiety about giving and receiving feedback…ensuring the reliability of the feedback" and addressing how the online environment affects communication, means that there is no guarantee that peer feedback will help develop higher-level thinking (Ertmer et al., 2007, p. 415).

To this end, they proposed testing the connection between peer feedback, higher-order thinking (as demonstrated by the quality of discourse), and perceptions about feedback with this exploratory case study. The authors outlined that the use of a case study was an appropriate avenue for inquiry, but not until their methods section. According to Yin (2012), "case studies are the preferred strategy when 'how' or 'why' questions are being posed, when the investigator has little control over events, and when the focus is on a contemporary phenomenon within some real-life context" (p. 1).

Overall, the authors offered a clear and easily read body of information as to how they were constructing their research, and for the most part were consistent in connecting the varying aspects of their reasoning together. There were a few issues this reviewer noted that could use additional clarification. Within the introduction, Ertmer et al. (2007) indicated that discussions are not enough to reach the "critical level of learning desired" (p. 413), but they are rather vague as to what that level of learning is exactly and why achieving higher-order thinking is important within the online environment. In addition, the authors are not clear enough in explaining how this is a problem inherent to discussions themselves and not reflective of a problem found within online learning in general. At this point within their article, the authors inserted a separate paragraph on how the "use of discussions in online environments is supported by the socio-cognitive perspective" and referenced Vygotsky's Zone of Proximal Development (Ertmer et al., 2007, p. 413). This separation is a bit confusing, as the prior paragraph also discusses the importance of discussions, either online or face-to-face, for learning. By giving it its own paragraph, it initially led this reviewer to consider that they would be using this theory as part of the conceptual framework, since it could connect to why peer feedback (and scaffolding it) may be effective towards higher-order thinking. However, after this section, this theory is not alluded to in any way. This reviewer took this to mean that the intention was only to use this theory as additional research support and that it was not a significant part of the conceptual framework. Given this, incorporating these ideas within the prior paragraph may be warranted.

5. How effectively does the author tie the study to relevant theory and prior research? Are all cited references relevant to the problem under investigation?

Within their article, Ertmer et al. (2007) cited numerous prior research studies in support of their conceptual framework, and there is a relatively good connection between these sources and the points the authors are making. They provide references to studies on why discussions are an important part of online learning, what role feedback plays within instruction, what makes good feedback, and what expectations students and faculty have about online feedback. However, most of these are single citations per sentence, which is far fewer than this reviewer has seen in other papers. This may indicate a clear and directed focus by the authors on the most relevant research (as some authors superfluously cite references to convey scholarly aptitude) or a lack of available studies, as the authors mention later in their purpose of study.

In addition, there are some gaps in their research at points. For example, there is no citation to their comment that "lack of feedback is most often cited as the reason for withdrawing from online courses," when one would expect a citation to support that statement since it is different in scope from the prior sentence mentioning student disconnectedness due to lack of feedback (Ertmer et al., 2007, p. 414). Secondly, most of their commentary centers on the critical components of feedback in general; little research is cited specifically on feedback that occurs online or within discussions, which is the central focus of their research study. This is likely due to the overall lack of research available in these areas, as the authors note in their purpose of study; however, a note to this effect would clarify this within the earlier section. Thirdly, while the authors discuss the potential student benefits of giving and receiving peer feedback, the issues students may have with giving feedback, and student expectations of feedback, little is discussed regarding how students perceive giving and receiving peer feedback even though this is the focus of two of the research questions they present.

In examining relevant theories, Ertmer et al. (2007) specifically reference only Vygotsky's Zone of Proximal Development, but do nothing further with this theory throughout the rest of the article. In this case, the authors seem to use it more as additional support for why discussions can be important avenues for learning rather than as a specific theory underpinning their research design. As such, its usage seems a bit disingenuous, as they have several references which already address this idea.

6. Does the literature review conclude with a brief summary of the literature and its implications for the problem investigated?

Unlike other articles this reviewer has encountered, Ertmer et al. (2007) structured their article such that the literature review is not specifically presented as one. Following their introduction, the authors presented sections outlining the role of feedback in instruction, the role of feedback in online environments, the advantages of using peer feedback, and the challenges of using peer feedback. Within each of these sections they presented literature to support their ideas and connect the reasoning behind their conceptual framework. Thus, it is left to the reader to surmise that these sections were intended as the literature review. A simple header of "Literature Review" prior to the first section could clarify this. As for a brief summary of the literature, there is none where one would expect it, between these sections and the purpose of study. Rather, within each section the authors summarized the main points of that area within the final paragraph or final few sentences. The inclusion of a summary to pull these salient ideas together from the prior sections and transition into the purpose of study would make this clearer for the reader.

7. Evaluate the clarity and appropriateness of the research questions or hypotheses.

In this exploratory study, Ertmer et al. (2007) offered three research questions within their purpose of study that were appropriately related to the research problem they stated. These were specifically constructed to fill in the research gap by assessing the "impact of using peer feedback to shape the quality of postings" and to examine "student perceptions on the value of giving and receiving peer feedback regarding the quality of discussion postings" (Ertmer et al., 2007, p. 416). While for the most part these research questions are very clearly written, the wording within the second part of research question one is problematic. Within research question one, the first part questions the "impact of peer feedback on the quality of student postings in an online environment" (Ertmer et al., 2007, p. 416). This is very clear and measurable. The authors then questioned whether "the quality of discourse/learning can be maintained and/or increased through the use of peer feedback" (Ertmer et al., 2007, p. 416). This is confusing as written since it conflates quality of discourse and quality of learning within a single entity. As these are two different variables to be assessed in this research question, they may be differently affected by peer feedback, and thus combining them within a single question may not yield clear answers.

8. Critique the appropriateness and adequacy of the study’s design in relation to the research questions or hypotheses.

Ertmer et al. (2007) approached their research from a case study framework and utilized the collection of qualitative and quantitative data from student discussion postings, surveys, and interviews in order to build their dataset. A case study is a form of "empirical inquiry about a contemporary phenomenon (e.g., a 'case'), set within its real-world context—especially when the boundaries between phenomenon and context are not clearly evident" (Yin, 2009, p. 18). According to Yin (2012), a case study approach is appropriate when the researcher is asking descriptive or explanatory questions, is collecting data in a natural setting, and/or is concerned with an evaluative process that is occurring. Since the research focused on "describing the process of giving and receiving peer feedback within an online course" (Ertmer et al., 2007, p. 416), posed several descriptive research questions, and collected data within the "natural" setting of the discussions occurring within the class, the use of the case study framework is appropriate.

By using a combination of qualitative and quantitative data, the authors may be attempting to balance the strengths and weaknesses of each form of data. As Ertmer et al. (2007) commented within their study, "limited research has been conducted that examines the role or impact of feedback in online environments in which learners constructs their own knowledge, based on prior experiences and peer interactions" (p. 416). As Hoepfl (1997) noted, qualitative research "can be used to better understand any phenomenon about which little is yet known" and is "appropriate in situations where one needs to first identify the variables that might later be tested quantitatively" (pp. 48-49). Therefore, the focus on qualitatively derived data collected from surveys and interviews is an appropriate choice, since there is little prior work establishing student perceptions of peer feedback and how it impacts quality of work within the online context, and their research questions (RQ2 and RQ3) are specific to this. Ertmer et al. (2007) also chose to collect quantitative data by statistically analyzing student posting quality (based on scoring) after receiving peer feedback. This is an appropriate methodology since their first research question, designed to determine if there is a relationship between peer feedback and posting quality, specifically asks whether change occurred.

9. Critique the adequacy of the study’s sampling methods (e.g., choice of participants) and their implications for generalizability.

Based on the descriptions of their context and procedures within this case study, it appears that rather than randomly sampling from several classes, Ertmer et al. (2007) utilized purposeful sampling to focus intensively on a small group of fifteen students (10 females and 5 males) who were enrolled in a single course. Hoepfl (1997) noted that "purposeful sampling is the dominant strategy in qualitative research method" as it seeks "information-rich cases which can be studied in depth" (Hoepfl, 1997, p. 51). However, there is little to indicate whether these fifteen comprised the whole class or why this class was specifically selected for this case study. Such information is needed to ascertain whether the authors were selecting out of convenience, which, as Patton (1990) noted, "saves time, money, and effort" but has the "poorest rationale; lowest credibility" and yields "information-poor cases" relative to other purposeful sampling strategies (p. 183).

While small sample sizes are problematic for quantitative studies, Patton (1990) remarked that,

There are no rules for sample size in qualitative inquiry. Sample size depends on what you want to know, the purpose of the inquiry, what's at stake, what will be useful, what will have credibility, and what can be done with available time and resources (p. 184).

However, qualitative researchers must still be aware of the potential for sampling error when using purposeful sampling. As Hoepfl (1997) noted, sampling errors may be introduced into purposeful sampling when there is insufficient breadth in the sampling, when there are "distortions introduced by changes over time," or when there is a lack of depth in data collection within each case (p. 52). In the Ertmer et al. (2007) study population, the 15 participants were drawn from only a single class during a single term. Within this group, 12 were either educational administrators or educators and 14 were pursuing advanced degrees. Given the short period (one term), there is likely little distortion due to changes over time, and the fact that they collected multiple forms of data from each participant means there was depth to their data. Perhaps the greatest weakness in the study sample lies in the potential lack of breadth due to the common backgrounds within this small group (educational administrators or teachers pursuing advanced degrees). Patton (1990) remarked that purposeful samples should "be judged on the basis of the purpose and rationale of each study and the sampling strategy used to achieve the study's purpose" (p. 185). In applying Patton's measure, Ertmer et al. (2007) used Bloom's taxonomy, and one could suggest they rationalized the sample selection since these participants "were familiar with Bloom's taxonomy or assessing levels of questioning and determining instances of critical thinking" (p. 417). This indicates that this lack of breadth is not a source of sampling error; however, it does raise issues for generalizability.

In commenting on their study limitations, Ertmer et al. (2007) remarked that sample size did "limit the results of this study" (p. 428). This comment may be reflective of this issue of sample breadth. Ercikan and Roth (2014) denoted that as long as qualitative studies "take into account the contextual particulars relevant to the manifestation of the generalization" (p. 17), they can offer aspects of generalizability. In examining the population within this study, its relative homogeneity suggests that the results may not be as applicable to groups which lack this similar occupational and educational composition, but Ertmer et al. (2007) are candid in describing the population parameters and recognize this in their section on study limitations.

10. Critique the adequacy of the study’s procedures and materials (e.g., interventions, interview protocols, data collection procedures).

Ertmer et al. (2007) elected to collect data through scored ratings of students' postings, participant interviews, and participant surveys. In order to collect the data, the researchers utilized a group of 7 graduate students and 2 faculty members. This team collaboratively created the data collection instruments, and "each team member took primary responsibility for collecting and analyzing the data from a subgroup of two participants" (Ertmer et al., 2007, pp. 416-417). They also indicated that "each member of the research team interviewed two participants via telephone or in person" (Ertmer et al., 2007, p. 420). However, as there were only 15 participants and 9 members of the research team, some clarification is needed as to how this procedurally worked. To address how data would be analyzed, several well-designed protocols were established by the researchers to address observer bias in scoring discussion postings and in coding interview responses.

Within this study, the feedback students viewed was given on each discussion posting as both a score and descriptive comments. For the first five weeks of the course, the two instructors of the course provided the feedback to the students; it was unclear whether these instructors were also members of the research team. Beginning in week seven and for the next six weeks (ending in week 13), the students provided peer feedback to two classmates, with peer review assignments rotated on a weekly basis. At some point within this peer feedback period, interviews were conducted; however, it is unclear specifically when these started and ended, as Ertmer et al. (2007) only noted in the data analysis that the interviews were conducted "several weeks after the peer feedback process had started" (p. 420). Further clarification of this timing would be beneficial towards understanding the study timeline. Three weeks after this peer feedback period had ended (week 16), the students completed a post-survey on their final perceptions of both the instructor and peer feedback they received. It is unclear why there is a three-week gap between when the peer feedback period ended and when the surveys were administered. This could have impacted the survey data, as students were recollecting their perceptions rather than providing them in the moment.

For a scoring rubric, the researchers used Bloom's taxonomy to create a 0-2 point scale for students and researchers to use in evaluating posting quality. The selection of Bloom's taxonomy was appropriate, as it is one the education students should have had some familiarity with, but this is a very narrow scale considering the number of levels within Bloom's taxonomy and the researchers' desire to measure a change in quality. The study would be improved with a larger scale more reflective of the actual structure of the taxonomy and with more ability to measure quality change over time. In addition, while the instructors modeled scoring feedback through the rubric for the first five weeks, there was very little evaluation of the students' ability to effectively evaluate postings based on the rubric prior to its implementation. Ertmer et al. (2007) mentioned that students were provided examples of possible responses and explanations for these, but there was no demonstration within the study that the participants could effectively apply the rubric in giving peer feedback. Incorporating a scaffolded approach wherein, after modeling the rubric use, the instructor offered feedback to individual students on their effective use of the form in giving peer feedback would have established a better foundation for students to use it effectively. An overall lack of training could be one of the reasons why peer feedback was viewed as less preferred than instructor feedback by participants within this study. The authors seemed to be aware of this after the fact and discussed it as part of the limitations of their study.

Since the scorings were used for grading, the procedures of the study required that all peer feedback be passed through and reviewed by the instructor before being sent on. This was a thoughtful step designed to address issues of anonymity and any problems that might arise. However, this likely impacted the study's outcome. Students within the class saw the scores and comments from their instructors very soon after their submissions, so these were useful in shaping how they responded in subsequent discussions. Peer feedback, however, was moderated by the instructor, resulting in a delay of up to a couple of weeks before it reached the student. This meant students would potentially have no recent peer feedback to use for improvement on subsequent boards. This is likely one of the larger issues in this study's design and may have impacted not only student performance on postings but also students' perceptions of peer versus instructor feedback.

To determine the change in quality of student postings when given peer feedback, the researchers did not rely on the actual instructor and peer feedback scores that were given, but rather scored all postings themselves using the same rubric the students and instructors used. This was done to "provide a better indication of the changing quality of the responses," to "ensure consistency in scoring student's online postings," and to address the incompleteness of the student dataset due to the design of the class (Ertmer et al., 2007, p. 418). While these are all valid reasons, some analysis of the actual peer feedback scores would have been helpful to support the need for an alternative measurement of quality from the one students actually received and based their posting improvements on during the class.

11. Critique the appropriateness and quality (e.g., reliability, validity) of the measures used.

As Drost (2011) indicated, "reliability is the extent to which measurements are repeatable" (p. 106). The research team's protocols for addressing interobserver biases in scoring student postings, as well as for standardizing the interview protocol and coding of interview data, were evidence of their effort to provide reliability in their measurements. According to Drost (2011), "validity is concerned with the meaningfulness of research components" and "whether they are measuring what they intended to measure" (p. 114). One way to address validity is to use measurement tools which have been validated by their use in prior studies. As Ertmer et al. noted, the use of Bloom's taxonomy "provided a relatively high degree of face validity" as it was familiar to the participants and researchers (p. 421) and "had been successfully implemented by the researchers in a similar graduate course" (p. 417). Validity was also addressed through the authors' triangulation between sources of data, such as the survey results and the individual interviews.

12. skipped per faculty instructions

13. Critique the author’s discussion of the methodological and/or conceptual limitations of the results.

Ertmer et al. (2007) properly acknowledged several issues within their study that were linked to their methodology. Some issues were addressed within the limitations and suggestions for further work section. These included the small sample, the short duration of the study, and the evaluation scale. However, Ertmer et al. (2007) noted several specific issues which likely impacted the study outcomes only within the analysis discussion section. These included:

  • Use of discussion questions that were not of the caliber to be conducive to "higher-level responses" (p. 426)
  • Time delay in receiving peer feedback due to faculty moderation and review
  • Inclusion of general interpersonal and motivational postings in the analysis even though they were not likely to ever reach the upper levels of the scoring taxonomy.

These highlight three critical issues within the study design that, as the authors rightly noted, likely had an impact on the results. First, given that this study was designed to evaluate quality changes in postings when given peer feedback, it is concerning that there was no significant effort to evaluate the questions prior to the study start to confirm they would elicit the desired level of student response. A follow-up analysis to see if the results vary when question design is directed toward higher-order thinking would be useful. Second, since their own literature research stressed the importance of timeliness in student perceptions of feedback, it was concerning that they selected a class to study wherein peer feedback was intentionally delayed. A follow-up analysis to see if students who received timelier peer feedback perceived it differently than those who did not would also be useful. Finally, the inclusion of motivational and interpersonal postings likely affected their dataset since they were averaging scores across the two feedback periods. The authors indicated that they did not remove these from the dataset because they did not know, post hoc, which ones the students intended to be counted, and because removing them would have left only 160 postings to analyze rather than 778, which they felt "would limit our ability to measure change in posting quality" (Ertmer et al., 2007, p. 426). The first reason is a non-issue: since the authors were not using the actual scores the students provided to one another but their own scoring, the intention of the student is irrelevant. Since these non-content postings accounted for 79% of the total volume of student postings scored to evaluate quality change, their inclusion likely impacted the data, particularly as the authors acknowledged that these postings would likely not have scored high. Given that they knew the actual number of these non-content postings (618), the authors could have run one analysis with these included and one with them excluded to see if this affects the quality change observed. It is worth noting that these three issues are addressed only within the analysis discussion section and not in the later limitations and suggested future work, even though at several points the authors indicate the need to address these "in the future" (Ertmer et al., 2007, p. 426). Therefore, some reiteration of these issues within that later section, perhaps as commentary on considerations for future study design, is warranted.
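A quick check of the proportions involved (a sketch, assuming the 778 total and 618 non-content postings reported by the authors are the counts actually scored) shows how dominant the non-content postings were in the analyzed dataset:

$$ 778 - 618 = 160 \ \text{content postings}, \qquad \frac{618}{778} \approx 0.79 \approx 79\% \ \text{non-content}. $$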

14. How consistent and comprehensive are the author’s conclusions with the reported results?

Overall, Ertmer et al. (2007) offered a clear, consistent, and comprehensive set of conclusions that are well supported by the data. In addressing RQ1, the authors concluded that there was no quantitative change in the quality of postings during the peer feedback period (it did not increase or decrease). While the authors appropriately addressed several reasons why this may be, the results suggested to Ertmer et al. (2007) that once a level has been reached, peer feedback "may be effective in maintaining quality of postings" (p. 422) and that "peer feedback is a viable alternative to instructor feedback" since there was no negative impact (p. 428). In evaluating RQ2 and RQ3, the authors found that student perceptions of the importance of feedback rose over the term and that there was perceived value in both giving and receiving peer feedback, based on survey and interview data. This is consistent with the results of the study. In specifically comparing perceptions of peer and instructor feedback in RQ2, students perceived more value from instructor feedback than peer feedback. This was counter to what the authors expected. While the authors reflected on several factors which could be impacting this, they also acknowledged that it is consistent with what other studies had indicated.

15. How well did the author relate the results to the study’s theoretical base?

In reflecting on their results, Ertmer et al. (2007) connected and compared their results to the Ertmer and Stepich (2004) study. Overall, the authors found their results ran counter to what was seen in the prior study. In analyzing their results, Ertmer et al. (2007) returned to several of the studies which formed the foundation of their original literature discussion at the start of the article, including Black (2005), Ko and Rossen (2001), Palloff and Pratt (1999), and Topping (1998). This offered a well-developed connection between their theoretical basis and their results and demonstrated their interest in the continued development of this existing body of knowledge.

16. In your view, what is the significance of the study, and what are its primary implications for theory, future research, and practice?

In this reviewer's opinion, Ertmer et al. (2007) provided the reader with good research on why giving and receiving peer feedback may impact student performance. Their analysis of how peer feedback benefits and challenges the learner, and of the perceptions that students then have of peer feedback relative to faculty feedback, indicates there is more to building effective feedback systems into online courses than just creating a discussion board. In particular, the need to develop the student's ability to provide effective peer feedback and the considerations that need to be made in how to structure that feedback are of critical importance. This requires faculty to take into consideration not only the relative newness of peer feedback to students but also the issues of anxiety and responsibility which some students are unprepared to handle, particularly within the asynchronous nature of an online class. The onus is on the faculty member wishing to use peer feedback to reflect on and scaffold peer feedback as a viable source of learning input for online students. This may not result in the workload decrease Ertmer et al. (2007) hinted at, but it would provide a skill set that could serve the student well throughout their educational experience and beyond.

REFERENCES

Black (2005). The use of asynchronous discussion: Creating a text of talk. Contemporary Issues in Technology and Teacher Education, 5(1), 5-24.

Drost, E. (2011). Validity and Reliability in Social Science Research. Education Research and Perspectives, 38(1), 105 – 123.

Ertmer, P., Richardson, J., Belland, B., Camin, D., Connolly, P., Coulthard, G., Lei, K., and Mong, C. (2007). Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study. Journal of Computer‐Mediated Communication,12(2), 412-433

Hoepfl, M. C. (1997). Choosing qualitative research: A primer for technology education researchers. Journal of Technology Education, 9, 47–63.

Cobb, P., Confrey, J., Lehrer, R., & Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32(1), 9-13.

Maxwell, J. A. (2013). Qualitative research design: An interactive approach (3rd Ed.). Thousand Oaks, CA: SAGE Publications

Patton, M. (1990). Designing qualitative studies. In Qualitative evaluation and research methods (pp. 169-186). Beverly Hills, CA: Sage.

Roehler, L. R., & Cantlon, D. J. (1997). Scaffolding: A powerful tool in social constructivist classrooms. In K. Hogan & M. Pressley (Eds.), Scaffolding student learning: instructional approaches and issues (pp. 6–42). Cambridge, MA: Brookline

Yin, R. K. (2009). Case study research: design and methods (4th Ed.). Thousand Oaks, CA: Sage.

Yin, R. K. (2012). A (very) brief refresher on the case study method. In Applications of case study research (pp. 3-20). Thousand Oaks, CA: SAGE Publications.

 

Inviting Video Games to the Educational Table

Gee, J. (2008). Learning and games. In K. Salen (Ed.), The Ecology of Games: Connecting Youth, Games, and Learning (pp. 21-40). The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning. Cambridge, MA: The MIT Press.

According to Gee (2008), "good video games recruit good learning," but it all rests on good design (p. 21). This is because well-designed video games provide experiences to the learner which meet conditions that "recruit learning as a form of pleasure and mastery" (p. 21). These conditions include providing an experience that is goal-structured and requires interpretation towards meeting those goals, that provides immediate feedback, and that offers the opportunity to apply the prior knowledge and experiences of self and others towards success in meeting goals. If done in such a way, Gee (2008) argues, these allow the learner's experiences to be "organized in memory in such a way that they can draw on those experiences as from a data bank" (p. 22). As Gee (2008) presented, these conditions, coupled with the social identity building that good game design incorporates, help "learners understand and make sense of their experience in certain ways. It helps them understand the nature and purpose of the goals, interpretations, practices, explanations, debriefing, and feedback that are integral to learning" (p. 23).

These conditions are the key to good game design as they provide several key aspects which play into learning science. First, they create a "situated learning matrix": the set of goals and norms which require the player to "master a certain set of skills, facts, principles, and procedures" and to utilize the tools and technologies available within the game to do so, including other players and non-player characters who represent a community of practice in which the learner is self-situating (Gee, 2008, p. 25). This combination of game (in game design) and Game (social setting), as Gee (2008) explained, provides the learner with a foundation for good learning since "learning is situated in experience but goal driven, identity-focused experience" (p. 26). In addition, many well-designed games incorporate models and modeling, which "simplify complex phenomena in order to make those phenomena easier to deal with" (Gee, 2008, p. 37). Many good games also enhance learning through an emphasis on distributed intelligence, collaboration, and cross-functional teams, which create "a sense of production and ownership," situate meanings and terms within motivating experiences at the time they are needed, and provide an emotional attachment for the player (which aids in memory retention) while keeping frustration levels down to prevent players from pulling away (Gee, 2008, p. 37). As Gee (2008) pointed out, "the language of learning is one important way in which to talk about video games, and video games are one important way in which to talk about learning. Learning theory and game design may, in the future, enhance each other" (p. 37).
In breaking down the connections to learning which can be present within well-designed video games, Gee (2008) not only outlined the structures through which good educational games should be built but also constructively addressed common arguments against using video games. Recognizing the assets well-designed games can bring to the educational table is important since, more often than not, the skills and content learned in games are learner-centered and content-connected but are "usually not recognized as such unless they fall into a real-world domain" (Gee, 2008, p. 27). This is likely why the discussion of the role of video games within education is necessary. As Gee (2008) commented,

any learning experience has some content, that is, some facts, principles, information, and skills that need to be mastered. So the question immediately arises as to how this content ought to be taught? Should it be the main focus of the learning and taught quite directly? Or should the content be subordinated to something else and taught via that "something else"? Schools usually opt for the former approach, games for the latter. Modern learning theory suggests the game approach is the better one. (p. 24)


Video Games as Digital Literacy

Steinkuehler, C. (2010). Digital literacies: Video games and digital literacies. Journal of Adolescent & Adult Literacy, 54(1), 61-63.

In reflecting on whether educators are selling video games short when it comes to learning, Steinkuehler (2010) offered the anecdotal case of "Julio," an 8th-grade student. Julio spent a significant amount of free time involved in video game culture, designing and writing about gaming. However, he read three grade levels below where he should have been and was often disinterested and disengaged from school. Even when presented with game-related readings, he still did not excel. But when given a choice in reading, he selected a 12th-grade reading that appealed to his interests and managed to succeed despite the obstacles this reading presented him. Steinkuehler (2010) argued it was the act of giving him the choice to select something that appealed to his interests that increased his self-correction rate and thus gave him the persistence to overcome and meet the challenge. Steinkuehler (2010) opined that "video games are a legitimate medium of expression. They recruit important digital literacy practices" (p. 63) and as such may offer an outlet for students, particularly disengaged males, to engage in learning that may otherwise go unmet through traditional structures.

The efforts the author highlighted Julio engaging in (writing, reading, and researching for gaming) certainly suggest that video games may offer a way to bridge new and traditional literacies, as Gee (2008) suggests. However, this is but a single example and alone offers very little tangible data on which to rest any firm ideas about the importance of video gaming in education. It does, though, offer the notion of considering how video games present as new literacies which can open doors for the expression of meaning and ideas, particularly for those who may feel marginalized within traditional curriculum plans and by those who consider video games a "waste of time."

A qualitative approach to investigating how students view and experience the use of gaming in education is especially appealing given this case of Julio. Would he have seen that his outside activities were translatable into educational acumen? Would his teacher or parents? There is too little in this small single case to say much, but it does give one ideas.

Gee, J. (2008). Learning and games. In K. Salen (Ed.), The Ecology of Games: Connecting Youth, Games, and Learning (pp. 21-40). The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning. Cambridge, MA: The MIT Press.

 

Twitter and New Literacy

Greenhow, C., & Gleason, B. (2012). Twitteracy: Tweeting as a new literacy practice. The Educational Forum, 76(4), 464-478.

Just how useful can social media be in promoting learning? According to Greenhow and Gleason (2012), microblogging, through technologies such as Twitter, opens up the opportunity for students to connect to "the kinds of new literacies increasingly advocated in the educational reform literature" (p. 467). New literacy is a "dynamic, situationally specific, multimodal, and socially mediated practice that both shapes and is shaped by digital technologies" (Greenhow & Gleason, 2012, p. 467). As such, it allows meaning and learning to stretch into both formal and informal interactions and to be responsive to relationships that develop within these settings, such that authorship is neither singular nor static but is constantly being created and re-created and expressed through new means of combining text, images, sound, motion, and color. To examine how microblogging through social media such as Twitter connects to learning and new literacy, the authors conducted a literature search of journal articles to answer questions such as:

  • How do young people use Twitter in formal and informal learning settings, and with what results?
  • Can tweeting be considered a new literacy practice?
  • How do tweeting practices align with standards based literacy curricula?

The authors found that existing studies show "Twitter use in higher education may facilitate increased student engagement with course content and increased student-to-student or student–instructor interactions—potentially leading to stronger positive relationships that improve learning and to the design of richer experiential or authentic learning experiences" (Greenhow & Gleason, 2012, p. 470). However, at the time of their research, few studies had examined the use of Twitter as a new literacy practice. Looking to research on literacy practices and social media, Greenhow and Gleason (2012) suggested that "youth-initiated virtual spaces," such as fan-fiction sites, Facebook, and MySpace, "allow young people to perform new social acts not previously possible" and demonstrate new literacy practices (p. 471). Tweets, Greenhow and Gleason (2012) argued, offer similar themes and opportunities since they:

  • are "multimodal, dynamically updating, situationally specific, and socially mediated" (p. 472)
  • comprise "a multiplicity of modes," much like the "unique combinations of text, images, sound, and color that characterize teens' self-expressions on social network sites" (p. 472)
  • develop into "constantly evolving, co-constructed" conversations that require the participant to understand the situational context of the conversation and the conventions within it in order to participate (p. 472)
  • show "a use of language and other modes of meaning" that is "tied to their relevance to the users' personal, social, cultural, historical, or economic lives" (p. 472)

As a result, Greenhow and Gleason (2012) argued that, when considering curricula, tweeting creates “opportunities for their development of standard language proficiencies” and can “encourage the development of 21st century skills, such as information literacy skills” (pp. 473-474).  However, further research is needed to address how best to situate this new literacy within traditional educational practices. Given the paucity of research, the authors recommended more large-scale and in-depth studies of how students of varying subgroups use Twitter, as well as research specifically focused on:

  • tweeting practices and “the potential learning opportunities that exist across school and non-school settings” (p. 474)
  • how learners frame and come to view their experiences and place within the Twitter community
  • developing pedagogy for analyzing social media communications to understand socio-cultural connections
  • how teachers are incorporating social media into secondary and higher education

Given the generally negative perceptions many parents and districts have of student use of social media, along with the hurdles of “authority, control, content management (e.g., managing what is shared, received, tagged, and remixed), security, and copyright,” Greenhow and Gleason (2012) cautioned that such research will likely focus on higher education until there is “an accumulation of evidence that suggests that the benefits of social media integration in learning environments outweigh the costs” (p. 475).

As Greenhow and Gleason’s (2012) literature review suggested, there can be a lag between when a technology is introduced, when it becomes used in education, and when research strategies are targeted towards understanding its placement and performance in promoting learning among various student populations. At the time of their research, the authors were able to locate only 15 studies which met their broader search criteria of social media and new literacy, and only 6 that specifically discussed microblogging.  In a more recent literature study, Tang and Hew (2017) found 51 papers published between 2006 and 2015 which specifically examined microblogging and/or Twitter.  While microblogging platforms such as Twiducate have been offered to make microblogging more K-12 friendly, whether the use of Twitter has reached its full potential is less certain. Tang and Hew (2017) suggest that Twitter and similar technologies are most often used for assessment and communication and that more professional development is needed to make faculty more adept at using and designing learning activities through Twitter, as well as at training students to use Twitter effectively and to lessen the distractions social media presents.  As Tang and Hew (2017) remarked, still more research is needed “in how different students experience Twitter and are engaged by it” (p. 112).

Tang, Y., & Hew, K. F. (2017). Using Twitter for education: Beneficial or simply a waste of time? Computers & Education, 106, 97-118.

 

ARLEs & VRLEs: Horizons for Learning

Today’s classroom is so much larger than four walls and a white/chalk board.  The opportunity for educators to take their students to new worlds, or to help them see new aspects within everyday landscapes, is vast.  Virtual reality learning environments (VRLEs) are 3-D immersive experiences that can be accessed through a desktop or through more specialized hardware such as goggles.  Augmented reality learning environments (ARLEs) combine virtual objects (2-D and 3-D) with the actual environment of the user in real time. The two are often seen as occupying different points along the reality-to-virtuality continuum. Each presents new opportunities and challenges for educators.

In examining virtual reality environments, Dalgarno and Lee (2010) pinpointed representational fidelity and learner interaction as the key characteristics of VRLEs which, through interaction with the learner, allow for the “construction of identity, sense of presence and co-presence” within the virtual space (Dalgarno & Lee, 2010, p. 14). This creates the sense of immersion which can be so impactful for the learner. Representational fidelity relates to how the environment is rendered: how realistically the environment is displayed, how smoothly view changes and object motion are handled, how consistently objects behave within the environment, whether there is spatial audio, whether there is tactile, force, and kinaesthetic feedback, and how users are represented through the construction of an avatar.  Learner interaction concerns how the user acts within, and is displayed within, the environment, including embodied actions, verbal and non-verbal communication, object interactions, and control of the environment.  These characteristics come together to create the ways in which VRLEs can potentially impact learning; Dalgarno and Lee (2010) outlined five affordances that VRLEs facilitate: spatial knowledge representation, experiential learning, engagement, contextualized learning, and collaborative learning.  However, Dalgarno and Lee (2010) suggested that more meaningful research is necessary in order to assess how to use 3-D VRLEs in “pedagogically sound ways” (p. 23).  They offered several recommendations for research, including studying the basic assumptions held about VRLEs and linking their characteristics to the affordances they outlined. They also argued that research is needed to establish guidelines and best practices for VRLE implementation, and appropriately recommended that this not be done through comparisons of 2-D to 3-D environments, as these would be “contrived examples in inauthentic settings” (Dalgarno & Lee, 2010, p. 25). Given that this was more a call-to-arms than a general “what can VRLEs do for you” presentation, it is not surprising that there is little discussion of the challenges that implementing and using VRLEs in education presents. As Salmon duly noted in her five-stage model for scaffolding learners into multi-user virtual environments, there is a need to recognize and structure for the challenges that VRLEs present (Salmon et al., 2010). This requires recognition of the technological and educator interventions needed to support the learner.

When it comes to comparing VRLEs and ARLEs, Dunleavy et al. (2009) offered that, in terms of affordances, ARLEs may provide greater representational fidelity due to their natural overlay onto the real world, which brings the feel, sights, and smells of the physical environment into the experience. In addition, the ability to talk face-to-face as well as virtually may allow for easier collaboration between users. However, the authors noted that within VRLEs each action by a user is “captured and time-stamped by the interface: where they go, what they hear and say, what data they collect or access,” which allows for greater visualization of “every aspect of the learning experience for formative and summative assessment” (Dunleavy et al., 2009, p. 22). When considering the limitations of ARLEs, the authors recognized the hardware and software issues involved in implementing ARLEs and noted that, much like VRLEs, ARLE activities require logistical support and lesson management. In addition, Dunleavy et al. (2009) found that students expressed cognitive overload due to both the newness of the experience and confusion over what was to be done. They recommended that significant modeling, facilitating, and scaffolding is needed when using ARLEs.

Given my interest in using both AR and VR learning experiences in my courses as a means of providing realistic training opportunities that would otherwise be limited, the affordances and issues outlined by these authors offer a useful map of the potentials and problems. However, as neither article offered much in terms of direct assessment of the specific impacts AR and VR have on student outcomes, social connection, and motivation, and neither links these to specific aspects of design and implementation, these articles represent only a starting point.

Dalgarno, B., & Lee, M. J. W. (2010). What are the learning affordances of 3-D virtual environments? British Journal of Educational Technology, 41(1), 10–32.

Dunleavy, M., Dede, C., & Mitchell, R. (2009). Affordances and Limitations of Immersive Participatory Augmented Reality Simulations for Teaching and Learning. Journal of Science Education and Technology, 18(1), 7-22.

Salmon, G., Nie, M., & Edirisingha, P. (2010). Developing a five-stage model of learning in Second Life. Educational Research, 52(2), 169–182.

 

 

The Importance of Peer Feedback in Online Education

Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., et al. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12(2), 412-433.

The importance of feedback to students is well established, yet Ertmer and colleagues (2007) found that how feedback impacts the online learner has been little studied. In their article, Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study, Ertmer et al. (2007) sought to investigate the impact peer feedback has on both the overall quality of student posts and on students’ perceptions of the value of giving and receiving feedback as part of an online class.

To frame the need for their study, Ertmer et al. (2007) presented a literature review. Feedback, they noted, serves to help learners evaluate their knowledge and can assist them in altering their viewpoints when presented with new information. To do this, good feedback, according to the authors, should help to “clarify what a good performance is” for the learner, assist the learner in developing their capacity for “self-reflection and assessment,” help the learner gain “high quality information” about their learning, focus faculty-student interactions on learning, increase motivation and self-esteem for the learner, help the learner “close the gap” between current performance and their final performance goals, and give the educator information for improving the quality of teaching (Ertmer et al., 2007, pp. 413-414). When it comes to online education, Ertmer and colleagues (2007) emphasized that instructor feedback can act “as a catalyst for student learning” and argued that studies show that, to be most effective, online feedback should be timely, specific, and consistent and can extend from formative to summative formats (p. 414). However, good feedback can exceed the time and effort a faculty member can realistically give. To offset the increased workload good online feedback requires, Ertmer and colleagues (2007) proposed investigating peer feedback as part of their instructional design. The advantage, they argued, is that it could increase the timeliness of feedback while helping the student become part of the community of learners.  Studies summarized by Ertmer et al. (2007) indicate that giving and receiving feedback helps to increase collaborative meaning construction, raises the overall quality of discussions, and can give the learner greater “understanding and appreciation for their peers’ experiences and perspectives” (Ertmer et al., 2007, p. 415). This, Ertmer et al. (2007) argued, can increase student motivation and satisfaction with a course and can increase learner autonomy. However, the authors also outlined several drawbacks to peer feedback identified in the literature, including increased student anxiety about participating in peer feedback, students’ inexperience in providing quality feedback, and generally negative perceptions about the overall quality of peer feedback.

Based on this information, Ertmer et al. (2007) developed three research questions (p. 416):

  1. What is the impact of peer feedback on the quality of students’ postings in an online environment? Can the quality of discourse/learning be maintained and/or increased through the use of peer feedback?
  2. What are students’ perceptions of the value of receiving peer feedback? How do these perceptions compare to the perceived value of receiving instructor feedback?
  3. What are students’ perceptions of the value of giving peer feedback?

To answer these questions, they utilized a case study of peer feedback in a graduate-level course. Within this course, which ran a single semester, 15 students were required to post to weekly discussion questions and comment on the post of one classmate. For the first six weeks, only the course instructors provided feedback to each student. This feedback consisted of a numerical score indicating what level of Bloom’s taxonomy the post demonstrated, along with overall comments regarding the quality of the posts made. After week six, students were asked to provide feedback on two peers’ postings for each discussion, using the same system the faculty member had demonstrated earlier in the class. This feedback was collected and moderated by the faculty member, anonymized, and given to the posting student usually within two weeks of the original post deadline. The authors gathered qualitative and quantitative data from interviews with participants, scored ratings of the students’ weekly discussions, and the students’ pre- and post-surveys.  The pre-survey occurred at the end of the instructor feedback period and the post-survey followed the end of the peer feedback period. It is also worth noting that the authors opted not to use the students’ peer feedback scores in their analysis, as they felt there were inconsistencies in scoring; instead, two of the project researchers scored all the student posts, and these scores were used to assess the first research question.

In evaluating the impact of peer feedback on the quality of student postings, Ertmer et al. (2007) compared the average scores on postings during the instructor feedback period (M = 1.31) to those during the peer feedback period (M = 1.33).  The data showed no significant difference in the quality of postings. Quality neither improved nor declined between the two conditions, suggesting that “peer feedback may be effective in maintaining quality…once a particular level of quality has been reached” (Ertmer et al., 2007, pp. 421-422). These results were then compared to qualitative data from interviews conducted during the peer feedback period. Student interview responses, the authors argued, showed that students felt they used the peer feedback to improve their writing. In their discussion, the authors suggested that the lack of change in quality may be attributed to several factors, including the variability in posting quality between required and optional posts (all of which were included in the tabulations), the use of a limited post scoring system (two levels only), and discussion questions which were not designed intentionally to elicit higher-level analyses.
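To make that comparison concrete, the sketch below shows the kind of two-sample test such a means comparison typically rests on. It is illustrative only: the paper does not publish its raw posting scores and the exact statistical procedure is not reproduced here, so the scores below are hypothetical stand-ins (1 = lower-level post, 2 = higher-level post on the two-level Bloom’s-based scale).

```python
# Hypothetical per-post quality ratings (1 or 2) for each feedback period;
# the actual data from Ertmer et al. (2007) are not published.
from statistics import mean
from scipy import stats  # SciPy's independent-samples t-test

instructor_period = [1, 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 2]
peer_period       = [1, 2, 1, 1, 2, 2, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1]

print(f"M_instructor = {mean(instructor_period):.2f}")
print(f"M_peer       = {mean(peer_period):.2f}")

# A non-significant p-value (p > .05) would mirror the paper's finding that
# posting quality was maintained, not changed, under peer feedback.
t_stat, p_value = stats.ttest_ind(instructor_period, peer_period)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```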

The evaluation of the perceived value of receiving peer versus instructor feedback, as well as of giving feedback, drew on the pre- and post-surveys and the interview responses. Overall, the authors noted that the surveys showed a significant increase in the perceived importance of feedback from the start of the course to the end, but that feedback from the instructor was perceived as more important than peer feedback at both the beginning and the end of the course. Interview responses indicated that this may stem from student perceptions of the overall quality of peer feedback and the belief that bias may be present in peer feedback. Despite this, students within the study did feel that peer feedback was valuable, which is also reflected in the fact that students valued both giving and receiving feedback as part of their learning process. In discussing the greater perceived value of instructor feedback, the authors noted that this matches what has been seen in other studies, but that factors such as the delay of up to two weeks in receiving peer feedback (due to the instructor’s intermediate processing) may have contributed to students perceiving it as less timely and therefore of less value. In addition, concern about the impact on grades (since peer feedback was incorporated into students’ grades for the discussion boards) may have increased the anxiety associated with providing and receiving peer feedback.

In reflecting on the limitations of their study, the authors acknowledged the issues presented by a small study of relatively short duration and suggested that future work should include more training for students on both the benefits of peer feedback and how to effectively rate peers’ work.  From this the authors concluded that feedback in general is of value in online courses and that, based on the interviews, students valued and learned from peer feedback even if they perceived instructor feedback as more important. They wrapped up the paper by providing some general recommendations faculty could consider when implementing peer feedback in their online courses.

In examining this work, the use of both qualitative and quantitative data to address three specific research questions was well situated.  The fact that this is an exploratory study suggests the authors are seeking to improve upon the general practice of assessing feedback. That said, several aspects of the study were problematic and could be better addressed in future studies. In considering the impact of feedback, the fact that feedback was delayed in reaching the recipient (by up to two weeks) makes one question how much the scoring of posts measured the impact of the peer feedback a student received versus the student’s general progress in developing self-regulating abilities after having received faculty feedback during the first six weeks of the term. Secondly, in reflecting on the discussion question design, the authors noted that issues of construction may have limited how much students were prompted to approach the higher levels of Bloom’s taxonomy in their responses, since many of the questions “were not particularly conducive to high-level responses” (Ertmer et al., 2007, p. 426).  I would have liked to see a breakdown of the data across the differing discussions to see whether a pattern is being lost due to some underperforming questions. In addition, when reflecting on the use of feedback and the scoring system, the authors assumed familiarity with Bloom’s taxonomy given their study population; yet since several students indicated problems in using the system to score posts, there may be underlying population variations affecting the data. Overall, I would be interested in seeing whether an expanded version of this study, addressing the limitations noted by the authors and above, could be applied to a larger population of undergraduates to see if these patterns hold for lower-level students.

 

New Literacies: Risks, Rewards, and Responsibilities

“To be literate tomorrow will be defined by even newer technologies that have yet to appear and even newer discourses and social practices that will be created to meet future needs. Thus, when we speak of new literacies we mean that literacy is not just new today; it becomes new every day of our lives” (Leu & Forzani, 2012, p. 78)

New literacies are the “ways in which meaning-making practices are evolving under contemporary conditions that include, but are in no way limited to, technological changes associated with the rise and proliferation of digital electronics” (Knobel & Lankshear, 2014, p. 97). Studying them involves examining how, through the use of digital technology, today’s learner can come to identify, understand, interpret, create, and communicate knowledge in novel and often unconventional ways.  While incorporating new literacies allows the educator to meet students where they are, to engage and enliven learning through the learner’s interests and sense of relevance, to restructure the power dynamics of learning, and to extend learning beyond the classroom, engaging with new literacies is often a daunting undertaking for the educator.  In her article, Hagood (2012) highlighted the process by which a group of teachers were introduced to and implemented new literacies in their classrooms. Working with nine middle school teachers during bi-monthly meetings over the course of a year, the author provided them with a three-phase process for introducing new literacies: an introduction phase to learn about new literacies, an exploration phase to learn the skills and tools necessary for new literacies, and a design-and-implement phase. The output was an inquiry-based project incorporating new literacies that the educators could use in their classes. Using the participants’ reflections on this process, Hagood (2012) outlined their takeaways for implementing new literacies so as to lessen push-back, increase interest in participation, and increase teacher satisfaction overall. These included starting small by implementing new literacies through pre-existing assignments, trying new literacies to facilitate learning when traditional avenues fail, and expecting to fail and retry as part of developing one’s skills as an educator with new literacies. Hagood (2012) noted that while many of the participants recognized that students were well ahead of them in their connectedness to digital technology, this was not the motivator for implementing new literacies. Rather, many of the participants felt invigorated by what they saw their students were capable of producing, by the increased engagement of their students, by their own personal growth, and by their renewed enjoyment of teaching through new literacies. In addition, the educators felt they developed a collaborative network which not only pushed them to stay on task but also made them feel more invested in sharing what they had learned, reiterating the connectedness to context and people that comes with new literacy.

While this article lacks quantifiable data on how implementing digital literacy affected student and teacher motivation and student success within these classes, the incorporation of the teachers’ voices in reflecting on what resulted carries great weight in thinking about how the introduction of new literacies must be transformed into workable practices for the educator. This was a single small group in a single school during a single training year, and Hagood (2012) presents no follow-up to see how these teachers fared in their use of new literacies in the following years. Have they expanded their incorporation of new literacies beyond the one inquiry-based project, and how did they do this? Or did they limit themselves to the one project, change projects, or abandon new literacies altogether? What obstacles arose over time that shaped how they developed their skills and their overall implementation of new literacies? And what did their students think of these new literacies? These are questions the article does not address but which are of interest when thinking about how to aid educators in exploring and adopting new literacies.

In thinking about research, the above questions bear greater examination.  It would be interesting to build on this work by examining the best processes for implementing new literacies and by measuring outcomes such as motivation, efficacy, self-directedness, and overall success for both student and teacher.

Hagood, M. C. (2012). Risks, rewards, and responsibilities of using new literacies in middle grades. Voices from the Middle, 19(4).

Leu, D. J., & Forzani, E. (2012). New literacies in a Web 2.0, 3.0, 4.0, …∞ world. Research in the Schools, 19(1), 75-81.

Knobel, M., & Lankshear, C. (2014). Studying new literacies. Journal of Adolescent & Adult Literacy, 58(2), 97-101.

 

 

 

Digital Games, Design and Learning: A Meta-Analysis

Clark, D. B., Tanner-Smith, E. E., & Killingsworth, S. S. (2016). Digital games, design, and learning: A systematic review and meta-analysis. Review of Educational Research, 86(1), 79–122.

Within this article, Clark, Tanner-Smith, and Killingsworth (2016) offer a refined and expanded evaluation of research on digital games and learning.  To ground their study, the authors summarize three prior meta-analyses of digital games. From these three studies and their findings, the authors developed two core hypotheses about how digital games impact learning, which they tested in their meta-analysis. These core hypotheses were further examined for what the authors term moderator conditions, and from these the authors developed sub-hypotheses to test as well. Utilizing databases spanning “Engineering, Computer Science, Medicine, Natural Sciences, and Social Sciences,” the authors searched research published between 2000 and 2012 to identify studies which examined digital games in K-16 settings, which addressed “cognitive, intrapersonal and interpersonal learning outcomes” (p. 82), and which either compared digital games against non-game conditions or used a value-added approach (something the prior meta-analyses ignored) to compare standard and enhanced versions of the same game. In addition, they required the studies to meet a set of criteria covering game design, participant parameters, and pre- and post-testing data that could be used to assess change in outcomes. Overall, they identified 69 studies which met the parameters outlined in their research procedures. From this population they discerned the following significant patterns (a sketch of the effect-size arithmetic such comparisons rest on follows the list):

  1. In media comparisons of game versus non-game conditions, students in digital game conditions demonstrated significantly better outcomes overall relative to students in the non-game comparison conditions (p. 94). This was significant for both cognitive and intrapersonal outcomes (p. 95); the number of studies with interpersonal outcomes was too small to establish statistical significance.
  2. In value-added comparisons of standard and enhanced game versions, students playing the enhanced games showed “significant positive outcomes” relative to the standard versions (p. 98). While there were generally too few studies with any one feature for cross comparisons, the feature of enhanced scaffolding (personalized, adaptive play) was present in enough studies and showed a significant overall effect (p. 99).
  3. Across game conditions, games which allowed the learner multiple play sessions performed better against non-game conditions than those limited to a single play session. Game duration (time played) appeared to have no effect on overall impact (p. 99). These results did not vary even when the visual aspects of the games were taken into account.
  4. In contrast to what was seen in previous meta-analyses, there was no difference in outcomes for games paired with additional non-game instruction versus those without the additional non-game instruction (p. 99).
  5. There were significant differences across player configurations within games. Overall, single-player games had the most significant impact on learning outcomes relative to group game structures, and these outcomes were highest in single-player games with no formal collaboration or competition (p. 100). However, games with collaborative team competition had significantly larger effects on learning outcomes when compared to single-player competitive games.
  6. Games with greater engagement of the player in actions within the game had greater impact than those offering only a small variety of on-screen actions which did not change much over the course of play.
  7. In terms of visual and narrative qualities, both simple and more complex game designs showed effectiveness for learning outcomes, but overall schematic (symbolic or text-based) games were more effective than cartoon-like or realistic games.
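Since the patterns above are reported as pooled standardized effect sizes, it is worth seeing what that arithmetic looks like. The sketch below computes Hedges’ g, a common standardized mean difference metric in meta-analyses of this kind; the numbers are hypothetical and are not drawn from Clark et al.’s data, and the authors’ exact estimator and weighting scheme are not reproduced here.

```python
import math

def hedges_g(mean_game, mean_control, sd_game, sd_control, n_game, n_control):
    """Standardized mean difference (Hedges' g) between a game condition
    and a non-game comparison condition."""
    # Pooled standard deviation across the two conditions.
    pooled_sd = math.sqrt(
        ((n_game - 1) * sd_game ** 2 + (n_control - 1) * sd_control ** 2)
        / (n_game + n_control - 2)
    )
    cohens_d = (mean_game - mean_control) / pooled_sd
    # Small-sample bias correction, which is what distinguishes g from Cohen's d.
    correction = 1 - 3 / (4 * (n_game + n_control) - 9)
    return cohens_d * correction

# Hypothetical post-test scores: a game condition slightly outperforming
# a non-game control, yielding a moderate positive effect (about 0.47).
print(round(hedges_g(78.0, 72.0, 12.0, 13.0, 30, 30), 2))
```

A meta-analysis then pools many such g values across studies (typically weighting them by their precision) to arrive at the overall effects summarized in the patterns above.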

In reflecting on their findings, the authors recognized limitations arising from both their search parameters and their methodological breakdowns for analysis. They encouraged further examination of studies which fell outside their range (for example, simulation games) and greater examination of the subtleties of the individual studies included within their analysis before larger generalizations are made about the specifics of best practices for game design.

Perhaps the most interesting aspect of this study is not the outcomes it presents for future study (even though these are great food for thought about intentional game design for educational purposes) but the proposition it makes that educational technology researchers should “shift emphasis from proof-of-concept studies (“can games support learning?”) and media comparison analyses (“are games better or worse than other media for learning?”) to cognitive-consequences and value-added studies exploring how theoretically driven design decisions can influence situated learning outcomes for the broad diversity of learners within and beyond our classrooms” (p. 116).