First Critical Research Review

For part of my doctoral program I was asked to examine an article and provide a critical review of the publication based on a series of questions. The content below corresponds to the questions asked.

  1. Overall, Erhel and Jamet (2013) poorly specified the problem their research was designed to address. Generally, a clear statement of the problem appears in the introduction of a paper but, in this case, the authors did not clearly state any problem in their introduction. The clearest statement of the paper's objectives was found in its second-to-last paragraph, which indicates that “one of the objectives of the present study was to answer the question “Is deep learning compatible with serious games?” (Erhel and Jamet, 2013, p. 165). This objective was not clearly stated anywhere in the prior nine pages, although there is a hint at the issue when the authors generically comment in their introduction that “the use of games to teach educational content inevitably raises the question of their compatibility with deep learning” (Erhel and Jamet, 2013, p. 156). In addition, while they indicated there are two objectives, they did not outline the second objective in any overt way. This leaves the reader responsible for trying to ascertain whether there is a second objective and what it may be. The authors did state in the introduction that the question of deep learning compatibility “has prompted many researchers to investigate the actual benefits of digital games in terms of learning and motivation” (Erhel and Jamet, 2013, p. 156). However, given that a value-added approach was used in this study (see question four for a discussion of that framework), the second objective does not seem to be to determine whether digital games are an effective means of impacting learning and motivation, since the assumption underlying that approach is that they are effective.
Thus, based on what is presented in both the literature review and the experiment design, the second objective (what is called here the assumed objective) may be to determine the effectiveness of elements of game design (instructions and feedback) in promoting learning and motivation. Unfortunately, this second objective was not specifically stated by the authors, as one would have expected in an academic research paper, and is only assumed. Based on how the literature review and research were conducted, one could conclude that even the authors were unclear on both objectives within their research design and may have only introduced the idea of two objectives at the end to tie their research together based on the results seen.


  2. As noted in question one, there was very little clarity about the actual objectives of this research. If one were to work from inferences of what the authors may have intended but poorly expressed, then there was one known stated objective, to assess whether deep learning is compatible with serious games, and one assumed objective, to understand how elements of game design impact overall learning and motivation. If these were the actual problems the authors wished to address, then there is a need for this study, as one of the issues in understanding the use of games in education is determining how they connect to learning and how aspects of the game itself may impact effectiveness. In addition, Erhel and Jamet (2013) also used this study to present the value-added approach as a mechanism for assessing effectiveness. While there are issues with how this was implemented in this study (see question four), the idea that disciplines should develop, evaluate, and improve upon conceptual models for testing is neither new nor out of place, since educational research needs to test different conceptual models to identify effective ways of measuring learning.
  3. Based upon the stated and assumed objectives (see question one), this problem is potentially researchable, but not within a single study as presented here and not with the authors' study design (see questions 4 through 14 for more on the issues of their study). Since the authors' first objective was assessing learning through “serious games,” one would first have to define what constitutes a “serious game” in terms of design elements and goals. The authors did not offer much toward defining what specifically constitutes a “serious game” within their study. They suggested that how the learner approaches the game (as either entertainment or learning) impacts their goals (performance or mastery), but they did not specify what distinguishes a serious game from a non-serious one. Once the idea of a “serious” versus “non-serious” game was defined, one would need to identify or design a serious game with the required characteristics that allowed the researcher to assess the depth of learning (surface versus deep). This would likely require further refinement in understanding the hallmarks of deep versus surface learning (and why this is significant to educational research) and how these are assessable (qualitatively and/or quantitatively). Addressing the second problem, how elements of digital game design impact learning and motivation, would require many additional experiments in which design aspects, such as instructions and feedback, were assessed independently (qualitatively and/or quantitatively) from one another for impacts on motivation and learning before assessing them in groupings as the authors did. Therefore, the intended objectives are researchable, even if that is not what the authors did.
  4. The conceptual framework underlying Erhel and Jamet's (2013) research was found in their discussion of the benefits of digital learning games compared with conventional media (p. 157). Within section 1.3, the authors outlined that this study was designed to illustrate an alternative approach to assessing digital game-based learning (DGBL) and, through this alternative method, to identify specifics of game-based design which improve motivation and learning. This rests on an initial belief that digital games are an effective medium for learning, so that they were only testing what specifically impacts that effectiveness. In testing this idea, the authors argued that prior media comparison approaches to digital game-based learning have been unable to reach any concrete conclusions as to whether digital games can be an effective learning medium. This, they felt, was due to a general vulnerability of the media comparison approach, in which “many confounding factors (e.g. format, pace, educational content, teacher’s social presence), …prevent us from clearly identifying the factors responsible for the benefits of DGBL” (Erhel and Jamet, 2013, p. 157). However, the authors failed to address specifically how these “confounding factors” impacted the outcomes in the prior studies they cited and why they would be mitigated within a value-added approach. Thus, in response to these studies, the authors designed this research to utilize a value-added approach. This framework relies on measuring the difference in scores between a start point and an end point between which only a single variable has changed. The approach also works best when applied to random samples or, if nonrandom, to samples which have been statistically controlled for population variance, such as prior experience and learning level (Douglas, 2017).
Given the sampling and procedural issues discussed in questions 9 and 10 below, it is unlikely that the sample was randomly constituted, which means that additional measures should have been implemented to determine the degree of non-randomness of variables the authors do not account for, such as sex, major, and year of schooling. Since these other variables either were not collected or are suspect given numerical inconsistencies, this calls into question how effectively the value-added approach could be applied in this research.
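The gain-score logic behind the value-added approach can be sketched concretely. The following is a minimal illustration, with invented scores rather than data from the study, of how a single added design element (here, feedback) is credited with the difference in pre/post gains:

```python
# Minimal sketch of the value-added (gain-score) logic: the same game is
# run with and without one added feature, and the difference in mean
# pre/post gain is attributed to that feature.
# All numbers below are invented for illustration, not data from the study.

def gains(pre, post):
    """Per-participant gain scores."""
    return [b - a for a, b in zip(pre, post)]

def mean(xs):
    return sum(xs) / len(xs)

# Base version of the game (no feedback)
base_pre  = [4, 5, 3, 6, 5]
base_post = [6, 7, 5, 8, 6]

# Value-added version (feedback included); everything else held constant
plus_pre  = [5, 4, 6, 5, 4]
plus_post = [9, 7, 9, 8, 8]

base_gain = mean(gains(base_pre, base_post))   # 1.8
plus_gain = mean(gains(plus_pre, plus_post))   # 3.4
value_added = plus_gain - base_gain            # ~1.6, attributed to feedback
print(base_gain, plus_gain, value_added)
```

Under this logic, any uncontrolled difference between the two samples (sex, major, prior experience) is silently folded into the value-added difference, which is why the approach depends on random or statistically controlled samples.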
  5. Overall, Erhel and Jamet (2013) offered a rather rambling combination of theory and research as the foundation of their research and, as a result, there seems to be no central theory underlying it. They went so far as to state “whichever interpretive theory we choose to apply,” suggesting they were not subscribing to a single theory at the start of the paper (Erhel and Jamet, 2013, p. 158). What does underlie their research is their use of the value-added approach. This approach rested on the assumption that digital game-based learning is effective, despite the inconclusiveness of prior studies, and that they only needed to assess how that effectiveness varied with changing conditions (instructions and feedback). Overall, they did not effectively present why the value-added approach was better than the media comparison approach, nor did they address any limitations of or criticisms of the value-added approach.

When examining their literature review, the connection of the research they presented to the only stated research objective and to the specifics of assessing digital game-based learning is tenuous. The first section (1.1) was meant to outline what digital game-based learning is. However, the literature sources selected did not offer much concreteness in differentiating digital games from other games and specified little about what a digital game is beyond that it is for “entertainment” and results in “cognitive changes.” Since not all cognitive changes represent actual learning processes, this lack of clear definition is problematic. In section 1.2, the authors attempted to lay out how games impact motivation by linking a learner’s goals of mastery or performance (these seem to be presented as mutually exclusive goals) to the entertainment and educational dimensions of games. To do this, the authors examined literature on general motivation and on mastery and performance goals but offered no concrete discussion of research assessing the linkage between games and learner goals. The only actual digital game research offered indicated that there is a positive linkage between intrinsic motivation and scoring, but nothing about how this connects to learner goals. In section 1.3, the authors sought to present the benefits of DGBL games compared with conventional media but ended up presenting DGBL games as a mixed bag; instead, the authors used this section to present their value-added approach without much discussion of the literature to establish why this approach is better and what caveats apply to it. In section 1.4, the authors introduced the concept of instruction design to improve DGBL game effectiveness. They claimed to address this because it is an area not previously studied within DGBL.
To address how important instruction design may be to DGBL, the authors then offered research only on how text-based instructions impact how someone reads and approaches a learning item, without addressing how reading on screen is cognitively different from reading from a document. They then transitioned to a discussion of literature which outlines how having no instructions prior to learning from a text document (what they call incidental learning) promotes surface learning, whereas clear instructions promote deeper learning (what they call intentional learning). This is despite the fact that none of their experiments used a text document or included a condition with no instructions prior to learning.

  6. Erhel and Jamet’s (2013) literature review offered no summary of the literature and did not direct any summary toward addressing the only stated research objective of the paper. Instead, the authors offered a prediction that incidental (surface) learning is likely when “the instructions given to learners encourage them to play rather than learn,” suggesting to them that “when the emphasis is placed on the playful components of a digital learning game, learners may fail to put in the effort required for learning” (Erhel and Jamet, 2013, p. 158). This lack of connection to the only stated research objective may stem from that objective having been an afterthought, since it is only stated at the end of the paper and may not have been used to build the literature review in any meaningful way. Overall, the authors did not offer any concrete information regarding what “serious games” are or the literature on how serious games may connect to surface and deep learning (they only presented how the presence of written instructions in documents may connect to surface or deep learning), nor did they offer much on the specifics of game design which would examine the effectiveness of elements within it, such as instructions. Since most of their references applied solely to text-based documents, one questions the ability to transfer cognitive findings about paper documents to digital game-based learning. Most interestingly, even though they undertook an experiment in which feedback was used as a tool of game design, they did not offer any references within their literature review addressing how feedback plays into either motivation or learning. The only discussion of this occurred within the context of the experiment introduction and existed almost as if experiment two were an afterthought, done only when experiment one did not present the desired results.
  7. Given the general foundations for their research, it is not surprising that Erhel and Jamet (2013) failed to offer clear and specific research questions or hypotheses which connect to the stated objective of the paper. In experiment one, the authors first indicated that the study was done “to ascertain whether the effects of instructions given during the readings phase that have been observed for text-based learning would also manifest themselves during DGBL” (Erhel and Jamet, 2013, p. 158). However, they stated at the end of the discussion of the experiment that “the first experiment was designed to assess the effects of instructions on learning quality and motivation in DGBL” (Erhel and Jamet, 2013, p. 161). These are two different research questions, so it is unclear which one they were evaluating with their experiment. The first would suggest they were replicating studies conducted in text-based learning to see if the outcomes also hold true for DGBL. The second suggests they were testing the relationship between instructions and learning and motivation. Setting aside the latter research question and looking only at the first, the authors created two assumptions to test at the start of experiment one. The first was that “entertainment instruction would improve our participants subjective experience and be reflected in significantly higher intrinsic motivation scores” (Erhel and Jamet, 2013, p. 158). For this first assumption they turned to a meta-analysis by Vogel et al. (2006) for support, but this is problematic because that study compared digital simulation and game learning with traditional learning and did not specifically examine how instructions impacted learning for digital games. The second assumption was that “participants in the entertainment instruction condition would achieve a higher learning outcome” (Erhel and Jamet, 2013, p. 158).
This was based on research conducted by Lieberman (2006), a reference they cited numerous times without any description of the actual study and results. In doing this, Erhel and Jamet (2013) failed to specify the measure of “higher learning outcome” on which they based this assumption. For the study, they used scoring on the questionnaires, but it is unclear whether that was similar to what was done in Lieberman’s study. Overall, it is unclear how this research question and its assumptions connect to the stated objective of “Is deep learning compatible with serious games?” (Erhel and Jamet, 2013, p. 165).

In the second experiment, Erhel and Jamet (2013) “set out to determine whether the presence of KCR [knowledge of correct response] feedback in DGBL quizzes can influence the types of learning strategies induced by the instructions” (p. 162). They predicted that “the addition of feedback containing the correct response would reduce redundant, superficial cognitive processing, thereby making learning more relevant in both the entertainment and learning instructions conditions” (Erhel and Jamet, 2013, p. 162). As with the first research question, the connection between the one stated research objective and the experiment conducted around it is unclear. In addition, there was no mention of using feedback for learning in either the introduction or the literature review prior to experiment two, meaning that this experiment was completely disconnected from the rest of the paper. Indeed, experiment two was never actually part of the original research design, as the authors clearly indicated that it “was designed to overcome the problem” they found in experiment one, namely that the outcomes did not match their assumptions (Erhel and Jamet, 2013, p. 164). This, they felt, stemmed from the entertainment condition, in which “the instructions failed to engage a sufficiently strong effort to trigger the central processes needed for learning,” so to compensate they added the additional dimension of feedback to change the outcomes (Erhel and Jamet, 2013, p. 164). This indicates they were more concerned with figuring out when their predictions would be correct than with addressing the stated research objective. Overall, there is a general lack of cohesiveness between what the introduction states this paper is about and what the actual experiments are designed to answer relative to any stated objectives for the research.

  8. In examining the two experiments conducted, there are some concerns about the actual study design relative to the research questions presented. In experiment one, Erhel and Jamet (2013) indicated that the study was done “to ascertain whether the effects of instructions given during the readings phase that have been observed for text-based learning would also manifest themselves during DGBL” (p. 158), and from this they created two hypotheses. The first was that “entertainment instruction would improve our participants subjective experience and be reflected in significantly higher intrinsic motivation scores” (Erhel and Jamet, 2013, p. 158), although they did not specify how they were measuring that experience. The second was that “participants in the entertainment instruction condition would achieve a higher learning outcome” (Erhel and Jamet, 2013, p. 158). These hypotheses were built from the literature review. However, in the text-based study of instructional conditions (entertainment versus study) whose results they shared in the literature review, the data was collected via a think-aloud protocol. This allowed the researchers in that study to observe the process of learning between the entertainment and learning conditions. There was no mechanism for this type of data collection in the present study, which therefore did not collect data comparable to that earlier study, making the application of conclusions from that study to this one problematic. In addition, the authors included two different sets of questions (paraphrase and inferential) in assessing learning outcomes and offered that one was an example of deeper learning and the other of surface learning without offering evidence to support that assumption.

In examining the design of experiment two, the research question “set out to determine whether the presence of KCR [knowledge of correct response] feedback in DGBL quizzes can influence the types of learning strategies induced by the instructions” (Erhel and Jamet, 2013, p. 162). The addition of this experiment to a study which (a) did not include it from the start (as evidenced by the authors’ own words) and (b) was designed to prove their predictions correct by changing the conditions of the experiment is problematic for their overall purpose. Aside from the reasoning given for experiment two, the general design added feedback responses to the ASTRA simulation while at the same time continuing with the two instructional conditions of entertainment or learning. Thus, the authors were not only testing the presence of the feedback but also that of the instructional condition, even though they said they were only testing the instruction condition (Erhel and Jamet, 2013, p. 162). Overall, the study’s design relative to the actual research questions asked is less than ideal and suggests that, at least in the case of experiment two, there was significant bias on the part of the researchers toward a particular outcome, which resulted in modifications of the overall research purposes.

  9. Overall, the sampling methods presented by Erhel and Jamet (2013) within their two experiments were poorly explained. The authors indicated that students were recruited from a pool of students but did not indicate how the original pool was established, how big this pool was to begin with, or how the actual participants were drawn from that larger pool for either experiment. Looking more closely at experiment one, the authors indicated that they randomly assigned participants to the two experimental conditions (learning versus entertainment instructions), but since they reported the same uneven numbers of males and females in each group (9 men and 15 women), it would suggest they actually did a stratified random assignment to evenly distribute the numbers of each sex available between the two experimental conditions. Furthermore, they failed to examine the specifics of the populations within each of the two conditions beyond their mean age in experiment one, and they did not offer any breakdowns of the populations, such as by sex or age, in experiment two. Erhel and Jamet (2013) did mention that for both experiments they excluded students based on enrollment in medical or allied health programs. However, they did not offer any breakdown of the majors of those who did participate in the study. This could have created another dimension to their data which may have been relevant between the two experiments and may have impacted results. Because of the weakness in the description of the populations participating in the study and the overall lack of explanation of their actual sampling methods, the generalizability of the study’s results is limited for at least two reasons.
First, since they did not explain their population parameters, they cannot offer any intra- and inter-group analyses within and between the two experiments which would help the reader understand whether the results from the comparisons between these two populations are valid. Second, since they offered no population parameters, applying what they discovered to “like” populations is not possible, since “like” is undefined by the authors. While the lack of specific population parameters might make one think that broad generalizations would be safe, the opposite is true. Since they offer no discussion of population data by which to break down their analyses, there may be underlying specifics of the populations that influenced the results which they have not accounted for. For example, perhaps there were differences in the distribution of majors between the two experiments which influenced why one group scored better on the inferential questions than the other. Unfortunately, the authors, for reasons unknown, did not even consider these issues of sampling when outlining the limitations of their research.
  10. In evaluating the adequacy of Erhel and Jamet’s (2013) procedures, several issues are evident. The first concerns the actual number of participants in their study. In experiment one, the authors indicated a total of 46 participants (22 men and 24 women), but when they broke the data down by the two groups they indicated that each group had 9 males and 15 females, for a total of 18 men and 30 women participating in experiment one. They only mentioned omitting one male for having scored too high on the pretest. This means they either miscalculated their original number of study participants, removed more males and added additional females without explaining when, how, and why this occurred within the experiment procedures, or padded their data to reach a desired result.
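The inconsistency is a matter of simple arithmetic, which can be checked directly from the figures the authors report:

```python
# Arithmetic check of the participant counts reported for experiment one.
reported_total = 46
reported_men, reported_women = 22, 24   # as initially stated

# Per-group breakdown given later in the paper: 9 men and 15 women in
# each of the two instruction conditions.
groups = 2
men_from_groups   = 9 * groups                            # 18
women_from_groups = 15 * groups                           # 30
total_from_groups = men_from_groups + women_from_groups   # 48

# The two accounts disagree by two participants, while only a single
# exclusion (one male with a high pretest score) is ever mentioned.
discrepancy = total_from_groups - reported_total          # 2
print(discrepancy)
```

No sequence involving one reported exclusion turns 22 men and 24 women into 18 men and 30 women, so at least one of the reported counts must be wrong.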

In examining the procedures outlined for experiment one, the authors indicated there were five experimental phases but only explained three (pretest, simulation, and questionnaires) in their descriptions of their procedures. It is unclear whether they miscounted or failed to properly divide the phases in the writing. Beyond the number of phases, concerns can be raised within each phase they did explain. The first phase, a pretest phase, involved the participants completing a questionnaire on prior knowledge. This pretest was on “medical notions” and, according to the authors, “they would not help the learners answer the quizzes or the knowledge questionnaire” (Erhel and Jamet, 2013, p. 159). However, despite this statement that it would not impact what the participant would be exposed to later, the researchers used the score on this pretest to eliminate persons from participation for possessing “too much prior knowledge” (Erhel and Jamet, 2013, p. 159), without evidence in the data that these individuals would have skewed the results with this prior knowledge.

In the second phase of the experiment (where they tested the two experimental conditions between the two groups), the authors indicated that the distinction between the conditions was that the instructions for one stressed “ASTRA’s playful dimension, presenting it as a game” whereas the other stressed “the educational dimensions, presenting it as a learning module” (Erhel and Jamet, 2013, p. 159). In translating the two examples provided in French by the researchers, some concerns arise. Despite their description that one set of instructions was for learning and the other for a game, the phrase “helps you to learn” was evident in both sets of instructions, suggesting that the distinction between game and educational instructions was not complete between the two sets. In addition, there was a further variation in the wording used to emphasize the game (“be challenged to answer quizzes”) versus the learning module (“be introduced to quizzes”); see the underlined text in Box 1 and Box 2. The word selection of the passive “be introduced” for learning and the active “be challenged” for gaming may be more influential in framing the instructions as viewed by the learner and may say more about word selection in instructions than about the framing of a game versus a learning module. Further into phase two, the authors indicated that they used a single room with 6 booths, which would mean they had to break each experimental condition into at least four rotations to work through the ASTRA simulation. There was no indication of how close in time these rotations occurred, no data collected on time to completion, and no analysis of variation in results across these differing simulation runs.

In the third phase, they indicated they gave a motivational questionnaire and then a knowledge questionnaire. Unfortunately, these questionnaires were not provided for review in the paper, as one would have expected, especially since they do not originate in full from any previously published motivational questionnaire. For the motivational questionnaire, the authors indicated that 12 learning goal questions and 3 motivational questions were assessed, but only 5 were shown and deconstructed within the paper for how they addressed aspects of performance goals, mastery goals, and intrinsic motivation. It would have been more appropriate to show the entire questionnaire deconstructed into these specific categories. The knowledge questionnaire contained four “paraphrase-type” questions and four “inference-type” questions, with the written response to each scored as 0, 0.5, 1, or 2 points based on degree of accuracy. However, the authors did not indicate who did the scoring (one author, both authors, or another), when this was done (all at once or over time, blind to group assignment or by group), or whether scoring bias was assessed to address issues in the collection of scoring data.
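Had the answers been double-scored, a standard agreement statistic such as Cohen's kappa could have addressed the scoring-bias question. The sketch below uses invented scores on the stated 0/0.5/1/2 scale; kappa is offered here as one conventional option, since the authors name no reliability procedure at all:

```python
# Sketch of an inter-rater check the authors could have reported for the
# 0 / 0.5 / 1 / 2 knowledge-questionnaire scoring: Cohen's kappa on a
# small invented set of double-scored answers (not data from the study).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both raters scored independently at random
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical double-scored answers on the 0/0.5/1/2 scale
scorer_1 = [0, 0.5, 1, 2, 1, 0.5, 2, 1]
scorer_2 = [0, 0.5, 1, 2, 0.5, 0.5, 2, 2]
print(round(cohens_kappa(scorer_1, scorer_2), 2))  # 0.67
```

Even a single reported statistic of this kind would have told the reader whether the open-ended scoring was consistent enough to support the group comparisons.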

In examining experiment two, the authors offered no specifics of procedures beyond that they adjusted the ASTRA simulation to indicate whether a response was correct or incorrect, with the correct answer given to the learner. No data collection procedures were described beyond the phrase “exactly the same as Experiment 1” (Erhel and Jamet, 2013, p. 162). This is problematic since, as in experiment one, there is a numbers issue with the participants. The authors indicated they had 16 men and 28 women for a total of 44 participants in experiment two and that 4 were excluded after the pretest phase. Despite this exclusion, their analyses still ran with all 44 participants through the different data from the ASTRA quizzes and the motivational and knowledge questionnaires.

  11. In examining the appropriateness and quality of the measures used by Erhel and Jamet (2013), some issues are evident in their work. Given the participant numbers issues noted in both experiments, as well as the aspects of the scoring for the knowledge questionnaire discussed in question 10, it is uncertain how reliable the data from which they draw their conclusions actually is. In addition, it was noted that in the second experiment they claimed to be testing only the single independent variable of the instructions, but in reality they had layered a second variable, the feedback, into the experiment. Given that the results of the instructions testing in experiment one did not support their expectations, it seems unusual that the authors would layer in this additional variable without first testing it in its own experiment (i.e., does knowledge of correct response (KCR) feedback improve performance regardless of instruction style?). Thus, they should have run a prior experiment, before combining KCR feedback with instructions, that would have allowed them to assess the degree of impact of the feedback alone.
  12. Overall, in examining the data analyses presented by Erhel and Jamet (2013), several critical issues are seen. First is the overall lack of information about how the sampling was done. Since the value-added approach relies on either random sampling or statistical control of the non-randomness of the sample, one would have expected specifics of this process to have been discussed, and a greater breakdown of the data by variables such as sex, age, major, and level of schooling would help verify the use of the dataset relative to this approach. Second, there is an overall lack of clarity, and possibly inaccuracy, in the sizes of the populations reported and those analyzed. Both experiments indicated the total number of participants, but since pretesting was part of the experiment, this would suggest that these numbers are the counts at the start of the procedures. In both experiments they dropped individuals (due to high pretest scores), but the numbers do not change when examining any of the phase-three scores. This could mean that the authors were not using pretests as part of the actual experiment phasing but as part of the population sampling design, so that these individuals were dropped before the participant count was finalized. However, that is not how it is presented in the actual procedures of the experiment. In addition, in experiment one, the reported number of participants varies from 46 to 48 and back to 46, the numbers of males and females do not match what is initially reported, and the mismatch is not resolved when the dropping of participants due to pretest scores is considered. Given these small samples (both under 50) and the unclear numbers of actual participants, even the addition or loss of one or two individuals could shift the significance of the conclusions reached.
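This sensitivity is easy to demonstrate. With invented scores (not the study's data), removing just two participants from one group moves a pooled two-sample t statistic from above to below the conventional two-tailed critical value:

```python
# Illustration of how fragile significance is at these sample sizes.
# All scores are invented for the demonstration, not data from the study.

def pooled_t(a, b):
    """Student's two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ss_a = sum((x - ma) ** 2 for x in a)
    ss_b = sum((x - mb) ** 2 for x in b)
    sp2 = (ss_a + ss_b) / (na + nb - 2)   # pooled variance
    return (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5

group_a = [11, 12, 11, 12, 11, 12, 11, 12, 16, 17]
group_b = [8, 13, 9, 12, 10, 11, 10, 11, 9, 12]

t_full = pooled_t(group_a, group_b)       # ~2.35, above ~2.10 crit. (df=18)
t_less = pooled_t(group_a[:-2], group_b)  # ~1.70, below ~2.12 crit. (df=16)
print(round(t_full, 2), round(t_less, 2))
```

At samples under 50, the two unexplained participants in experiment one are therefore enough on their own to cast doubt on any reported significance.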
  3. Erhel and Jamet (2013) did outline three limitations of their study. The first was the selection of a less-than-interactive game. Since they stated that their research objective was assessing "is deep learning compatible with serious games" (Erhel and Jamet, 2013, p. 165), the actual game design should have been considered at the start of the research design. The authors did not mention why the ASTRA simulation was selected over other games; an explanation would have addressed some of this issue and should have been included. The second limitation they noted was that overall scores were high on the in-game quizzes, and thus very limited feedback was given to participants. Since their entire second experiment was based on the importance of feedback, it is surprising that they claim that, even though this was a limiting factor, the data show it was a "beneficial factor" (Erhel and Jamet, 2013, p. 165). They should have assessed the impact of feedback alone, without the instruction conditions, to understand this in greater detail. The third limitation they mentioned was that their use of asynchronous data, collected after the actual game play, meant the data received were not a reflection of actual play but a recollection of it. They mentioned they could have collected real-time data but fail to explain why they did not. Overall, the authors are correct that these are three limitations of the research, but they failed to see the other, larger issues present in this study with regard to research design, experiment procedures and sampling, and overall population controls (see prior questions). Since many of these call their overall results into doubt, it is not surprising that they are not mentioned, as they would undermine the entire rationale for publishing the study.
  4. Research design and procedural issues aside, Erhel and Jamet (2013) offered some conclusions in their final discussion which seem to leap beyond the actual results of the study. In experiment one, Erhel and Jamet (2013) reported that there was no significant difference between the instruction modes on responses to the paraphrase-type questions, but that participants in the learning condition "performed significantly better than those in the entertainment instruction condition" on inferential questions (p. 161). By the time this reached the final discussion of the paper, these results were expressed as coming "out against the entertainment instruction" since it "failed to trigger the central processes for learning" (Erhel and Jamet, 2013, p. 164). However, that is not what the results indicate, since both modes were effective for learning, just not for the same kinds of questions. Whether these question types are demonstrated examples of particular learning processes is not evident within the experiment design provided by the authors. Thus, the authors assumed a reasoning behind the outcomes without any foundation in the actual research conducted.

In the second experiment, Erhel and Jamet (2013) concluded that the entertainment instruction group "performed better on comprehension (inference) questions than those in the learning group," that this was caused by being "less frightened of failure" (p. 164), and that this supports the notions of cognitive load whereby "adding features such as feedback to an educational document can trigger deep cognitive processing that contributes better to learning," as well as extending "findings that adding feedback in DGBL enhances memorization" (p. 164). However, the experiment offered no measurement of cognitive load, and cognitive load is only discussed in relation to experiment two, not experiment one. Therefore, it was premature of the authors to suggest that their results support that theory to the exclusion of others. In addition, since the authors connected memorization to the paraphrase questions and comprehension to the inferential questions in the last paragraph of section 3.3 (even though they do not concretely demonstrate this is the case within their study), their second experiment actually indicates that feedback serves those in the learning condition more for memorization than it does those in the entertainment condition.

  1. Since the overall theoretical base of this study is not very apparent within the confines of the introduction, and the only overarching statement the authors make with regard to underlying paradigms concerns the use of the value-added approach, the authors' ability to relate their findings to a specific overarching theory is not easily discernible. The authors did cite several articles which offer reasonings for the various results of their findings, but this does not convey that they were subscribing to a specific theoretical foundation. Rather, they seemed to be picking the ideas that best match their overall results rather than designing their research from a theoretical perspective.
  2. While the overall idea of assessing how digital games connect to learning, and the specific conditions which impact different kinds of learning through digital games, is of importance within educational technology research, the value of Erhel and Jamet's (2013) results is lessened significantly by the overall problems in their research design and numerical controls. The overall lack of information on sampling methods and population parameters means that the representativeness of this population, and thus the generalizability of the findings, cannot be established. Given that this paper has been cited in over 168 articles (based on ResearchGate results), quite a few studies have used it as part of their research. This is problematic given the extensive issues present in the study design and the overall lack of cohesive research objectives, which undermine the quality of Erhel and Jamet's results. It would be beneficial to examine other studies in this field to determine what additional research examines the foundations of learning through digital games, and then to identify the critical components of digital games which should be examined in future studies.


Douglas, D. (2017). The Value of Value-Added: Science, Technology, and Policy in Educational Evaluation. CUNY Academic Works. Retrieved from

Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67(C), 156-167.

Vogel, J. J., Vogel, D. S., Cannon-Bowers, J., Bowers, C. A., Muse, K., & Wright, M. (2006). Computer gaming and interactive simulations for learning: a meta-analysis. Journal of Educational Computing Research, 34(3), 229–243.