The goal of the present study was to determine whether an active learning/teaching strategy facilitated with mobile technologies can improve students' levels of memory retention of key physiological concepts. We used a quasiexperimental pretest/posttest nonequivalent group design to compare the test performances of second-year medical students (n = 311) taught by conventional didactic methods (traditional group) with those involved in a case-based problem-solving learning approach facilitated with mobile phones as web-based “clickers” (experimental group). Using their cell phones, students answered the same questions about the key physiological concepts three times. A pretest to determine their baseline knowledge was followed by two followup tests after 1 wk and 2 mo, respectively. The experimental group scored a mean of 93.2% correct items after 1 wk and 84.8% correct items after 2 mo [95% confidence intervals: (89.4, 97.0) and (79.4, 90.3), respectively]. Compared with their colleagues in the traditional group who scored 33.3% [95% confidence interval: (18.9, 47.8)] and 38.5% [95% confidence interval: (23.6, 53.4)] correct items, respectively, this was a significant increase of ∼50 percentage points (P < 0.0001). Furthermore, for the experimental group, Cohen's effect size (d) values of d = 1.67 (1-wk posttest) and d = 1.38 (2-mo posttest) suggested a very high practical significance. In contrast, in the traditional group, Cohen's d values of d = 0.04 (1-wk posttest) and d = 0.15 (2-mo posttest) indicated a very low practical significance.
- active learning
- problem solving
- personal response system
The purpose of medical education is to prepare medical students to become competent physicians, critical thinkers, problem solvers, effective collaborators, and self-directed lifelong learners. Yet, our way of teaching very often does not effectively support these goals. How we teach is a product of institutional contexts: norms, culture, incentives, and the value placed on effective teaching (1). For most of our long education, most of us, today's teachers, were taught by our teachers in a traditional subject-centered teaching style. Because it worked for us and because it feels comfortable, we rarely deviate from that mode of instruction. This tradition has persisted for hundreds of years throughout the world of education, and the University of Zagreb Medical School is no exception. Students are expected to absorb the material at the scheduled time of lectures, seminars, and practical exercises with little attention paid to the long-term retention of knowledge (12). Such an approach fosters passivity and the perpetuation of so-called “bulimic learning,” a seemingly endless cycle of memorization, regurgitation, and forgetting (38).
However, in the past two decades, medical science has undergone rapid and unprecedented changes because of the exponential growth of new biomedical facts, the diminished lifespan of useful information, and the increasing complexity of practicing medicine, which requires constant adaptation of health care delivery models. Digital technology dramatically disrupts everything it touches because it promotes a profound change that combines inner shifts in people's values, aspirations, and behaviors with outer shifts in processes, strategies, practices, and systems. In this context, the word “disruptive” does not mean to cause disorder or to be a threat but rather is the trigger to help us move forward. Today's students come to our classrooms with powerful mobile technologies with enormous computing and networking capabilities. In such a constantly changing environment, evidence shows that lectures are not effective in solidifying long-term knowledge acquisition and that most students who take conventional lecture-based courses do not end up with a good understanding of the fundamental concepts (13, 17). Contemporary educational psychology has identified a variety of teaching strategies that can be used to provoke active learning in students, which has been described as the process of having students engage in an activity that forces them to reflect on ideas and how they are using those ideas (27). When students are actively involved in learning, they retain information longer than when they are passive recipients of instruction (9). Active learning approaches are effective in improving learning outcomes in medical education (16, 27). Among the various strategies proposed to promote active learning, case-based teaching is considered to be effective in stimulating students to synthesize, apply, and integrate basic knowledge in the face of real-life situations and has been recommended for a variety of clinical subjects, especially in preclinical training years (28, 29, 31).
Another innovative contemporary approach to make learning more active, to facilitate increased attention and interactions and to provide assessments, is the use of a personal response system (PRS). This is an instructional technology tool composed of a software program installed on the facilitator's computer or an online application and remote handheld control units (clickers) that students use to respond to questions posed by the teacher (22, 30).
Thus, responding to changes in science, technology, and medical practice, we transformed the way we teach medical students for practice in the 21st century. Rather than worrying about covering content, we designed activities to focus student learning on how to use scientific knowledge to solve key physiology questions. The aim of our study was to test the hypothesis that linking problem-solving minicases with mobile technology in a face-to-face physiology class can improve students' levels of memory retention of key physiological concepts compared with traditional content/teacher-centered educational scenarios.
Since classes were already assigned and the schedule was set, we used a quasiexperimental pretest/posttest nonequivalent group design to compare the test performances of second-year medical students taught by conventional didactic methods (traditional group) with those involved in a case-based problem-solving learning approach coupled with a web-based PRS (experimental group).
Second-year medical students enrolled in a compulsory human physiology course (n = 311, 59% female students and 41% male students) participated in the study as part of their normal coursework. In general, students were randomly assigned to 10 cohorts of ∼30 participants by the administration of the medical school. Depending on the central schedule of the medical school, the cohorts were taught by 15 faculty members in the Departments of Physiology and Immunology. The course followed a system-based approach divided into 28 sections, one of which was the experimental seminar on homeostasis of extracellular fluid volume and osmolality. Students were expected to come prepared for seminar sessions by reading the assigned textbook chapters (18). Standard seminar sessions were 3-h classes once or twice a week over 21 wk. The exception was the experimental seminar, which was approved by the Department of Physiology to last 1 h longer. The two authors facilitated eight experimental cohorts, whereas the remaining two cohorts were taught by two other teachers from our Department of Physiology who did not adopt the proposed method. The cohorts were chosen and assigned out of convenience, depending on the central schedule of the medical school. In addition, the authors used the described method in all other 3-h classes for which they were responsible, but the presented results apply only to the experimental seminar.
The study was performed as part of a program evaluation and does not contain individually identifiable student data. Our institution does not require ethical committee review for this type of project.
A PRS comprises hardware and software that is used in conjunction with face-to-face educational processes to support, deepen, and enhance learning by promoting greater interactions between all those engaged in a learning activity (26). For this purpose, we used Socrative software (34), a free online PRS that allows students to use their personal mobile cell phones, laptops, or other smart devices to submit answers to questions issued by teachers. All a student needed was a web-enabled electronic device and a connection to the internet. Internet access was provided by the institutional free-of-charge wifi network. Teachers logged in through their devices and selected an activity that controlled the flow of questions. Students logged in anonymously with their personal devices to interact in real time with the content. Their responses were visually represented for multiple-choice, true/false, and short-answer questions. The histogram of student responses was immediately displayed on the screen. Teachers collected the reports online as a Google spreadsheet or as an e-mailed Excel file.
Scenario of the seminar session.
We combined a few instructional approaches that fall under the general heading of active learning: small-group case-based problem solving and assessment as learning. We divided the seminar sessions into four parts with one 15-min break after the second part of the class.
The goal of the first part was to get students thinking about the main topic of the class, to define the learning outcomes, and to present a brief overview of new material for that session. The session started with a formative pretest composed of questions on key physiological concepts with the aims to 1) connect the actual topic with prior knowledge, 2) reveal possible misconceptions, and 3) elicit expanded thinking about key concepts that students would be expected to know long term. Students had to generate answers from memory in a short-answer format. Table 1 shows some example questions of the knowledge test used in the seminar on the regulation of extracellular fluid volume and osmolality. The histogram of student responses was immediately displayed on the screen, but the correct answer was not indicated. This allowed teachers to quickly assess student feedback and immediately make the necessary teaching adjustments to achieve better student learning effects. This was followed by a brief period of whole-class discussion in which students explained why they chose their answers.
The second part was composed of a succinct overview of the concept followed by a short presentation of a problem related to that concept in the form of a clinical minicase study. Each student received a printed version of the clinical case history or situation of medical importance modified from existing cases found in different case-based workbooks (3, 10) or inspired by Advances in Physiology Education papers (2).
In the third part, students collaborated in four small groups (n = 6–8) to solve the problem without any prior instruction on how to do it. Namely, it was assumed that students learn better when they wrestle with new problems before being shown the solution rather than the other way around (6). The teacher's role was to facilitate the process of students' problem solving by keeping the discussion going. This was achieved by different techniques: the teacher moved around the room while engaging the students and was active in inquiring, responding, role playing, and facilitating discussion. During the small-group discussions, teachers gave feedback and discussed the misconceptions revealed by the formative pretest.
The final part of the class called for the synthesis, clarification of misconceptions, relating parts to a larger whole, and extracting the more important points from the less important background.
We collected multiple forms of data. Learning effectiveness was measured in terms of student learning outcomes and satisfaction. The measurement was built around the assessment of key physiological concepts and course evaluation. In addition, we also collected some background information on the participants in our study, such as sex, their success on the midterm test, and information about prior similar interventions they had received.
Since classes were already assigned and the schedule was set, we used a quasiexperimental pretest/posttest nonequivalent group design to compare learning outcomes. Students took the same knowledge test three times. The pretest before the seminar session was to determine students' preparedness for the class and baseline knowledge against which we measured learning effectiveness. The first followup test was ∼1 wk later, whereas the second posttest was ∼2 mo after the initial teaching session. Students were not previously informed about the two posttests.
Student perceptions toward the experimental teaching strategy were collected by a voluntary, anonymous survey at the end of the class. The evaluation questionnaire, which was composed of seven closed questions and one open question, was designed to gather information about students' satisfaction, enjoyment, and perceived utility of the experimental active learning technique.
We used χ2-statistics to test the relationship between nominal variables for statistical significance. To describe the results regarding measures of magnitude of the difference between the two student groups, we calculated Cohen's effect size (d) values.
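For readers who wish to reproduce the effect size computation, Cohen's d is the difference between two group means divided by their pooled standard deviation. The following minimal Python sketch (not part of the original analysis; any numbers passed to it are hypothetical) illustrates the calculation from summary statistics:

```python
from math import sqrt

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's effect size d: the standardized difference between two
    group means, scaled by the pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd
```

By the usual convention, d ≈ 0.2 is considered a small effect, d ≈ 0.5 a medium effect, and d ≈ 0.8 or above a large effect.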
In the free text survey field for comments and suggestions, students were free to write anything they wanted, which made it arduous to analyze with a tool such as a spreadsheet. Therefore, to summarize the free text field survey results, we used Wordle (36), a web-based tool for visualizing text. It generates “word clouds” from filled-in texts in such a manner that words and phrases that appear more frequently in the source text get greater prominence.
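The prominence scaling a word cloud applies is driven by simple word frequencies. As an illustration of the preprocessing behind such a visualization (a sketch only; Wordle itself was used in the study, and the stopword list below is a hypothetical stand-in for its filtering), free-text comments can be tokenized and counted as follows:

```python
import re
from collections import Counter

# Hypothetical stopword list; word cloud tools filter common words similarly.
STOPWORDS = frozenset({"the", "a", "an", "and", "of", "to", "was", "is"})

def word_frequencies(comments):
    """Tokenize free-text comments and count word frequencies, the
    quantity a word cloud maps to font size."""
    words = re.findall(r"[a-z']+", " ".join(comments).lower())
    return Counter(w for w in words if w not in STOPWORDS)
```

For example, `word_frequencies(["Great class", "great and interactive class"])` ranks "great" and "class" (two occurrences each) above "interactive".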
Of the 311 second-year medical students enrolled in the compulsory human physiology class, 300 students (96.5%) were present in the classroom, of whom 238 students (79.3%) were in the experimental group and 62 students (20.7%) were in the traditional group. Although all students possessed cell phones, some technical problems (e.g., low wifi signal and/or capacity in some classrooms and empty cell phone batteries) decreased the response rate to 69.3% (n = 165) in the experimental group and 66.1% (n = 41) in the traditional group. The sample size required to achieve an α of 0.05 and a power of 0.80 was approximately n = 54 in the traditional group and n = 148 in the experimental group.
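For context, the sample size per group needed to compare two proportions can be approximated with the standard normal-approximation formula n = (z_alpha + z_beta)^2 [p1(1 - p1) + p2(1 - p2)] / (p1 - p2)^2. A minimal Python sketch (illustrative only; the proportions in the example are hypothetical, not the study data):

```python
from math import ceil

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate sample size per group to detect a difference between
    proportions p1 and p2 with two-sided alpha = 0.05 and power = 0.80
    (z_alpha and z_beta are the corresponding standard normal quantiles)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)
```

For example, under these assumptions, detecting a difference between response rates of 50% and 80% would require roughly 36 students per group.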
To check the randomness of student distribution between the traditional and experimental groups, we analyzed their scores on the midterm physiology exam. All values in the text are expressed as means ± SE. Out of a possible 60 points, the percentage of correct answers of students in the traditional group (67.5 ± 1.3%) and the experimental group (67.7 ± 1.1%) did not differ significantly.
Knowledge assessment and memory retention.
Figure 1 shows a comparison of the percentages of students' correct responses on the pretest and two posttests in the traditional and experimental student groups. Values are expressed as confidence intervals (CI) on a proportion mean for a confidence level of 95%. On average, students in the experimental classes obtained a higher percentage of questions correct than did students in traditional classes, without significant sex differences. On the initial pretest, students answered an average of 30.7 ± 3.2% correctly [95% CI: (23.7, 37.7)]. In the two followup posttests (1 wk and 2 mo after the initial class), the traditional group scored an average of only 33.3 ± 7.1% [95% CI: (18.9, 47.8)] and 38.5 ± 7.9% [95% CI: (23.6, 53.4)] correct items, respectively. On the other hand, the experimental group did well on the 1-wk followup posttest, with a mean score of 93.2 ± 1.8% correct, as well as on the 2-mo followup posttest, with a mean score of 84.8 ± 2.7% correct items [95% CIs: (89.4, 97.0) and (79.4, 90.3), respectively]. Compared with their colleagues in the traditional group, this was a significant increase of ∼50 percentage points (P < 0.0001). The difference in prechange to postchange between the traditional and experimental groups was statistically significant (χ2 with four degrees of freedom = 240.66, P < 0.00001). Moreover, for the experimental group, Cohen's d = 1.67 (1-wk posttest) and d = 1.38 (2-mo posttest) suggested a very high practical significance. In contrast, the traditional group's Cohen's d = 0.04 (1-wk posttest) and d = 0.15 (2-mo posttest) indicated a very low practical significance. In other words, 2 mo after the initial treatment, 92% of the experimental group (d = 1.38) was above the mean of the pretest control results, whereas this was true for only 58% of the traditional group (d = 0.15).
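The conversion from Cohen's d to a "percentage of the group above the control mean" is Cohen's U3, which under an assumption of normal distributions with equal variances equals the standard normal cumulative distribution evaluated at d. A minimal sketch, using only the Python standard library:

```python
from math import erf, sqrt

def cohens_u3(d):
    """Cohen's U3: the expected fraction of the treatment group scoring
    above the control-group mean, assuming normal distributions with
    equal variances. U3 = Phi(d), the standard normal CDF at d."""
    return 0.5 * (1.0 + erf(d / sqrt(2.0)))
```

For example, `cohens_u3(1.38)` ≈ 0.92, consistent with the ∼92% figure for the experimental group, while `cohens_u3(0.0)` = 0.5, the no-effect baseline.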
The experimental teaching strategy evaluation form included overall satisfaction, satisfaction with the teaching methods, and self-evaluation of personal engagement and learning effectiveness. Of the 238 students in the experimental group, 150 students (63.0%) participated in the evaluation survey (Fig. 2). The general attitude toward the experimental teaching strategy was positive. The majority of students (86.2%) had never/almost never experienced this type of active class. Most students (98.0%) enjoyed the experimental class and wished to repeat the experience (95.7%). More than half of the participants (53.6%) assumed they were more active during the experimental class than in a traditional class, and more than two-thirds (69.3%) agreed that the experimental approach was a more effective learning modality than the traditional seminar. On a scale of 1–5, with 1 being “poor” and 5 being “excellent,” most students (89.3%) rated their overall satisfaction with the seminar as “very good” or “excellent,” with an average grade of 4.3.
The survey also had a free text field for students' comments and suggestions. Figure 3 shows a visualization of student opinions (n = 71) as a word cloud.
Overall, most students had no objections or commended the experimental approach. Some students viewed it as hard work requiring a substantial time commitment and asked for more explanations and formative assessment.
Our study revealed that the described technology-enhanced case-based instructional strategy increased students' engagement in the class and improved their scores from the pretest to posttests in the experimental group compared with the traditional group. This implies that our experimental approach, in relation to conventional didactic methods, has the potential to substantially improve medical students' levels of memory retention of key physiological concepts. In addition, the experimental approach led to high levels of student acceptance and satisfaction with their new learning experience. We suggest that this is due to the active nature of the experimental approach, which helps with comprehension and retention. There are large bodies of evidence from a number of different fields supporting the effectiveness of active learning (14, 32). However, active learning does not just happen; it occurs in the classroom when the teacher creates a learning environment that makes it more likely to occur (27). Since technology has the potential to bring a new dimension to the learning environment, our pedagogical approach to active learning was to merge the case-based teaching method with technology-enabled formative tests.
Case-based teaching is a common form of teaching and learning in medical education where learning activities are commonly based on patient cases. Case-based learning appears to foster effective learning in small groups, possibly through the effect of having more engaged learners but perhaps also through having more structured learning activities closely linked to authentic clinical practice scenarios (35). Case analysis helps students deepen and solidify their understanding of physiological facts, concepts, and principles (8). It is effective in inculcating critical thinking, problem solving, and other higher-order cognitive skills (24). Making material harder to learn can improve long-term learning and retention (4). More cognitive engagement leads to deeper processing, which facilitates encoding and subsequently better retrieval (11). In addition, a great number of studies have found that students overwhelmingly enjoy case-based learning and think that it enhances their learning (35). There is also abundant evidence that formative testing promotes learning (5, 33). Formative assessment is primarily concerned with feedback aimed at prompting improvement. Providing students with meaningful feedback can greatly enhance learning and improve student achievement (19). Furthermore, research within cognitive science suggests that the “testing effect” is based on the phenomenon that repeated retrieval of memories promotes better long-term retention and a slower forgetting rate than the repeated study of the same information (21, 23). Today's technology makes testing possible in everyday teaching practice, since students bring their own mobile devices to the classroom that can serve as a PRS, so that no institutional standalone response system is needed.
Mobile phones as web-based clickers easily allow open-ended responses and therefore have a comparative advantage over traditional handheld clickers, which are largely restricted to multiple-choice or numeric answers (20). A wealth of information has accumulated regarding the benefits of using PRSs to improve pedagogy and student learning on the level of assessment, classroom environment, and process of learning (15, 37). The assessment benefits imply the possibility to get instant feedback, to check student comprehension of key learning points, and to clarify misconceptions and modify instruction when necessary. Classroom environment benefits involve attention, anonymity, and student participation, which together increase the levels of student engagement and peer discussion. Finally, learning benefits implicate interactions and discussions. We believe that these benefits positively affected student satisfaction: ∼98% or more of the students who responded (n = 150) gave approving ratings when asked whether using their mobile phones as clickers was enjoyable, helpful, or should be used. This reflects the overall trend in the literature that most students like using clickers (7).
There are some possible limitations that our research may have faced. Since our experimental scenario was a combination of several instructional methods, each of which had the potential to improve students' levels of memory retention by itself, we were not able to differentiate between the contributions of individual methods but only evidence their cumulative effect. Researchers who work exclusively from the scientific paradigm could see this as a problem. However, educational researchers often take the opposite view. For them (us), what matters is that the teaching process as a whole contributes to the effectiveness of the learning intervention. As usual, the study is limited by the institutional and cultural context in which it was conducted. While useful for pedagogical development, the findings may not transfer well to a different context. Also, in the same university context, it is difficult to isolate student groups from each other. Thus, problems of diffusion of treatments could exist. Different teachers instructed different student groups at different times in different locations. All of these could represent potential confounding variables. Because there was no opportunity for initial pretesting of the traditional group, we applied a quasiexperimental pretest/posttest nonequivalent group design where the pretest results of the experimental group were used as the baseline knowledge (control results) for all. However, we think this approach was acceptable for three reasons: 1) it comprised the majority of students, 2) students were randomly assigned to different seminar groups depending on the central schedule made by the administration of the medical school, and 3) there was no difference between the two groups with respect to midterm exam performance. Therefore, we were reasonably confident that the pretest results of the traditional group would not have differed significantly.
Yet another problem with this design is that it does not allow us to judge whether the process of pretesting itself influenced the results, because there is no baseline measurement for the groups that participated in traditional classes. Possibly, students given a pretest may be motivated to engage more and would outperform their colleagues not given a pretest. Finally, there are possibly research participation effects, an unavoidable source of bias in educational research (25). These effects have the potential to affect study outcomes in ways that undermine the validity of inferences the research was designed to permit: regardless of the experimental manipulation of research subjects, study outcomes seem to improve. In addition, students may have inflated their evaluations of the quality and value of the teaching to please the teacher because of their enthusiasm for the new teaching method. We tried to factor this effect into the research design by involving two researchers/teachers.
Although the results of this kind of research are difficult to generalize, they can act as a starting point for further study. Our experiment could serve as a good example of how relatively small and inexpensive educational interventions can make a substantial impact on students' retention of important physiological concepts and raise their academic achievement by an appreciable effect size. We conclude that a thoughtfully designed technology-enhanced active learning strategy can offer students and their teachers an effective educational approach and help them create an environment conducive to learning. That is why we hope it will encourage our colleagues to try similar interventions in their own teaching practice.
No conflicts of interest, financial or otherwise, are declared by the author(s).
S.K.T. and M.T. conception and design of research; S.K.T. and M.T. performed experiments; S.K.T. analyzed data; S.K.T. and M.T. interpreted results of experiments; S.K.T. prepared figures; S.K.T. drafted manuscript; S.K.T. and M.T. edited and revised manuscript; S.K.T. and M.T. approved final version of manuscript.
The authors thank the students who participated in the study for their enthusiasm and effort.
- Copyright © 2016 The American Physiological Society