The course “Management of Fluid and Electrolyte Disorders” is an applied physiology course taught using lectures and paper-based cases. The course approaches fluid therapy from both basic science and clinical perspectives. While paper cases provide a basis for application of basic science concepts, they lack key components of genuine clinical cases that, by nature, are diverse, change over time, and respond in unique ways to therapeutic interventions. We developed a dynamic model using STELLA software that simulates normal and abnormal fluid and electrolyte balance in the dog. Students interact, not with the underlying model, but with a user interface that provides sufficient data (skin turgor, chemistry panel, etc.) for the clinical assessment of patients and an opportunity for treatment. Students administer fluids and supplements, and the model responds in “real time,” requiring regular reassessment and, potentially, adaptation of the treatment strategy. The level of success is determined by clinical outcome, including improvement, deterioration, or death. We expected that the simulated cases could be used to teach both the clinical and basic science of fluid therapy. The simulation provides exposure to a realistic clinical environment, and students tend to focus on this aspect of the simulation while, for the most part, ignoring an exploration of the underlying physiological basis for patient responses. We discuss how the instructor's expertise can provide sufficient support, feedback, and scaffolding so that students can extract maximum understanding of the basic science in the context of assessing and treating at the clinical level.
Keywords: mathematical model
At the College of Veterinary Medicine of Cornell University, the course “Management of Fluid and Electrolyte Disorders,” colloquially known as the “Fluids” course and taken by second-year students, endeavors to extend and integrate basic science concepts to provide students with a rational basis for the clinical application of treatments aimed at preventing or resolving water and electrolyte disorders. Historically, this goal has been achieved through the use of paper-based problem sets that rely heavily on clinical case material to apply and integrate basic physiological concepts. Thus, the “Fluids” course is an applied physiology course. It is not sufficient for students, particularly those destined for clinical practice, simply to be familiar with physiology; they must also know how to use basic science concepts in the context of real-life situations, including disorders involving fluids and electrolytes (5).
In the course of teaching “Fluids” over the years, we have encountered two problems. First, paper-based cases do not breathe, bleed, or die and therefore lack important features that professionals encounter in real-life situations. Paper cases do not allow students to experience the impact of their clinical decisions and judgments or to bear the consequences of poor understanding of basic concepts. Second, paper-based cases are static, whereas real patients change over time. Moreover, treatment alters a patient's status over time, and the patient modifies the treatment itself, a fact that is usually overlooked. The response to treatment by the student cannot be included in a paper case at all because therapeutic choices made by the student were unknown at the time the case was written. Perhaps even more important is the fact that the physiological processes that are the essence of fluid and electrolyte balance are dynamic. Learning to work with a complex dynamic system is extremely important in the development of competence and expertise in the execution of fluid therapy.
These and similar challenges are widely appreciated in medicine and have led in recent years to the development of a host of ever more sophisticated simulations (13). Dynamic models have been used in education and research for many years, in both business and science (11, 14, 19, 22). In the context of medical education, such models have several advantages over real patients. Models die, only to live again; they can be run repeatedly under variable conditions; and neither cost nor patient use or abuse are at issue. Here, we describe the development and validation of a dynamic mathematical model using STELLA software that simulates fluid and electrolyte balance in normal and abnormal dogs. We also report on feedback from students regarding the usability of the software, how students interacted with and responded to the simulation, and its use in a teaching environment.
Description of the Underlying Model
The “fluid therapy simulation” (hereafter, the simulation) is based on the dog for two reasons. Fluid therapy is instituted very commonly in small animals, particularly dogs. More importantly, though, the dog has been the model for the study of water and electrolyte balance for most of the last century, providing an extensive database of knowledge about this species.
We modeled water, sodium, potassium, and glucose homeostasis, with contributions from plasma protein, blood urea nitrogen, and unnamed intracellular osmolytes. Figure 1 shows the sector of the STELLA model for plasma water. In constructing this model, we adopted an approach that avoided the incorporation of so much detail as to obscure the overall structure and function of body systems. By no means have the physiological intricacies of these systems been ignored, since an adequate representation of reality would only be possible to the extent that the model contains functional detail. This was accomplished by recognizing that body systems operate on the basis of feedback mechanisms. In Fig. 1, as an example, note that upon the intravenous administration of water by way of the flow “administering water,” the volume of the plasma compartment increases. This volume is compared with “normal plasma volume,” and the computed difference (“volume of water to excrete”) then determines the rate of flow via “regulated water loss.” The latter flow (excretion) is not instantaneous, but occurs over time, at a rate determined by the time constant.
A system whereby an increase in plasma volume provokes increased excretion until the plasma volume returns to some setpoint represents a classical negative feedback mechanism. The roles of antidiuretic hormone (ADH), baroreceptors, and renal tubular function are all included in this feedback mechanism without referencing them explicitly. Some important details have been purposely omitted, however. For example, water that is consumed by drinking normally would not appear immediately in the plasma compartment, as the model structure suggests. Because the model intentionally simulates adipsic animals, this omission is immaterial.
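The stock-and-flow loop described above reduces to a first-order negative feedback equation. The following sketch is our illustration of that structure, not the model's actual equations; the compartment scope, volumes, rates, and the time constant are all assumed values:

```python
def simulate_plasma_water(normal_volume_l=1.5, initial_volume_l=1.5,
                          infusion_rate_l_per_h=0.25, infusion_hours=2.0,
                          time_constant_h=3.0, total_hours=24.0, dt_h=0.1):
    """One-compartment sketch of the Fig. 1 loop: intravenous water raises
    plasma volume, the excess over the setpoint drives 'regulated water
    loss', and the time constant sets how quickly excretion corrects it."""
    volume = initial_volume_l
    history = []
    t = 0.0
    while t < total_hours:
        administering = infusion_rate_l_per_h if t < infusion_hours else 0.0
        excess = max(volume - normal_volume_l, 0.0)  # volume of water to excrete
        regulated_loss = excess / time_constant_h    # first-order excretion flow
        volume += (administering - regulated_loss) * dt_h
        history.append((round(t, 1), volume))
        t += dt_h
    return history

history = simulate_plasma_water()
peak = max(v for _, v in history)
final = history[-1][1]
```

With these parameters, the simulated volume peaks during the infusion and then relaxes back toward the setpoint with the characteristic exponential decay of a first-order feedback system, which is the behavior the "regulated water loss" flow produces in the model.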
All sectors of the overall model are interconnected. For example, in Fig. 1, plasma volume is influenced by factors that affect plasma solute, most of which is attributable to sodium. Thus, the amount of sodium in plasma (a separate sector of the model) plays a significant role in determining the volume of the plasma compartment. This interconnectedness is really a statement about complexity. When the system is perturbed at one point, the rest of the system is impacted.
The simulation does not model hormone concentrations, per se. Note that Fig. 1 refers not to the ADH concentration but to the “ADH effect,” where the absolute concentration of ADH is unknown and not important. This approach allowed for the possibility that a given hormone concentration might have a different effect under various clinical circumstances or that a hormone would have dissimilar effects on separate sectors of the model. For example, the effect (in terms of millimoles of ion transported per unit time) of a given concentration of aldosterone in the regulation of sodium and potassium can be quite different.
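This "effect" approach can be illustrated by letting two sectors read the same dimensionless signal through different response curves. Everything below (function names, gains, baselines, and the flux values) is hypothetical and only demonstrates the idea:

```python
def aldosterone_effect(plasma_k_mmol_l, normal_k_mmol_l=4.5, gain=0.8):
    """Dimensionless 'aldosterone effect' signal that rises as plasma
    potassium climbs above normal; it stands in for an absolute hormone
    concentration, which the model neither knows nor needs."""
    return max(0.0, 1.0 + gain * (plasma_k_mmol_l - normal_k_mmol_l))

def sodium_reabsorption_mmol_h(effect, base=5.0, slope=2.0):
    """Sodium sector: a shallow response to the shared effect signal."""
    return base + slope * (effect - 1.0)

def potassium_secretion_mmol_h(effect, base=3.0, slope=6.0):
    """Potassium sector: the same signal read through a steeper curve, so
    one hormone 'level' yields quantitatively different fluxes."""
    return base + slope * (effect - 1.0)

effect = aldosterone_effect(5.5)            # mildly hyperkalemic patient
na_flux = sodium_reabsorption_mmol_h(effect)
k_flux = potassium_secretion_mmol_h(effect)
```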
Validation of the Model
If students are to use a simulation in their medical training, a key question is whether the underlying model adequately represents reality. We approached this question in two ways. The first way involved using the simulation to reproduce published experiments using dogs. We chose experimental protocols that were restricted to the manipulation of water, sodium, or potassium. The first was an experiment conducted by Keck et al. (16) in 1969 and later described and summarized by Daniels et al. (7). In brief, a control group of dogs was fed 14 meq/kg sodium per day for a week. In an experimental group, peritoneal dialysis was used 3 days before the experiment to reduce extracellular fluid (ECF) sodium by 20%, followed by a sodium-free diet. On the experiment day, dogs were given a bolus of sodium, and sodium excretion was then measured for 24 h. The simulated dog received the same preparation. ECF sodium content in the simulated dog was reduced by 18%. Simulated data as well as mean values for the published data of Keck et al. are shown in Table 1. The response of the simulated dog was similar to that of the dogs in Keck et al.'s experiments. Sodium retention by Keck et al.'s sodium-depleted dogs ranged from 54 to 148 meq sodium. Retention by the simulated dog was well within these limits.
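A replication of this kind ultimately reduces to checking that the simulated quantities fall inside the envelope of the published animal data. A minimal sketch of such a check, using the sodium-retention limits quoted above (the simulated value shown is illustrative; the actual simulated data appear in Table 1):

```python
def within_published_range(simulated, low, high):
    """True if a simulated quantity lies inside the envelope of the
    published animal data."""
    return low <= simulated <= high

# Keck et al.'s sodium-depleted dogs retained 54-148 meq sodium; the
# simulated dog's retention (100 meq here is a placeholder) must fall
# inside that envelope for the run to count as a successful replication.
ok = within_published_range(simulated=100.0, low=54.0, high=148.0)
```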
A second experiment, conducted by Thrasher et al. (26) in 1984, examined sodium balance and ECF osmolality and volume during water deprivation and rehydration. Dogs were fed a ration containing 0.17 meq/g each of sodium and potassium. They were water deprived, but fed, for 24 h and then allowed access to water for 24 h. Water intake during rehydration averaged 50.9 ± 4.5 ml/kg during the first 60 min, with an additional 63.3 ± 4.8 ml/kg consumed over the next 23 h. During both dehydration and rehydration, the simulated dog performed similarly to those used by Thrasher et al. (Table 2). Quite conceivably, biological variation may have accounted for the slight disparity between the simulation and data collected by Thrasher et al.
The simulation was evaluated further by focusing on the potassium sector of the model. In an investigation of the role of insulin and glucagon in potassium homeostasis, DeFronzo et al. (8) quantified the disposal of a bolus injection of potassium. Potassium chloride was infused over a 4-h period into dogs that weighed between 18 and 30 kg. Mean body weight was not recorded in the report; the simulation was conducted on a 25-kg dog. The urinary excretion rate of potassium and the plasma potassium concentration were determined. The quantity of potassium translocated into cells was calculated by DeFronzo et al. based on these data (Table 3). The simulation sustained an increase in ECF potassium concentration that was statistically identical to that observed by DeFronzo et al. Quantities excreted or redistributed were also similar. While potassium retained in the ECF was slightly less in the simulation than the amount estimated in DeFronzo et al.'s dogs, the fact that the ECF potassium concentration was indistinguishable suggested a trivial difference in the quantitative aspects of water balance in the simulation versus the experimental group.
The second form of validation of the simulation consisted of establishing face validity. When the simulated dog was provided water, sodium, and potassium in quantities recommended by the National Research Council, the model maintained normal hydration, and plasma sodium and potassium concentrations remained within the normal reference range over a 6-mo trial period (23a). In a further effort to establish face validity, one of us (R. E. Goldstein, who was board certified in internal medicine) interacted with multiple cases that manifested various pathologies. The presentation, progression of the disease process, and response to treatment were consistent with his extensive experience with live patients.
Description of the Fluid Therapy Simulation
STELLA is a software tool for creating mathematical models, and, in some circumstances, these models have been used to good effect (11). However, students do not encounter the simulation at the level of the mathematical model any more than they would in a clinical situation. Rather, we created an interface that attempts as much as possible to mimic a clinical setting. Indeed, Alessi (1) has argued that there is benefit to students encountering a model that has already been constructed and is opaque, to some degree, to the user.
On the opening screen, students are introduced to each case with a short history, including a tentative diagnosis for each patient. The “treatment area” (Fig. 2) provides data relevant to assessment as well as access to fluids and supplements. Students make an initial assessment of the patient's condition based on an examination of physical parameters, including an evaluation of skin turgor, eyes, mucous membranes, heart rate, etc. Since the user cannot see a literal patient, the “health-o-meter” provides a proxy for a subjective assessment. The information displayed by the health-o-meter is based on the additive effects of six major variables and essentially answers the question “How does the patient look?” The colors used by the health-o-meter (red, yellow, and green) have their expected cultural implications.
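The health-o-meter's additive scoring can be sketched as a normalized sum mapped to color bands. The six variables, their weights, and the band thresholds are not enumerated here, so the choices below are our guesses, not the simulation's actual scheme:

```python
def health_o_meter(deviations):
    """Additive proxy for 'How does the patient look?'. Each entry is a
    normalized abnormality score (0 = normal, 1 = maximally abnormal) for
    one of six major variables; hydration, heart rate, mucous membranes,
    plasma Na, plasma K, and glucose are plausible candidates."""
    if len(deviations) != 6:
        raise ValueError("expected one score per major variable (six)")
    score = sum(deviations) / 6.0
    if score < 0.2:
        return "green"   # patient looks well
    if score < 0.5:
        return "yellow"  # patient looks unwell
    return "red"         # patient looks critical
```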
An abridged chemistry panel is provided. All of the data in the treatment area are updated on an hourly basis, where 1 h of simulated time passes for every 4 s of real time. The user can alter this pace or pause the simulation at any time. Students combine physical examination findings with laboratory data to make an assessment of the patient and to devise a treatment strategy. The latter is applied (and potentially adjusted over time) using input boxes in the center of the treatment area.
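The pacing scheme (1 simulated hour per 4 s of real time, adjustable and pausable) can be sketched as a simple timed loop. The hook names here are hypothetical, and the sleep function is injectable so the loop can be exercised without real delays:

```python
import time

SECONDS_PER_SIM_HOUR = 4.0  # default pace: 1 simulated hour per 4 s real time

def run_simulation(total_sim_hours, step_fn, pace=SECONDS_PER_SIM_HOUR,
                   is_paused=lambda: False, sleep=time.sleep):
    """Advance the model one simulated hour at a time. step_fn(hour)
    stands in for updating the model state and refreshing the
    treatment-area displays."""
    hour = 0
    while hour < total_sim_hours:
        if is_paused():
            sleep(0.1)  # poll until the user resumes
            continue
        step_fn(hour)
        sleep(pace)     # wall-clock time tracks the chosen pace
        hour += 1

# Run 3 simulated hours instantly by injecting a no-op sleep.
ticks = []
run_simulation(3, step_fn=ticks.append, sleep=lambda s: None)
```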
A set of four standard graphs (Fig. 3) records treatments applied (if any) and the subsequent responses of body water, sodium, and potassium. Graphs of any of ∼290 model variables can be added to the standard set by the instructor as the specific case and teaching objectives dictate. Individual cases (disorders) are created by “breaking” the model at one or more points. For example, a case of Addison's disease (hypoadrenocorticism) is created by altering the effect of aldosterone. To date, five disorders have been created, including Addison's disease, diabetes insipidus, urinary obstruction, water deprivation, and diarrhea. By altering body weight between 2 and 100 kg, these five disorders potentially represent a very large number of cases.
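Creating a disorder by "breaking" the model amounts to overriding a few baseline parameters. The sketch below mimics that idea with an illustrative parameter set; none of these names or values come from the STELLA model itself:

```python
# Baseline ('unbroken') parameters; names and values are illustrative.
NORMAL = {
    "aldosterone_gain": 1.0,
    "adh_gain": 1.0,
    "urine_outflow_open": True,
    "water_intake_l_per_h": 0.06,
    "gi_water_loss_l_per_h": 0.0,
}

# Each disorder 'breaks' the model at one or more points.
DISORDERS = {
    "addisons": {"aldosterone_gain": 0.05},        # blunted aldosterone effect
    "diabetes_insipidus": {"adh_gain": 0.05},      # blunted ADH effect
    "urinary_obstruction": {"urine_outflow_open": False},
    "water_deprivation": {"water_intake_l_per_h": 0.0},
    "diarrhea": {"gi_water_loss_l_per_h": 0.2},
}

def make_case(disorder, body_weight_kg):
    """Build a case by overlaying a disorder's overrides on the baseline
    and scaling to a body weight in the allowed 2-100 kg range."""
    if not 2 <= body_weight_kg <= 100:
        raise ValueError("body weight must be between 2 and 100 kg")
    return {**NORMAL, **DISORDERS[disorder], "body_weight_kg": body_weight_kg}

case = make_case("addisons", 25)
```

Because each disorder touches only its own overrides, the same five "breaks" combined with any body weight in range generate a large family of distinct cases.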
Fluid Therapy Simulation as a Teaching Tool
Historically, the teaching approach in the “Fluids” course has included lectures, case presentations, and paper-based problem sets that include short clinical case scenarios along with a set of questions that students must work through as homework and then discuss weekly in four groups of ∼20 students, each with a faculty facilitator. In 2008, we introduced the simulation as part of the preparation for these small-group discussions. Each case was a realistic, complex environment that was interesting, intellectually challenging, and motivating. Students developed a sense of ownership: the student became the veterinarian to an extent unachievable with paper cases (12). Sufficient data were available to make a good assessment of the condition of the patient, and the user could monitor changes in the patient over time. Because the simulation was built on a dynamic model, the “patients” responded to decisions made by the user and this, in itself, encouraged experimentation. Treatment could be applied to stabilize and improve the patient's condition. If no treatment or poor treatment were rendered, the patient's condition would worsen and, given enough time, the patient would die. Through encounters with multiple cases or repeated runs of the same case, students could practice assessment (determining the percent dehydration, evaluating electrolyte concentrations, etc.) and fluid therapy protocols (choosing fluid types, calculation of administration rates, etc.). In this sense, the simulation supported a constructivist view of learning (23).
As with the paper cases, each simulation was accompanied by a set of instructions/questions. The instructions we provided were not strictly prescriptive but were usually open ended, such as “What fluid would you use? What do you predict will happen to the sodium concentration during treatment? What actually happened to the sodium concentration and why? What if you tried an alternate fluid? Why was the result different?” Answers to questions were obtained or derived from data presented in the chemistry panel or in the graphs. When students met in their groups of ∼20 students, they discussed these simulated cases as they did the paper-based cases.
Formative evaluation of the simulation was conducted during the development phase (2007) by way of a focus group consisting of five students. This feedback established that the simulation was engaging, and it allowed us to optimize the user interface and overall usability. The simulation was used for instructional purposes for the first time in a class of 79 students in 2008 during the “Fluids” course, as described above. This allowed us to gain experience using the simulation as a teaching tool, gather feedback from a large group of students, and compare the final examination scores for groups of students who had or had not used the simulation. At the end of the course in 2008, a focus group of five students was assembled to obtain more in-depth feedback on the simulation. The end-of-course evaluation in 2008, completed by 36 of 79 students, also included a section relating to the simulation. Comments from students referenced below represent majority or consensus views.
There was no significant impact on student performance on the final examinations. The mean score in 2007, 86.6 ± 5.9 (median: 87.1), was not significantly different from the mean score in 2008, 85.2 ± 5.3 (median: 86.0), at the P < 0.05 level.
Students found the simulation realistic and engaging. They liked the way it elicited strong emotions pertaining to how their patient responded. In this regard, the simulation had a “game-like” quality that put the student in a position of being the primary caregiver. The extent of the realism presented significant challenges for these second-year students, however. Students thought that the simulation displayed too much information, as illustrated by the following statements: “While doing physical exams, there is so much information that you have to develop a system to go through all that information” and “You should have plenty of time to think. This is why I kept stopping [the simulation].”
Experimentation was an important aspect of the simulation. For good reason, it is impossible to experiment on real patients, and trying different approaches is not an option with a paper case. Students had a good sense for the capability of the simulation to withstand repeated attempts at treating the same patient, as illustrated by the following statement: “I made a lot of changes with fluid and [did a lot of experimenting]. Once I did a case at a slow pace and got the dog in the green, then I wondered “what if I gave way more fluid than I should have, would that kill the dog? Or merely change its urine output?” so I’d check that out and speed it up to see what changed. And then I’d wonder, “Well, what if I put potassium in?” and then I’d work with that…It was useful that way. That I, as the doctor, could [make changes], because the simulation was great for being able to ask, “How bad can the dog get?” Something you can’t do in real life. And then I’d wonder more about the urine output [in contrast to traditional homework] and its relation to the amount of fluids I was giving.”
Although students found the simulation challenging, they found the interface (input and output devices) easy to navigate and seemed to naturally take on the role of the primary clinician while maintaining a sense of being a clinical investigator. However, the simulation did not connect with students on a very important dimension. Because the “Fluids” course is an applied physiology course, it is important that the simulation be seen as a tool for learning basic science as well as clinical approaches to fluid therapy. Students clearly enjoyed experimenting beyond the bounds of the written instructions we provided, but their efforts seem to have been restricted to “what works and what doesn’t.” Sometimes, this led to unexpected outcomes that, from the students’ perspective, were inexplicable. Their inability to extract explanations from the graphs blocked deeper understanding. They apparently needed more explicit feedback and coaching, as shown by the following statement: “It was frustrating that someone wasn’t there for you, say when your dog was not improving. Someone by your shoulder saying, ‘Hmm, have you thought about…’” Lack of sound feedback led students to draw the following interesting conclusion: “Sometimes I believe there isn’t a physiological reason for my results. I tend to think that I am right, so I’m quick to conclude that something is wrong with the simulation.” Given the extent of the validation that we carried out, this conclusion, while conceivable, is unlikely.
Discussion and Future Directions
Simulations have been used for many years in a wide variety of disciplines, including flight simulators, war games, management games, technical operations in industry, and education. Many of these simulations, including those used in the education of medical professionals, focus on the development or assessment of technical skills (13). Simulations have also been developed to aid students in the learning of basic science, including gas exchange, ventilation, intestinal absorption, and neurophysiology (3, 9, 15, 18). The fluid therapy simulation combines both approaches, which is appropriate given the mandate of the “Fluids” course.
Based on student feedback, the simulation has proven successful from several standpoints. It is realistic in the sense that the cases represent credible manifestations of disease, disease progress, and response to treatment. The emotional intensity that students experience is not unlike that associated with encountering a real patient. Time is relentless, and an abundance of data must be sifted and synthesized in the process of diagnosis and treatment. The simulation puts the student in a position where he or she becomes “the person who must solve the problem or suffer the consequences” (12). Once they recognize the full capability of the simulation, students move on from determining the best treatment for the patient to experimenting with alternate approaches, playing “what if” games that they could not enjoy with real patients (25).
The types of problems that students encountered while using the simulation have been documented. Goss (10) reported that students tend to invoke a “painstaking, invariant search for (but paying no immediate attention to) all medical facts about the patient, followed by a sifting through the data for the diagnosis.” Students felt overwhelmed by the amount of information that they had to process, and “stopping time” was their way of addressing this reality. Although students using the simulation were initially overwhelmed, the feedback indicated that they ultimately discovered a systematic approach that allowed them to deal with what they perceived as information overload in a constantly changing environment.
The specific structure of the user interface, where the treatment area and graphs appear on separate screens, was a conscious and logical decision on the part of the design team. Interestingly, this separation may visually mirror the segregation between the clinical and basic sciences embedded in the simulation. In the minds of the instructors of the “Fluids” course, clinical and basic science aspects of the simulation are seamlessly and highly integrated. Students apparently do not appreciate the simulation in this way but rather tend to focus on the mechanics of fluid therapy, with only superficial reference to the underlying basic mechanisms, as if responses at the treatment level can be adequately understood without a commensurate grasp of associated pathophysiological processes. Therein lies the principal deficiency of the simulation as currently implemented.
One student reported that, “I thought that the simulation would be very useful, but in the end, I found that it didn’t illustrate any points any better than the lectures did.” Others viewed the graphs as being too complicated. Without examining the graphs, however, a student is unlikely to think deeply about the underlying mechanisms. Apparently, the written guidance that accompanied the cases was insufficient help. Perhaps students approached the written directions as simply a task to be completed, in which case an understanding of basic mechanisms would not necessarily follow (21). Students’ failure to sufficiently engage the basic science issues in the simulated cases is consistent with observations of Pilkington and Parker-Jones (20), who noted that students often focus on manipulating simulation objects without generating a deeper understanding of the model or underlying principles.
Use of the fluid therapy simulation in teaching is based on constructivist theory, but it also represents an instance of situated learning. The latter theory holds that it is easier for students to learn and apply concepts if they are encountered in the course of performing real-world tasks or simulations of them (5). Herein lies the potential promise of the simulation. There is little question that the simulation engages students in tasks that are quite good representations of a real clinical setting. The deficiency, then, exists in the extent of support, feedback, and scaffolding that is provided to students (24).
Reflecting on a conclusion drawn by Thomas and Milligan (25), that there is clearly no right way to present a simulation, it seems likely that the usefulness of the simulation as a learning tool can be increased significantly by altering the level of support provided to students. To date, that support has come in the form of accompanying paper-based exercises. In the next iteration of the “Fluids” course, we will provide much more hands-on guidance. In the context of a large class broken into working groups of two to three students (“turn to your neighbor”), the instructor will work through a simulation, interacting with the class by assigning specific tasks to the small groups for internal discussion and then requesting feedback to the class from those groups (6). Students will be asked to recommend fluid types and calculate rates but, more importantly, will be asked to predict outcomes and explain responses. The instructor, who is skilled at graphical representations of data, can point to specific displays and ask targeted questions that will promote deep thinking about features that students would not have noticed without help (4). Using this approach, the instructor can also link observations and concepts across graphs and across cases, a strategy aimed at helping students learn how experts think about cases involving fluid and electrolyte disorders (17). With that initial, deep contact as a preparation, students working with the simulation outside of class may be in a better position to explain, at the basic science level, the observed responses to their own therapeutic interventions.
This project was funded, in part, by the Faculty Innovation in Teaching Program, Office of the Provost and the Office of Educational Development, College of Veterinary Medicine, Cornell University.
Present address of M. E. Dispensa: Information Technology Services, 102 Muller Faculty Center, Ithaca College, Ithaca, NY 14850.
© 2009 The American Physiological Society