The use of virtual patients in medical school curricula

Juan Cendan, Benjamin Lok


The demonstration of patient-based cases using automated technology [virtual patients (VPs)] has been available to health science educators for a number of decades. Despite the promise of VPs as an easily accessible and moldable platform, their widespread acceptance and integration into medical curricula have been slow. Here, the authors review the technological underpinnings of VPs, summarize the literature regarding the use and limitations of VPs in the healthcare curriculum, describe novel possible applications of the technology, and propose possible directions for future work.

  • medical education
  • computerized patients
  • patient simulation
  • clinical simulation

Virtual patient (VP) representations were first described in 1971 by Harless et al. (15). Despite the fact that the concept has been around for 40 yr, few medical schools have incorporated it into their educational paradigm. In fact, Huang and Candler (16) found that only 24% of medical schools in the United States and Canada were using VPs in their curricula in 2007. Cost is cited as a significant concern, as schools struggle to meet a reported average cost of $10,000–50,000 to develop even one VP scenario, not including the associated maintenance costs. However, there is reason to remain optimistic about the use of VPs. They facilitate the provision of feedback and represent a venue for safe and repetitive practice as well as a model where progressive clinical variation and difficulty can be presented. These features offer a significant contribution to the medical curriculum and mirror those observed in other high-fidelity simulation platforms (17). Here, we review the technological underpinnings of VPs, summarize the literature regarding the use and limitations of VPs in the healthcare curriculum, describe novel possible applications of the technology, and propose possible directions for future work.

VP Technologies

We refer to VPs using the terminology presented by the Association of American Medical Colleges and echoed by Cook and Triola: a “specific type of computer program that simulates real-life clinical scenarios; learners emulate the roles of health care providers to obtain a history, conduct a physical exam, and make diagnostic and therapeutic decisions” (1, 9). The main components of VPs include interactivity on the learner's part (as opposed to passively watching videos), the simulation of medical conditions, and the visual and/or physical presentation of the conditions. The manifestation of VPs can differ greatly and includes 1) case studies presented on webpages or CD-ROMs, 2) immersive virtual reality simulations, and 3) robotic human-scale mannequins (Fig. 1).

Fig. 1.

Virtual patients (VPs) can present a wide spectrum of experiences, technical infrastructure requirements, and learning objectives. Left: a Web-based VP case using a keyboard interface. Middle: a virtual reality system using speech and tracking. Right: a human patient simulator (image courtesy of Dr. Samsun Lampotang) that uses real medical devices to interact with the VP.

Most systems currently in use offer a linear experience in which the user is guided through a scripted dialogue but can ask questions and receive feedback at each step; a smaller number of systems allow any question to be asked at any time, in a format reminiscent of real conversation. Interaction with these systems also varies greatly. In particular, and of substantial consequence when planning a curriculum, human-scale mannequins require substantial technical support during activities and carry high acquisition, storage, and maintenance costs.

Simply stated, VPs are a computer-based simulation of a patient and are typically composed of three components: inputs, simulation, and outputs. VP inputs are the interface mechanisms that the learner uses to interact with the system and include standard computer interfaces like a mouse and keyboard. Some VPs require engineered solutions such as force sensors in mannequins.

Given the inputs, the VP simulation engine processes and generates a patient response. The simulation is software that attempts to model a component of the patient. The simulation usually relies on an underlying model, such as physiological or pharmacological models, physics, and social/conversational models. The conversational model is the easiest to develop but is dependent on the author of the scenario to provide anticipated outcomes for each possible response, as opposed to a VP that uses a real-time physiological simulation engine.

VP outputs present the visual, auditory, and any mechanical output of the simulation results. The VP could be shown as presenting a response (including speaking a response to an input, changing facial expressions, and performing a gesture) along with simulation information (e.g., blood pressure and heart rate). The fidelity and realism of the interaction will vary given the infrastructure (such as computing, space, time, and equipment).
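The input–simulation–output pipeline described above can be illustrated with a minimal, scripted conversational model. The following sketch is purely hypothetical (the class, field names, and dialogue are illustrative, not drawn from any actual VP engine); it shows how an author-scripted VP maps learner inputs to anticipated responses, in contrast to a real-time physiological simulation:

```python
# Minimal sketch of a scripted (author-driven) VP. The scenario author
# supplies anticipated questions and responses in advance, as opposed to
# a VP backed by a real-time physiological engine. All names are illustrative.

class VirtualPatient:
    def __init__(self, script, findings):
        self.script = script      # input stage: anticipated question -> response
        self.findings = findings  # simulation state revealed on examination
        self.log = []             # every interaction is recorded for later review

    def ask(self, question):
        """History taking: learner poses a question (keyboard, menu, or speech)."""
        response = self.script.get(question, "I'm not sure what you mean.")
        self.log.append(("history", question, response))
        return response

    def examine(self, maneuver):
        """Physical exam: output stage draws on the simulation state."""
        finding = self.findings.get(maneuver, "unremarkable")
        self.log.append(("exam", maneuver, finding))
        return finding

vp = VirtualPatient(
    script={"What brings you in today?": "I've been seeing double for two days."},
    findings={"blood pressure": "128/82 mmHg", "heart rate": "76 beats/min"},
)
print(vp.ask("What brings you in today?"))  # scripted conversational response
print(vp.examine("blood pressure"))         # simulation output
```

Even a sketch this small makes the key design tradeoff visible: the conversational model is easy to author but can only answer questions the author anticipated, which is why unanticipated inputs fall through to a generic response.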

Motivations for VPs

The original motivation for the creation of VPs was summarized in an article by McGee et al. (23). In one of the earliest references to this technology, the authors (23) recognized that multiple nonacademic drivers were conspiring to reduce clinical experiences for students, such that a serious gap in clinical experience was developing. These included the push for higher clinical throughput in academic teaching environments, societal pushback against clinical practice on real patients, and higher documentation requirements for procedural skills at all levels, including basic doctoring skills such as communication (9). The coincident development of multimedia technology provided a reasonable approach for filling this experience gap with educational cases that simulated a real patient, but in a computer-based environment.

Clinical learning experiences are difficult to standardize and, in the case of conditions that require urgent intervention, nearly impossible to schedule in a reproducible manner. Standardization is difficult both intrainstitutionally and interinstitutionally, and it is not uncommon for students in a given institution to experience a different array of clinical cases and conditions from their peers, making it difficult to guarantee exposure and mastery of concepts. The Liaison Committee on Medical Education recognized this concern with an organized system-wide push for advanced student experience documentation (22) and permission to incorporate simulated patient experiences as “real” experiences (22).

These simulated experiences may include work with standardized patients (SPs). SP experiences are now commonplace in medical training and assessment and are attractive because of their relatively low cost and communication fidelity; however, they can be limited in their physical examination representations of abnormal findings. Additionally, SP experiences are subject to variation due to fatigue, memorization, and biases, which carry some reliability concerns (21). It is worth stating explicitly that VPs could be complementary to SP experiences and should not be considered a replacement for SP programs.

VPs can provide exactly the same experience repeatedly. VPs are available in simulation centers or continuously online and could provide students with opportunities for repetition, individualized feedback, and opportunities to revisit the actions taken during the interaction, allowing comparison with best-practices protocols. VPs can demonstrate simulated physical examination findings that SPs cannot portray (cardiac murmurs, abnormal breath sounds, neurological findings, etc.).

VP interactivity also leverages Kolb and Fry's experiential learning theory, motivating the learner to actively participate in the educational process (19). By interacting with VPs under certain conditions, it is expected that learning will be more efficient and complete, with a higher retention rate over passive education approaches (19). This theory presents a cyclical model of learning, consisting of four stages, which tend to follow this sequence: 1) concrete experience (“do”), 2) reflective observation (“observe”), 3) abstract conceptualization (“think”), and 4) active experimentation (“plan”).

By presenting clinical variations, VPs add to the general knowledge that a student can draw upon when confronted with a similar case, whether virtual or real, through pattern recognition. Pattern recognition is largely nonanalytical and unconscious and builds upon prior exposure to the clinical concern (2). VPs offer the possibility to address this component of the decision-making infrastructure by adding to the user's clinical foundation and reinforcing knowledge structures (11). Our simulation team has been working under this particular assumption, and we are developing a series of similar but contrasting VPs. Our goal is to push the student to consider alternative diagnoses and reflect on key features of each of the VP cases. We describe this research in further detail below. VPs can also provide students opportunities for self-directed learning, which leads to reflection (5, 6), self-driven change (26, 30), and more insight into performance (13).

Current VP Educational Research


The available data surrounding the advantages of VPs in the health sciences have been critically reviewed by Cook et al. (8) and Ellaway et al. (11). Cook and Triola (9) provided a research agenda for the developing investigative field. The data indicate that VPs are useful compared with no intervention, suggesting that VPs can, in fact, facilitate learning. However, compared with other interventions, such as SPs, it is harder to justify the advantages of the VP. In fact, the VP has relative weaknesses, notably limitations in representing affective skills in the VP environment. Future research in VPs will focus on defining the conditions where developing a VP is warranted, whether by virtue of limitations posed by other existing modalities (such as SPs), by the benefits of continuous or distance access to the technology, or by the comprehensive logging of interactions. Of great interest is the potential for VPs to facilitate decision-making skills by providing an opportunity for the student to develop a mental model of the presenting symptom and a clinical approach to its evaluation.

The manner in which the VP is used is also of great interest and can affect its value. Friedman and colleagues (14) performed an early randomized trial using the same simulated case in three formats: a “pedagogic” format, which presented explicit educational support, a “high-fidelity” format, which attempted to model daily clinical reasoning, and a “problem-solving” format, which required students to express a diagnostic hypothesis. Rising third-year medical students differed significantly by educational format in knowledge acquisition. Students exposed to the pedagogic format acquired more information (greater proficiency) but were able to do proportionately less with it (lower efficiency) (14). Their results suggest that the format of computer-based and VP simulations is an important educational variable when developing VP cases.

VPs have proven useful in medical education in the arenas of communication skills and medical history taking. Bearman et al. (3) indicated that, in terms of communication skills, the VP was more effective when the structure of the experience was narrative as opposed to “problem solving” in design. One early concern in the VP literature was that students would not respond correctly or emotionally to the VP; however, Bearman et al. (3) also addressed this issue and found that well-constructed computer-based interactions can have a substantial emotional effect on medical students. Building on that early work, others (27) have since shown that dental students exposed to VPs ask actual SPs more relevant, as well as more general, questions than students with no exposure to a VP. An additional study (29) revealed that clinical motor skills can also be enhanced by exposure to a VP alongside SPs.

Researchers have also evaluated student interactions with VPs compared with SPs to establish validity, to examine how students express empathy with VPs, and to determine whether the ethnicity and race of the VP impact interactions. These studies (10, 18, 27) determined that, while VPs have limitations compared with SPs, VP interactions do reflect student abilities overall and, when applied to preparation for real patient encounters, provide significant educational benefits.

The first practical application of VPs in the medical curriculum appears to be as a replacement for paper patient-based learning cases. A number of centers have reported a move away from traditional patient-based learning to the use of VPs, which are presented as an unfolding clinical scenario (2). This model is well liked by its users and has the secondary effect of freeing some curricular time, since these cases can be experienced outside of scheduled class time. The majority of the VPs available for student use fall into this category and consist of relatively low-fidelity technology, with 83% using still images and videos coupled with text (16).

Developing and sharing VPs.

There are several significant considerations when pondering the integration of VP cases into a course: authoring costs, technical fidelity, and understanding the impact of VPs.


The creation of VP scenarios remains a significant undertaking in terms of financial and time costs. Scenario development can cost tens of thousands of dollars and require at least 12 mo of development per case.

There has not been uniformity in VP development practices; however, recently established tools should facilitate the development and sharing of VPs. A number of European groups have joined forces to establish the electronic VP group, which now has 320 VPs available in its database and is actively investigating and sharing their experiences. The Karolinska group shares its platform gratis with developers. In the United States, the University of Pittsburgh has developed the vpSim engine, which is being used to replace patient-based cases at a number of medical schools. Not-for-profit groups, such as physioSim, are working to develop platforms that incorporate physiological engines into the VP environment. Given the number of teams working on VP development, it is notable that a centralized group, the MedBiquitous Consortium, was created to define the manner in which these systems will share VP cases. It now publishes a set of guidelines that may become central to the process of open sharing (23).


As with all simulations, approximations are part of the construction process. These approximations manifest as VPs that are clearly computer generated; no one would mistake them for a real person. The artificial intelligence that drives the VP simulation also constrains the scenario; for example, can users talk to the VP, or do they simply select from a list of possible choices? Can the user perform physical exams? Each of these design decisions incurs a tradeoff between feasibility and error rates (e.g., speech recognition and understanding). Educators should consider the learning objectives they are trying to achieve to motivate the design decisions in the creation of VP simulations.


As VP technology is still very new, the literature surrounding best practices for VP use is in its infancy. There are few longitudinal studies of VPs, little work on optimal forms of integrating VPs into a course or curriculum, and few explorations of retention, learning, or behavior change.

It is noteworthy that much of the published literature on VPs is limited by study design. Cook et al. (8) presented an exhaustive review of the literature in this field; of the 151 articles found to be potentially relevant for a meta-analysis, only 48 qualified for a full review of data. The exclusions were based on a lack of rigorous outcome data (98 studies) and duplicative reports (5 studies). Of the 98 studies excluded for lack of comparison or outcome data, 60 included no outcomes whatsoever, whereas the remainder included postintervention assessments with no preintervention data (8).

New findings and future directions.

Understanding clinical decision making.

Automation of the interaction between the student and the VP allows the educator to document precisely how the user reached a diagnosis. This particular functionality may facilitate understanding of the clinical decision-making process. For example, the VPs that emanate from our laboratory are dialogue driven: a student can type or speak a question that is then met with a response, which can be spoken or be a component of the physical exam as appropriate. Our VPs allow exploration of the clinical condition with no presupposed clinical track.

This model is in contrast with the more “choose your own adventure” approach to VPs, which presents a list of actions via a menu or list of hyperlinks or buttons that present a “decision tree.” It is our laboratory's position that the ability to ask any question or perform any examination maneuver, at any time, reflects the reality encountered in a clinician's office. Because every question and examination maneuver is tracked by the system, we are then able to analyze the path taken by a specific student to reach a conclusion.

We have created a tool that allows us to visually inspect this process. For example, in the case of a student attempting to understand why a VP has developed diplopia, we are able to graphically represent the manner in which s/he reached the final diagnosis. As shown in Fig. 2, we can evaluate the manner in which the student approached this problem. Figure 2A shows all of the interactions that the user had with the system, be they history taking or physical exam. In the case of user 3, the educator can see that the student proceeded in the traditional manner of first recording the patient's history (seen as blue buttons) followed by a physical examination (seen in orange). Further analysis of the examination yields a more detailed log of the exact physical examination maneuvers performed and their sequence. At any time, the educator can hover the pointer over a button to learn what interaction occurred at that exact moment, as shown in Fig. 2C. With this tool, we are able to quickly identify outliers (those that took too long, asked fewer questions, etc.) and seek patterns in a manner that would previously have required detailed and time-consuming video analysis.
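The kind of interaction-log analysis described above can be approximated in a few lines of code. The sketch below uses invented data and field names (the actual tool is a graphical visualization, and its internal format is not published here); it shows how a timestamped log of tagged interactions supports two of the queries mentioned: checking whether a user followed the traditional history-then-exam sequence, and flagging outliers who asked few questions:

```python
# Hypothetical interaction logs: each entry is (category, label, seconds into session).
# Mirrors the analyses described above: detect the traditional history-then-exam
# ordering, and flag outliers by history-question count.

logs = {
    "user_1": [("history", "onset of diplopia", 10), ("history", "medications", 45),
               ("exam", "eye chart", 120), ("exam", "extraocular movements", 150)],
    "user_3": [("history", "onset of diplopia", 8), ("exam", "eye chart", 30)],
}

def history_before_exam(events):
    """True if every history question precedes the first exam maneuver."""
    first_exam = next((t for cat, _, t in events if cat == "exam"), float("inf"))
    return all(t < first_exam for cat, _, t in events if cat == "history")

def few_question_outliers(all_logs, min_questions=2):
    """Flag users who asked fewer history questions than the threshold."""
    return [user for user, events in all_logs.items()
            if sum(1 for cat, _, _ in events if cat == "history") < min_questions]

for user, events in logs.items():
    print(user, "traditional order:", history_before_exam(events))
print("few-question outliers:", few_question_outliers(logs))
```

Because every question and maneuver is already tracked by the system, analyses like these replace manual video review with simple queries over the log.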

Fig. 2.

Visualization tool to allow the instructor to identify the manner in which a student using a VP has reached a final diagnosis. This system allows the researcher to analyze the subgroups of the data represented in the preceding layer. For example, follow the letters A–C to reveal a subgroup of examinations of the eye (A) using the eye chart (B) and including the level of the exact question posed by the user (C).


With current educational approaches, there are classes of conditions and concepts that are difficult to present adequately to students. Examples include neurological conditions, dyskinesias, and pharmacological interactions. For example, SPs cannot present a cranial nerve (CN) palsy, and the result is minimal curricular exposure for students; VPs can present these complex topics using interactive, self-directed learning experiences (20).

Figure 3 shows a VP presenting with CN3 palsy. The student can use a variety of neurological exams to diagnose the patient. This platform, called the Neurologic Exam Rehearsal Virtual Environment (NERVE), allows the student to interact with 5 VPs presenting with double vision caused by either a CN abnormality (CN3, CN4, CN6, or CN7) or two other conditions. It is also possible for the VP to present an instructor-controlled background, including ethnicity/race, weight, age, and sex. Variations of clinical presentation can be systematically catalogued and delivered to the student.

Fig. 3.

Left: a VP presenting with cranial nerve palsy. Right: a VP of a different ethnicity and sex can present with the same condition. The student must use a variety of neurological exams and speak to the patient to determine the diagnosis and understand the underlying pathology.

As an educational research tool, our group has used NERVE to understand how students learn and interact with VPs. We identified that students interacting with NERVE in groups of three learn more than students working alone (Fig. 4). Additionally, we hypothesized and determined that this performance improvement was related to a lower cognitive load on the group users (7). This type of information is crucial as we plan the inclusion of these activities in the curriculum; because of these findings, NERVE will be presented as a group rather than an individual activity for students at our institution. This finding may not hold for other VPs, and medical educators will need to investigate the optimal curricular deployment for VP activities.

Fig. 4.

Groups of three students learned more while using the Neurologic Exam Rehearsal Virtual Environment (NERVE) than individual users. Not all data are shown, but pre-NERVE performance was equal in both team and individual users. The cognitive load of the activity may account for the difference in learning, with individuals reporting more cognitive load from both the technology and content than those who worked in groups. These findings led us to offer this experience as a group-based activity.

In conclusion, the medical education community will need to elevate the quality of the research being published to address the underlying critical issues that limit widespread acceptance of the technology while delineating and defining the case for the use of VPs. The ability to create variations on clinical presentation supported by an underlying physiological simulation engine is a significant opportunity worthy of further investigation. The role of VP technology in helping us understand how students become master clinicians, and in yielding insight into the process underpinning clinical diagnostic thought, is one example of the technological strengths that warrant exploration.


J. Cendan and B. Lok received support through National Library of Medicine Grant R01-LM-010813-01.


No conflicts of interest, financial or otherwise, are declared by the author(s).


J.C.C. and B.L. conception and design of research; J.C.C. and B.L. performed experiments; J.C.C. and B.L. analyzed data; J.C.C. and B.L. interpreted results of experiments; J.C.C. and B.L. prepared figures; J.C.C. and B.L. drafted manuscript; J.C.C. and B.L. edited and revised manuscript; J.C.C. and B.L. approved final version of manuscript.

