An active approach allowed undergraduates in Health Sciences to learn the dynamics of peer review at first hand. A four-stage process was used. In stage 1, students formed self-selected groups to explore specific issues. In stage 2, each group posted its interim report online on a specific date. Each student read all the other reports and prepared detailed critiques. In stage 3, each report was discussed at sessions where the lead discussant was selected at random. All students participated in the peer review process. The written critiques were collated and returned to each group, who were asked to resubmit their revised reports within 2 wk. In stage 4, final submissions accompanied by rebuttals were graded. Student responses to a questionnaire were highly positive. They recognized the individual steps in standard peer review, appreciated the complexities involved, and gained first-hand experience of some of the inherent variability involved. The absence of formal presentations and the opportunity to read each other's reports permitted them to study issues in greater depth.
- active learning
- inquiry-based learning
Peer-reviewed publications play a crucial role in the practice of modern science. Much of the knowledge that finds its way into standard scientific texts has been gleaned from research that has been published in this format. Thus, undergraduate science students would get a richer educational experience if they had an opportunity to consider different components of the peer review process. A number of publications (3, 4, 6, 8, 10) have described possible approaches that help students in this regard. These include attempts to get them to recognize the standard format of peer-reviewed publications, provide checklists to help them critically analyze such papers, or even give them opportunities to practice peer review more directly.
In this article, I describe an active learning exercise used in an undergraduate science course. Here, students working in groups took on tasks that required them to seek, synthesize, and integrate information from a variety of sources. Their initial written reports were critically assessed by other members of their own class (peers). These reports were revised based on the critical comments made by their peers. Thus, each student in the class had the opportunity to be not only an author but also a peer reviewer. The specific question I sought to answer was whether such an approach would enhance student understanding of the peer review process. It was my expectation that they would be better able to appreciate the complexities involved. The term “appreciation” is used here in the sense that it includes the notions of “to judge or evaluate the worth, merit, quality, or significance of: comprehend with knowledge, judgment, and discrimination” (1).
The exercise was used with different cohorts of students in two Canadian universities: the University of Calgary in 2004 and 2005 and McMaster University in 2006 and 2007. Both groups of students had some prior exposure to peer-reviewed publications. Students at the University of Calgary had taken an earlier course where they were required to read the primary literature in biomedical sciences (9) and in which the general process of peer review had been discussed. Students at McMaster University had taken an earlier course where they had been required to do a critical appraisal of a peer-reviewed publication in clinical but not basic science. However, neither group had experienced the process by participating in it themselves.
The Course and the Students
Students at both institutions were enrolled in a 4-yr Bachelor of Health Sciences (Hons) undergraduate programme where admissions were based on high academic grades from high school as well as on supplementary applications. At the University of Calgary, the course was a single-term (12 wk) course in social pharmacology taken by students in the first term of their second year at the university. At McMaster University, this exercise was used in an elective course in pharmacology taken by students in their senior years (third and fourth years). The McMaster University course was a two-term course (26 wk) that included molecular, clinical, and social aspects of pharmacology. The peer review exercise was the sole evaluative exercise used for the social pharmacology component. The total number of students who participated varied between the two institutions: 104 students over the 2 yr at the University of Calgary and 44 students at McMaster University. The procedures used and expectations were substantially the same. At both institutions, I was the sole instructor responsible for grading all reports.
Since the course dealt with the social aspects of pharmacology, students were expected to 1) describe the key steps in the process by which drugs are developed for therapeutic use in humans/animals and 2) recognize the complex interactions among individuals, investments, ideals, and institutions that lead to the development of useful drugs for therapeutic purposes.
The exercise described had two principal aims: 1) it gave students an opportunity to explore therapeutic drugs in depth from different perspectives (developers, prescribers, users, and regulators) and 2) it helped students recognize the multiple steps involved in peer review by participating in the process itself.
The exercise is best described in stages (see Fig. 1).
Stage 1: formation of self-selected groups and explorations.
Students were given a list of perspectives to choose from and formed self-selected groups to deal with these. The perspectives were loosely worded and served as triggers for self-directed learning. The following generic statement served as an introduction: “You will explore the social life of drugs from the perspective of one of the following members of society. Select ONE perspective and form a group with others who have selected the same perspective”. A list of the perspectives is shown in Table 1.
This occurred in a standard classroom setting. The choices were listed on the blackboard or on signup sheets to help the students form groups. Students mingled, talking to each other. This was an important phase, and I gave the students ample time to think and rethink. Once the session was over, the list was finalized, and students were informed by e-mail.
After the initial selection, the groups were given considerable license to use the perspectives as starting points for exploration. They were expected to set their own meeting times and work together cooperatively. I met each group at regular intervals to monitor their progress and advise them. I was also available to answer queries either in person, by telephone, or through e-mail.
An interactive session on the peer review process was set up. I gave them a working example of the peer review process by taking them through one of my own publications in experimental pharmacology. They saw the raw data on which the paper was based, the first submission, comments of the editors and reviewers, the revision, the response from the reviewers, the final submission, the acceptance letter, proofs, final publications, and citations to the work. Although this was a single case, the discussions permitted me to elaborate on the origins of peer review and the framework in which editors and reviewers operate and to provide them with ancillary reading material (1a, 2, 7, 11, 12).
Stage 2: online submission of reports for peer review.
All groups were instructed to post their reports online on a specific date. This date was given well in advance and generally came after 6–8 wk, so that students not only had ample time to prepare their reports but also to get comments from me. Groups were given considerable license, so they could choose the format that best suited their perspectives (see below). There was a clear and absolute deadline, and no submissions were accepted after that.
Once all reports were posted, each student was expected to read all other submissions and prepare a written critique of each report. They were given 2 wk. The guidelines given asked them to begin with a brief summary of the report, list the strengths and weaknesses, and make appropriate suggestions for improvement.
Stage 3: peer review session.
These reports were discussed after 2 wk, according to a prearranged schedule. At the peer review session, each report was discussed in turn for 20–25 min. A student whose name was drawn at random served as the lead discussant, stood in front of the class, and read out his or her critique, beginning with a brief summary of the report. The group being assessed also went to the front of the class and answered the questions asked; who answered each question was left up to the group. Once the lead discussant was satisfied, the report was open for discussion.
Each lead discussant received a specific mark for their performance. This mark was also given to all the other students in the class and thus contributed to their overall assessment, giving the lead discussants an incentive to take their task seriously, since they were contributing to the marks received by everyone else. When I taught this course for the first time, I did not allot any marks to this component. When the performance of the students exceeded my expectations, however, I raised the possibility of allotting a small percentage of marks to this activity. This was discussed openly in class, and the students agreed that it would benefit them; the element was therefore retained in all subsequent versions of the course at both the University of Calgary and McMaster University. In both programs, students had participated in problem-based tutorials where self-assessment and peer assessment were openly discussed, so distributing marks to the whole class was not seen as unusual; the students at McMaster University, being more senior, saw it simply as a variation on familiar practice. What was different was that they now owed a responsibility to the whole class rather than to their specific tutorial group. The only issue raised in discussion was whether marks would be lower if someone did not do their work properly, not whether such marks should be given at all. To encourage the peer reviewers to take their task seriously, their written critiques were also graded, and each student received an individual mark for their effort.
Once the reports had been discussed, I gathered all the written critiques and, in my editorial capacity, read through all the peer reviews, highlighted common elements, and wrote a cover letter drawing attention to these as well as raising issues that had not been addressed.
To preserve anonymity, I coded the critiques, and students were told to note these in their rebuttals. The relative merits of anonymous and signed reviews were discussed in the class. Although the students were quite comfortable commenting on each other's reports verbally, they were hesitant to sign their reviews since they were dealing with their own classmates and preferred being anonymous in their critiques.
Stage 4: rebuttal/resubmission.
The final version was submitted for grading within 2 wk after the peer review discussions had taken place. In the resubmission, each group had to write a formal letter rebutting the criticisms and emphasizing the changes they had made. If a group chose not to make a suggested change, they were expected to explain clearly why. These letters were submitted to me separately rather than in a folder for public scrutiny.
To illustrate how the process worked in practice, examples are shown in Figs. 2–4. Four students decided to form a research group seeking to develop a contraceptive vaccine to control wildlife populations. After reading the literature, they decided to develop a vaccine to control the population of rhesus monkeys in urban India. They struggled with many different approaches and finally focused on presenting their report as one to a major company seeking support by giving the company rights to their patent. Figure 2 shows the letter I wrote to the group after their initial submission, which had been discussed in class. Figure 3 shows the comments made on their report by one of the peer reviewers, and Fig. 4 is the rebuttal submitted by the group when they resubmitted their paper. In this instance, the authors chose to tabulate their responses. The numbers and brackets are the codes I used to preserve anonymity of the reviewers. Students were told to mention these in their rebuttals.
The title of the paper that was finally accepted was “The Development of a Novel Contraceptive DNA Vaccination Against the Epididymis Protein Eppin: a DNA-Polyethylenimine Complex Entrapped within a Mannose Bearing Chitosan Microsphere, for Use in the Rhesus Macaque Population.”
The peer review component was valued at 40% of the total marks (grade points) for this course. The three elements evaluated were as follows: 1) the final written report (25%), 2) the peer review reports (10%), and 3) the discussant marks (5%). Each student was expected to write one group report, a number of peer review reports (depending on the number of projects, ranging from 6 to 10), and share a component of the discussant mark. Thus, in situations where there were 10 discussants, each individual's contribution to that shared mark was relatively small (0.5%).
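The mark arithmetic above can be sketched in a few lines. This is an illustrative calculation only: the weights are taken from the text, while the function name and the student scores are hypothetical.

```python
# Weights for the three peer review elements, as percentages of the
# course grade (from the text: 25% + 10% + 5% = 40% in total).
WEIGHTS = {"final_report": 25.0, "peer_reviews": 10.0, "discussant": 5.0}

def peer_review_component(report_score, review_score, discussant_score, n_discussants):
    """Combine the three elements into the 40% peer review component.

    Scores are fractions (0-1) and are hypothetical, for illustration.
    The discussant mark is shared by the class, so each of the
    n_discussants contributes 5%/n to it (0.5% each when n = 10).
    """
    per_discussant = WEIGHTS["discussant"] / n_discussants
    total = (report_score * WEIGHTS["final_report"]
             + review_score * WEIGHTS["peer_reviews"]
             + discussant_score * WEIGHTS["discussant"])
    return total, per_discussant
```

For example, a hypothetical student scoring 80% on the final report and 90% on the peer reviews, with a shared discussant mark of 85% in a class of 10 discussants, would earn 33.25 of the available 40 points, each discussant contributing 0.5% to the shared mark.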
The criteria used for evaluation of the final written report were as follows: content (15%), clarity (5%), and corroboration (5%). These were discussed in class, and students were given some examples.
To frame their peer review reports, each student was given a booklet to use, which listed the reports they were expected to assess (excluding their own). They were told to use the same criteria (content, clarity, and corroboration) in assessing these reports. They were to use a structured format in their assessment that included 1) a brief summary of the report, 2) a list of strengths and weaknesses, and 3) suggestions for improvement. In addition, they were asked to assess the report in terms of their own learning (this was included because much of the material in the reports was not covered in the classroom and therefore gave each group an opportunity to teach their peers something new). When I assessed these reports, I assigned a global score out of 10, taking into consideration all four elements listed above.
In assessing the performance of each lead discussant, similar criteria were used. These included the ability of the student to summarize the report, list strengths and weaknesses, frame suitable questions, and interact with the group being assessed.
At the end of the course, students were given a questionnaire asking them to rate a number of elements of the entire course. A series of statements was given, and students were asked to indicate their agreement/disagreement with each statement on a five-point scale (where 1 = strongly disagree and 5 = strongly agree). Several of the statements dealt with the peer review exercise. In addition, students were asked to comment on any specific elements that should be either kept or eliminated in future versions of the course. Data are shown as means ± SD as well as modes in Table 2. Average scores for all items were compared among the four cohorts using a Tukey-Kramer multiple-comparison test, with the significance level set at P < 0.05.
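As a sketch of how such a multiple comparison can be run, the snippet below applies SciPy's Tukey HSD procedure (which reduces to the Tukey-Kramer procedure when group sizes are unequal) to synthetic cohort scores. The scores are invented for illustration; only the cohort sizes mirror those mentioned in the text.

```python
import numpy as np
from scipy import stats

# Synthetic per-student average scores for four cohorts (invented
# data; group sizes mirror the cohort sizes given in the text).
rng = np.random.default_rng(0)
cohorts = [rng.normal(loc=m, scale=0.5, size=n)
           for m, n in [(3.99, 51), (4.39, 54), (4.53, 28), (4.24, 15)]]

# All pairwise cohort comparisons; with unequal group sizes this is
# the Tukey-Kramer procedure.
res = stats.tukey_hsd(*cohorts)
significant = res.pvalue < 0.05  # boolean matrix of pairwise outcomes
```

The result object holds a matrix of pairwise P values, so each cohort-versus-cohort comparison can be read off directly.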
With one group, I included another question relating to the multiple assessment exercises I had used in the course. I asked them to rate their value as “learning experiences” on a 10-point scale (where 1 = least valuable and 10 = extremely valuable) and to disregard, if at all possible, the particular mark that they had received on each exercise. The peer review reports were in that list. Although I had not used multiple-choice questions, I asked the students to include that exercise as well, since all of them were quite familiar with it. The results comparing their assessment of peer review reports with multiple-choice questions were analyzed using a nonparametric test (Wilcoxon signed rank). The significance level was set at P < 0.05. The results are shown below.
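A minimal sketch of this paired, nonparametric comparison, using SciPy's Wilcoxon signed-rank test: the ratings below are invented stand-ins on the same 10-point scale, not the course data.

```python
from scipy import stats

# Each student rates both exercises on the 10-point scale; the two
# lists are paired by student. Values are invented for illustration.
peer_review_ratings = [9, 10, 8, 9, 7, 10, 9, 8, 10, 9, 8, 9]
mcq_ratings         = [6,  8, 5, 7, 6,  9, 7, 5,  8, 6, 7, 6]

# Paired, nonparametric comparison of the two sets of ratings.
statistic, p_value = stats.wilcoxon(peer_review_ratings, mcq_ratings)
differs = p_value < 0.05  # True if the paired ratings differ significantly
```

The test works on the within-student differences, so it suits exactly this design: the same students rating both exercises.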
Ethical approval was obtained. Permission to conduct this study was given by the Conjoint Health Research Ethics Board of the Faculty of Medicine at the University of Calgary and Dr. D. Harnish, Assistant Dean of the Bachelor of Health Sciences Programme at McMaster University.
RESULTS AND DISCUSSION
This approach was used on four occasions in two different universities. Although minor adjustments had to be made (as mentioned below), the process itself worked well without any serious problems.
Considerable negotiation occurred, with several students each year changing groups more than once during the session. Initially, I had thought of preselecting groups and assigning tasks, but I decided that self-selected groups and tasks would add greater motivation and interest. The strategies used by the students were interesting. Most often, students signed onto tasks that interested them, but on occasion the choices were dictated by other considerations. Some students chose to work with their friends, whereas others chose perspectives to avoid working with those who had already selected a task! In each year, certain perspectives were more popular than others, so group sizes varied from 4 to 10 students. I tried hard to get the groups to be of roughly equal size, but if students were resistant, I did not force them. When groups became quite unwieldy (>7 students), they split themselves, tackling the same task with different approaches. In one instance, two sets of students were interested in dealing with adverse drug reactions from a social perspective: one group framed a report on changing policies toward reporting adverse drug reactions, whereas the other looked at adverse drug reactions from the perspective of a journalist investigating a suspicious death. Not all perspectives were selected in any given year, and I did not force any projects onto the class.
In the interim weeks, I met with individual groups, usually on a rotating basis. At the University of Calgary, since the classes were larger, I adhered more strictly to the meeting schedule. In the first year (2004), I was a lot more prescriptive and had students frame interim reports and keep logs of their learning. This proved to be a major irritant and was not done in subsequent years: students in that first cohort felt that what they wrote in the logs was repetitive and, since I was meeting them regularly anyway, strictly unnecessary. The more informal discussions worked far better, a greater degree of trust was established, and I was able to respond more quickly to problems that emerged. Most often these related to content issues, such as whether a particular avenue of exploration was reaching a dead end. Based on the number of e-mails I received as well as questions asked during and outside the class, I sensed that many students were enthusiastic and highly motivated. There were occasions when I had to suggest better resources, and, on rare occasions, conflicts within groups had to be resolved.
All students were expected to submit their reports online by a specific deadline. The groups were given considerable license to structure their reports to suit the perspectives they had chosen. For instance, the group that explored class actions chose to write their report in the form of a play where the different parties involved discussed the issues. A group that explored equitable access to prescription drugs for seniors framed a newsletter. The group looking at drugs from the perspective of an investigative journalist wrote a series of magazine articles. The group exploring medication errors in a hospital setting wrote a report from multiple perspectives. In the example shown in Figs. 2–4, students explored the possibility of designing a contraceptive vaccine to control the population of rhesus monkeys in India. They had to use available information to frame a fictitious proposal and imaginatively interpolate available information into that proposal.
All groups had to submit extensive bibliographies and annotated references. In several instances, the references were as long as the main submission itself. This was quite a contentious issue, and several students argued that since many mainstream peer-reviewed journals did not require annotated references, they should not be expected to submit them. Nevertheless, several students later admitted that they realized how useful it was to think carefully about what they had gotten out of a particular paper.
Students had only 2 wk to read and critique the reports. Again, this was a sore point. I did mention that the deadlines had been set well in advance and that it was no different from studying for a set examination. The guidelines given to them were quite helpful, and many students submitted very well thought out critiques.
The peer review sessions were quite lively. There never was a paucity of questions, and often I had to curtail the discussions. One student was struck by the fact that the same report evoked so many different queries. This was a welcome opportunity to discuss the variability in peer review. Once the peer review sessions were over, I read through all the critiques. I looked particularly at consistency in comments and paid particular attention to the suggestions made for improvement. I collated the comments, highlighted the specific comments that needed attention, and wrote a cover letter to each group recommending careful scrutiny of the comments made.
In most instances, the resubmissions were a vast improvement. The groups took great care to answer the criticisms leveled and made appropriate changes. The groups that had made the effort to submit a thoughtful, detailed report in the first instance had relatively few changes. Unfortunately, in a few instances, the groups had been rather careless with their first submission and had considerable work to do to submit the revised version. I had warned the students about this repeatedly as the resubmissions were often due toward the end of the term, when students had exams in other courses to deal with. The groups who suffered realized to their cost that they should have heeded my warnings!
The outcomes of this course can be seen from two different perspectives: the students' and my own.
Student feedback was obtained from a questionnaire given at the end of the course. Students were given a series of statements about different components of the course and asked to rate their strength of agreement on a scale of 1 (strongly disagree) to 5 (strongly agree). Several of the questions dealt with the peer review exercise. In addition, other comments were included in a section where I had asked about components of the course that should be either retained or eliminated in subsequent versions. The results shown in Table 2 summarize information provided by students from all four cohorts. The mode, as shown in Table 2, was 5 in each instance, indicating that a substantial number of students in the class strongly agreed with that particular statement.
The perspectives-based approach as well as the strategy of getting students to read each other's reports received high scores. The absence of formal presentations was seen as a more engaging way to learn. Students found critical analysis of the reports to be a challenging task and felt they could better appreciate the complexities of peer review. Several comments, made both personally to me and in writing, amplified these sentiments (see below). As mentioned above (methods), one group of 28 students was asked to assess the several evaluation exercises used in the course. The scores for the peer review reports (8.86 ± 1.21) were significantly different (P < 0.0002) from those given to multiple-choice questions (6.61 ± 2.77). Although the sample size was small, the results suggest that the students found the approach beneficial to their learning.
Given below is a single quote that amplifies the data shown in Table 2: “I REALLY [bold, capital, student's note] felt that a lot was learned by reading other group reports, and that forcing me to do so through the random selection of referees was a brilliant idea. The formatting of the group reports allowed me to learn about specific topics in a mere couple of hours, as all of the research had been done and the relevant information gathered for me to read. On the other hand, had I decided to learn about one of the topics on my own, it probably would have taken me days of just looking around for different bits and pieces of information. I cannot emphasize enough how much I enjoyed the course because of this. In terms of the amount of information, and probably more importantly, the usefulness of the information, I can say I've learned more in this course than probably a few other courses combined.”
Similar comments were made by others. The peer review process received laudatory comments from 11 other students. Fourteen students made specific mention of the usefulness of being able to read other people's reports. One student said that they liked the idea of sharing their work with others in the class. The enhanced engagement fostered by not having formal presentations was specifically commented on by six students. One student noted that s/he “most enjoyed reading all the reports and having no group presentations; time [was] better spent reading their reports than putting together a [PowerPoint] presentation.” Other students echoed these sentiments: “Best part of the course was the reading of reports, critiques and random presentations; superior way to understand content compared to [PowerPoint] presentations.” As can be seen from the data in Table 2, 60.5% of the students (89 students in all) were in strong agreement with this statement. Both groups of students had considerable computer skills and had taken courses where they had to give PowerPoint presentations, so they had first-hand experience of that approach. A small number of students (5 over the 4-yr period) did not care for this approach, and one of them noted: “Do not like the peer review-reports; too long.”
The random selection of referees was the component that received the lowest average score (3.89) and the highest number of 1s and 2s (10% of the class). Unfortunately, none of the students who gave this item a low score had any specific comments to make. On the other hand, one of the students who gave it a high score made the following comment: “I have never presented in an Inquiry class before. I have always admired people who can go up and speak in front of an audience without cue cards or rehearsing beforehand, this experience allowed me to be more comfortable and has given me greater confidence to hone my skills.”
The data shown in Table 2 include information from all four cohorts. When the mean scores for all items were compared across individual cohorts, certain differences were seen. The mean scores for the four cohorts were as follows: 3.99 ± 0.51 (n = 51), 4.39 ± 0.53 (n = 54), 4.53 ± 0.26 (n = 28), and 4.24 ± 0.37 (n = 15). The scores for the first cohort were significantly lower than those for the second and third cohorts (P < 0.05) but not the fourth; the mean scores among the other three cohorts were not significantly different. The lower scores in the first year could have several possible explanations. Although groups were self-selected, there were internal conflicts, and these, to some extent, colored individual perceptions of the course. Another contentious issue that year was my insistence on students handing in a series of interim reports and individual learning logs (mentioned earlier). This was a major source of annoyance: students felt I was being unduly prescriptive, since I was meeting them on a fairly regular basis anyway, and the logs were a needless burden. This component was dropped in subsequent versions, and the more informal feedback sessions proved far more conducive to their learning.
From my perspective, this exercise proved quite valuable. I found that, in general, students took their task seriously, and it was a rare student who submitted a weak critique. More impressive was their ability to stand in front of the class at a moment's notice and defend their criticisms. In a number of instances, sharp exchanges occurred where the peer reviewer probed more deeply until they were satisfied. Furthermore, once the paper was open to full discussion, a forest of hands went up. In practically every instance, I had to curtail discussion in the interests of time. One major reason, of course, was that all students had read the reports carefully and could really understand the discussions that were taking place. The students themselves noted that this approach kept them more engaged than a standard seminar or talk, since they had to come to the sessions fully informed. One student was quite surprised at the diversity of the questions that were posed. To paraphrase his comments, he wondered how everybody had read the same report but took something different from it. I pointed out to him that this often happens in the real situation where reviewers disagree, often seriously, on the same submission.
The approach taken here complements several other strategies to teach students about the peer review process. Seals and Tanaka (10) prepared a checklist that would guide graduate students and postdoctoral fellows. They used the standard introduction, methods, results, and discussion (IMRAD) format as a basis and took the students through the important points that are needed to critique scientific articles. Gillen et al. (3) described an online tutorial that helped nonscience majors read the primary literature in biology. Guilford (4) described an elaborate approach that took students through the peer review process, leading to the writing of a peer-reviewed “term paper” for a course. Here, a term paper in a draft form was given to two other students who wrote reviews, and these reviews were given to the authors, who revised their manuscript. The final product was in the form of a review article to a major journal. The approach used was similar to the one reported here except that there was no open discussion and only two peer reviewers were assigned to each paper. Another point of similarity was that the reviews written by the students were also graded. Students uniformly felt that the experience of writing and revising the article was helpful and practical, and 91% of them responded favorably to the peer review process. Lightfoot (6) used a participatory approach to introduce students to different aspects of peer review. He got students to write a review of a published scientific paper and had the student reviews assessed by other students in the class, who acted as “peer reviewers” using either a double-blind, single-blind, or open review process. One of his stated aims was to get students to “appreciate the ramifications of judging others work and the process of publication.” This approach provided students with direct experiential learning and a realization of the pressures and issues in judging a peer's work. 
The author sought to survey the students, but only 9 of 26 students responded, although 8 of them found the process helpful. My approach was somewhere between the single-blind and open system, since the group being assessed clearly knew who the lead discussant was but had only anonymous comments from the others.
Benos et al. (1a) discussed several aspects of the peer review process and emphasized that it has both technical and ethical dimensions. Much of the discussion in this exercise focused on technical aspects, although occasional ethical issues were raised. They also noted that the reviewer has, in a sense, two obligations: to be an author advocate who helps the author improve the paper and to be an advocate for the journal who ensures that the best possible material gets into print. In this case, I took on the second responsibility but insisted that the peer reviewers make constructive, useful comments rather than frivolous ones. Since students knew that their comments would be graded, most wrote fairly good critiques.
The approach used shares elements with those termed andragogical (5), which are better suited to adult learners: learners who are independent, self-directed, and highly motivated, who build on past experiences, and who value learning that is problem centered rather than subject centered. The students who took this course had taken earlier courses that encouraged them to be self-directed, self-reliant, and task oriented. They were thus able to build on their prior experiences, drawing not only on the scientific knowledge they had gathered in earlier courses but also on skills they had acquired, such as the abilities to seek, synthesize, integrate, and share information. In addition, I took care to explain to them the relevance of learning about the peer review process.
Getting science students to learn about the peer review process should enhance their learning experience, since much of standard scientific knowledge has been published in this format. Furthermore, students enrolled in undergraduate biomedical science courses often enter professional schools or graduate programs where they are expected to write papers that are subject to peer review. The nature of the tasks undertaken in this course led students to explore a variety of issues ranging from drug discovery and adverse drug reactions to patent law, the ethics of clinical trials, and venture capital. Thus, they read articles in journals spanning a variety of disciplines and recognized that peer review, traditionally seen as the hallmark of scientific practice, has been adopted by journals in fields as diverse as economics, history, art, law, and journalism.
In this course, more emphasis was placed on the dynamics of the peer review process (preliminary submission, peer assessment, rebuttal, and resubmission). This was fitting, given the diversity of topics explored, but the final reports were submitted in a variety of formats, which made the peer reviewers' task more difficult. It would have been better had I narrowed the tasks to specific domains, such as molecular or clinical pharmacology, and asked students to write more standardized reports; this would have simplified peer assessment considerably. Since I took on an editorial role, I did not write my own assessment of each first draft but only graded the final submissions. It would have been instructive had I assessed the first submissions as well, so that students could have compared their assessments with mine. Alternatively, I could have used an external assessor to rate the reports and critiques.
The approach used here complements several others that have been reported and can be suitably combined with some of them. The choice of the particular approach used will clearly depend on available resources and circumstances.
No conflicts of interest, financial or otherwise, are declared by the author(s).
The author is grateful to the Canadian taxpayers, who support public universities that permit teachers to have meaningful interactions with their students. This is a rare privilege that is too often taken for granted. The enthusiastic students from McMaster University and the University of Calgary provided a great stimulus.
- Copyright © 2010 the American Physiological Society