We investigated how students performed on weekly two-page laboratory reports based on whether the grading rubric was provided to the student electronically or in paper form and on the inclusion of one- to two-sentence targeted comments. Subjects were registered for a 289-student, third-year human physiology class with laboratory and were randomized into four groups related to rubric delivery and targeted comments. All students received feedback via the same detailed grading rubric. At the end of the term, subjects provided consent and a self-assessment of their rubric viewing rate and preferences. There were no differences in laboratory report scores between groups (P = 0.86), although scores did improve over time (P < 0.01). Students receiving targeted comments self-reported viewing their rubric more often than students who received no comments (P = 0.02), but the viewing rate was independent of the rubric delivery method (P = 0.15). Subjects with high rubric viewing rates did not have higher laboratory report grades than subjects with low viewing rates (P = 0.64). When asked about their preference for the future, 43% of respondents preferred the same method again (electronic or paper rubric) and 25% had no preference. We conclude that although student laboratory report grades improved over time, the rate and degree of improvement were not related to rubric delivery method or to the inclusion of targeted comments.
- human physiology
- laboratory reports
- scientific writing
- writing feedback
Efficient and effective writing is an integral component of higher education and of professional life after graduation. Thus, instructors should use evidence-based strategies for improving student writing. It is essential that written feedback be explicit to guide improvement in the most effective manner. Common modes of feedback described in the literature include, but are not limited to, documenting points missed per line, points missed per page, Likert scales regarding quality of writing, underlining errors, rubrics, and written comments. Over the last two decades, the use of digital feedback has increased significantly, and the aforementioned methods of feedback have been provided in both digital and handwritten form (3, 8, 10).
Elwood and Bode (8) suggested that students appreciate handwritten feedback due to the personal connection created between themselves and the instructor; however, issues with legibility often arise. Furthermore, the color of handwritten feedback may affect performance (7). Conversely, although digital feedback is more convenient (1) and environmentally friendly, there have been documented cases of students disliking this form of feedback due to the lack of personal connection between themselves and the instructor (8). In an attempt to determine student preference, Lunt and Curran (13) provided both digital audio and written feedback and concluded that students were at least 10 times more likely to access digital audio feedback compared with written feedback. Students of today are referred to as digital natives (14), a term that describes their increased access to, use of, and reliance on digital devices. The average American college student owns approximately three internet-ready devices (5), and, according to a 2014 EDUCAUSE report (6), 86% of undergraduates own a smart phone. Thus, as educators, we have the opportunity to adapt our feedback practices to best suit students' preferences. A rubric is a grading tool that clearly outlines performance expectations for each of the categories being assessed. Both digital and written rubrics are used to provide students with targeted feedback and prioritized information about how their performance does or does not meet the criteria, so that they can improve their future performance (15).
The benefits of a well-designed rubric include clarity of expectations as well as quick and consistent grading across students and instructors. Furthermore, a rubric divides an assignment into subcategories and provides students with a concise description of high-, medium-, and low-quality work within each category. Presumably, if the rubric has been well designed, it will be clear to students what changes are necessary to improve their writing on the following assignment and ultimately achieve a higher level of performance (17). Moreover, research regarding effective feedback strategies emphasizes three key components: the feedback must focus students on the key knowledge and skills instructors want them to learn, be provided at a time and frequency when students will most likely be able to use it, and be linked to additional practice opportunities for students (9). Research suggests that deliberate practice is a central factor in the development of expert performance in cognitive tasks such as writing (8), a principle that guides the instruction and training of student writers (11). In addition, feedback that is specific to the processes students are engaging in has been associated with deeper learning (2).
Instructors are often compelled to provide additional comments beyond completing the rubrics; however, the literature suggests that students rarely read comments (13). Instructors tend to add comments to an already explicit feedback system out of a desire to further assist the student. However, research suggests that excess feedback tends to overwhelm students and fails to communicate which aspects of their performance deviate most from the goal and where they should focus their future efforts (12, 16). Additional comments may be advantageous for some students (18), although it is challenging to accurately assess the types of comments provided, whether students actually read the comments, and whether additional comments actually lead to improved performance over time. Comments range from congratulatory, instructive, and encouraging to, at times, ambiguous and difficult to understand. Assessing the variation in instructors' responses to student writing, Zamel (18) stated that comments usually take the form of abstract and vague prescriptions and directives that students find difficult to interpret.
The purpose of the present study was to determine whether paper or electronic rubric delivery or the addition of targeted written comments influences future laboratory report grades in an introductory, 300-level, Human Physiology course typically taken in the third year of the Bachelor of Science degree for Human Physiology majors. Specifically, we aimed to answer the following questions: 1) Do targeted (one- or two-sentence) comments beyond the rubric lead to increased success on future laboratory reports? 2) Is there a relationship between the rubric delivery method (paper vs. electronic) and success on future laboratory report assignments as determined by the laboratory report grades? 3) How does the method of rubric delivery and inclusion of additional comments affect the number of times students report they have viewed their rubrics? 4) Is there a relationship between the self-reported rate of rubric viewing and laboratory report grades? and 5) Do students prefer paper or electronic rubrics?
Our introductory Human Physiology I, five-credit, combined lecture and laboratory course, typically taken in the third year, included 289 students in the fall term of 2014. Eighty-five percent of the students enrolled were Human Physiology majors; 59% were of junior standing and 33% were of senior standing. The large class met twice per week for 80 min, and each student was registered for a 110-min laboratory that met once per week during the 10-wk term. Before attending their laboratory section, students completed an online prelaboratory quiz in the Blackboard learning management system (LMS) relating to the laboratory activities for that week. During the laboratory, students developed hypotheses, identified variables, and performed noninvasive human experiments that were aligned with the weekly course learning objectives. After their data collection, students worked in groups to complete the prepared discussion questions, which connected the weekly course learning objectives with the experiment conducted. A two-page laboratory report was due the following week with a standard format (introduction, methods, results, discussion, and references) outlined in the two-page grading rubric (Fig. 1). The specific discussion concepts listed in the rubric changed weekly, whereas all other components of the rubric remained the same throughout the term. Students submitted their reports electronically via the course Blackboard LMS.
Students self-enrolled in 1 of the 14 laboratory sections, which were taught by 7 different graduate student laboratory instructors, with 2 sections/instructor. For each instructor, students in one section received electronic rubrics and students in the other section received paper rubrics. Within each laboratory section, half of the students received the completed rubric and half of the students received the completed rubric plus one or two sentences of personalized comments related to improvements they could make on their laboratory report. Table 1 shows representative sample comments from each of the seven laboratory instructors.
Therefore, each of the 289 students in the class was assigned to one of the following four groups: 1) electronic rubric alone, 2) electronic rubric with comments, 3) paper rubric alone, and 4) paper rubric with comments (Fig. 2). Each of the seven laboratory instructors had roughly equal numbers of students in each of these four groups (average students per instructor: 43, range: 34–47).
Electronic rubrics were available via the Blackboard LMS as soon as the laboratory instructor graded their laboratory report, which could vary from the day it was turned in through the following laboratory period. Paper rubrics were handed back to students during the following laboratory period. Table 2 shows the timing across the 10-wk term regarding the laboratory experiments, due dates of the laboratory reports, and the latest date when the graded rubric was available to the students for feedback.
During week 10 of the term, after completion of all laboratories, students were provided with a Qualtrics online questionnaire to complete. The questionnaire also served as a mechanism for students to give permission for their data to be included in this Institutional Review Board-approved study. Of the 198 students who took the questionnaire, 185 students consented to have their grades and survey answers included in this study. The questionnaire also asked students to indicate the percentage of time they viewed their graded rubrics across the term and their preference for an electronic rubric or a paper rubric.
Two-way ANOVA was used to compare laboratory report scores for laboratories 2, 4, and 7, with the factors time (laboratory number) and rubric modality (paper vs. electronic). A second two-way ANOVA was similarly used to compare scores for laboratories 2, 4, and 7 across time and between comment conditions (rubric alone vs. rubric + comments). A repeated-measures model was not used because some students were missing scores.
Students who indicated on the survey in week 10 that they viewed the rubric 80–100% of the time across the term were placed in a high-viewing group, whereas those who indicated that they viewed the rubric 0–60% of the time were placed in a low-viewing group. A χ2-analysis was used to test for differences in viewing rates between the rubric alone and rubric + comments groups. A similar χ2-analysis was used to determine whether viewing rates differed between the paper and electronic rubric groups. Two-way ANOVA was used to compare laboratory report scores across all seven laboratories between the high- and low-viewing groups.
Finally, a χ2-analysis investigated the potential relationship between the rubric delivery method that the students encountered throughout the term (paper vs. electronic) and their preference for paper versus electronic rubrics in the future. For all statistical analyses, significance was set at P < 0.05.
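The χ2-analyses above can be sketched with scipy's test of independence on a contingency table. The counts below are illustrative assumptions, not the study's data.

```python
# Sketch of a chi-square test of independence between feedback condition
# and rubric viewing group; the counts are illustrative, NOT the study's.
from scipy.stats import chi2_contingency

# Rows: rubric alone vs. rubric + comments.
# Columns: number of low-viewing vs. high-viewing students.
observed = [
    [40, 50],  # rubric alone: low viewers, high viewers
    [25, 70],  # rubric + comments: low viewers, high viewers
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```

For a 2 × 2 table, `chi2_contingency` applies Yates' continuity correction by default; with these illustrative counts the association is significant at α = 0.05.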
This study was approved by the university's Institutional Review Board.
Laboratory report scores improved over time when we compared the scores from laboratories 2, 4, and 7 (P < 0.01), but there were no differences in laboratory report scores between the paper or electronic rubric groups (P = 0.86; Fig. 3). Similarly, there were no differences in laboratory report scores between the groups that received targeted comments versus no additional comments (P = 0.24; Fig. 4).
The method of rubric delivery (paper vs. electronic) did not affect students' self-reported rubric viewing rate (P = 0.15; Table 3). In contrast, the group that received targeted comments included significantly more students who self-reported viewing the rubric 80–100% of the time (high-viewing group) than the group that received no additional comments (P = 0.02; Table 4). Interestingly, students with high self-reported viewing rates (i.e., viewed the rubric ≥80% of the time) did not perform better on their laboratory reports than students with low self-reported viewing rates (i.e., viewed the rubric ≤60% of the time, P = 0.64; Fig. 5).
Student preference for the future rubric delivery method was not dependent on the type of rubric they received during the term (P = 0.72; Table 5). In fact, 43% of students in both the paper rubric and electronic rubric groups wished to receive the same type of rubric the following term.
The goals for our Human Physiology course include students being able to perform a hypothesis-driven experiment, present the data graphically, use the results to support or refute the hypothesis, and incorporate the results into a discussion of the underlying physiology, all while using appropriate scientific language and formatting conventions. To meet these goals, students perform seven experiments during the 10-wk term and write a two-page laboratory report for each, following the expectations outlined in the grading rubric (Fig. 1). The course has 14 laboratory sections of 24 or fewer students each. Seven graduate student laboratory instructors each teach and grade laboratory reports from two laboratory sections.
Over the last 5 yr in the course, we have moved from hardcopy submission of the weekly laboratory reports (which were returned with feedback applied directly to the hardcopy laboratory report) to online submission of laboratory reports with an online rubric completed within the LMS gradebook. When paper laboratory reports were submitted, graders often spent hours providing comments in the margins only to discover students making the same errors week after week. This experience mirrors Lunt and Curran's (13) assertion that students are not happy with written feedback applied to their papers and therefore do not incorporate it. When we developed our laboratory report grading rubric, the goal was to clearly outline expectations and decrease the need for additional comments in the margins of students' papers. When we moved to online student submission and rubric completion, it was not uncommon for a student to ask late in the term where to find their grade or online rubrics. Therefore, we began to wonder if the efficiency of the online grading model was leading to fewer students actually receiving the feedback, since it was up to the student to seek out the feedback online outside of class time, as opposed to having it delivered in hardcopy and in person. Ultimately, our goal was for students to meet the learning goals described above, which we believed would occur with a combination of weekly practice and directed feedback, but we were unsure if our new online rubric delivery method or limited additional written comments were helping or hindering student performance over time.
As shown in Figs. 3 and 4, laboratory report scores improved over time, but there was no effect of rubric delivery method or additional comments. We interpret this to mean that students improved their performance with practice, regardless of feedback modality. These results suggest that we can safely select the rubric delivery method that is most efficient, environmentally sound, and timely. Anglin et al. (1) tested the efficiency of grading with an electronic rubric compared with a paper rubric and reported it was 300% faster to grade an online rubric. Our results also suggest that targeted additional comments beyond the rubric did not affect performance on future laboratory reports. As described by Elwood and Bode (8), written feedback was used by students to only a modest degree, and very few students took action based on the written feedback, even during revision.
To determine whether the score on laboratory report 1, in combination with the mode of feedback, might have affected the improvements in laboratory report scores over time, student scores on laboratory report 1 were divided into tertiles (low-, middle-, and high-performing groups) within each of the four feedback groups. There was no significant interaction or main effect of feedback group on laboratory report scores over time within any performance tertile. However, there was a main effect of time, indicating that all students significantly improved over time, regardless of the feedback group they were placed into. Interestingly, in the low-performing tertile, when we compared the increase in laboratory report score from laboratory 1 to laboratory 2, there was a significant interaction effect (P = 0.02; data not shown). Neither this interaction nor significant main effects were noted in the middle- or high-performing tertiles for laboratory 1 to laboratory 2. While this finding was interesting, students did not receive feedback on laboratory report 1 until laboratory report 2 was submitted. Thus, we cannot conclude that the increase in laboratory report score was due to the method of feedback. Therefore, in the future, we will reserve additional comments for special circumstances that warrant additional help or support beyond the rubric. Moreover, we will use the electronic rubric embedded in the LMS to provide feedback on future laboratory reports.
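The tertile split described above can be sketched in a few lines of Python. The scores below are simulated, and the class size of 185 simply mirrors the number of consenting students; none of this is the study's actual data or code.

```python
# Sketch of dividing laboratory report 1 scores into performance tertiles
# with pandas; the scores are simulated (NOT the study's data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
scores = pd.Series(rng.normal(loc=82, scale=9, size=185), name="lab1_score")

# qcut ranks the scores and assigns roughly equal-sized tertile groups.
tertile = pd.qcut(scores, q=3, labels=["low", "middle", "high"])

print(tertile.value_counts())
```

Each resulting group can then serve as a between-subjects factor in the ANOVA models described in the methods.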
We were surprised to discover that self-reported viewing rates were not affected by the rubric delivery method (Table 3). One of our concerns when converting from paper to electronic rubrics was that it takes time and effort for students to access online feedback. When hardcopy laboratory reports were returned to the students during the following laboratory period, we could watch students look through their rubrics immediately. We wondered whether students were doing this on their own, since this activity happens outside the classroom, where we cannot observe their actions. Our data suggest that our concerns were unfounded, since there were no statistically significant differences in students' self-reported viewing rates between the paper and electronic rubric groups. Perhaps the digital natives who make up the majority of our students do not experience this as a barrier. As Prensky (14) and Chen and Denoyelles (5) have indicated, access to online material is not a barrier for digital natives, and most students have multiple internet-ready devices in their possession. The majority of courses on our campus use the same LMS, where students access course notes, upload assignments, and view their grades. Therefore, it is possible that students access this online resource daily and find it convenient to view their rubric any time, any place, while working on their next laboratory report. Unfortunately, we were unable to electronically track student viewing rates of online rubrics due to limitations of the LMS.
Another unexpected finding was that self-reported viewing rates were affected by the addition of targeted comments beyond the rubric. Students were asked via an online Qualtrics survey, “What percentage of the time did you read over your completed laboratory report rubrics this term?” Responses were grouped into high viewing (≥80%) and low viewing (≤60%). Students who were provided targeted comments along with their rubric were more likely to fall in the high-viewing group than students who did not receive comments. It is possible that the individualized and personal nature of the written comments was of interest to students and that, without them, students were not compelled to look at their rubric week after week. In the examples shown in Table 1, note that one of the comments includes an invitation to contact the laboratory instructor for more information and another includes the student's name and words of positive encouragement. Students may look at their rubric because they are curious to see what their laboratory instructor wrote about their work. A potential question for future investigation is: When students view their rubrics, do they view the entire rubric or just the targeted comments?
Regardless of the precise reason students chose to view the rubric more or less often, it is important to note that higher viewing did not equate to improved performance. By laboratory report 3, the high- and low-viewing groups had earned averages of 90 ± 8% and 90 ± 7%, respectively. As shown in Fig. 5, student scores plateaued after laboratory report 3. This suggests that it is not necessary to have students write seven laboratory reports across the term, since practicing the desired skills three times leads to maximal performance. Based on these findings, we plan to reduce the number of laboratory reports to a total of four across the 10-wk term in the future. It is possible that students viewed their rubrics at the beginning of the term, when grades were lower, and that additional rubric viewing throughout the term was not necessary to improve grades, since students had already figured out the expectations and how to be successful.
Interestingly, regardless of the rubric modality they started with (paper or electronic), when asked which they would prefer in the future, 43% of students wanted the same rubric delivery method in the future, 29–34% wanted the opposite, and 23–28% did not have a preference (Table 5). Perhaps this has more to do with human nature than with rubric delivery preferences. We suspect that, regardless of the topic, close to half of the students would like consistency from one term to the next, 30% feel like the “grass is greener on the other side,” and 25% do not care. Therefore, we think these data also support our decision to use electronic rubrics in the future.
A final benefit of using electronic rubrics is the timing of the feedback. Typically, paper rubrics are returned the following class period, at the same time the next report is due, whereas electronic rubrics can be available as soon as the report is graded. Although there were no differences between the performance of our electronic versus paper rubric groups, it is possible that an individual student who did poorly on the first laboratory report could benefit from receiving feedback before completing laboratory report 2.
In conclusion, during this investigation of student performance on weekly laboratory reports in an introductory Human Physiology class designed for third-year students, we learned a number of things that have impacted our course planning and decision making. Since there were no differences in student performance on laboratory reports between those receiving a paper or electronic rubric, we have elected to use electronic rubrics due to their efficiency. Similarly, since the addition of targeted written comments also did not affect student performance, our laboratory instructors now reserve written comments for situations they believe warrant direction beyond the feedback supplied by a completed rubric. In addition, we will continue to encourage our laboratory instructors to provide generalized verbal or e-mail feedback to their whole laboratory group regarding common mistakes encountered. Our students' self-reported rubric viewing practices indicated that the addition of written comments was associated with higher viewing rates, although this did not translate to changes in laboratory report performance. In fact, both high- and low-rubric viewing groups improved steadily across laboratory reports 1, 2, and 3 but then reached a plateau at the high-achieving average of 90%. This may have been one of the most important findings of this investigation. Based on the data we collected, instead of assigning seven laboratory reports across the 10-wk term, we now assign four laboratory reports, as our data suggest that most students will have developed the skills necessary to successfully write a laboratory report by their fourth opportunity to practice. The three laboratory reports that we removed were replaced with more clinically focused written assignments that relate to the experiments the students completed during the laboratory. 
Since our students are mostly prehealth, they seem to be enjoying the opportunity to explore connections to pathologies rather than have additional practice writing formal laboratory reports. Finally, we were glad to note that the majority of students were happy to continue with whichever rubric delivery method they had started with, leaving us with a clear conscience to move to all electronic rubrics in the future.
No conflicts of interest, financial or otherwise, are declared by the author(s).
Z.S.C., A.D.H., and S.M.D. performed experiments; Z.S.C., J.E.M., and S.M.D. analyzed data; Z.S.C., G.P.W., J.E.M., A.D.H., and S.M.D. interpreted results of experiments; Z.S.C. and S.M.D. drafted manuscript; Z.S.C., G.P.W., J.E.M., A.D.H., and S.M.D. edited and revised manuscript; Z.S.C., G.P.W., J.E.M., A.D.H., and S.M.D. approved final version of manuscript; G.P.W. and S.M.D. conception and design of research; J.E.M. and S.M.D. prepared figures.
- Copyright © 2016 The American Physiological Society