Aug. 23, 2019

Beyond knowledge: Study helps pave the way for maximum student learning in clinical rotations

Multidisciplinary research team tracks the assessment of Dutch vet med students
Students at Utrecht University in the Netherlands during a bovine surgery rotation. Harold Bok

A key part of teaching veterinary medicine students how to apply their learning on the job is sending them on clinical rotations where they’re assessed as they work alongside veterinarians in real-life settings.

A paper investigating how vet med students are given feedback and assessed during their clinical rotations is getting a lot of attention among medical educators for validating an important component of the process.

To explore how well those assessments help the students learn, researchers tracked 962 students at Utrecht University in the Netherlands over 124 weeks, looking at nearly 328,000 data points from 16,575 assessment forms.

“We have a good grip on how to assess knowledge in the first few years of education,” says Dr. Kent Hecker, PhD, associate professor of veterinary medical education at the University of Calgary Faculty of Veterinary Medicine (UCVM) and senior author of the study, “Validity Evidence for Programmatic Assessment in Competency-Based Education.” “But what about evaluating things like clinical reasoning, clinical skills, and professionalism? And especially, how do we — and how should we — assess students for learning and provide feedback within clinical rotations?”

A recent study looked at how the learning of Dutch vet med students is assessed during practicums. Harold Bok

Assessing clinical skills can’t be done with one final exam

A “programmatic approach” to assessment includes taking multiple snapshots of performance and working with the student in the moment, giving written and verbal feedback, and recognizing that different students have different learning requirements. Students need to understand their strengths and challenges in order to progress, and assessments can also guide the curriculum.

“The biggest thing we were really questioning was can you show and can you track learning — or performance — over time within a clinical setting? In order to show somebody is competent for independent practice we need more information than just a final summative exam, a final multiple-choice question or final procedural skill evaluation,” says Hecker, who holds a joint appointment in Community Health Sciences at the Cumming School of Medicine and is a member of the O’Brien Institute for Public Health and the Hotchkiss Brain Institute.

Hecker worked with longtime collaborator Dr. Harold Bok, associate professor in the Faculty of Veterinary Medicine at Utrecht University, and Dr. Thomas O’Neill, PhD, director of UCalgary’s Individual and Team Performance Lab and associate professor in the Department of Psychology in the Faculty of Arts. (The two UCalgary researchers met through the Teaching Scholars program at the Taylor Institute for Teaching and Learning.)

Data showed students starting with varying ability levels all progressed over time

Together they tested whether the programmatic approach to assessment in competency-based education provides meaningful feedback and maximizes student learning. They found that it did. “Statistical analyses showed the greatest amount of variance was due to the student, which is a good thing,” says Hecker. “We found that students start at different levels of abilities. But there was a progression and increase in performance over time.”
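
The kind of analysis Hecker describes can be pictured, purely as an illustration and not as the study’s actual method, with a simple mixed-effects model on simulated assessment scores: a random intercept captures each student’s starting level, a fixed effect of time captures progression, and the share of variance attributable to the student can then be read off. Everything in the sketch below is hypothetical, including the column names, sample sizes and effect sizes.

```python
# Illustrative sketch only: simulated data, hypothetical names and numbers.
# Shows how between-student variance and progression over time can be
# separated with a mixed-effects model (statsmodels).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

n_students, n_weeks = 200, 20
students = np.repeat(np.arange(n_students), n_weeks)  # student ID per record
weeks = np.tile(np.arange(n_weeks), n_students)       # week of rotation

# Each student starts at a different ability level (random intercept),
# and everyone improves a little each week (fixed slope), plus noise.
start_level = rng.normal(0.0, 1.0, n_students)[students]
score = 5.0 + start_level + 0.05 * weeks + rng.normal(0.0, 0.5, len(weeks))

df = pd.DataFrame({"student": students, "week": weeks, "score": score})

# Random intercept per student; the fixed effect of week captures progression.
result = smf.mixedlm("score ~ week", df, groups=df["student"]).fit()
print(result.summary())

# Rough variance decomposition: between-student vs. residual variance.
between = result.cov_re.iloc[0, 0]
residual = result.scale
print(f"Share of variance due to the student: {between / (between + residual):.2f}")
```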

Dutch vet med students at Utrecht University are given feedback during equine clinical rotations. Harold Bok

The paper provides foundational evidence for a programmatic approach and is being hailed as significant by medical education groups such as the Royal College of Physicians and Surgeons and the Association of American Medical Colleges. “It is a piece, or a couple of pieces, of the puzzle of how best to look at this to provide information for learning and decision-making to assess our students within a competency-based framework,” says Hecker. “It also gives us an indication that we could think about using something similar to this in other disciplines.”

The researchers spent more than 18 months analyzing the data. “We actually ran it in two separate sites using two different software packages in order to come up with a convergence of information,” says Hecker, who studies health professions education and the behavioural and neural indicators of performance in learning and decision-making. O’Neill, who studies organizational psychology, brought significant insights into data modelling and interpretation from his field of expertise.

And the work continues. Bok, who studies assessment and expertise development in health professions education, is exploring the possibility of incorporating machine-learning techniques into the assessment program to effectively monitor and guide students’ performance development.

As a result of this paper, the authors were invited to join an interprofessional, global consortium in health professions education with the goal of creating a consensus statement on programmatic assessment.

Kent Hecker wanted to know how vet med students are evaluated on clinical rotations. Rahil Tarique