Enhancing Program Evaluation in Canadian Medical Education: The Role of Professional Evaluators

Blog post by Jenna Milosek, PhD

Program evaluation is vital to the continuous improvement of postgraduate medical education (PGME) programs across Canada. While many PGME programs engage in evaluation for accreditation and improvement purposes, a range of challenges often undermines these efforts. Program directors, already burdened with numerous responsibilities, frequently struggle with limited time, expertise, personnel, and funding. This raises a critical question: should busy program directors take the lead on evaluation activities, or would it be more beneficial to enlist professional program evaluators?

The Case for Professional Evaluators

Program evaluators, particularly those trained in educational evaluation, can significantly strengthen how medical education programs plan and implement their evaluations. Organizations like the Canadian Evaluation Society (CES), the American Evaluation Association (AEA), and the International Organization for Cooperation in Evaluation (IOCE) offer valuable resources and expertise that can help programs improve their evaluation efforts. By bringing in external evaluators, programs can gain fresh perspectives and insights crucial for a comprehensive evaluation process (Oandasan et al., 2020).

Furthermore, professional evaluators are instrumental in Evaluation Capacity Building (ECB), which helps medical education stakeholders develop the skills to conduct rigorous evaluations. ECB equips individuals with evaluation expertise and promotes a culture of ongoing learning and sustainable evaluation practices within programs (Preskill, 2008; Stockdill et al., 2002).

The Role of Evaluators in Capacity Building

Program evaluators can foster long-term evaluation capacity through various methods, including formal training workshops, technical assistance, and exposure to evaluation best practices (Cousins & Bourgeois, 2014). These efforts ensure that individuals involved in evaluation activities feel confident in their abilities and are equipped with the tools necessary for conducting high-quality evaluations (Preskill & Boyle, 2008).

However, program evaluators also face challenges when working with medical education programs. Staff resistance due to concerns over increased workload or fear of negative findings is a common obstacle (Cousins & Bourgeois, 2014). To overcome this, evaluators must demonstrate methodological rigour and transparency, boosting the credibility of evaluation findings (Cousins & Earl, 1995).

How Program Evaluators Can Enhance Medical Education Evaluation

In a recent study, program evaluators shared insights into how they currently support medical education programs and how they can help improve program evaluation capacity. This study focused on three key research questions:

  1. How are program evaluators currently supporting medical education with program evaluation?
  2. How can program evaluators help medical education stakeholders build their capacities to conduct evaluations?
  3. How can program evaluators help medical education stakeholders utilize evaluation findings effectively?

Supporting PGME Programs with Evaluation

The study found that program evaluators support PGME programs in a variety of ways. Some serve as external consultants, offering guidance when requested by program directors, while others work within an internal evaluation unit at a university, collaborating with multiple departments. Many evaluators use flexible evaluation methods, such as rapid-cycle and developmental evaluations, which can be integrated into the existing workflow of busy PGME programs; these approaches are particularly useful where time and resources are limited.

In addition, program evaluators often assist with accreditation efforts, a key priority for many PGME programs. Accreditation requirements typically demand ongoing program evaluation, and evaluators help program directors design evaluation frameworks that meet these standards.

Building Evaluation Capacity

To build evaluation capacity, program evaluators emphasized the importance of a participatory approach in which stakeholders are actively involved in the evaluation process. Such involvement helps stakeholders understand the value of program evaluation and empowers them to take ownership of it. Evaluators also help stakeholders leverage existing data, such as assessments and feedback, rather than burdening programs with additional data collection. This saves resources and builds confidence in using data for continuous improvement.

Moreover, evaluators advocate for more explicit program evaluation policies from accrediting bodies such as the Royal College of Physicians and Surgeons of Canada (RCPSC). Clear guidelines would help program directors better understand what is expected of program evaluation and would support a more systematic approach.

Using Evaluation Findings Effectively

One of the most critical aspects of program evaluation is ensuring that findings are not merely collected but actually used to inform decision-making and program improvement. Evaluators highlighted several strategies to enhance the use of evaluation findings. First, involving stakeholders early in the evaluation process ensures that their needs and questions guide the evaluation. Engaging stakeholders in shared decision-making and keeping them updated throughout the evaluation fosters a sense of ownership and commitment to acting on the findings. Program evaluators also recommend collaborative or participatory evaluation models, in which stakeholders help interpret the findings and determine next steps. This keeps findings relevant and helps stakeholders feel invested in the evaluation process.

Finally, evaluators encourage medical education programs to share evaluation findings widely, both within the institution and with external stakeholders. This openness can build support for program improvements and demonstrates the value of evaluation in fostering transparency, accountability, and continuous improvement.

Conclusion

This study’s findings underscore the significant value that qualified program evaluators bring to medical education programs. By providing expertise in planning, conducting, and using program evaluation, evaluators help PGME programs overcome common challenges such as time constraints and resource limitations. More importantly, they support the development of a sustainable evaluation culture that can lead to long-term improvements in medical education.

However, challenges remain. The study identified the need for greater resources, clearer policies, and wider adoption of innovative evaluation practices. Leveraging technology, fostering collaboration, and ensuring consistent funding for evaluation activities are critical steps toward enhancing the effectiveness of program evaluation in medical education. As the medical education landscape continues to evolve, embracing the expertise of professional evaluators and integrating evaluation practices into routine program activities will be essential for driving improvements in medical training and, ultimately, in patient care outcomes. By prioritizing evaluation as a tool for continuous learning and improvement, medical education programs can ensure they meet the ever-changing needs of healthcare professionals and their communities.

References

Cousins, J. B., & Bourgeois, I. (2014). Multiple case study methods and findings. New Directions for Evaluation, 2014(141), 25–99.

Cousins, J. B., & Earl, L. M. (1995). The case for participatory evaluation: Theory, research, practice. In Participatory evaluation in education: Studies of evaluation use and organizational learning (pp. 3–20). Falmer Press.

Oandasan, I., Martin, L., McGuire, M., & Zorzi, R. (2020). Twelve tips for improvement-oriented evaluation of competency-based medical education. Medical Teacher, 42(3), 272–277. https://doi.org/10.1080/0142159X.2018.1552783

Preskill, H. (2008). Evaluation’s second act: A spotlight on learning. American Journal of Evaluation, 29(2), 127–138.

Preskill, H., & Boyle, S. (2008). Insights into evaluation capacity building: Motivations, strategies, outcomes, and lessons learned. Canadian Journal of Program Evaluation, 23(3), 147–174.

Stockdill, S. H., Baizerman, M., & Compton, D. W. (2002). Toward a definition of the ECB process: A conversation with the ECB literature. New Directions for Evaluation, 2002(93), 7–26.
