Enhancing Program Evaluation in Canadian Medical Education: The Role of Professional Evaluators

Blog post by Jenna Milosek, PhD

Program evaluation is vital to the continuous improvement of postgraduate medical education (PGME) programs across Canada. While many PGME programs engage in evaluation efforts for accreditation and improvement purposes, various challenges often hinder the effectiveness of these efforts. Program directors, already burdened with numerous responsibilities, often struggle with limited time, expertise, personnel, and funding. This raises a critical question: should busy program directors take the lead in evaluation activities, or would it be more beneficial to enlist professional program evaluators?

The Case for Professional Evaluators

Program evaluators, particularly those trained in educational evaluation, can significantly enhance the planning and implementation of evaluation in medical education programs. Organizations like the Canadian Evaluation Society (CES), the American Evaluation Association (AEA), and the International Organization for Cooperation in Evaluation (IOCE) offer valuable resources and expertise that can help programs improve their evaluation efforts. By bringing in external evaluators, programs can gain fresh perspectives and insights crucial for a comprehensive evaluation process (Oandasan et al., 2020).

Furthermore, professional evaluators are instrumental in Evaluation Capacity Building (ECB), which helps medical education stakeholders develop the skills to conduct rigorous evaluations. ECB equips individuals with evaluation expertise and promotes a culture of ongoing learning and sustainable evaluation practices within programs (Preskill, 2008; Stockdill et al., 2002).

The Role of Evaluators in Capacity Building

Program evaluators can foster long-term evaluation capacity through various methods, including formal training workshops, technical assistance, and exposure to evaluation best practices (Cousins & Bourgeois, 2014). These efforts ensure that individuals involved in evaluation activities feel confident in their abilities and are equipped with the tools necessary for conducting high-quality evaluations (Preskill & Boyle, 2008).

However, program evaluators also face challenges when working with medical education programs. Staff resistance due to concerns over increased workload or fear of negative findings is a common obstacle (Cousins & Bourgeois, 2014). To overcome this, evaluators must demonstrate methodological rigour and transparency, boosting the credibility of evaluation findings (Cousins & Earl, 1995).

How Program Evaluators Can Enhance Medical Education Evaluation

In a recent study, program evaluators shared insights into how they currently support medical education programs and how they can help improve program evaluation capacity. This study focused on three key research questions:

  1. How are program evaluators currently supporting medical education with program evaluation?
  2. How can program evaluators help medical education stakeholders build their capacities to conduct evaluations?
  3. How can program evaluators help medical education stakeholders utilize evaluation findings effectively?

Supporting PGME Programs with Evaluation

The study found that program evaluators play a critical role in supporting PGME programs in a variety of ways. Some evaluators serve as external consultants, offering guidance when requested by program directors, while others work as part of an internal evaluation unit at universities, collaborating with multiple departments. Many evaluators use flexible evaluation methods, such as rapid-cycle and developmental evaluations, which can be seamlessly integrated into the existing workflow of busy PGME programs. These approaches are particularly beneficial in environments where time and resources are limited. In addition, program evaluators often assist with accreditation efforts, which are a key priority for many PGME programs. Accreditation requirements typically demand ongoing program evaluation, and evaluators help program directors design evaluation frameworks that meet these standards.

Building Evaluation Capacity

To build evaluation capacity, program evaluators emphasized the importance of a participatory approach, where stakeholders are actively involved in the evaluation process. This approach helps stakeholders understand the value of program evaluation and empowers them to take ownership of the process. Evaluators also help stakeholders leverage existing data—such as assessments and feedback—rather than burdening programs with the need to collect additional data. This approach saves resources and builds confidence in using data for continuous improvement. Moreover, evaluators advocate for more explicit program evaluation policies from accrediting bodies such as the Royal College of Physicians and Surgeons of Canada (RCPSC). Clear guidelines would help program directors better understand the expectations around program evaluation and facilitate a more systematic approach.

Using Evaluation Findings Effectively

One of the most critical aspects of program evaluation is ensuring that findings are not just collected but actually used to inform decision-making and program improvement. Evaluators highlighted several strategies to enhance the utilization of evaluation findings. First, involving stakeholders early in the evaluation process ensures that their needs and questions guide the evaluation. Engaging stakeholders in shared decision-making and keeping them updated throughout the evaluation fosters a sense of ownership and commitment to using the findings. Program evaluators also recommend collaborative or participatory models for evaluation, where stakeholders are involved in interpreting the findings and determining the next steps. This ensures that findings are relevant and helps stakeholders feel invested in the evaluation process.

Finally, evaluators encourage medical education programs to share evaluation findings broadly within the institution and with external stakeholders. Such openness promotes accountability, can generate broader support for program improvements, and demonstrates the value of evaluation in fostering transparency and continuous improvement.

Conclusion

This study’s findings underscore the significant value that qualified program evaluators bring to medical education programs. By providing expertise in implementing and utilizing program evaluation, evaluators help PGME programs overcome common challenges such as time constraints and resource limitations. More importantly, they support the development of a sustainable evaluation culture that can lead to long-term improvements in medical education.

However, challenges remain. The study identified the need for greater resources, more transparent policies, and wider adoption of innovative evaluation practices. Leveraging technology, fostering collaboration, and ensuring consistent funding for evaluation activities are critical steps toward enhancing the effectiveness of program evaluation in medical education. As the medical education landscape continues to evolve, embracing the expertise of professional evaluators and integrating evaluation practices into routine program activities will be essential for driving improvements in medical training and, ultimately, in patient care outcomes. By prioritizing evaluation as a tool for continuous learning and improvement, medical education programs can ensure they meet the ever-changing needs of healthcare professionals and their communities.

References

Cousins, J. B., & Bourgeois, I. (2014). Multiple case study methods and findings. New Directions for Evaluation, 141, 25–99.

Cousins, J. B., & Earl, L. M. (1995). The case for participatory evaluation: Theory, research, practice. In Participatory evaluation in education: Studies of evaluation use and organizational learning (pp. 3–20). Falmer Press.

Oandasan, I., Martin, L., McGuire, M., & Zorzi, R. (2020). Twelve tips for improvement-oriented evaluation of competency-based medical education. Medical Teacher, 42(3), 272–277. https://doi.org/10.1080/0142159X.2018.1552783

Preskill, H. (2008). Evaluation’s second act: A spotlight on learning. American Journal of Evaluation, 29(2), 127–138.

Preskill, H., & Boyle, S. (2008). Insights into evaluation capacity building: Motivations, strategies, outcomes, and lessons learned. Canadian Journal of Program Evaluation, 23(3), 147–174.

Stockdill, S. H., Baizerman, M., & Compton, D. W. (2002). Toward a definition of the ECB process: A conversation with the ECB literature. New Directions for Evaluation, 2002(93), 7–26.

 

Using SurveyMonkey Audience to Get Respondents for Medical Education Research: A Guide

Blog post by Katherine Moreau, PhD

Medical education research (MER) often requires access to specific populations, ranging from medical students to practicing physicians. One of the most challenging aspects of this research is recruiting respondents who are both qualified and willing to participate in surveys. Traditional recruitment methods can consume considerable time and resources. Tools such as SurveyMonkey Audience can streamline this process by providing a quick, reliable, and cost-effective way to gather data from your target audience. In this blog, I explore how you can leverage SurveyMonkey Audience to collect data for your MER.

What is SurveyMonkey Audience?

SurveyMonkey Audience is a service that allows you to survey a targeted group of people quickly. It utilizes a network of millions of respondents, and you can choose participants based on various criteria, such as demographic characteristics, location, professional expertise, and more. This feature makes it ideal for MER. 

Why Use SurveyMonkey Audience for Medical Education Research?

Here are some of the key reasons why SurveyMonkey Audience is an excellent tool for your MER:

  1. Access to a Broad Pool of Respondents: SurveyMonkey Audience provides access to respondents from diverse demographics. For MER, you can survey medical professionals, students, and faculty from different institutions and specialties without needing to recruit them personally.
  2. Targeted Sampling: One of the cool features of SurveyMonkey Audience is its ability to segment respondents based on specific criteria (e.g., educational background, age, professional experience, and geographic location). For MER, this means that you can target specific groups:
    • Medical students
    • Residents
    • Attending physicians with teaching experience
    • Healthcare professionals working in particular settings (e.g., hospitals, clinics, rural areas)
  3. Cost and Time Efficiency: Traditional methods of recruiting participants for surveys can be time-consuming and costly. SurveyMonkey Audience allows you to pay per response, which means you can control your budget. Additionally, you can obtain responses quickly (e.g., my last survey collected over 600 responses within 24 hours).
  4. Data Security and Privacy: SurveyMonkey Audience complies with major data privacy regulations, including HIPAA, to protect respondent information. This is especially important if you are dealing with sensitive or personal information. 

Steps to Use SurveyMonkey Audience for Your Research

  1. Define Your Research Objectives and Target Audience
    Before you start, it’s essential to clearly understand what you’re trying to measure and who your target audience is. In MER, this could include understanding:

    • The effectiveness of a particular teaching/assessment method
    • The current challenges faced by medical students

Knowing exactly who you need to survey (e.g., 3rd-year medical students, attending physicians) will help you accurately set up the survey.

  2. Create a Survey on SurveyMonkey
    Next, create the survey using SurveyMonkey’s platform. Ensure your questions are clear, concise, and well-structured.
  3. Select Your Audience
    Once your survey is ready, it’s time to choose your respondents using SurveyMonkey Audience. You can select from various filters to ensure you’re reaching the right group:

    • Demographics: Age, gender, education level, and geographic location
    • Profession: You can choose respondents based on their profession (e.g., medical students, nurses, doctors, the general public)
  4. Set Your Budget and Timeline
    SurveyMonkey Audience allows you to control your budget by choosing how much you will pay per response. Depending on the level of targeting and complexity, the cost per respondent may vary, but you will receive transparent pricing before launching the survey. You can also specify your desired timeline to get the responses within a timeframe that fits your research needs.
  5. Launch and Monitor Responses
    After launching your survey, monitor the incoming responses and ensure they align with your target audience criteria. You can check real-time data and response rates in the dashboard to confirm that you're getting sufficient participation; if you prefer to monitor progress programmatically, see the sketch below.
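
If you would rather track progress programmatically than through the dashboard, SurveyMonkey also exposes a REST API (v3). The following is a minimal sketch in Python, assuming you have generated an API access token and know the ID of the collector attached to your Audience project; the endpoint path and the "total" field are based on SurveyMonkey's public v3 API but should be verified against the current documentation, and the token and collector ID shown are placeholders.

    import requests

    # Placeholders (hypothetical values): generate a token in your SurveyMonkey
    # account settings and use the collector ID created for your Audience project.
    API_TOKEN = "YOUR_ACCESS_TOKEN"
    COLLECTOR_ID = "123456789"

    headers = {"Authorization": f"Bearer {API_TOKEN}"}

    # List responses for the collector; the paginated payload includes a "total"
    # count of responses received so far. Verify the endpoint and field names
    # against SurveyMonkey's current v3 API documentation before relying on them.
    url = f"https://api.surveymonkey.com/v3/collectors/{COLLECTOR_ID}/responses"
    reply = requests.get(url, headers=headers, params={"per_page": 1})
    reply.raise_for_status()

    print("Responses collected so far:", reply.json().get("total", 0))

Rerunning a script like this (by hand or on a schedule) gives you a quick check on whether responses are arriving at the pace you budgeted for, without logging into the dashboard.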

 

 

The Secret Life of Assessments: What Medical Students Think!

Blog post by Katherine Moreau, PhD

Ah, the world of medical school: long hours, lots of caffeine, and… assessments. For anyone who’s stepped into a white coat and faced the whirlwind of clinical rotations, you know assessments are part of the gig. But what do medical students think about all these assessments? Well, buckle up because we’re about to dive into the world of Daily Encounter Cards, Mini-CEXs, and OSCEs straight from the source: the students themselves!

What Are Clinical Assessments?

Before we get into the nitty-gritty, let’s take a moment to talk about what all these assessments are. During clinical rotations, medical students are assessed in various ways using tools such as:

  • Daily Encounter Cards (DECs): A quick, everyday check-in.
  • Mini-Clinical Evaluation Exercises (mini-CEXs): Think mini exams focusing on one clinical skill at a time.
  • Objective Structured Clinical Examinations (OSCEs): The big leagues. These involve multiple stations where students show off their clinical abilities.
  • Multisource Feedback (MSF): Feedback from a variety of sources to give students a broader picture of their performance.
  • Online Portfolios: A digital record of everything, from feedback to reflections.

Despite all the professional development clinical educators receive in giving effective feedback, little research has been done on how students feel about these assessments. So today, we are diving into what medical students think about these assessments and how they impact their education.

Three Big Takeaways from the Students

Interviews with students revealed three major themes.

  1. Clinical Assessments = A Boost for Learning and Skills

Most students think clinical assessments are helpful in learning. Here’s why:

  • Forming a Clinical Approach: Students say feedback helps them organize their thoughts and develop clinical problem-solving skills.
  • Finding Strengths and Weaknesses: Regular assessments help pinpoint what they are doing well and where they need improvement.
  • Daily Feedback = Instant Fixes: Daily encounter cards and mini-CEXs give students real-time feedback, allowing them to adjust their approaches.
  • Self-Study Motivation: Assessments are not just about getting grades. They often spark curiosity and self-study, especially when a preceptor points students toward areas they should read more about.
  2. But… They Don’t Do Much for Licensing Exams (Except OSCEs)

Now, here’s where things get interesting. While students find these assessments helpful for learning, they feel the assessments do little to prepare them for licensing exams. The students in this study said only the OSCEs seemed to have any direct correlation with the licensing exams. But wait, there’s more! Here’s where the clinical assessments make a real difference:

  • Residency Applications: The feedback from these assessments is not just filed away; it plays a role in residency applications. Comments from preceptors are summarized in a document (the Medical Student Performance Record, or MSPR) and sent to residency programs. Those seemingly small comments can influence students’ chances of getting into a residency program.
  • Reference Letters: Students use assessments and feedback to determine which preceptors are good reference letter writers for residency applications.
  3. Clinical Assessments Have Pros and Cons

Let’s be honest: no system is perfect. Students quickly pointed out the ups and downs of the various clinical assessment tools they used.

The Pros:

  • Fairness: They appreciate that the assessments are structured and give everyone a fair shot.
  • Regular Feedback: A formalized system for regular feedback ensures students do not go months without knowing how they are doing.
  • Peer Comparisons: Students like the assessments that let them see how they stack up against their peers.

The Cons:

  • Inconsistent Feedback: Not all preceptors provide the same quality of feedback, and students often receive vague or generic comments that do not offer much to work with.
  • Time Delays: Sometimes feedback comes too late to be useful for immediate improvement.
  • Lack of Direct Observation: Students noted that assessments may miss key aspects of their clinical performance without direct observation.

The Hidden Impact of Clinical Assessments

Overall, clinical assessments shape medical students’ learning and clinical skills. They help students reflect on their performance and may guide their studies. However, there is room for improvement. Assessment should provide constructive feedback. There also needs to be consistency across different preceptors. Ultimately, clinical assessments do not just affect how students perform on their current rotations; they also influence their future careers.

 

Hands-On Healthcare: What Montessori Brings to the Training Table

Blog post by Katherine Moreau, PhD

The Demand for Well-Educated, Clinically Proficient Healthcare Professionals

The demand for well-educated and clinically proficient healthcare professionals is at an all-time high, and health professions educators are under increasing pressure to ensure that graduates achieve the goals and objectives of their programs. Today’s healthcare systems require professionals who are technically skilled, think critically, solve problems, and work across disciplines. Innovative teaching methods and strategies are therefore essential, because current approaches may not sufficiently meet these demands: many healthcare education programs focus heavily on theoretical instruction and standardized assessments, approaches that may fail to develop the clinical judgment, interpersonal skills, and lifelong learning habits required in practice. Although the Montessori approach is widely recognized for its effectiveness in various educational settings, its application in the training of healthcare professionals remains underexplored.

The Montessori Approach

Maria Montessori, the founder of the Montessori educational approach, was among the first women in Italy to attend medical school and to practice medicine. Her career began in psychiatry and pediatrics before she dedicated herself to education (Marshall, 2017). Originally designed for children with intellectual disabilities, the Montessori method has evolved over the past century and is now used globally. It features multi-age, multi-level classrooms, hands-on learning materials, learner-chosen activities, and a focus on social and practical life skills. While traditionally applied in early childhood education, the Montessori approach has also been adapted in clinical healthcare settings to enhance patient engagement and sensory perception.

In the training of healthcare professionals, Montessori principles could offer a new paradigm for developing the qualities essential to clinical practice, helping to enhance self-directed learning, hands-on skill acquisition, and collaborative problem-solving.

Montessori in Clinical Healthcare

Research has primarily investigated the Montessori approach in dementia clinical care settings. Hitzig and Sheppard (2017) conducted a scoping review and found significant variability in implementing the Montessori approach. They highlighted a lack of standardized guidelines and best practices in using the Montessori approach in clinical healthcare. Similarly, Sheppard, McArthur, and Hitzig (2016) found that while Montessori activities improved eating abilities in individuals with dementia, they had minimal impact on overall cognition. They called for further research into the long-term benefits of these activities. Conversely, other studies have shown that the Montessori approach can effectively enhance cognitive, motor, and sensory functions and social skills in dementia patients (Hanna, Donnelly, & Aggar, 2018). It also appears to foster greater clinician engagement and compassion while reducing burnout among caregivers (Judge, Camp, & Orsulic-Jeras, 2000).

Gaps in Current Research

While existing research focuses on the Montessori approach in clinical care settings, studies on its use and outcomes in other healthcare environments, including health professions education, are lacking. To fully understand and leverage this approach, researchers, administrators, clinicians, patients, and other stakeholders must examine and learn from the experiences of those who have applied or studied it in various health contexts. Applying the Montessori approach in the education of healthcare professionals could address some of the gaps in current training practices. For example, self-directed and hands-on learning could help learners develop technical competence and the critical thinking and interpersonal skills necessary for effective patient care. In a Montessori-inspired curriculum, learners could also choose projects, conduct research, and engage in interdisciplinary collaboration, encouraging them to take ownership of their learning while fostering teamwork. Many of these features align with the principles of competency-based medical education.

Moreover, a collaborative learning environment in which learners from various healthcare disciplines work through case studies together could provide a better understanding of team dynamics. This approach would reflect real-world healthcare delivery, where interdisciplinary teams are often required to solve complex health issues. The flexibility to explore different aspects of healthcare, whether through elective courses or clinical rotations, would also allow learners to gain a deeper understanding of their field while promoting interprofessional learning and communication skills.

Conclusion

The Montessori approach’s focus on hands-on learning, intrinsic motivation, and collaborative problem-solving has the potential to revolutionize the training of healthcare professionals. By integrating these principles into training programs, we can cultivate a generation of technically proficient health professionals who are empathetic, self-directed, and adaptable to the needs of diverse patient populations. As research into the Montessori approach in healthcare settings, including health professions education, continues to grow, it may provide valuable insights into how education can be better aligned with the real-world demands of the healthcare profession.

References

  1. Hanna, A., Donnelly, J., & Aggar, C. (2018). Study protocol: A Montessori approach to dementia-related, non-residential respite services in Australia. Archives of Gerontology and Geriatrics, 77, 24–30. https://doi.org/10.1016/j.archger.2018.03.013
  2. Hitzig, S. L., & Sheppard, C. L. (2017). Implementing Montessori methods for dementia: A scoping review. The Gerontologist, 57(5), e94–e114. https://doi.org/10.1093/geront/gnw147
  3. Judge, K., Camp, C., & Orsulic-Jeras, S. (2000). Use of Montessori-based activities for clients with dementia in adult day care: Effects on engagement. American Journal of Alzheimer’s Disease and Other Dementias, 15(1), 42–46. https://doi.org/10.1177/153331750001500106
  4. Marshall, C. (2017). Montessori education: A review of the evidence base. NPJ Science of Learning, 2, 11. https://doi.org/10.1038/s41539-017-0012-7
  5. Sheppard, C. L., McArthur, C., & Hitzig, S. L. (2016). A systematic review of Montessori-based activities for persons with dementia. Journal of the American Medical Directors Association, 17(2), 117–122. https://doi.org/10.1016/j.jamda.2015.10.006

My program is accredited – why do I need program evaluation?

Blog post by Elise Guest, PhD Candidate

In the context of education, accreditation is the formal process of recognizing program quality (Harvey, 2004). Accreditation is an assessment of a program against predetermined standards and criteria. In Canada, the term is usually applied to individual programs striving to demonstrate excellence (Weinrib & Jones, 2014). Health professions education (HPE) programs in Canada have extensive experience with accreditation. The Association of Accrediting Agencies of Canada (AAAC) is a community of practice that supports educational program accreditors across the country; of its 23 members, 11 come from health fields such as nursing, occupational therapy, and dentistry.

On the surface, many assume that accreditation and program evaluation are interchangeable processes, as they both seek to understand the nuances of a program to improve quality. Program evaluation, however, is distinct from accreditation. It’s the systematic collection of information about the intentions, operations, and outcomes of a program (Shawer, 2013); it creates new knowledge about the program (Yarbrough et al., 2010).

The key to understanding the distinction between accreditation and program evaluation is scope: scope of authority, scope of intent, and scope of outcomes. With regard to the scope of authority, accreditation of healthcare programs is generally organized by members of the discipline – it’s a way for a program to confirm for its interest holders that it has been peer reviewed and meets the expectations of members of the profession. While accreditation is an external process, program evaluation is internal. The scope of authority rests within the program, so the resulting validation is more narrowly understood: only those with a vested interest in the outcomes have been consulted. Not that there’s anything wrong with that! Program evaluations are critical opportunities for self-reflection and for confirming strengths or identifying areas for improvement.

The scope of the intent of accreditation is also different from that of program evaluation. Accreditation looks at a wide variety of elements of program delivery, from policies to the program environment to graduate outcomes. By holding all programs in a discipline to the same set of criteria, accreditation creates a series of benchmarks, elevating the discipline in question nationally. Program evaluation is a more insular exercise – it’s one program examining how it is delivered to ensure its goals are met. Accreditation intends to show a program how it meets national standards, whereas program evaluation intends to show a program whether it’s meeting its own intended outcomes.

Finally, the scope of the outcomes of accreditation and that of program evaluation are very different. An accredited program is responsible to the external accrediting body for ensuring it meets the terms of its accreditation – this may involve interim reporting, changes in the program environment and delivery, shorter or longer accreditation terms, etc. The scope of the outcome of a program evaluation depends on the people within the program – as an internal exercise, changes are made only when internal pressure requires them.

So why does your accredited program need program evaluation? While accreditation and program evaluation are two distinct processes, they are not independent of each other. Many accreditors look for program evaluation plans in the programs they are assessing. There is a recognition that continuous improvement needs to be internally motivated (program evaluation) as much as it is externally motivated (accreditation). Health professions education programs should embrace both processes, while recognizing their similarities and differences, to strengthen the quality of their programs. Because, after all, quality HPE programming is always the goal.

References

Harvey, L. (2004). The power of accreditation: Views of academics. Journal of Higher Education Policy and Management, 26(2), 207–223. https://doi.org/10.1080/1360080042000218267

Shawer, S. (2013). Accreditation and standards-driven program evaluation: Implications for program quality assurance and stakeholder professional development. Quality and Quantity, 47(5), 2883–2913. https://doi.org/10.1007/s11135-012-9696-1

Weinrib, J., & Jones, G. A. (2014). Largely a matter of degrees: Quality assurance and Canadian universities. Policy and Society, 33(3), 225–236. https://doi.org/10.1016/j.polsoc.2014.07.002

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2010). The program evaluation standards: A guide for evaluators and evaluation users. SAGE.