Governance and guardrails: Generative AI for NP education

A practical, faculty-focused session on how NP programs can govern and apply generative AI to support clinical reasoning, assessment, and program-wide consistency.

Originally aired:

March 23, 2026

Recording

60 minutes

Stephen A. Ferrara, DNP, FNP-BC

Professor and Founding Associate Dean for Artificial Intelligence

Olivia Livernois, MSN, FNP-BC

Family Nurse Practitioner, Nursing Program Specialist

Webinar overview

As generative AI becomes increasingly embedded in clinical practice and education, nurse practitioner faculty are being asked to determine not only whether AI should be used, but how it can be integrated responsibly into teaching, learning, and assessment. Without clear guardrails, AI risks undermining clinical reasoning, assessment validity, and academic integrity. With the right governance and instructional design, however, it can meaningfully support learner development and program outcomes.

In this session, Stephen A. Ferrara, DNP, FNP-BC, Professor at Columbia University School of Nursing and founding Associate Dean for Artificial Intelligence, will examine how NP faculty can thoughtfully incorporate generative AI into their curricula while maintaining rigor, transparency, and alignment with program goals. Drawing on his leadership in NP education, policy, and AI literacy, Dr. Ferrara will explore key principles for responsible use, including ethical considerations, acceptable use expectations, documentation practices, and governance structures that support consistent, program-level implementation.

Through real-world examples from NP programs, Olivia Livernois, MSN, FNP-BC, will highlight practical applications of AI-supported learning and assessment, including AI-enabled clinical reasoning simulations and board preparation strategies. Attendees will leave with concrete approaches they can adapt within their own programs to integrate AI in ways that enhance learning while preserving the development of sound clinical judgment.

What you'll learn

Assess the risks and opportunities of generative AI use in NP education and its impact on clinical reasoning and assessment validity.

Design governance frameworks that support responsible, transparent, and consistent use of generative AI across NP programs.

Apply instructional and assessment strategies that leverage generative AI to enhance learning outcomes without undermining clinical judgment.

Webinar recording

Meet your expert speakers

Stephen A. Ferrara, DNP, FNP-BC
Professor and Founding Associate Dean for Artificial Intelligence, Columbia University School of Nursing

Dr. Stephen A. Ferrara, DNP, FNP-BC, is a nationally recognized healthcare leader, nurse practitioner, and Professor at Columbia University School of Nursing, where he served as the founding Associate Dean for Artificial Intelligence. He is the immediate past President of the American Association of Nurse Practitioners and is Editor-in-Chief of the Journal of Doctoral Nursing Practice. In New York, he led advocacy efforts that helped advance the Nurse Practitioner Modernization Act, expanding practice authority for experienced NPs. He also founded AI + Nurse Academy to support responsible, evidence-based AI literacy for clinicians.

Olivia Livernois, MSN, FNP-BC
Family Nurse Practitioner, BLS Instructor, Nursing Program Specialist

Olivia Livernois, MSN, FNP-BC, is a practicing Family Nurse Practitioner, educator, and Nursing Program Specialist at Sketchy. She collaborates with nurse practitioner programs nationwide to align clinical reasoning, communication, and virtual simulation-based learning with curricular goals and competency frameworks. Her clinical background spans critical care, occupational health, hospice, and urgent care, and she currently practices in urgent care and occupational health/primary care settings, delivering accessible, real-world care to diverse patient populations. Olivia is passionate about advancing innovative, student-centered learning that strengthens diagnostic reasoning and prepares learners for high-stakes clinical practice.

Questions answered in this webinar

How should NP programs approach the use of generative AI in education today?

Generative AI is already changing how students study, complete assignments, and work through clinical thinking. In NP education, the goal is not just content mastery—it’s developing clinical judgment. That creates an important tension: where does AI support learning, and where might it begin to interfere?

AI can support education by enabling case-based learning, generating formative materials, and creating variations of clinical scenarios quickly. In that sense, it offers opportunities to scale aspects of teaching and expand access to practice. However, speed is not the same as quality. AI-generated responses may sound fluent and confident, but they can still be inaccurate, biased, or misaligned with learner needs.

Because of this, AI should not be adopted passively or uniformly. Programs need to take an intentional approach—recognizing both the opportunities and the risks, and ensuring that AI is used in ways that support learning while preserving academic integrity and the development of clinical reasoning.

What are the biggest risks of using AI in clinical education—and how can programs mitigate them?

AI introduces meaningful risks alongside its potential benefits. While it can accelerate content creation and provide scalable feedback, it can also produce inaccurate or “hallucinated” information, reinforce hidden bias, and create a false sense of confidence through polished, authoritative-sounding outputs.

There is also a risk of over-reliance. When learners depend too heavily on AI, it can weaken the development of clinical reasoning and decision-making skills. In clinical education, this is particularly concerning, as learners may interpret AI-generated content as correct without sufficient scrutiny. Certain use cases require additional caution, including high-stakes assessments, unverified AI-generated feedback, and situations where clinical reasoning is presented without faculty review.

To mitigate these risks, AI-generated content should always be reviewed before use. Educators should rely on approved or de-identified data, verify outputs against trusted sources, and assess for bias and inclusivity. It is also important to be explicit with learners about how AI is being used and to maintain clear expectations around accountability for reasoning and performance.

What does responsible AI governance look like in NP programs?

Responsible AI governance begins with recognizing that the use of AI in education is not just a technical decision—it is a program-level responsibility. Institutions must consider policy, privacy, and compliance, particularly given that many consumer AI tools are not HIPAA-compliant. This means identifiable student or patient information should not be entered into unapproved platforms, and faculty must understand what data can and cannot be shared.

At the program level, governance also requires consistency. Expectations for AI use should be clearly defined and aligned across courses, rather than left to individual interpretation. This helps ensure that learners receive consistent guidance and that academic standards are maintained.

Faculty play a central role in this framework as the “human in the loop.” They remain responsible for verifying accuracy, assessing appropriateness for the learner’s level, checking for bias, and ensuring alignment with curriculum and learning objectives. Just as importantly, they model how to engage with AI responsibly, helping learners develop the ability to critically evaluate AI outputs.

How can AI be used to enhance learning without undermining clinical judgment?

AI can enhance clinical education when it is used in a structured and intentional way. It can expand access to case-based learning, support repeated practice, and provide scalable feedback. These capabilities can help learners engage more frequently with clinical decision-making and reinforce key concepts over time.

However, these benefits depend on maintaining a clear focus on clinical reasoning. AI should not replace the thinking process or provide shortcuts to answers. Instead, it should be used in ways that support learners in working through problems, making decisions, and reflecting on their reasoning.

In practice, this means reviewing AI-generated content before use, relying on appropriate and approved data, verifying outputs against trusted sources, and ensuring that content is inclusive and aligned with learning objectives. It also requires being transparent with learners about how AI is being used and preserving accountability for their clinical decisions.

Approaches that incorporate structure and oversight—such as grounding AI in faculty-designed content, aligning it to competency frameworks, and pairing it with clear evaluation criteria—can help ensure that AI supports learning while maintaining rigor. When implemented in this way, AI can strengthen clinical judgment rather than undermine it.

Ready to experience how DDx can strengthen reasoning skills?
Start your free trial