CEimpact Podcast
The CEimpact Podcast features two shows - GameChangers and Precept2Practice!
The GameChangers Clinical Conversations podcast, hosted by Josh Kinsey, features the latest game-changing pharmacotherapy advances impacting patient care. New episodes arrive every Monday. Pharmacist By Design™ subscribers can earn CE credit for each episode.
The Precept2Practice podcast, hosted by Kathy Schott, features information and resources for preceptors of students and residents. New episodes arrive on the third Wednesday of every month. Preceptor By Design™ subscribers can earn CE credit for each episode.
To support our shows, give us a follow and check back each week for our latest episodes.
Leveraging AI as a Precepting Tool
In this episode, host Kathy Schott and guest Andrea Sikora explore the transformative power of artificial intelligence in pharmacy education and clinical teaching. Join us as we discuss the latest trends in AI, practical strategies for integrating these tools into your teaching routine, and the challenges preceptors face in this rapidly evolving field. From selecting the right AI platforms to ensuring students critically evaluate AI-generated insights, we’ll provide actionable advice and real-world success stories to inspire innovation in your practice. Whether you’re a seasoned preceptor or just beginning your AI journey, this episode is your guide to leveraging technology for better learning and patient care. Tune in and transform the way you teach with AI!
Host
Kathy Schott, PhD
Vice President, Education & Operations
CEimpact
Guest
Andrea Sikora, PharmD, MSCR, BCCCP, FCCP, FCCM
Clinical Associate Professor
University of Georgia College of Pharmacy
Get CE: CLICK HERE TO GET CPE CREDIT FOR THE COURSE!
CPE Information
Learning Objectives
At the end of this course, preceptors will be able to:
1. Describe how AI is transforming pharmacy practice and its implications for clinical education.
2. Discuss methods to balance AI recommendations with traditional clinical decision-making skills.
0.05 CEU/0.5 Hr
UAN: 0107-0000-24-322-H99-P
Initial release date: 12/18/2024
Expiration date: 12/18/2027
Additional CPE details can be found here.
The speakers have no relevant financial relationships with ineligible companies to disclose.
This program has been:
Approved by the Minnesota Board of Pharmacy as education for Minnesota pharmacy preceptors.
Reviewed by the Texas Consortium on Experiential Programs and has been designated as preceptor education and training for Texas preceptors.
Welcome back to Precept2Practice, where we dive into the latest ideas shaping pharmacy precepting. I'm your host, Kathy Schott. In this episode, I'll dive into the world of artificial intelligence and pharmacy education with my guest, Dr. Andrea Sikora, with the Department of Biomedical Informatics at the University of Colorado School of Medicine. We'll explore how AI is transforming clinical teaching, practical strategies for preceptors, and ways to overcome challenges while fostering innovation. Whether you're new to AI or looking to expand your toolkit, this episode has something for you. Let's get started. Well, hello, Andrea. Thank you so much for joining me today.
Speaker 1:I am super excited about our topic. I mentioned that we actually did a topic on AI about a year ago, actually exactly a year ago, and at that point we were saying, oh, what is AI and what does it mean for pharmacy education? We had our academic hats on, talking in really general terms about what the implications might be. And here we are, 12 months later, and we're totally in it. Many of us are using it every day, and certainly students and residents are using it every day. I thought it was time to revisit this topic and really dig in and talk about some things that are really practical. It's here to stay, so what do we do with it? How do we make it work for us as educators and mentors? Before we dive in, though, I'd love to have you introduce yourself briefly for our listeners, and then we'll get into it.
Speaker 2:Wonderful. Well, my name is Andrea Sikora. I am an associate professor at the University of Colorado School of Medicine in the Department of Biomedical Informatics. I am a clinical pharmacist by training. I did two years of residency and specialized in critical care, practiced as an ICU pharmacist for about a decade, and have worked in the academic setting for about that long as well, through UGA as well as the University of Colorado. So very excited to be here. My team has multiple funded grants in the AI and medication use space.
Speaker 1:Oh great, okay. Well, let's just start by setting the background. Talk a little bit about how you've seen AI transform pharmacy practice most recently.
Speaker 2:So I think the things that are coming are very exciting, as much as what's been transformed over the last 12 months, let's say, is very different from even what I project over the next 12 to 24. We're seeing AI-based analytics that are able to predict things like sepsis and enable early interventions. I saw a recent advertisement for a program that uses AI to do empiric antibiotic selection given culture results, and there are AI-based vancomycin and other PK-based tools. I mean, that's incredible stuff.
Speaker 1:I mean, that's a lot of cognitive labor, cognitive effort that we learned, that's now being streamlined with this type of technology. Yeah, those are huge. You were talking about some diagnostic and therapeutic decision-making kinds of things, but where does all that fall into clinical teaching, and what are some of the most relevant resources that you've worked with?
Speaker 2:So one thing, just to think about the transformation: I remember a decade ago, when I was doing residency training, one of my preceptors said, you know, I used to have to go to the library to do a literature search. And I was like, the library, wow. I do my literature searches on Google and PubMed. But Google and PubMed use machine learning and AI-based search tools, so even something like that has obviously transformed things, and that was quite a while ago. Two things I think about in more recent years are things like slide deck generators. There are these really cool programs that are able to create outlines and beautiful graphics, like PowerPoint SmartArt but way better, way prettier. I remember being told I needed to make my PowerPoints prettier and struggling with SmartArt, and now I can go to these free generators that make outlines, and that's pretty cool and interesting to think about.
Speaker 2:I also think the advent of large language models, something like ChatGPT or GPT-4, though there are multiple open source versions of these, is a really interesting area. There are different fun stories where you put in a complex clinical case and ChatGPT can diagnose it, and things like that. It can certainly create outlines. But I think this is an important tool and application in the teaching space. It certainly remakes the concept of "do a topic discussion on heart failure" or something like that.
Speaker 1:Yeah, so go a little bit deeper on that. Why do you think it's important for preceptors, whether you're working with students or residents, to adopt and maybe even teach AI tools as part of the educational experience they're providing?
Speaker 2:So one part is that there is an efficiency to these tools. I use Dropbox for my file saving system, and Dropbox now has some AI-based calendar bot that it wants me to try out. Supposedly, if I use it, I'm going to save eight hours a week and it's going to automate my tasks and do all these different things, and, as someone who likes keeping their calendar in a pretty good space, I'm uncertain about it.
Speaker 2:But even that, an AI-based task list for your residents who have a hard time organizing calendars, is again a really neat tool. And I think understanding that some of those efficiencies are true efficiencies, where we can now spend our time doing higher level tasks, that is a good thing.
Speaker 2:I also think there's a side to this that is probably the most important thing for a preceptor, more important than maybe learning how to dose vancomycin or use Lexicomp or some other drug information resource: it's helping a trainee mold their professional identity. Your professional identity basically becomes who you are as a human being. You are a pharmacist or a physician or a nurse or what have you, and within that space you adopt the values and the ethical practices of that particular profession. It means that, no matter what, you're going to try to do the right thing for your patients, and I think that is really the role of the preceptor. As a result of that, you then say, okay, given that you have this professional identity, how are you going to interact with AI, or again a drug information resource, or anything for that matter? So, to me, incorporating AI tools is more about helping someone have these higher level conversations about who they're going to be when they grow up someday.
Speaker 1:Yeah, that is not where I thought you were going to go with that, but that makes a lot of sense, honestly. It's just another piece of the puzzle, right, of being a whole professional, and one of the things encompassed by that is how you use tools, how you use them to the best of your ability while keeping the best interests of the patient in mind. That's the end game, right? So, yeah, that is a great call out. How do you recommend that preceptors identify which of these available tools are going to be most relevant to their teaching? Do you have any ideas or thoughts about that?
Speaker 2:This is such a wide open space. I was at a campus, and this particular campus has banned ChatGPT from the campus entirely, particularly the medical campus, and I get where they're going with that. But there was a side of me that was like, ah, is that the direction you want to go, or do we need to have more of a conversation about what it is we want the trainees to really learn? An example I used was in the space of writing. One of the things that large language models can do is generate an outline for you. They can get you started if you have writer's block. But I also think that if you're doing a good job showing someone a really well-written scientific paper or a really well-written grant and comparing it to what ChatGPT is putting out, you can see the difference, and ChatGPT is not the one coming out ahead. Similarly, I think about things like, okay, I'm using heart failure as my example for some reason today.
Speaker 2:I have no problem with someone saying they want to use a large language model to come up with an outline for their topic, or to pull some relevant epidemiology or basic physiology facts, because what we are going to talk about in the cardiac ICU is how we're going to apply these trials to our particular patients.
Speaker 2:But that type of application, that high level clinical thinking, AI is not going to do that for you. If you said you wanted to use AI as your search tool to find the studies, that could make total sense to me. You want to have the AI tool help you summarize them into a table? Totally fine. But that doesn't take away the clinical thought process and the critical thinking skills. So, to me, as a preceptor, in some ways you're modeling: hey, here are some efficiency tools and some ways to make sure you're being thorough in your searches, but we still need to sit down and talk about this 56-year-old female who has heart failure and how we're going to decide what to do with her metoprolol. And again, AI, to me, is not able to do that.
Speaker 1:Right, right. And you know, I think we have that same mentality here in creating education. We're not using AI to develop education; we're using experts in the field who are doing this work every day, making those clinical decisions and working with patients. But we do use it in exactly the way you described: generating an outline and then making adjustments from it, or summarizing a transcript into some key points, that kind of thing. I think you calling out the efficiency component, that's where it's really, really useful: creating that efficiency so that you can be freed up to do other higher level thinking kinds of things. Do you have any experience, Andrea, with what it looks like to incorporate AI tools into daily teaching? Do you have any examples of having done that successfully yourself, or can you give people a couple of ideas about what this could look like?
Speaker 2:Sure, I'm going to use an example that's not necessarily AI-based, but I think it works very well for AI. As a preceptor, we had an Excel spreadsheet that would do vancomycin calculations for you. You could put in the patient's height and weight and age and serum creatinine, and it would pop out their volume of distribution and their ke and an expected peak and trough for different values. How excellent. Those equations are kind of complicated; you can easily mistype something on a calculator, and it gives you the numbers. But you still have to think about those numbers. Does that make sense for this particular patient? Just because their serum creatinine says 1.3 and you plugged it in and you get a value, but then you look and their urine output is nothing. Well, okay, is their serum creatinine really 1.3, or do we have an acute kidney injury on our hands? I have a couple of very good friends, and we have a running joke: did you use the spreadsheet, or did you use your brain to come up with this answer? And I think that's going to be true with any AI tool. So again, there are lots of AUC-based, AI-based dosing tools. I think they're going to be really excellent. They're going to do the math, they're going to do kinetics for us in a way that's, again, more efficient.
Speaker 2:But you still need to look at what the inputs were and say, do I agree with these inputs? This serum creatinine, does it reflect what's going on? Do I think this age and weight and all those things really reflect what's happening with this patient? And then look at the output and ask, does this make sense? That, to me, is what the preceptor can model with literally any tool, whether it's AI or machine learning based or just any technology: what do you actually think of this?
Speaker 2:One of the things for me as a preceptor is I would say, okay, they're on this drug; can you tell me what you're going to monitor, given that they're on this drug? So, if we're using vancomycin: serum creatinine, urine output, their cultures, the levels, whatever. And then I would say, what do you think about it? How do you feel about this? And they'd be like, what do you mean, how do I feel about it? Well, do you agree? Should we be on vancomycin or should we not? Because I think you can get so caught up in the details that you forget to take that step back. So I really think that, no matter what tool you're using, helping someone take that step back is what's going to be the most valuable.
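For readers who want to see what a spreadsheet like that is doing under the hood, here is a minimal Python sketch of the kind of one-compartment estimates described above, assuming common population priors (Cockcroft-Gault creatinine clearance, a Matzke-style elimination rate constant, and a 0.7 L/kg volume of distribution). These are illustrative textbook equations with hypothetical inputs, not the formulas from the spreadsheet in the conversation, and the output is only as trustworthy as the serum creatinine you feed it.

```python
import math

def vanc_estimates(age, weight_kg, scr_mg_dl, female,
                   dose_mg=1000, tau_hr=12, t_inf_hr=1.0):
    """Rough steady-state vancomycin estimates from population priors."""
    # Cockcroft-Gault creatinine clearance (mL/min)
    crcl = ((140 - age) * weight_kg) / (72 * scr_mg_dl)
    if female:
        crcl *= 0.85
    ke = 0.00083 * crcl + 0.0044   # elimination rate constant, hr^-1 (Matzke)
    vd = 0.7 * weight_kg           # volume of distribution, L
    # One-compartment intermittent-infusion equations at steady state
    cmax = (dose_mg / (t_inf_hr * vd * ke)) \
        * (1 - math.exp(-ke * t_inf_hr)) / (1 - math.exp(-ke * tau_hr))
    cmin = cmax * math.exp(-ke * (tau_hr - t_inf_hr))
    return {"CrCl (mL/min)": round(crcl, 1),
            "half-life (hr)": round(math.log(2) / ke, 1),
            "peak (mg/L)": round(cmax, 1),
            "trough (mg/L)": round(cmin, 1)}

# The garbage-in, garbage-out point: an SCr of 1.3 that doesn't reflect true
# renal function (e.g., evolving AKI) silently skews every number below.
print(vanc_estimates(age=56, weight_kg=70, scr_mg_dl=1.3, female=True))
```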
Speaker 1:Right, right. Well, that's a great segue into the next question I had in my mind. There's a whole camp of folks, and you described a campus that has banned AI, particularly from the health professions, whose big fear is that we're going to stop people from thinking critically and using that higher order thinking. So what are some ways, and you've described one, that preceptors can ensure students are thinking critically and evaluating what they're consuming, for the betterment of the patients they're serving?
Speaker 2:So one of the parts I think is also incredibly important for a preceptor is helping students foster a growth mindset, which is essentially embracing challenges and embracing learning, seeing the struggle as the process, with an emphasis on the growth process rather than on a particular output, on whether you got the answer right or wrong. Having the right answer does matter in healthcare, but in this particular setting I think it's much more about fostering that love of learning, or lifelong learning, however you want to think about it. And this is something where, in the information age, with a cell phone connected to the internet and Amazon delivering next-day packages, you can see that at times we have lower frustration tolerance, or discomfort tolerance, than we used to. I saw a funny meme the other day about how, back in the day, I used to wait for DSL internet with the modem screeching to connect, whereas today, if a website takes five seconds to load, I'm X-ing it out, you know, aggressively. And I saw a thing, too, about parenting, where they talk about "don't steal their struggle." Even if your kid is struggling to stack two blocks, don't steal the struggle.
Speaker 2:But I think having an overt conversation about some of those things is really important. The beauty of AI is that it can help get you started on a really tough, complex project, help you make that outline. But there is still going to be the part where you're going to be challenged and have to think and feel uncomfortable and uncertain, maybe even a little anxious, and that's okay. Not even just okay; it's a good thing. It's what you're going for. I almost take this into Maslow's hierarchy of needs: this is self-actualization, committing yourself to a higher level of what you're doing with your life by finding meaningful challenges and overcoming them. Those are really important conversations to have. These technologies are not replacing your struggle to do a good job with a patient. They're tools, and that's all they are. A tool hopefully makes it easier, but it's not going to replace the part where you have to really say, I pulled all this information in and I'm thinking through all the different pieces. So, one, there's the intentionality of the preceptor saying there is going to be struggle and challenge and you have to use your brain and think about it. But on the other side, you can help model those questions. We could use the heart failure example here: okay, you want to do this, you think we should go up on the metoprolol dose.
Speaker 2:Is that based on the guidelines or not? And they say yes, and you say, okay, well, have you read the citations those guideline recommendations are based on, yes or no? Okay, let's go read them. Do you think that patient population is representative of your patient? Are they representative of all patients? When was this done? Was it before ACE inhibitors or after ACE inhibitors? And where do SGLT2 inhibitors come into all of this? Helping them think about those types of questions, what you're doing as a preceptor there is modeling critical thinking, and the more you ask those questions, the more you're helping them think about what those questions would be in the next case. So, to me, AI is not going to remove the importance of patient discussion with your trainees.
Speaker 1:Right, yeah. I love how you describe the importance of that culture of learning: growth mindset, critical thinking, curiosity, all of those things that really are the foundation of an effective practitioner. You're absolutely right, none of these tools are going to replace that, but maybe the impetus becomes a little bit greater; the importance of fostering a learning environment with that growth mindset becomes even more important. And it differentiates folks who have that lifelong learning mentality and growth mindset from folks who are task-oriented. If you're task-oriented, AI is amazing, because you're just so focused on checking those things off the list. But as a professional, you've still got that obligation to have that growth mindset, that curiosity, that commitment to the outcome. That's great, really good insight. Well, speaking of challenges, can you talk a little bit about challenges you've faced or observed when you've introduced AI tools in a clinical setting, maybe with peers or potentially with learners? Any insights there?
Speaker 2:Yeah, so one of the things with any tool in the healthcare setting is that just because you have the tool doesn't mean people trust it or are going to use it, so you also have to think about how you're going to thoughtfully implement it. There was a thing with early warning sepsis alerts. I think everyone agrees that early warning sepsis alerts are a phenomenal use of AI: it can just run in the background, grab different electronic health record markers, and then raise an alert. But if that alert doesn't go to the right person, or the right person is overloaded with too many other alerts, then it's not really doing anything.
Speaker 2:And so one of the biggest challenges I see with AI right now is that you'll see all these fun studies of, oh, this algorithm predicts it earlier, the silent pilot showed these things, and that's exciting. But there are very few studies that actually show that AI improves outcomes right now, and that's because the implementation science piece is still very real. I think that's a really important thing for anyone evaluating AI-based literature to think about: one, was this tool studied in a clinical setting? Two, what was the actual outcome as a result of that study? And three, how would we use this, how would it incorporate into a workflow?
Speaker 1:Yeah, very little evidence yet that AI improves outcomes. I think that's a pretty important takeaway from that conversation. So, any specific pitfalls that preceptors can avoid, or should avoid, when they're integrating some of these tools into their teaching?
Speaker 2:I think the biggest thing is not to create a blanket statement, or, if you're going to create rules, to be very intentional in your discussion of what you want or don't want from that particular experience. To me, saying you can't use AI is a little bit like saying you can't use the internet. You're going to lose a learner pretty quickly with a statement like that. At the same time, what you can say is, it's open book, however you want to put it, for this particular element. But for this part, I really want you to just sit there and think about it. This is a question you're not going to find on the internet.
Speaker 2:The question I often have for trainees in the clinical setting is basically, how would you design the next study? So, given that we've talked all about beta blockers and heart failure, what are the questions we still have? Or, given that we have this question of whether we should use beta blockers in someone who has heart failure after a heart attack, what's the timing? That's not a settled question. How would you design that study? How would you think about it?
Speaker 2:That's not something that AI is going to be able to do for you, and so I just say it straight up to them: this is not something you're going to find on the internet.
Speaker 2:This is something where I want you to think about it and give your best opinion based on what you've read and compiled, and we're going to talk about it. What I'm trying to do is remove the fear of perfectionism, there's no right or wrong answer here, and also to say, this is one hundred percent something you cannot get online; you have to think about it. That means creating psychological safety and making it a really fun learning environment where we're going to talk through those things. So, to me, those are the best practices, and the pitfalls are going to be these blanket "you can't use it at all" type statements, or a situation where they're afraid to fail and so they're trying to use a tool to avoid failing. That would be another pitfall, in my opinion.
Speaker 1:Yeah, those are great examples, and honestly, that's true in life, right? Very rarely is it appropriate for us to say never do that, or always do this. I think that makes a lot of sense. We touched on it; you mentioned "you're not going to find this on the internet," that kind of thing. But how do you address concerns about the accuracy or biases that might come through in AI-generated recommendations or content?
Speaker 2:So, twofold for this. When we talk about AI, I think about it a little bit in terms of prediction or algorithm-based work, where I'm going to put in your serum creatinine and your age and it spits out some output, versus generative AI, like large language models, which will give you different answers or a paragraph about the patient, and those can be two different things. One of the things about a prediction model is that it's only as good as the data you've given it. If your data was biased, the algorithm will then be biased; it can only do what you've trained it on. That's a really important thing to realize, and it's true of large language models too: what was it trained on? The more we use data and have data science informing our decisions, the more we have to talk about the quality of the data we're looking at. It's like meta-analyses: we teach everyone junk in, junk out. That is very true with analytics as well.
Speaker 2:Then, in terms of accuracy, hallucinations are still a very real concern in large language models, and there are different ways people are trying to get around that, for example using multiple agents, where you basically open up multiple versions of the LLM, have one produce an output, and then another LLM checks that output. But is it ever going to get to zero? Probably not, right? Then again, people also make mistakes. The technology is fallible, just as humans are fallible, and so there's always going to be a teaming component between the tool and the person.
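As a rough illustration of that cross-checking pattern, here is a minimal sketch. The call_llm function is a hypothetical stand-in for whatever chat-completion API you use, and the prompts are invented for illustration; the pattern can lower the odds that an unsupported claim survives unchallenged, but, as noted above, it will not drive hallucinations to zero.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in; wire up your actual LLM provider here."""
    raise NotImplementedError

def draft_and_verify(question: str) -> str:
    """One model drafts an answer; a second pass critiques and revises it."""
    draft = call_llm(f"Answer with supporting citations: {question}")
    critique = call_llm(
        "List any claims in the following answer that look unsupported or "
        f"hallucinated. Reply NONE if it is clean.\n\n{draft}"
    )
    if critique.strip() != "NONE":
        draft = call_llm("Revise the answer to address these issues:\n"
                         f"{critique}\n\nOriginal answer:\n{draft}")
    return draft
```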
Speaker 2:In terms of bias in data sets, it is an issue, and I don't know how else to say it other than that it's an issue. It shows up in simple things, like the SpO2 monitors we were putting on people during COVID. It turns out they're inaccurate for people who have darker pigmented skin. That meant we thought patients had a higher SpO2 than they did, and, as a result, they were not getting treatment, because they actually had a low SpO2. And then we're like, wow, they do worse, and we start making all these statements about people with darker pigmented skin doing worse with COVID. Well, no, actually we were doing a bad job treating them to begin with. But that's not what the data says; the data says they did worse, right?
Speaker 1:right.
Speaker 2:Interesting, right, right, interesting. Have a SABV sex as a biological variable and you have to be very thoughtful about how are you going to be looking at sex differences. For many, many years we decided that women because they have menstrual cycles. That's too many variables. We'll just exclude half the population from everything, right?
Speaker 1:right, that's a good plan.
Speaker 2:Right, you know, and it turns out that, like, maybe we should actually just be incorporating that and thinking about those things and so, and that's just a little bit the world we live in.
Speaker 2:But I also think that there's a lot of really good, like books and reading kind of on, like bias and data sets and bias and AI spaces, and I guess sometimes I think it's like recycling and it's like, you know, you find out that there's like all this plastic in the Pacific Ocean. So it's like, does what you do matter? And I, you know, I I hear that there are systemic and structural components to this that make this very difficult. But I'd also think that you can look at trials with AI based tools and say like, what, what was the training data makeup and do you agree with that training data, yes or no? And then you can also test it and say like, do I agree with these outcomes and even the sensitivity analysis of like change the gender or the ethnicity of the patient, and see if you get a different output you know can maybe be worthwhile, but I think there's something you have to be intentional about.
Speaker 1:Yeah. Well, and the biases you're describing were there with or without AI; it's just a matter of looking at them through that window. Let's chat a little bit about a couple of success stories. I'd love to offer listeners a couple of case examples of how AI has significantly improved patient care or student learning. In your personal experience, does anything come to mind that you could share?
Speaker 2:you could share. There are. I've talked a little bit about the early warning systems and I really those are very cool, but there's early warning systems for sepsis as well as cardiac collapse, and you know, one of the things, if you've ever gotten ECLS certified they talk about with code blue is the best way to treat a code blue is to avoid a code blue, and we talk a lot about what does that look like? And so I do think that those systems that have been effectively implemented, we've seen reductions in mortality as a result of that and I think that is incredibly cool to think about.
Speaker 1:Yeah, yeah, incredibly cool to think about. Yeah, yeah, what's the student or resident learning process look like in that situation, or in becoming familiar and able to utilize those algorithms?
Speaker 2:Again, I think what's important with something like this is, do you have an understanding of sepsis? And what I mean by understanding is, one, why are the components that the prediction algorithm uses the ones it uses? Why do they make sense physiologically? Understanding why the tool works the way it does is very important, because there may be situations where it's not going to fire but you still think the patient is septic. An example might be: usually a high serum creatinine would help fire that particular sepsis alert, but this patient is a paraplegic, so they have low muscle mass and don't make much creatinine, or they're already on CRRT for some other reason, and so their creatinine looks a lot better than it really is. So, understanding what goes into it and why it would show up in sepsis, but also why it would not in other particular patient populations. And then also understanding why it matters to react or intervene. You could say, well, of course, sepsis, we're going to intervene on it, but understanding the fact that time to antibiotics improves mortality, and time to correct antibiotics, all of those components, I think that is very important.
Speaker 2:So, to me, I don't imagine that my topic discussion on that disease state would change a whole lot. It would be much more that I'm adding five to ten minutes of, this is how we use this tool in this particular space. So again, I still don't see, and I really don't foresee even in our lifetime, AI taking the place of a thoughtful clinician. It's just going to make a thoughtful clinician more fail-safe than we can be on our own.
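To make that failure mode concrete, here is a toy threshold-style alert. The cutoffs and the two-flag rule are entirely hypothetical, not any vendor's logic; the point is just that a rule judging creatinine against a fixed cutoff can stay silent for a low-muscle-mass patient whose normal-looking creatinine actually represents an acute kidney injury.

```python
def alert_absolute(hr, temp_c, scr):
    """Fires on two or more flags, judging creatinine by a fixed cutoff."""
    flags = (hr > 100) + (temp_c > 38.0 or temp_c < 36.0) + (scr > 1.5)
    return flags >= 2

def alert_relative(hr, temp_c, scr, scr_baseline):
    """Same rule, but creatinine is judged against the patient's own baseline."""
    flags = (hr > 100) + (temp_c > 38.0 or temp_c < 36.0) \
        + (scr > 1.5 * scr_baseline)
    return flags >= 2

# Low-muscle-mass patient: baseline SCr 0.4, now 0.7 (a real kidney injury)
# with borderline vitals. The absolute rule stays silent; the baseline-aware
# rule fires, which is why you need to know what the inputs mean.
print(alert_absolute(hr=104, temp_c=37.8, scr=0.7))                    # False
print(alert_relative(hr=104, temp_c=37.8, scr=0.7, scr_baseline=0.4))  # True
```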
Speaker 1:Yeah, I like that word, honestly. More fail-safe, maybe a little more efficient. Looking toward the future, what do you see as the next iteration?
Speaker 2:There are some really cool things coming out. I was talking about the multi-agent concept, where you would have an LLM that's maybe trained to think like a cardiologist, or an LLM that's trained to think like a social worker, and then you would have them collaborate with each other to come up with outputs. One of the things I think about: if you were to talk to me and three of my critical care buddies about a particular patient and our treatment, we may come up with three different answers that are all valid in some way, and the question becomes, who's the most right, and what degree of confidence are we using? Maybe there's me, who practices part-time in the ICU right now, so maybe you consider me less valid, but maybe I'm more valid than the person who's one year out of residency. Maybe not, we'll just pretend. And maybe there's the other guy who's been practicing for 10 years, and we consider him to have the most experience. You could say, well, an LLM maybe has that experience; it has access to the internet.
Speaker 2:But how do you look at probabilities of who is right or what is the degree of certainty. So what might happen is you have the person that's one year out that says cefepime 20% certainty, and I said zosyn with a 50% certainty. And then the other guy said well, meropenem with a 90% certainty. Technically they all cover pseudomonas. They could all be reasonable agents, but you might want to go with the one that had the 90% certainty Right now. If you go in and type into chat, gpt, how do you treat this? It'll just give you an answer and that it gives you the impression that it has a lot of certainty. But I think the ability to kind of add in probabilities or Bayesian thinking or that type of stuff into the outputs, that I think is really really important, particularly in the space of healthcare where a lot of times there are multiple right answers.
Speaker 1:Yeah, that's a great example. You're right that today you put something in and you get one answer back, and surfacing that uncertainty would even further promote that additional level of critical thinking. A hundred percent. As we wrap up here, what is one piece of advice you'd give to preceptors who are just starting to integrate AI into their teaching?
Speaker 2:That is a great question. One component I would think about is becoming good at evaluating literature that incorporates AI. Just like you learned how to analyze a meta-analysis, the I-squared or the funnel plot, or got better at understanding a cohort analysis and whether they did a regression analysis, I think there are certain pearls to how you analyze literature in the AI space. Was it externally validated versus internally validated? Has it been tested in a patient population? Was it a silent pilot? Was it multiple centers? Is it a proprietary algorithm, or is it open source? What was the data? What was the patient population? All of those types of questions. Getting good at that is really important, because the role of the clinician is going to be the same; AI is a tool, in the same way that, to some extent, drugs are a tool. So just as when a new drug comes out and you dissect all those studies to decide whether to add it to formulary or not, you're going to be doing very similar things with AI in this space. So I really think that is an important component. The other part is to keep an open mind and to be open to experimenting with it.
Speaker 2:I made it kind of a goal for a little while to use ChatGPT at least once every day, just to mess with it. I'm not sure I would use it every day in my normal life if I hadn't done that, but I wanted to see what it could do. For example, I needed an image and just couldn't think of a good idea, so I gave the AI a description of what I wanted. It gave me 10 ideas. I actually didn't like any of the 10, but they led me to the idea I did want. I don't think that would have happened if I hadn't made an intentional choice to experiment.
Speaker 2:So that would kind of be my other advice.
Speaker 1:Yeah, yeah, experiment. Well, maybe one final pearl for listeners: is there one small step that preceptors could take today to begin actually incorporating AI into their practice?
Speaker 2:Oh, listen to this podcast.
Speaker 1:I love it. Yeah, exactly, exactly.
Speaker 2:My step would be: look up free AI tools. You could do to-do lists, shared folders, organization, project management, slide decks, figures, outlines. Pick something you think you struggle with or would benefit from, and go play with it. You may find that that 30 or 45 minutes doesn't come to anything particularly useful, or you may find something you really, really like using.
Speaker 1:Right, yeah. Or it may generate another idea about another way to use it. When you were talking about using it as an organization tool, I couldn't help but think: we've taken on a project here to clean up all of our SharePoint files as an organization, and I'm just wondering if there's a magic bullet out there for that, because it's become a little more of a challenge than any of us anticipated.
Speaker 2:So yeah, maybe that's another one. I had a trainee I gave a task, a kind of rote task, where we needed to reorganize an Excel file and put some new labels in it. And she said, this would take me an hour to do, or maybe two; would you mind if I take an extra day? I want to see if I can get GPT-4 to do it. And I said, sure, if you would like to spend your time creating or thinking in that way, go for it. And she figured out how to do it, and even though it took us longer in that moment, we then used that approach for a few other things, which saved us time. If nothing else, maybe it was a more fun way to spend her hour than just checking boxes. So again, I think giving yourself space for that experimentation is really important.
Speaker 1:Yeah, that's a great example. Continuing to foster that curiosity is so important. So, Andrea, this was a great conversation, with lots of practical advice and examples that were certainly helpful and eye-opening to me. I really appreciate you being willing to chat with me about this, and maybe we'll wait another 12 months, have another conversation, and see where we are then.
Speaker 2:I love it. That would be great.
Speaker 1:All right, thanks so much. Take care. Thank you for listening. I hope you gained valuable insights into leveraging AI tools to enhance teaching and patient care. Remember, adopting AI is not about replacing traditional methods but about complementing them to prepare the next generation of healthcare professionals. Remember to check out previous episodes of Precept2Practice, and don't forget to visit the full library of Preceptor By Design courses available for you on the CEimpact website. Be sure to ask your experiential program director or residency program director if you are a subscriber so that you can access it all for free. And if you are a subscriber, don't forget to claim your CE. Thanks again for listening, and I'll see you next time on Precept2Practice.