CEimpact Podcast

Artificial Intelligence: What Does it Mean for Clinical Teaching?

December 20, 2023 CEimpact

Utter the words "Artificial Intelligence" to other preceptors and pharmacy faculty, and you might get a reaction that ranges anywhere from fear and anxiety to excitement. In this insightful discussion, we explore the fundamental question of what AI is and its diverse applications in shaping the landscape of pharmacy education. Guest Tim Aungst delves into the question of student and resident access to AI tools, shedding light on the challenges and opportunities that access brings.
 
Host
Kathy Schott, PhD
Vice President, Education & Operations
CEimpact

Guest
Timothy Dy Aungst, PharmD, RPh
Associate Professor of Pharmacy Practice
Massachusetts College of Pharmacy and Health Sciences

Get CE: CLICK HERE TO GET CPE CREDIT FOR THE COURSE!

CPE Information
 
Learning Objectives
At the end of this course, preceptors will be able to:
1. List a primary advantage of using artificial intelligence (AI).
2. Discuss ways in which access to AI should inform teaching strategies.

0.05 CEU/0.5 Hr
UAN: 0107-0000-23-402-H99-P
Initial release date: 12/20/2023
Expiration date: 12/20/2024
Additional CPE details can be found here.

Dr. Schott has no relevant financial relationships with ineligible companies to disclose.

Dr. Aungst reports he is a consultant for Otsuka and an advisor for Digital Therapeutics Alliance. All relevant financial relationships have been mitigated.

This program has been:
Approved by the Minnesota Board of Pharmacy as education for Minnesota pharmacy preceptors.

Reviewed by the Texas Consortium on Experiential Programs and has been designated as preceptor education and training for Texas preceptors.

Follow CEimpact on Social Media:
LinkedIn
Instagram

Speaker 1:

Welcome to Preceptor Practice, CEimpact's podcast created specifically for pharmacy preceptors. Each month, we cover a topic focused on helping you connect to resources and ideas that can help you improve your precepting practice, become a more effective teacher and mentor, and balance the busyness that comes with these additional but important responsibilities. I'm super excited about the topic today. Utter the words "artificial intelligence," or AI, to other preceptors and pharmacy faculty, and you might get a reaction that ranges anywhere from fear and anxiety to excitement and enthusiasm. In this insightful discussion, we explore the fundamental question of what AI is and its diverse applications in shaping the landscape of pharmacy education. Guest Tim Aungst delves into the question of student and resident access to AI tools, shedding light on the challenges and opportunities that access brings.

Speaker 1:

Let's listen in. Well, welcome, Tim. It's really great to have you here. I appreciate you taking time out to talk with me about a topic that is kind of a hot one lately. Could you start by sharing a little bit about yourself, and also talk with me a little bit about what got you interested in AI, and why that has been a focus for you recently as a pharmacy faculty member?

Speaker 2:

Thank you for having me here. Honestly, my background is in an area called digital health, focused on pharmacy technologies. For over the past decade, I've been focused on the role of technology and its implications for patient care, and how that touches on education. I'll give you an example, from circa 2015.

Speaker 2:

I remember being brought into a place and shown a product. It was me and a few other faculty pharmacists, and we were challenged to ask a device medical questions, drug-related questions. So we started asking things like: what's the dosing for this, what are the side effects for this, what are the interactions for this? And it was verbalizing answers back to us that were fairly correct but still had some problems. This was 2015, and I remember walking out of the room wondering who was making this thing. I had some guesses, and I don't know if they were right or wrong; I've talked to other people with some thoughts. But we saw this early iteration of basically AI-generated content that was mining our medical literature to produce those responses. So for me, AI and this concept of large language models and AI-generated content is not necessarily new. I've seen it for a long time, and I've had time to reflect on it and see it in other industries and in healthcare. So when ChatGPT from OpenAI rolled out its version, now a year ago, and took the world by storm, in my head I was like: this is the time. The cost has come down significantly, so they can offer the service for free, and society's ability to understand what it can do with this, and the implications, has come a long way.

Speaker 2:

So my interest in technology goes back a long time, and AI has always been a part of it. It's always been there, powering a lot of these things in the background. The issue has always been access: the majority of people never had to directly create something or query it. They were given a tool, they plugged in some information, got a response out, and that was it. We never had to deal with the question: how does this thing really, truly work?

Speaker 2:

So watching my colleagues, watching healthcare, watching other pharmacists deal with this and ask what the outcome is going to be has been quite fascinating for me, and quite challenging, because a lot of the questions that have been floating around in the background for a long time have come to fruition. Honestly, part of me thinks we knew the capabilities of AI; we just didn't know when it was going to be something we had to deal with. The timeframe was always framed as: this will be a 2050 problem, not a 2024 problem. But it is a problem now, and now we have to deal with it, sooner than people expected.

Speaker 1:

Yeah, I think that's really true. It was always something out there in the future that was maybe going to happen in my children's lifetime, for example. Well, full disclosure: I used AI to create a rough outline for this podcast. I don't think I told you that when we were preparing earlier. And I'll share a little bit more about how we're using it here as a company occasionally. But you know, we throw around the AI term, and we all have sort of an idea of what that is and what that means. Could you put a little bit of a framework around what AI is, for the average person?

Speaker 2:

Yeah. When we look at AI, I think the basic framing is that AI is trying to replicate, to a certain degree, human decision making and human thought processes, and to take them over to a point that we're comfortable with. I always look at it like this: time is the one commodity in life that you can't bank. I can bank money; we banked toilet paper during the pandemic, we went ham on everything. But look at where we actually give time away now. You can have an Uber for transportation, DoorDash brings you food, Instacart shops for you, and you can shop on your phone. Odds are, if you're listening to this podcast, you're on your smartphone right now doing something, booking appointments. So we've taken on the capability of technology to reduce the amount of time these things take. Going further: what about a self-driving car? Would you want that, to not have to think during a commute, to sit back and pick an app? Why do you have a smart robot in your house that does the vacuuming for you? Because you don't want to spend the time. And a lot of that stuff runs on simple AI platforms, things that map your house so the robot doesn't run over your rug or hit something, or that determine something shouldn't be touched. It's all in the nature of saving time. So the reason I bring this back up: why do we build automation, why do we build robotics, why do we make this technology? To save time in the workspace.

Speaker 2:

You mentioned, for instance, that you used AI to help come up with this. Instead of thinking for an hour about what your learning objectives are, what title you want, what your session description is going to be, what your questions are, that was time. Is that something you like spending time on?

Speaker 2:

I always start with this, because the question of why we would want to make something always comes back to the same answer: we make things to save time at the end of the day. So now we have to figure out how to make something that copies what I would do, and that means it has to learn. And if it has to learn, it has to learn from humans and our capabilities. How do we think? How do we understand things? This then goes into how you make AI, to a certain extent. There are levels of algorithms you can build. You can have self-learning tools that develop and say, hey, I came up with this decision, is that what you think is right? It has to take our feedback to a certain extent, and we say yes, you did a great job, or no, don't ever do that again.

Speaker 2:

And it starts determining what is right and what is wrong. It's almost like having a child: you train it, don't do this, do this instead, and it doesn't know why at first; the why comes later. But as it gets older, it starts exploring literature and other things on its own, and it may push back against what you want.

And I think that's the really hard part right now with generative AI. We not only trained something to take information and understand how we think, we gave it access to the sum of human knowledge, the same knowledge we often hold in the palm of our hand on a smartphone, and it outperforms even us in terms of processing speed. So at the end of the day, AI is basically something that mimics human capability, following certain algorithms, deep neural networks, et cetera, powered essentially by mathematics, but with, let's say, guardrails in terms of what it can or cannot do. If you're into mathematics, a lot of it is just computer science, understanding those kinds of operations. Then you go further into the data side, data analytics and data science, where you feed and shape something to follow a logic pattern that makes sense. And we see different developments from that. Now we have things like computer vision.

Speaker 2:

So in pharmacy: can we make something that can look at a tablet or pill and say, it's this medication? That feature is now out in the field, where you can take a box of medication, ask a generative AI, hey, how do I take this medication, and it's going to say it's this drug, pull up the package insert, and give you the information from the PI on how to take it. That capability exists right now. And as it has successes and failures, it'll learn how to get better and better, based on feedback about what it should accomplish. So that's my basic 101 discussion of what AI is, without getting too deep into the weeds.

Speaker 1:

That's really helpful. I love your framing of the ultimate goal being to save time. Another example: I used ChatGPT yesterday to write a needs assessment for a project. It would have taken me probably four hours, and it took me about 10 minutes. And then you're still obligated to make sure that it's accurate. Are the studies real? Are you pulling the most recent literature? Who can I share this with? Who's an expert in the field to verify? All those things. But I didn't spend that four hours doing a lit search and writing things from scratch. So there are some really great places for it.

Speaker 1:

And I think the other way we're using it is just to generate some creative thinking. If I'm having writer's block on a title, or just want a good catchy description for a course we really want to hook people with, it can be a really great resource for some of that too. So talk a little bit, Tim, about what you're seeing in pharmacy education. While we know this concept has opportunities, I think folks' first reaction is to be frightened by it, and being in the education space brings a whole other angle around academic honesty and those kinds of things. So chat a little bit about some of the things you're seeing, and some of the conversations you and other faculty are having around this.

Speaker 2:

So I've been giving talks across the country and having online chats with a lot of people about this, and I would say there are some themes I've picked up, though I can't quite pin down who or what generates them. The one thing that stands out to me is a feeling that AI is a danger because it reduces people's ability to have critical thought processes. That's probably one of the biggest barriers to embracing AI: people saying this will reduce the need for people to actually think. I don't think that's a bad way of looking at it, but I think it also raises the question of how we should be teaching as things change. Let me take a step back. We've gone through many industrial revolutions as a society, and the thing that always comes back to me is: what is the end product that is desired? Take the original one, making cloth. You had groups that spent centuries using manual labor to make cloth that you would then turn into clothing, textiles, et cetera. Then an automated loom comes out, and suddenly the work of 10 people can be done by one machine in half the time. And I come back to that time thing. Does it still feel valuable to favor 10 workers, who can get sick and have human foibles, over a machine that can do the same things, when at the end of the day all you desired was the end product, the textile? What ended up happening was that a movement rose up around a leader, the Luddite movement, and they went out and started smashing the machines, and the British government was called in and put down the riots. So the government intervened and shut it down.

Speaker 2:

And we can look back and say, yeah, that kind of makes sense: it dropped the price of clothing, dropped all these other costs, and freed up the labor force. But there were people whose livelihoods were involved, and I step back and ask: does the ability to make cloth by hand, to do these things, still have value? Time moves on, technology changes, and we can't shut Pandora's box anymore. AI is here to stay and will be integrated into workforces in every single industry. So I take a step back and say: I don't want to bar our students from having knowledge about this stuff, because quite frankly, if we're hoping they're going to be in the workforce for the next 30 years, they need at least some level of knowledge, from education or utilization, to keep pace with their colleagues. And beyond current students, we're probably going to go through a great retraining at some level, where pharmacists who have been working for 10 to 20 years, and still have another 10 to 20 years left in the workforce, will have to be trained to keep up with the changes that are going on, whether through CE or new levels of training. We've gone through these transition periods in the past.

Speaker 2:

So the reason I gave you this long an answer is that it comes back to a philosophical issue. You're concerned about losing critical thinking, but no one really asks us to do math by hand anymore when we have calculators available. I don't think you want to go back to using a slide rule to do calculus; you have a TI-89 or an 83 that can handle the same amount of work, and at the end of the day, all you care about is the answer. This is where I draw the line, though, because I don't think abandoning how we teach makes a lot of sense. Right now, you and I can use ChatGPT, ask it something, and it gives us an output. When you were using it and it gave you some titles, you could look and say: that title makes sense, those objectives make sense, the literature it's pulling from makes sense. And the reason you can say it makes sense is that you have that breadth of experience and knowledge; you've gone through all this stuff. Now consider a nascent person in their first year of education, without any experience, using a generative AI tool and getting a response.

Speaker 2:

They may not have the experience to say whether that makes sense or not. And I think that's going to be the defining factor. Those of us who have been around for a while can take a medical question, look at the answer, and say, yeah, that makes sense. Why? Because we draw from that experience.

These students may trust the tool but have no capability of fact checking. They don't know how to fact check, how to rationalize and say, hey, this looks like it makes sense.

And then they aren't saving any time, because they're going to have to go back and check the work no matter what; they're still going to have to double-check a lot of this stuff. So we have to teach students to still have the basic scientific knowledge and principles, but also how to apply them, whether with generative AI or the other things AI can do, so that they can do the fact checking and quality assurance we need at the end of the day, because a human still has to make the final decision, from a regulatory standpoint or anywhere else in the workforce. So by and large, education-wise, I think we need students exposed to this stuff, but I wouldn't say it's going to remove critical thinking, because they still have to have the critical logic to apply it and ask: does this make sense at the end of the day?

Speaker 1:

Right. As you're talking, a couple of things come to mind for me. My husband is a K-12 educator, now an administrator, and this will tell you how long we've both been in our respective careers: at the time we were bringing laptops into the classroom, the conversation was, will they just be able to Google it? And my husband, Tim, would say: why are we teaching things they can Google? Let's teach them how to think about the things they're Googling.

Speaker 1:

And we're not that far off the mark in the way you're describing this. It's not that we won't continue to teach critical thinking; it's that the context in which we're teaching it is changing, and we need to adapt to that. It's like the example I gave of using AI to write a needs assessment: it's still my responsibility to know that what I'm looking at is accurate and in line with the most recent best practice and science, and to do the work to ensure that. I've been having similar conversations with other CE providers about how we manage AI in the context of CE development. At the end of the day, the author of the CE is responsible for making sure that what they're putting out there is accurate, appropriate, and aligned with evidence. So it doesn't take away those responsibilities; it's just another tool in the toolbox that is there for our use.

Speaker 1:

When I talked to one of your colleagues, it sounded like assessment strategies are changing, or may need to change as well, and maybe this is more in the context of experiential education. How do we know what our learners are learning, those kinds of things? But before we get too far down into that, talk a little bit about what AI means for preceptors who are teaching students or residents. How does this play out in an experiential setting?

Speaker 2:

I think the biggest question that will pop up is this. Say you ask a student a drug information question, and the expectation is that they will go to some kind of tertiary or secondary resource, whether it's Lexicomp, Clinical Pharmacology, or Micromedex, scroll through it, find the information, apply it to the case: how much does the patient weigh, is anything else going on, what's the kidney function, dose correctly, and then deliver you the answer. That's been your modus operandi for many years. Then a student pulls out their smartphone, just types in the question, and it gives them the answer, and it's correct. How does that change your expectation for the outcome? You asked for an outcome, they delivered the outcome, but is it going to annoy you how that outcome was actually delivered?

Speaker 2:

And I think that's going to be the biggest thing; we come back to the question of logic and thought processes. The tool gave them the answer, and it's right. You may say, well, that's what was expected. But the student got the answer without doing the work themselves. So you can pivot and ask: how do you know you can trust that answer? You, having had the experience, might trust it. But the student? So I would look at these as opportunities. If you allow them to use these tools, walk them through it and, again, identify the limitations we have to take on. And I don't think this is any different from what we actually do in clinical practice.

Speaker 2:

We have clinical decision support tools that have come to market. Look at our electronic health records, look at our pharmacy management systems: they already provide drug interaction checkers and the like. So it's not as if these tools popped out of nowhere, and depending on when you entered the workforce, it may have taken you a while to actually trust them. The key thing is that those tools are built into systems that went through regulatory processes. What we see with ChatGPT, Bard, and other large language models and generative AI is that they have not.

Speaker 2:

Now, that will probably happen at some point, and once we hit that point, does this become something we no longer have to worry about? And I'll go further: as these tools exist to help make our work go faster, we may have to adjust how much we put on a student, whether to take on more work or to leverage these tools effectively at the end of the day. I will take a step back and say this: my concern with technology has always been that as you get better and better at doing things faster and faster, you put more work on the person. You and I can relate to this. Someone says, hey, you're using AI and you managed to pump out in 10 minutes what used to take you four hours; well, let's just give you more projects. I'm not going to increase your pay, right? We think about time, but if you are more effective with your time because of the technology, does that mean you take on more work? So one barrier I do raise is: let's take a step back and acknowledge that this may save time, but does that mean we will put more work on a person, on a student? We have to really think about the capabilities of leveraging these tools. As a preceptor, we make newsletters, we do drug information pieces, and I tell students: try it and let's see what it does. Let's also take the time to fact check it, because this is part of your learning process: what did it get right, what did it get wrong, and then you come out at a different level. And again, whether or not you want to use those tools is fine.

Speaker 2:

Using those tools effectively is, I think, the biggest question. And it comes back to what you were talking about with your husband. The question is: if they have this, why do we still do the same things through these old processes? And when I look at those situations, the question that comes up again is: does this have value for them?

Speaker 2:

I think historical knowledge, where this came from, how the process works, and who puts the information out there, has value. And now think about data: people want to Google everything online and take the first hit as their information. Is that correct? I think not. So we take a step back and say: you have access to all this information, but how do you know whether that information is correct? What are the limitations? Do you want to use Wikipedia, or some kind of primary source? This came from a blog article; how do you determine if it's accurate or not? We have access to more information than ever before, but how do we assess whether it's good or bad? That, I think, is what we can spend more time thinking about.

Speaker 1:

Let's talk a little bit more about the critical thinking concept that we touched on earlier. In the context of a residency environment or experiential learning for a student, what are some strategies that preceptors can use to truly assess student learning, or even assess where learners are in their journey? Because, as in the example you gave, a student or resident comes up with an answer, and the question is whether or not they trusted it, and I realize that's a question that could be asked: how do you trust that answer? But what are some other strategies and things that preceptors can keep in mind in that critical thinking space?

Speaker 2:

I think for me these days, the key thing is to ask questions that the student or learner can tackle at the level where they're at, and to ask how they got to that answer. If they can provide a rational, logical approach to how they got to a point, then you can judge whether or not it makes sense. And this goes back to two things. The one thing that always concerns me as a preceptor is when a student gives me the right answer, and then, as I ask more questions, I find out they got the right answer the wrong way. You got the output you wanted, but then you find out the "how" was due to a gap in knowledge, or they just made something up. So we can't just ask questions whose answers are easily derived these days. We have to put students in situations with the tools and resources they will actually have access to at that point in time, ones that are reasonable, and watch how they work. If they have a patient case to work up, they should go to the primary literature, and really to guidelines and best practices, pulling secondary literature as needed to support it, and we watch how they take the information, condense it down, and then answer the question.

Speaker 2:

I've come across situations where students may have used a generative AI to produce an answer, and when I ask questions about their thought process, or what they think is going on, they can't go any further. What you notice is that they took something, but they themselves don't understand what's underneath it.

Speaker 2:

And how you as a preceptor respond to that is going to determine, I think, how you move forward. You could attack that student for doing it, or it could be a learning moment: hey, you're probably not at the level where you should be using these types of things. You don't understand the foundational sciences or principles yet; you are getting something that you don't even understand, and that's a danger, right? Why would you want to use something you don't understand? There are many things in society we treat this way: you wouldn't want someone operating a motor vehicle just after watching a YouTube video or playing a video game about how to do it, right?

Speaker 2:

You would want them to go through some kind of training procedure: practicing, doing, and showing that they understand the principles, and then green-lighting them to do these types of things. So for preceptors, I think assessments and assignments that require someone to show a thought process are much more worth focusing time on these days than just asking for simple information recall. If you used to spend 50% of your time on information recall, well, with the capability of current technology, drop it down to, say, 20%, and make sure they can do it right, but use the remaining time to focus more on that logical context, because they have more information to back themselves up, and on how we actually do more of this stuff.

Speaker 1:

Yeah, which is honestly really not a shift from where we should be spending our time anyway. You described it as the how and the why, which aligns with things like the Socratic method and other strategies where you're really getting at "how do you know what you know?" versus just "what's the right answer?" So that's not rocket science, but I think we still have to reframe it a little bit in this new world.

Speaker 2:

I think the main thing we always struggle with in history has been we've always relied on our elders to be the experts because they have time and experience you can fall back onto.

Speaker 2:

I mean, like you know, we go back centuries and someone's asking why is the sky blue?

Speaker 2:

And someone says you know it's because of X.

Speaker 2:

Well, how are you going to fact check them? You probably just trust them, right? And I think this is where it gets really tricky for us right now, where a lot of educators and preceptors are getting hit: there's this tool out there that they don't know how to use, and students don't know how to use, so we're all playing around trying to figure out best practices right now. No one's the expert in the room, and because no one's the expert in the room, that trust has diminished substantially. It throws people off, and that's tough. I feel for people in that position, because I feel it every day too. I have to accept that I have students who know how to use this stuff better than I do, and I run with them. That's great, but let's focus on how we actually use it, what makes sense, as you get further in your education, and pull this together.

Speaker 1:

Yeah, do we throw out journal clubs and written assignments and presentations?

Speaker 2:

So these are gonna be the tough ones, to be quite honest, yeah.

Speaker 2:

There are some assignments where I feel the value has diminished substantially. Written reflections, for example, have lost an incredible amount of value. If you ask someone in the moment, "tell me how you honestly feel," that still puts them on the spot to work on soft skills, to articulate how they feel, communicate, and reflect. I think that's good, though it may not suit all learners; some obviously can't answer effectively on the spot, and that's fine. The big thing is, I don't think we can ask people to go back and write these anymore, because it doesn't take much. "Tell me about a time that blah, blah, blah, where you experienced such-and-such, and write me a 250- or 500-word reflection in a certain style." It can be done; it's highly feasible. Journal clubs, what I'll say is that off-the-shelf products, the current large language models, aren't really good at them yet. What they are good at brings me back to what I'll call the ethical conundrum we actually face: there are paid programs out there that can do some of this work. There's ChatGPT-4 versus the 3.5 that most people have access to, and 4 is like a grad student versus 3.5, which is like a college student. This is my conundrum: if some students are buying tools that put them at a higher capability than my students who do not, that's actually a big concern for me right now. Is the playing field even for everyone? I don't want to make assignments harder because I think some people have access to one tool, and then make it harder for the person who does not have that tool. At the end of the day, I don't think that's right. But anyway, coming back to it: journal clubs, by and large, I am not too worried about right now.

Speaker 2:

I think where you can raise the standard is that you can expect someone to know the background, or at least query the background and figure out more. This is actually where I've had success: I've told some students, if you don't have an answer or you feel like you need to learn more, try asking ChatGPT to instruct you on those details. You could go on YouTube and look these things up, but you could also ask ChatGPT, "explain to me what this means and give me another resource to understand it."

Speaker 2:

And I think what you're going to see is new APIs and platforms coming out. I've seen some educators doing this already: they take their notes and the materials they created for the classroom and load them into ChatGPT or other tools, so students can go through and say, "hey, create 15 sample questions for me to practice," or "my lecture talked about X, can you give me more description on it?" Basically, the LLM will go back, look at the lecture notes, pull the information out, and give the student more tutoring.

Speaker 1:

Then for that.

Speaker 2:

So we have those new opportunities there. But when I look at it, again, if that's where things are moving, we can tell students to focus right now on the things current models are really good at. Presentations, I think, are an interesting one. I imagine we'll hit a point within a few years where I can go into PowerPoint and say, "create a 15-slide deck on X," and it will spit something out. Coming back to it, it's still going to be on us to finesse it, make it look good, and make sure it fits our target audience and has the right information.

Speaker 2:

I think that's going to be the default in society at some level. So knowing the limits of what's going to come out is going to be the key thing. Ultimately, what I would challenge any educator or preceptor with is this: if you have an assignment you are concerned about, test it with the current tools. If you throw it into the system, prompt it well, and it gives a garbage answer, you can say to yourself, well, this assignment is probably fine to keep using. But if you put your prompt in and it gives you something that easily reaches the goal, you should probably take a step back and ask whether it's still worth the time.

Speaker 1:

Yeah, that's a great point, and it takes me back to my husband's saying: if they can Google it, why are we teaching it? I hadn't thought about testing assignments that way at all, but that's really good advice. I also hadn't thought about the potential for disparity among learners based on what they can afford to purchase for themselves, so that was a really interesting point, Tim. I hadn't thought that through.

Speaker 1:

As we wrap up, I think so many topics in precepting come back to a couple of concepts, and one is transparency and creating clear expectations. If I'm taking anything away from our conversation today, it's a couple of things. One, don't pretend that AI isn't out there or that it's not going to be used. Be transparent about it, and try to guide learners toward appropriate use while making sure they understand their own responsibility for whatever comes out of that query. I'm also hearing you talk about rethinking how we assess learning, and maybe this is perfect for experiential learning: more one-on-one conversation, assessing through discussion to determine what a student or resident actually knows and understands versus their ability to regurgitate an answer. At the end of the day, it may actually push us toward learning that is a little more robust in some ways.

Speaker 2:

It has to be a focus. If I look at the history of higher education, going back to the foundation of the early universities, you had monks or people sponsored by the church, and basically a lecturer on a stage reading from a book that the students copied down because they couldn't afford their own books. It was just regurgitation of information. As we got better, with printing, mass production of information, and broader access, the issue that has kept popping up throughout history, even with access to data, has been interpreting data and using it. That has been the problem.

Speaker 2:

Then came the rise of the internet and smartphones. Now everyone has information; how they apply and use it is the issue. The new generation of AI and large language models is going to push that even further. You have to expect that they may be able to generate things from information they already have, similar to what a human does, and that's what we want. That's why we're making these things, to be quite honest. But where the ceiling is for them is, I think, still far out.

Speaker 2:

I don't see it replacing a human being for a lot of things in society, because at the end of the day, the question from regulators and controls will be: do we want an AI response on this, or do we want a human? If we say we want a human creating that value at the end of the day, then I think that's going to keep us safe. But then it calls into question those higher cognitive functions we have to perform, and that's what the focus will need to be on. So again, I take a step back.

Speaker 2:

What do you want to focus on as an educator? Do you want to focus on just recall, or on thought process: taking information, using it, and breaking it down? I think that's going to be the focus. Really strengthening logic and the ability to handle data is going to be key. There's going to be a lot of data, and I think large language models and other tools can really help with that. Using those tools is going to be the expectation of a job and a role, and what keeps a competitive edge. One thing I hear people say: it's not that AI is going to replace people; it's people who don't use AI who are going to be replaced.

Speaker 1:

Yeah, that's a great point. I love that connection back to the workforce too, because what we're trying to do is create professionals who are successful and effective in their roles and in society. That is a great way to wrap up. This has been such a great conversation, and it went in directions I didn't expect, so I learned a ton and I'm super grateful for the time. Tim, any last pearls or words of advice for listeners?

Speaker 2:

The one thing I'll say is that the best thing you can do for students these days is look at where you think health care is going. Where do you think your roles are going? Take your own job, what you're functioning as, and take an outward view: what's going to change in five years, ten years? Because if you're focused on one thing, and that thing could be automated, bypassed, or built upon by AI, I think the best thing we can do for our current learners is acknowledge that, explain it to them, and help them understand what they should focus on going forward. I don't think there's a point in doing some old-fashioned work anymore that really won't be around much longer, and if I have to pick out one example...

Speaker 2:

To be quite honest, take dosing of medications. At this point, a lot of health systems are doing vancomycin dosing through software, because we have Bayesian math and everything else associated with it to do the calculations. We're going to have AI-driven dose orders or renal dosing at some point; that will probably be the default in a decade. That being the case, is our job then just to check that work, or how do we adapt? I think for us as preceptors, our job is not only to tell people how to practice now, but how practice may change, so students have viability in their careers and room to grow. Our focus should be on the skills they will need, because they're going to be in the workforce much longer than us. That's the one thing I would challenge preceptors to really consider, to help these students keep a pulse on what's going on.

Speaker 1:

Yeah, to stay forward-thinking. Well, thank you again. This was a great conversation, and I know that people will appreciate it. Certainly a hot topic these days.

Speaker 2:

No, thank you for having me. I really appreciate it.

Speaker 1:

Yeah, take care. This discussion went in many directions I did not anticipate, and I came away with lots of new takeaways. At the end of the day, when we talk about successfully acknowledging and embracing the use of AI in our teaching practice, we're really talking about strategies that should already be the norm. Setting clear expectations, in this case for the use of AI tools, and being transparent about how and when AI should be used, is no different than setting expectations for other aspects of a rotation. Getting to the why versus the what. Using discussions to understand what each learner knows and how they know it. Evaluating current assignments and projects: are they really bringing value and supporting learning, or are we just doing things a certain way because that's how we've always structured the rotation?

Speaker 1:

If you have other strategies that have worked well in your practice, reach out to me at kathy@ceimpact.com. I would love to hear from you. Dr. Aungst highlighted the importance of promoting critical thinking skills in the context of AI use. We've posted some additional resources in the show notes to help you move the focus of your teaching from the what to the why. And, as always, be sure to check out the full library of Preceptor by Design courses available for preceptors on the CEimpact website. Ask your experiential program director or residency program director if you are a member, so that you can access it all for free. Thanks again for listening, and I'll see you next time on Precept to Practice.

Exploring Artificial Intelligence in Pharmacy Education
AI Impact on Education and Thinking
Technology and Critical Thinking in Education
Evolving Educational Strategies and Challenges
Embracing AI in Education
Promoting Critical Thinking in AI Education