[00:00:08] Speaker A: You're listening to Faith in Healthcare, the CMDA Matters podcast. Here's your host, Dr. Mike Chupp.
[00:00:19] Speaker B: Welcome, friends, to Faith in Healthcare.
Artificial intelligence, or what I read recently from the AMA called augmented intelligence, is rapidly reshaping our world and modern medicine right along with it. From diagnostics and clinical decision support to documentation and patient interaction, these technologies are quickly becoming part of everyday clinical practice.
But as AI becomes more embedded in healthcare, the questions it raises are no longer just technical. They're deeply human and ethical.
Well, in today's episode I'm joined by Dr. Rosalind Picard, and she goes by Roz. She's a pioneer in artificial intelligence and human-centered technology.
Together we're going to talk about the promises as well as the limits of AI within healthcare. We're also going to talk about the risks of bias and over-reliance, and why we as Christian healthcare professionals need to think carefully about conscience and responsibility in an age of accelerating technology.
Dr. Picard also shares her journey of faith, including, I'm going to tell you, a remarkable birth story.
Many of you are familiar with former college and NFL star Tim Tebow's story. Well, wait till you hear Professor Picard's story.
You're going to hear from Dr. Roz Picard, a thoughtful vision for how we as Christian healthcare professionals can help guide the ethical future of medicine in the age of artificial intelligence.
So let's dive in.
Well, today on Faith in Healthcare, we are honored to welcome Dr. Rosalind Picard, who's the founder and director of the Affective Computing Research Group at MIT.
And she is a pioneer in the field of artificial intelligence and human-centered technology.
Her groundbreaking work has helped shape how machines interpret human emotion, engage human experience, and assist in areas ranging from mental health to clinical decision support.
She's not only a leading voice in AI innovation, but also someone who has thoughtfully reflected on the moral and philosophical implications of technology.
And so, as artificial intelligence is rapidly reshaping medicine, touching diagnostics, workflow, patient monitoring and even elements of clinical judgment, the questions before us are no longer merely technical, they're profoundly human.
So without any further ado, and we're going to have a CV and all this in our show notes today for Professor Picard, thank you for joining me today on Faith in Healthcare.
[00:03:10] Speaker C: Thanks. It's such a pleasure to be here with you.
[00:03:12] Speaker B: Before we launch into a fascinating discussion, and I can't wait to hear what you've got to share with us today about AI and the future of healthcare, I just wanted to read a brief quote of yours from a 2019 article in Christianity Today, which was my introduction to you. You wrote, "I once thought I was too smart to believe in God. Now I know I was an arrogant fool who snubbed the greatest mind in the cosmos, the author of all science, mathematics, art and everything else there is to know." Well, Dr. Picard, how do you explain that remarkable transformation in your thinking and perspective for our listeners today?
[00:03:52] Speaker C: What a quote to start with.
Actually, it started with Dr. Bob Nelson, who was my dentist. I was a young teen.
I thought he and his wife Corey and their kids were like the coolest people in the neighborhood. I was their babysitter and I was a proud atheist.
And week after week I would hang out with them and do things. And then one week they invited me to go to church with them, which shocked me because I thought, wow, they're such cool people. Why would they go to church? That was not congruent with my image of them.
And come that weekend I did not want to go to church. So I faked a stomach ache to get out of going to church.
The following week they did the same thing inviting me to church. I tried to fake a second stomach ache.
It doesn't work to keep faking stomach aches or illness to a doctor.
Finally they were onto me and they said, well, the most important thing is not whether or not you go to church, but it's what you believe. And they asked if I'd read the Bible.
And I thought myself a smart person, and the Bible is the best-selling book of all time, and no, I had never read it. So they suggested I start by reading Proverbs, one chapter a day, for a month.
To my shock, when I started reading Proverbs, it was the exact opposite of what I expected the Bible to be.
I expected it to be full of fictitious gobbledygook, you know, kind of made-up beings floating around, silliness, at least in my mind. And as I read Proverbs I realized there was incredible wisdom and a lot for me to learn.
So that start got me actually interested in reading the whole Bible, quietly, for quite a while. I still refused to go to church for quite a long time. But that began a journey of studying, studying not only the Bible but other religions too, because I didn't want to become a believer. I thought maybe I would just be a product of my culture or something if I started to believe in God or Christianity.
So I visited mosques and temples and lots of other places, and many years later I started to believe. Well, actually, during that process, I started believing God. While I was reading the Bible, I felt a power speaking to me. Not the kind of voices in your head you've got to see a neurologist about, but something that was hard to explain, and a sense of presence. And then much later I actually did show up at church. I wanted to ask questions of the preacher, and the person who invited me was like, no, put your arm down, we don't raise our hand in the middle of the sermon. And gradually I got the most important questions answered and decided to become a Christian.
[00:06:36] Speaker B: Dr. Picard, thank you for taking the time to share that testimony, because over these past several years many in our listening audience have given me feedback that they really love to hear this: you're not only a smart person with a great intellect, but you're in communion with the ultimate intellect in the world. So thank you.
Really important.
So let's jump in. Affective AI. Honestly, I'd not even heard the terminology before preparing to talk to you today.
And for our listeners, I'm sure you believe it's effective as well. But affective AI. Our Hippocratic tradition dates back 2,400 years, with the Christian version coming in the second century, and it frames modern medicine as a sacred covenant, a moral promise between us as physicians and our patients. And obviously, trust and responsibility are at the core.
Does AI risk, in your view, shifting medicine from that covenantal care to more of a transactional efficiency? And if so, if there's a risk, how do we guard against that?
[00:07:42] Speaker C: I think there has already been a risk with technology of people, you know, kind of trying to instrumentalize a lot of what goes on when a patient comes to see a doctor.
So I think AI is accelerating that in some places.
And the changes are ones that I think it's important doctors get informed about, and not just trust what they're being told, because there are a lot of false claims in the media, even from scientists who are, you know, being sloppy. And usually I think it's associated with them being investors in the companies that are doing this.
And by the way, full disclosure, I'm also a co-founder of two companies that have done AI, and one of them sells medical devices. The first one is Affectiva, now owned by Smart Eye, and the second is Empatica, Italian for empathetic. Okay. So a problem is that with these inflated claims, there's a tendency to just hear, hey, it's 80% accurate here or 95% accurate there, and not actually understand the nature of the errors: 95% of the time it might do something right, and the 5% of the time when it does something wrong, the cost of those errors may actually be a million times higher than the cost of other errors. So it's super important to understand the nature of where it does and doesn't work and to look very critically at where it's entering medicine. One MIT professor I work with who specializes in healthcare technology and AI said that in a big study he was doing with doctors, 40% of the AI was wrong, giving them wrong answers, but they don't know which 40%.
And it reminds me of the old advertising quip, you know: half of the advertising doesn't work, but we don't know which half.
And that's one thing if you're just sending out an ad. But when you're a physician and your time is limited and you want to spend your time most preciously with the most important needs, it's really annoying to have to suddenly not know which 40% is wrong and to become somebody who has to debug that. So I think it's really important that doctors get educated about the AI and don't just listen to, oh, you know, the success rates are really high, therefore you need to adopt this.
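A minimal sketch in Python of that point about error costs, added for illustration only; the accuracy figure and the relative cost of an error are hypothetical numbers, not data from this conversation:

```python
# Illustrative only: hypothetical numbers showing why a "95% accurate" headline
# says little until you weigh how costly the remaining 5% of errors are.

def expected_cost(accuracy: float, cost_correct: float, cost_error: float) -> float:
    """Expected cost per decision, given an accuracy and a cost for each outcome."""
    return accuracy * cost_correct + (1.0 - accuracy) * cost_error

# A tool that is right 95% of the time, but whose errors are a million times
# more costly than the overhead of a correct output:
per_decision = expected_cost(accuracy=0.95, cost_correct=1.0, cost_error=1_000_000.0)
print(f"Expected cost per decision: {per_decision:,.0f}")  # about 50,001: dominated by the rare errors
```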
[00:10:13] Speaker B: Well, Dr. Picard, you've been in a lot of discussions in a lot of big places about moral responsibility. So at the end of the day, if physicians are following the recommendations and yet maybe 40% of them are wrong, who's going to bear the responsibility when it doesn't work out and it's a million times more significant? Is it the physician, the institution that's mandating the use of the AI, or is it the designers of the algorithm in the first place? Where does that lie?
[00:10:40] Speaker C: Boy, you guys are on the forefront of the real issues, right? Obviously the tech companies. Well, let me just say at the end of the day that the AI doesn't ever bear the responsibility.
Right? It's not a mind, it doesn't actually know or learn or think anything. It's just a piece of technology made by humans, tuned by humans, with imperfect information coming in and imperfect people tuning it. And it can do amazing things, as we're all seeing, but it can also do incredibly idiotic things, as we're all seeing too. And in some cases the maker of the AI may be held responsible, but my guess is when it comes to medicine, they're going to push that onto the physicians and say we need medical oversight, a physician in the loop, and then it falls to your medical malpractice insurance or whatever the procedures are. My guess is the companies are going to try to push that onto the physicians, but I think it depends on the AI and the technology.
So, for example, in a company I co-founded that commercialized work from my lab, we sell devices. I'm wearing the Empatica EmbracePlus and the Embrace Mini. These are used for epilepsy, Parkinson's and lots of medical studies.
And in those cases the device actually has FDA certifications, and there are cases where Empatica would be liable, or the user might be liable, depending upon what the problem is.
I don't think the doctor is liable with our devices.
I can't think of a case where that would be. Maybe I'm mistaken, but they're always part of a system.
So these are really good questions to ask and to walk through.
What is in the consent forms and what is in the use-of-the-technology form and all that boring stuff we all hate to read and want to click accept on. But you really do need to read it and understand it.
[00:12:35] Speaker B: Dr. Picard, as I've followed AI and its development for healthcare, my assumption has been, well, at least the machines will never be able to demonstrate care for the patient like I can. And then a mutual friend of ours that we talked about before we got going here, Dr. Bill Cheshire, burst my bubble about a year and a half ago when he presented data to our board comparing physicians with AI in interacting with patients.
And I don't know if it was double blinded or how the study was set up, but at the end of the day, the patients preferred interacting with the AI over the real doctors. And I was like, oh no, oh no, this is not good.
[00:13:17] Speaker C: Was that text-based? Was that just language?
[00:13:22] Speaker B: Yeah, I believe so. But at the end of the day, the patients felt that the AI was more patient with them and spent more time with them. I think that was the heart of the issue, which of course reflects that doctors are being asked to push through their patients quickly and have time limits. But this whole idea of affective computing: can machines truly support, or potentially erode, the physician's role as a compassionate presence? And might these AI systems inadvertently diminish the formative habits of attentiveness that we're trying to help our young trainees in medical school and dental school develop so they become great doctors?
[00:14:03] Speaker C: Yeah, great question. I think what we're seeing with the language is that the AI tends to take a bit more time, if you tune it this way, to put in more empathetic language.
And most medical people, most scientists, most engineers, we're much more brief and, how can I say it, our language tends to be more dominant. You can have the AI sound a little more submissive and polite and empathetic, and people like language that matches them.
So doctors, I think, learn to kind of match their language to the patients they know. Well, the AI probably will do that a little bit more, with more patience; it has no feeling of patience or frustration, right? It will just do what you tune it to do, and there have been perceived benefits of that. However, when it comes to the in-person connecting, the sitting down, the truly listening, the truly trying to look compassionate (and I'll tell you a funny story in a moment about some work we're doing), I don't think there's any substitute for the gift of real presence from an expert human physician.
Some work I was approached to do came from Johnny Avery at Cornell. He's a physician who works a lot with people with substance use disorder, and he said that as he trains people, he observes what happens at the end of a long day, when the physician is exhausted and a patient comes in who has fallen off the wagon once again.
And it's really hard to look compassionate at that moment.
You might feel tired, you may have
[00:15:54] Speaker B: a headache post call at the hospital.
[00:15:57] Speaker C: Yeah, yeah. And he said he's learned it's so important to, you know, raise that inner brow and look compassionately at that patient. Even if the last thing you're feeling at that moment is compassion, it is therapeutically helpful to look compassionate to that patient. So he came to us and said, can you help me help my trainees learn to look compassionate, learn to raise that inner brow? I'm like, well, if they haven't had a Botox treatment and paralyzed that muscle, we can probably do that. And what we built was facial action unit reading software; many of these exist now. We built them in our lab decades ago, and now there are many commercial ones. And we gave the trainees a game where they could listen to a patient telling their story, and when the patient was saying something, that might be an appropriate time, according to the doctors, to show compassion. We're not going to tell them what the right time is, right? But we want them to start learning to think about how they appear.
Doctors are really good at listening and thinking about the diagnosis and what's therapeutic and what to do, what's the optimal treatment. But sometimes they're so busy in their head thinking about that that they forget that if you're thinking hard, you might be doing this.
And that doesn't look compassionate to the patient. So we're trying to help teach them how to raise that inner brow while listening.
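The training game she describes can be sketched roughly as follows. This is an illustrative reconstruction, not the actual MIT software: AU1, the inner brow raiser, comes from the standard Facial Action Coding System, but the data layout, threshold, and cue windows here are hypothetical.

```python
# Rough sketch of the feedback loop described above; not the lab's real software.
# AU1 ("inner brow raiser") is a standard Facial Action Coding System unit;
# the threshold and data structures are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Frame:
    t: float                                           # seconds into the patient's recorded story
    au_intensity: dict = field(default_factory=dict)   # e.g. {"AU1": 0.7} from an AU detector

def compassion_feedback(frames, cue_windows, au="AU1", threshold=0.5):
    """For each clinician-flagged moment in the story, report whether the trainee
    showed the target action unit (the inner brow raise) at any point."""
    results = []
    for start, end in cue_windows:
        shown = any(f.au_intensity.get(au, 0.0) >= threshold
                    for f in frames if start <= f.t <= end)
        results.append((start, end, shown))
    return results

# Toy usage: one cue window in which the trainee did raise the inner brow.
frames = [Frame(10.0, {"AU1": 0.1}), Frame(12.0, {"AU1": 0.8}), Frame(30.0, {"AU1": 0.2})]
for start, end, shown in compassion_feedback(frames, cue_windows=[(9.0, 15.0)]):
    print(f"{start:.0f}-{end:.0f}s:", "inner brow raised" if shown else "try raising that inner brow")
```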
[00:17:21] Speaker B: Wow. So helping human physicians do a better job of patient care while all along developing AI that will be responsive. I mean, you spend a lot of time on affective computing, but it sounds like you're also trying to help the profession, the humans within the profession, get better.
[00:17:42] Speaker C: Yeah, I don't think the AI should try to replace doctors. I think there are ways that it can come alongside and augment our opportunities for thinking of diagnoses, our opportunities for thinking of treatments, and perhaps help us practice our non-verbal skills for people who need help with that. Mind you, any AI that's helping doctors, if it achieves something good, it's because doctors helped it do that, right? It's a loop. For any success of an AI like AlphaFold, the protein-folding AI, the credit doesn't go to the AI; the credit goes to the humans who have thought about how we can use this technology to help people, how we can use this technology to advance science and medicine. And we really need to be co-designing with doctors. I wish there was even more of this, where we listened to you about which parts of your job you really wish were automated versus which parts you really enjoy, and how we rebalance that so that we're ultimately lifting up what's most important about human beings.
[00:18:54] Speaker B: Well, there's no question that documentation is onerous.
So just being able to sit down and have a conversation and allow the AI to listen in, which I understand has been out there now for the last several years in physicians' offices, to do all the documentation while you are face to face, as opposed to the early years of electronic medical records, where my face is in a laptop computer and I'm doing all this recording and not making eye contact. So clearly, I'm just amazed. I'm now using AI for all of my meetings and one-on-ones, and at the end of the meeting I upload it and ask it for the action plans and then distribute those to the members in the meeting. I don't have to scribble anymore. It's just wonderful, and I'm grateful for that. I want to segue. You've developed in your life now, since coming to Christ, a real love for Scripture. Well, Scripture tells us to defend the vulnerable, and AI systems encode some degree of bias based on the data that's fed into them. So how should Christian healthcare leaders think about algorithmic bias in light of a moral obligation to protect those at the margins and the vulnerable at both the beginning and end of life?
[00:20:10] Speaker C: Yeah, thank you for focusing on that. As you know, the AI is trained on whatever people choose to give it as input. And the biggest, most famous one is ChatGPT from OpenAI. According to Ilya Sutskever, one of the early co-founders and tech team leaders there, they vacuumed up everything on the Internet. He said that means it consists of all the smut and garbage that's on the Internet, as well as everything else they could get in digital form, including some great content, right? But it also carries with it every bias, everything one might find disgusting or vile.
The last thing you would want in your training data is in there. And then they try to sometimes fix it on the end by tuning it to remove bias.
And what we've seen in work at the MIT Media Lab on what we call causal faithfulness, and work also in EECS here at MIT led by Katie Matten, is that if you compare the bias on the outside of the algorithm with what it's actually doing on the inside, they haven't fixed it.
They will claim they fixed it on the outside, but when we actually causally manipulate things, it's not fixed on the inside. And there are lots of kinds of bias which are not fixed, and it's not fixed in medicine in many areas. For example, you might have the algorithm explain how it came to a decision, and you look at the content of the explanation and ask whether the reasoning it's giving actually matches what it used to generate that answer, what humans would associate with the actual reasons. The alignment between those is getting farther and farther apart: the more they're tuning the outside, the less it matches what it's actually doing. So I think the problem is the models themselves are broken, the training data is broken, and building on top of that dung heap, if you will, is a really bad idea for doctors and for people who just build wrappers around those models and try to use that.
So the bias is deep in there.
And I think the way to get rid of it is not just to tune on the outside.
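One way to picture the causal manipulation she mentions is a simple counterfactual probe: keep a prompt fixed, swap a single attribute, and see whether the answer changes. The sketch below is a generic illustration under that assumption, not the MIT group's method; `model` stands in for any text-in, text-out system, and the toy model is deliberately biased so the probe has something to find.

```python
# Generic counterfactual bias probe, for illustration only; not the MIT
# causal-faithfulness method. `model` is any text-in/text-out callable.

def counterfactual_probe(model, template, attribute_slot, values):
    """Fill the same prompt template with different attribute values and collect
    the model's answers. Differing answers suggest the attribute is causally
    influencing the output, even if surface-level audits look clean."""
    answers = {}
    for value in values:
        answers[value] = model(template.replace(attribute_slot, value))
    consistent = len(set(answers.values())) == 1
    return answers, consistent

# Toy usage with a stand-in "model" so the sketch runs on its own.
def toy_model(prompt):
    # Deliberately biased stand-in: its recommendation changes with the attribute.
    return "reassure and monitor" if "woman" in prompt else "refer for cardiac workup"

template = "A 55-year-old {PATIENT} reports chest tightness on exertion. Recommend the next step."
answers, consistent = counterfactual_probe(toy_model, template, "{PATIENT}", ["man", "woman"])
print(answers, "consistent:", consistent)  # the answers differ, so consistent is False
```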
[00:22:26] Speaker B: Yeah.
[00:22:30] Speaker A: Before we continue with this week's episode, here's a special announcement for you.
At cmda, we want to help Christian healthcare professionals live out their faith with both conviction and compassion.
One way we do that is through Standing Strong, a three-tiered curriculum designed for students, residents and practicing clinicians.
Standing Strong equips Christians in healthcare to remain grounded in biblical truth while navigating increasing cultural pressures with wisdom and Christlike care.
You can learn more at cmda.org/standing-strong.
If you're interested in learning more about the work Southern Baptist Theological Seminary is doing and how it aligns with the ministry efforts of CMDA, we've got some exciting news for you.
Southern Seminary is one of our silver sponsors for the upcoming 2026 CMDA National Convention and they will have an exhibit booth for you to stop by and say hello.
This year's convention is in Loveland, Colorado, on April 23 through 26.
That means we're only a few weeks away from the convention, so now is the time to register.
For more information and to secure your spot, visit natcon.cmda.org.
Let's jump right back into this week's episode.
[00:24:05] Speaker B: That's fascinating to hear, and I assume from what you've just shared with me, as one of our nation's leading experts in this whole arena, that it means institutions are not in a place where they can force any clinician, any physician or dentist, to absolutely do what the algorithm says they have to do in a given situation. I just got back from a conference, a pro-life OB-GYN conference in Washington, and clearly there are just so many voices, so much literature, pro-abortion literature, and I'm guessing that's really going to dominate algorithms for a pregnant woman who's facing an unplanned pregnancy.
So are you aware of any institutions, whether big academic institutions or private ones, that are coming up with protocols saying our caregivers are going to follow the recommendations that are spit out by the AI? Are any such mandates in place right now, Professor Picard?
[00:25:06] Speaker C: Well, I just heard from someone you and I both know that the medical system they work for is requiring them to launch a thousand new AI projects.
And I'm like, you know, I would hope for goals that are different than just launching a thousand projects. Maybe that is the right goal, but I would recommend something different. I would recommend really looking at the real pain-point problems.
Good old-fashioned technology problem solving is still the way to go, right? What are the real pain points? How do we reduce errors in documentation, in automated documentation? How do we get rid of this bias? I'm actually really glad to hear you raise that pro-life issue. It makes me think I need to tell my own birth story, where my birth mom went to see a doctor to terminate the pregnancy.
And she was a single 17-year-old in a, you know, definitely unwanted situation; she wanted to go to college and had actually just gotten into MIT. And she went to a doctor in New York who said he could terminate the pregnancy, but it was illegal back then and they could get caught, and she didn't want to get caught doing something illegal.
So here I am.
Wow.
[00:26:25] Speaker B: But I did not know that about your history. That's powerful.
[00:26:29] Speaker C: Yeah. So I think, you know, there are a lot of people like me, actually huge numbers of people. I mean, it used to be that most of us made it out the birth canal. And when I was finally able to meet her, the one thing I just wanted to say was thank you. Thank you for what you went through, for going through that pregnancy. It's incredibly hard, and a lot of women don't get the support they need to go through that. So I would hope that we would wrap our loving arms and medical support around every one of them. I just wish there was good medical care promised for every pregnant person, especially those who can't afford medical care, to help them and the new life or lives they're carrying to have a chance, because there are way more families out there who want to adopt babies than there are babies born to give them that opportunity. Wow.
[00:27:20] Speaker B: I hope you'll keep telling that story everywhere you go.
We at CMDA will appreciate that, as well as the folks I was joining in Seattle, Washington.
Well, these systems, they increase efficiency and they reduce cost and in theory, make more money for institutions and practices.
So where do you see the tension between efficiency and fidelity to individual patients? How do we prevent optimization metrics from redefining what good care means?
[00:27:51] Speaker C: I'm glad to hear you say that. I mean, I write optimization metrics for AI, and it's always a struggle because they're always trading things off. You want it to use this language, you want it to have this empathy, but you want it to be accurate, but you want it to ignore this. It's, I think, a truly impossible problem to try to reduce human care to a number. It's just not right. And one of the key things we miss when we build these AIs is the context, the history, the knowledge that a physician hopefully gains around their patient and what's going on. With my students, I see that I don't have a fixed formula or optimization for interacting with them. If they walk in and they're on top of the world, showing me some cool result, we start at a different place than when they walk in and they look pretty beat, and I want to find out first, hey, how's it going? And then I hear about a family member who's about to die or something. There's no way we could talk about the math and the science results when their mind is on this other thing, right? So every context, every situation, the knowledge, the history you have, all of that is incredibly important for giving the right care. And yet these algorithms know zero about all of that, so they can't optimize it properly. They just don't have the right insight to do it.
So I do think, you know, you and fellow medical professionals should have conversations about how to optimize care in the face of these "optimal" algorithms that are absolutely not optimal.
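Her point about reducing care to a number can be made concrete with a toy composite metric. This is purely illustrative, not a metric she uses; the sub-scores and weights are invented to show how the choice of weights quietly decides which response counts as "best."

```python
# Toy composite optimization metric; the sub-scores and weights are invented.
# The point: whoever sets the weights is quietly defining what "good care" means.

candidates = {
    # hypothetical per-response scores on three axes, each in [0, 1]
    "terse but precise": {"accuracy": 0.95, "empathy": 0.30, "brevity": 0.90},
    "warm but hedging":  {"accuracy": 0.80, "empathy": 0.90, "brevity": 0.50},
}

def composite(scores, weights):
    """Weighted sum of sub-scores; assumes non-negative weights summing to 1."""
    return sum(weights[k] * scores[k] for k in weights)

for weights in ({"accuracy": 0.7, "empathy": 0.2, "brevity": 0.1},
                {"accuracy": 0.3, "empathy": 0.6, "brevity": 0.1}):
    best = max(candidates, key=lambda name: composite(candidates[name], weights))
    print(weights, "->", best)
# The "optimal" response flips as the weights change, with no new clinical facts involved.
```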
[00:29:32] Speaker B: Well, in the past, one of my favorite profs in the UK that I've heard talking about AI and healthcare has been Professor John Wyatt. I don't know if you know Professor Wyatt, but he's spoken at a number of conferences I've attended.
And he wrote a book, The Robot Will See You Now. And one of his postulates is that medical school in the future will be compressed, because we won't need to take students through so much basic science and so forth, because the AI is going to process that. So as AI, and I'm seeing you shaking your head, as AI increasingly handles diagnostic reasoning, what do you think is going to happen to the intellectual and moral formation of young physicians, dentists, PAs and nurse practitioners? Is it possible that we're going to weaken the development of clinical wisdom and acumen?
[00:30:19] Speaker C: I'm so glad you're asking that, because we are very worried that when people rely on AI for their reasoning, they're not using their brains and they're not developing their brains. And if that is a trend, then progress in medicine will screech to a halt. It will decay. And here's what we found at MIT when students were randomized to groups: write essays using an LLM, a large language model, which happened to be ChatGPT; or write essays using a search engine, which in this case happened to be Google; or write essays the traditional way. They did this repeatedly over four months while we measured their brain activity and also measured things like, can you quote something you wrote 15 minutes ago?
It's astonishing, the significant difference in those cases, with the absolute worst brain activity, and an inability to even say something you wrote 15 minutes ago, in the group relying on the LLM.
So the LLM is the worst, followed by the search. And the best was people actively engaging in thinking and reasoning and generating things. So this is vital for the next generation who are not just on the front lines taking care of patients, but also helping inform future medicine. Right? Informing trials, informing what's working, what's not working, feedback on where improvements are needed.
How are we going to do that if we're not training people to engage their brains?
[00:31:54] Speaker B: And of course, you've got this tension that in many spheres we don't have enough of the professionals out there caring for patients.
And so there will potentially be this incredible push, this pressure, as I mentioned, to shorten training and get people out of the factory faster so they're out there taking care of patients. So there is going to be this tension to shorten training, as Professor Wyatt has said. And then there are concerns about institutional mandates,
Professor Picard, to use AI tools; I mentioned some of those mandates earlier. And one of our big desires at CMDA is to pass as much legislation as we can in every state to protect the conscience, the moral reasoning and the conscientious care of patients. So how can we preserve all of that if AI systems recommend a course of action, as I mentioned earlier, that we just don't believe is in the patient's best interest?
[00:32:51] Speaker C: Thank you for banding together and speaking up. Everybody I know respects physicians and the training you've had. And let's not lose that by selling out your brains and souls to AI that has no conscience.
It has no conscience, right? It may do what is right in a situation it was trained on, and the doctors who gave input and said that's the right thing to do may be happy in that situation. But again, it doesn't read the context. It doesn't know that for that patient this would not be the right thing to do. And you expert humans know so much more than it knows.
So I think you have to join together, have conversations about where you need to override it, and maintain the right to do that where that's the right thing to do.
And, you know, I'd be careful about people making mandates who don't really know what's happening down at that level, or who look at the 98% of the time it's doing the right thing. Well, ask about the cost of the 2% of the time that it's not. Or if it's 60-40, ask about the 40 that it's not, and what new problems that's causing, what new costs that's incurring. Because every time it's solving one thing, it's usually causing another problem somewhere else.
And we can't pretend that it's not doing that. There are usually collateral issues that the new AI solutions are causing that are unwanted somewhere else. So look for those, measure those, measure the cost of those, and then reassess.
[00:34:28] Speaker B: Well, our mutual friend that you referred to earlier, one of my favorite smart people, after meeting you, reached out to me and said, I really would like to hear more about affective computing and affective AI in interacting with patients who are depressed or could be suicidal: having AI detect those sorts of emotions and psychological states of patients and then respond appropriately. Can you talk to us just a little bit about where that kind of science and development stands?
[00:35:03] Speaker C: Yeah, there have been huge advances in AI there.
Well, I mean, it's trained on every dialogue out there, right? It's trained on patient-therapist dialogues, it's trained on great empathetic dialogues and customer service and movies. It's also sometimes trained on really bad ones.
They've tried to tune that in a different direction. But no matter how good it is at sounding like it understands you or saying something that makes you feel like you're understood, it does not understand you and it does not know how you feel.
It is only moving around those giant Scrabble board phrases and putting together phrases that it has been rewarded to put together.
So it could say, you know, you're awesome, you know, you're my friend, I love you.
And then you do some horrific self-injury and it just keeps saying wonderful things, right? Like it doesn't know. And it can be very misleading, because it can act like it does know.
So there need to be huge cautions put around these uses. There are people who are pretty healthy, have it pretty together, who have conversations with it that are semi-therapeutic, and they get a lot of benefit from it and report that.
But again, maybe that's even a majority of people, but the minority can have some absolutely catastrophic interactions. And the psychiatrists working in AI whom I have been talking with are calling them catastrophic; that's not my word. They are finding cases where the AI is supporting suicide and self-injury, really things no sane, caring human person would ever, ever allow to happen.
So these are some of the dangers. Again, the model doesn't know. It's just doing all this matching in a giant space, and if that space is trained on the dung heap of everything on the Internet, then it's in there and it's going to come out.
Oh, and they're also finding things like, the less educated the user, perhaps because their language might be different, or maybe they use a kind of slang that is associated with a certain part of the Internet that has different content on it. Well, guess what? When you have that input, guess where the model matches the outputs from, right?
So it's not giving as good advice, not as accurate. These terrible biases come right back in.
So these are things where I hope our medical community, especially people making these mandates, gets educated and does not just trust what the companies are saying. They're saying all kinds of things to make themselves sound great, because most of them are deeply in debt and they're not going to survive another month if they don't get their revenue cranked up.
[00:37:59] Speaker B: So you've personally never observed, or heard others in your field talking about, any level of creativity or innovation coming out of AI? I've heard you say several times in our conversation that that doesn't happen.
It's what humans are feeding into the algorithms; that's where creativity and innovation are coming from. But have I heard you wrong? Have you seen anything suggesting creativity from machines?
[00:38:27] Speaker C: Well, now you have to talk about what you mean by creativity, right?
So let's go back to the game of Scrabble, and you're moving letters around in front of you because you can't think of a word. And as you move the letters around, a new word pops up that you like. You see a juxtaposition of letters and you're like, oh, I hadn't thought of that word in a long time, right? Is that creativity, to think of a word that you haven't thought of in a long time, an unusual word? Well, actually, that's one of the measures used for creativity: thinking of more atypical words. Things with wheels: car, truck, bike, those are not very creative, right? But if you go off to, you know, this funny little LEGO monster or some spinny thing in the sky that has wheels, then you're starting to get more creative. There are all of these definitions of creativity that consist of things that in some cases a machine can help you do, right? It is certainly great at looking at vastly more combinations of things, more quickly, than we are, and those can lead people to insights that we might call creative insights. But again, it's only moving around, interpolating, extrapolating if we allow it, from content that humans gave it. It's fundamentally moored on inputs, weights, combination functions, mechanisms, approximation functions that we designed it to have.
[00:39:55] Speaker B: Our time has run out, and I really appreciate all of these insights. Maybe if we could conclude by me asking you what are the most exciting things that you are working on or those around you are working on in terms of human flourishing in healthcare? And we'll close with that question.
What's ahead that gets you most excited, Professor Picard?
[00:40:19] Speaker C: Well, I'm very excited, first, about the potential of technology to help us interpret lots of data from people in ways that aid human flourishing. I do a lot of work with wearable data, with wearable physiology, combining that with therapeutic care so a doctor can see if a treatment is actually messing up somebody's sleep, or messing up their autonomic nervous system responses, or promoting more seizures, and find things we can do to help predict and prevent illness. I'm super excited about the way lean, green AI, I would say, running on my smartwatch is allowing us to detect generalized tonic-clonic seizures and is being used to potentially forecast them.
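For readers curious what on-wrist seizure flagging looks like in principle, here is a deliberately simplified sketch: a sliding-window rule that fires when sustained high movement coincides with a surge in skin conductance. It is not Empatica's algorithm; the signals, window length, and thresholds are all hypothetical, and real detectors are trained, validated, and regulated.

```python
# Deliberately simplified, hypothetical sketch of wrist-worn seizure flagging.
# Not Empatica's algorithm. Inputs: accelerometer magnitude (g) and
# electrodermal activity (microsiemens), one sample per second.

def flag_windows(acc, eda, window=10, acc_thresh=2.0, eda_thresh=5.0):
    """Flag windows where movement stays high AND skin conductance surges, a crude
    stand-in for the convulsive-motion-plus-autonomic pattern of a tonic-clonic seizure."""
    alerts = []
    for start in range(len(acc) - window + 1):
        acc_win = acc[start:start + window]
        eda_win = eda[start:start + window]
        sustained_motion = sum(a > acc_thresh for a in acc_win) >= 0.8 * window
        eda_surge = max(eda_win) - min(eda_win) > eda_thresh
        if sustained_motion and eda_surge:
            alerts.append(start)
    return alerts

# Toy signal: quiet baseline, then 20 seconds of violent movement with a sharp EDA rise.
acc = [0.2] * 30 + [3.5] * 20 + [0.2] * 30
eda = [1.0] * 30 + [1.0 + 1.0 * i for i in range(20)] + [20.0] * 30
print("alert windows start at seconds:", flag_windows(acc, eda))
```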
So those are very exciting things. I'm also excited, as a Christian, about some things I'm seeing happening with AI for human flourishing, and that is that AI researchers, many of whom previously have shown no interest in theological or spiritual things, are suddenly having these big existential questions about what it means to be human.
You know, what is human flourishing? What is a better future?
And the answers, they're not finding them in AI. They're asking people who have theological experience. They're asking Christians things like, when we're interested in removing bias, where does this come from? We've just assumed all people are equal, but you're telling us it didn't used to be that way in Rome, or that it's still not that way in other parts of the world. Where does that come from? Well, that comes from Genesis, from Imago Dei, all of us created in the image of God. They're starting to see that it's Christian values that have informed much of what we think is good and valuable in the world. And that insight about what it means to be a person is very profound when you read the Christian theologians and our great thinkers over time. And virtue ethics, you know, is back on the forefront now as maybe giving us guidelines we should be trying to infuse into the AI we build.
So I'm excited and I actually would love to encourage your listeners who are physicians to share with patients that they are not just physical stuff.
One of my favorite physicians, whom I've met through our epilepsy work, talks about people as not just neuro-psycho-social but spiritual beings: neuro-psycho-social-spiritual. And when a physician tells a patient that they know, or they believe, there's a spiritual side, it's incredibly fresh and uplifting for a patient to have that acknowledged. And if you do find a patient that doesn't like that, like maybe me back in my arrogant atheist days, I might have scoffed at that. Let them scoff.
They'll probably remember that you, who really know more than they do about what's going on inside, opened that door.
So I would really encourage your expert physicians listening in to make sure patients know that they're not just their material stuff.
[00:43:44] Speaker B: We have a whole section dedicated to Christian academic physicians and scientists, and I've helped recruit speakers to speak to that section. And I've had some of those speakers come out of their encounters with those various doctors, many of whom don't even want their association with this section or CMDA known at all, and say, this is one of the most difficult groups I've ever had to talk to, in terms of what they face every day in their academic institutions with the pressure to keep their faith silent. But your testimony today has been powerful. So I want to thank you for sharing that challenge at the end in particular. You're singing my song, Professor Picard, so thank you. And I didn't ask you to sing it; you sang it spontaneously: that you can be in a very prominent, influential place like MIT and be faithful. Our new brand promise to our members and other ministry partners and all Christians in healthcare is your faith and healthcare, connected, not just a personal identity accessory which you hang up on the wall like a Vocera device as you walk into the hospital, but integrated, connected, inseparable. So you have cemented that message for us today. God bless you, and I hope we'll be able to get you one of these fine days to our national convention to share with our audience there.
[00:45:03] Speaker C: Thank you. And thanks for all the great work all of you and your listeners do taking care of patients. We need human beings taking care of patients.
[00:45:11] Speaker B: That's a great way to conclude our talk about AI. Thank you very much and God bless.
[00:45:16] Speaker C: God bless.
[00:45:26] Speaker B: As Dr. Picard reminded us today, artificial intelligence may become a helpful tool in medicine, but it can never replace the wisdom as well as the moral responsibility and compassionate presence of a clinician who sees and cares for a patient as a whole person created in the image of God.
And it's worth reflecting again on the story that she shared about her own life. It began with the courageous decision of her birth mother, a young woman who had just been accepted to MIT and who chose life and was willing to give her daughter up for adoption.
Then, years later, Dr. Picard would end up at MIT herself as a professor and become a global leader in artificial intelligence, helping to shape the future of technology and medicine as a Christian professional. It's a powerful reminder, friends, of how one courageous choice can shape a life and influence the world in ways that we could never predict. In a time of rapid technological change, we as Christian healthcare professionals have an important role to play in helping to guide the ethical future of medicine. If this conversation encouraged you, would you please share it with a colleague or a friend, a student or another trainee? And be sure to subscribe so that you don't miss future episodes of Faith in Healthcare. And if you'd like to learn more about CMDA and to connect with other believers in healthcare who are wrestling with the same ethical issues, you can visit us by going to CMDA.org today.
Next week we'll be joined by Dr. Ann Sen and we're going to talk about why many clinicians feel dissatisfied in their work due to burnout and maybe the demands of modern medicine, and how to rediscover joy in the calling to care for others.
I want to thank you for listening to Faith in Healthcare Today, where our mission is to bring the hope and the healing of Jesus Christ to the world through committed Christ followers in health care. We'll see you next time, friends, Lord willing.
[00:47:43] Speaker A: Thanks for listening to Faith in Healthcare, the CMDA Matters Podcast. If you would like to suggest a future guest or share a comment with us, please email [email protected]. And if you like the podcast, be sure to give us a five-star rating and share it on your favorite social media platform.
This podcast has been a production of the Christian Medical & Dental Associations.
The opinions expressed by guests on this podcast are not necessarily endorsed by the Christian Medical & Dental Associations.
CMDA is a nonpartisan organization that does not endorse political parties or candidates for public office.
The views expressed on this podcast reflect judgments regarding principles and values held by CMDA and its members and are not intended to imply endorsement of any political party or candidate.