July 6, 2025

Why AI Can’t Replace Human Care in Chronic Pain

As AI continues to evolve and integrate into more areas of healthcare, many clinicians are asking: will we be replaced? In this episode, Mark Kargela and Ben Whybrow have a frank conversation about how AI is affecting clinical work, what it can (and can’t) do well, and why the human elements of care—empathy, presence, collaboration—remain irreplaceable.

You’ll hear how we’re using AI tools like ChatGPT in our content creation and case planning, the limitations we’ve seen in clinical decision-making, and why now is the time for clinicians to lean into the “soft” human skills that no algorithm can replicate.

👇 Join our waitlist to learn how to build your future-proof skillset


*********************************************************************
📸 - Follow us on Instagram - https://www.instagram.com/modernpaincare/

🐦 - Follow us on Twitter - https://www.twitter.com/modernpaincare/

🎙️ - Listen to our Podcast - https://www.modernpaincare.com

____________________________________
Modern Pain Care is a company dedicated to spreading evidence-based and person-centered information about pain, prevention, and overall fitness and wellness

AI Replacing Us

Mark Kargela: [00:00:00] If you step back and you think that you're functioning under an algorithm that could be easily replicated, that should give you some pause. And to me it should stimulate us to lean heavily into the human skills, the soft skills, as they're called.

AI is transforming healthcare, but not in the way you think. In this episode, we dive deep into the real threat and the real opportunity AI presents to clinicians. If you're worried about being replaced, this conversation will shift your perspective. We're breaking down what AI does well, what it fails at completely, and how clinicians can build careers that are future-proof because they're fully human.

Let's get into it.

Announcer: This is the Modern Pain Podcast with Mark Kargela.

Mark Kargela: Ben, as you may all know, is our community manager for our soon-to-launch Pain Practice OS course and the community we'll have going behind it, as we try to grow a community of folks who want to get better at some of the things we'll hopefully touch upon today.

In my opinion, [00:01:00] and of course I'm biased, there's no doubt about it, these are skills that are more important than ever for us to develop, 'cause we have the new age of AI hitting us. AI is being talked about everywhere; I've seen posts on social media asking, is AI gonna replace a physio? And you see some companies doing some interesting things with chatbots and different tools to help maybe triage some patients. So before we dive deep into AI, and there are so many ethical, moral, and other issues that we probably won't touch upon all of, 'cause that could go for multiple episodes, what's been your experience with AI, maybe within your system or maybe also just in life? How has it impacted you thus far?

Ben Whybrow: Well, I'll be honest, I can't even talk about life in our system where I work in the NHS. Nothing, no formal AI is in use yet. Now, I will be honest, I'm sure clinicians are using it to look up various things, but we don't have anything integrated into our system yet. From my understanding there will be some things being trialed, which we're trying to be part of, but [00:02:00] we'll see.

So life-wise, like everyone, we all got on ChatGPT when it came out. About a year ago now? Eighteen months?

Somewhere around there.

Something like that. So we all went on it initially, and let's be honest, it wasn't what it is now. It was good and it's gradually got better. So it was all just interesting to look at at first, and for a while I didn't pay too much attention. If I'm honest, I think like most people, once ChatGPT started to be able to do pictures, that was when we all kind of went, ooh, now it's useful, or more useful than it was. I use it for creative stuff like YouTube thumbnails. We're doing this community manager thing you were talking about.

We've used it for certain scripts, structured emails, certain things that just save time. And I'm doing my master's at the moment as well, about to finish it. But Google recently announced that any student in the world, so this could be useful for any students listening to this, can get a free Gemini Pro account for the next 15 months, apparently. I think it's open [00:03:00] for the next few days. That's the level up from the basic one, and it can do bits of video and other stuff. So I'm playing with that as well. But again, I've yet to see it come into clinical life, though as we're gonna discuss today, it is coming.

What about you?

Mark Kargela: Yeah, for me, I've used it a lot as I've been trying to piece together marketing and things around Modern Pain Care stuff, but also YouTube, like thumbnail ideas. I have a whole YouTube optimization GPT that basically will take a script of a conversation you and I have.

This is exactly what I'm gonna do. I'm gonna put it into ChatGPT, and it's gonna give me: here are your titles, here are your thumbnail ideas, here's the description you should put on your YouTube video, here are some video tags, here's a pinned comment you can put on there so people can engage.

So yeah, I use it a lot for social media [00:04:00] content creation. I will not lie that there have been some posts where I thought, dang, this is good, I'm not gonna change it too much. They've actually nailed it. And I've pushed a lot of my own writing into ChatGPT so it hopefully uses my voice, and I've been actually pretty surprised at how well it does.

It doesn't always hit the mark, but for the most part it at least gives me a good initial draft that I can go in and tailor. And I think that's probably a good practice for most of us. We've all probably seen it; you can usually sniff out an AI-generated post that's non-conversational and just not the way some folks would write. But I think overall it's definitely streamlined some things. I think AI has the capacity to really help save time. There's even AI built into Riverside here, and I actually edit our podcast in Descript, which will automatically take out the uhs and ums and automatically give me chapters and different things.

It does so many things that used to take so much longer. It's really made it [00:05:00] efficient as a creator, or somebody who's trying to put content out into the world, which I know a lot of folks are. As far as clinically, I've tested it with different case scenarios just to see what it will do, and it's been pretty decent on your basics, like what's the most appropriate progression for a post-ACL patellar tendon graft, things like that.

What are often the timeframes, and just checking it versus what I've seen, it's actually done pretty well. I've also used it for helping me program some things for patients' home programs. I had a patient recently; she works at a renaissance fair in, when is that gonna be, spring?

So coming up next spring. She's finishing knee rehab, with some OA and sensitivity, and the knee's doing great, but she wants to get a walking program going. I was able to just put in the parameters of where she's at: her current step count, where her step count needs to be if she's gonna hit the mileage she needs to tolerate in her renaissance fair work, and that it needs to happen in eight months.

I said, draw this out over an eight-month period, and it did [00:06:00] a pretty dang good job. These are things I could have calculated and done myself, but it did it in about 30 seconds, and I had something on a Word document that she can now follow and take forward from now on. So it's been pretty good.
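The kind of eight-month plan described here is essentially a linear ramp from a current daily step count up to a target. A minimal sketch of that calculation (all numbers are hypothetical; a real program would also cap weekly increases and flex around symptom flare-ups):

```python
def walking_progression(current_steps, goal_steps, months, weeks_per_month=4):
    """Linearly ramp a daily step-count target from current to goal.

    Returns a list of (week, daily_target) pairs, with targets rounded
    to the nearest 100 steps. Purely illustrative: a real plan would
    cap weekly increases and adjust around symptom flare-ups.
    """
    total_weeks = months * weeks_per_month
    increment = (goal_steps - current_steps) / total_weeks
    plan = []
    for week in range(1, total_weeks + 1):
        target = int(round(current_steps + increment * week, -2))
        plan.append((week, target))
    return plan

# Hypothetical patient: 4,000 steps/day now, 10,000 needed for the
# fair, eight months out.
plan = walking_progression(4000, 10000, months=8)
print(plan[0], plan[-1])  # first and final weekly targets
```

This is just the arithmetic a clinician (or a chatbot) would do by hand; the clinical judgment about pacing and flare-up management stays with the human.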

Have you seen any time-saving things on your end? I know we both use Epic. Epic's a worldwide entity, of course, and there are a lot of countries using it. And we talked beforehand; Epic still hasn't integrated AI, and I'm sure there's a lot more thinking they need to do as they integrate it across many different governments, cultures, and healthcare systems to make sure they're doing it the right way.

So I'm sure there's some thought going into that. But have you tinkered with it? I know you've looked up some things, as I have, but have you checked its accuracy when it's making recommendations on certain conditions?

Ben Whybrow: Not recently, I'll say; it was ages ago now. When it initially came out, I was curious about how accurate it was gonna be, especially when there are conflicting views on the internet, comparing scientific research [00:07:00] against things you'll read online in blogs and

different thoughts. I did it for posture, for example, about having an upright "good" posture and how we don't endorse that anymore. And interestingly, its views were leaning towards the old what-we-used-to-think idea. But this was, full disclosure, at least six months ago.

I'll be honest, I haven't had to plug a question into it much yet clinically, probably because of what we're going to be discussing: the nature of people with persistent pain and the clientele we see. These aren't conditions where you just need to do said thing and then the problem's solved.

There are gonna be multiple factors, and their importance and influence will vary over time. And I've never felt, oh, if I just put all these problems into the chatbot, it will tell me the answer. Because it might come up with a solution; it might say, oh, well, this person needs to go to this talking therapy. They need to do X amount of [00:08:00] activity, and so on. But that's one thing to suggest; it doesn't mean the person's actually gonna do it at all. It's almost exactly what we try to avoid with all the telling, right? We want our patients to come up with ideas. We want to get 'em involved, to have discussions.

And I haven't yet seen a way that it can do that at this time. What about you?

Mark Kargela: No, I would definitely agree. I've shared the same concerns. There have been rightful criticisms of strict adherence to evidence-based principles where research really drives everything, 'cause a lot of the research has been derived in situations that don't really make room for unique n-equals-one experiences, right? The humanity sometimes gets zapped out of research, not for bad reasons. As far as the unique n-equals-one goes, we know that with exclusion [00:09:00] criteria, especially when you're working with persistent pain, a good chunk of the people, if not the majority, often fall within the exclusion criteria of the research evidence that we're trying to inform our practice with.

So there's obviously an immediate conflict there. I wonder sometimes if ChatGPT and the AI models are gonna lean on research without recognizing, man, I have to be careful about what this research is founded on. Does it account for people who don't fit the general statistical norms and standard deviations that are often the big things these research studies give us?

So yeah, I've shared the same concerns with that. And I totally agree that a lot of the patients, when you're working with persistent pain, don't have this situation where I'm gonna plug in all their symptoms and there's gonna be this nice, crystal-clear answer. I think that's just a symptom of what we see in healthcare too.

I mean, that's still the approach healthcare tries to take, where there's gonna be this one person they come across who has the fix or the answer that's gonna make the situation dramatically better. [00:10:00] So yeah, I find it not to be helpful in those situations right now.

With that said, I have been able to use some summaries of certain materials. And again, you should not be uploading copyrighted materials, at least right now. It's the Wild West with AI at the moment, where I'm sure people are uploading books and textbooks and different things,

and there are legalities that still need to be hashed out. It reminds me of the Napster days, when everybody was just throwing CDs and MP3s up on the internet, a free-for-all where you're just trying to rip as many downloads as you can. So I think there are still some things to be sorted on that front. But I do think when you can put in some summaries of what ACT is all about, how it functions within chronic pain, these types of things,

and then have that as your instructions, okay, here's how I want you to think about and answer questions, I've found it to be relatively helpful. And there must be some training on, obviously, the internet and the many materials on ACT and motivational interviewing and various things, where you can get some solid [00:11:00] ideas of ways to approach it.

Like, for instance, okay, I wanna have an acceptance activity for this person who's really struggling to be willing to move towards their valued activities, 'cause they're just not at the point where they're ready to accept that pain is part of the journey. You could also type in, hey, can you please construct a creative hopelessness exercise for a patient?

It's actually been relatively helpful. But again, for the full, nuanced management of a person in pain, where it's gonna have to weave, 'cause as we know, sometimes you go in a direction with somebody and it just doesn't go well, and you've gotta be able to pivot and move in different directions.

I don't think AI is there yet, at least on that point. What have you seen with pain populations? Where do you see difficulties? 'Cause folks are gonna be looking for answers, and I gotta imagine folks in pain are no different than anyone else.

Especially when their challenging situations can have them desperately seeking any answers, and when healthcare isn't giving them a lot of clear messages. Have you seen any difficulties with folks who are peeking at [00:12:00] that and maybe not getting the best information, or any concerns that raises for you?

Ben Whybrow: Now, I should be clear: when people come to me, they will inevitably have Googled, and now ChatGPT'd, whatever their problem is. But they'll also have seen several other professionals before me and hopefully been given a similar-ish message. But you're talking about the downsides.

I was thinking, whatever chatbot you're using, whatever platform it is, it can read words, right? Whatever you type in. And it can make assumptions about tone, but it can't truly hear it. So you can put in capitals and an exclamation mark and it will assume you're angry, but it might be wrong; you might be excited.

It can't read body language. This may change as generations go by, but at least in this initial phase, there will be plenty of people who want to speak to a person. I presume, [00:13:00] especially with any kind of big national organizations or companies, they'll have to make clear that the thing you're speaking to, if it is a chatbot, is in fact an AI.

Most people will want to speak to whatever person it is: doctor, physio, nurse, whoever. They won't want the chatbot, 'cause they'll want the conversation. So will it ever replace that? Just as a text thing? No, of course not. We'd be talking far into the future, well beyond you and I, I think, if we ever get to that point. But I think that's where some of the downsides are. It can communicate back in text, but again, it can't communicate with body language; it can't really communicate with tone. And for the person reading it, it's like communicating via email, in a way.

You know how emails go wrong all the time, right? It could be a similar thing here. It can play around, but it's making assumptions. If I've [00:14:00] got someone in front of me, I get a much clearer idea of what they're saying, and not just the words but how they're saying it, their demeanor and stuff like that, which a text-based chatbot can't.

What about you? What have you seen as limitations?

Mark Kargela: I think that's one of the things, too. If you're a healthcare person who's doing some sort of very monotonous, repeatable work where, if you step back and look at it, a chatbot could easily learn what you're doing, hopefully that's not the case, but I gotta imagine some things can become monotonous in certain situations in rehab. I put a post on social media, and I'll link it in the show notes, saying just that: if you step back and you think that you're functioning under an algorithm that could be easily replicated, that should give you some pause. Right? And to me it should stimulate us to lean heavily into the human skills, the soft skills, as they're called, which again drives me nuts as a term.

But those are the things, as you [00:15:00] rightfully mentioned: body language, the ability to just be present with somebody, to share a painful story and empathize with it, validate it. I just don't see these large language models being able to do that.

Now, I've peeked at some interesting interviews about AI and emotions and different things, and again, it's not there yet.

To me, it should have us take pause and lean in on these types of skills, 'cause I agree: people look for human connection, and when they're in crisis, they're gonna have a hard time believing a chatbot is gonna provide it. And maybe I'm just an old man who still clamors about eight-tracks when we're on to MP3s.

Who knows? But I just feel like, at least in this day and age, the humanness of suffering, having it validated [00:16:00] by another human, being cared for and empathized with and guided back, I just don't see AI being able to touch that. To me, again, it makes it more important that those should be skills we leverage and lean on more.

Now, there are also some hands-on things that at least currently AI can't do. I think there's just a human connection with touch, and our massage therapy colleagues obviously are masters of it, especially ones who are pain-science-informed and understand how touch can be a bridge to some bigger things for people.

I know some of you massage folks are very ACT-informed and really doing some of the same stuff we're teaching in our courses as well. I just don't see AI replicating the humanness. And again, just the fact that we're talking, it's humanness, it's humanity, right? What do you see with that shared humanity that we have in a treatment room, which I think can be so valuable for some people when life's really given them some [00:17:00] struggles?

Ben Whybrow: Yeah, absolutely. If you use the word humanity, I can't think of a better word than that, to be honest, because it just sums it up when it's going right, that connection you're talking about. With people in crisis, there have been things on the news about how sometimes a chatbot can be really useful.

Sometimes it's also ended very badly, and people can look it up online; there's various stuff out there. At the moment, there'll be certain people whose personalities and experience in life mean they actually may bond great with an AI chatbot.

But there'll be certain people who won't. You know how people say, when finding a therapist, a talking therapist, you should try a few to see which one you mesh with best before you start? That's because we all bond differently with different people. It'll be the same thing here.

There'll be some people who, you know what, will get on really well with one, but there'll be [00:18:00] plenty of others who won't. And actually, the thing we're planning on creating and starting later this year, the group stuff: everything about the AI chat stuff at the moment is one-to-one. It can't run a group at all.

Especially if you've got multiple people pitching in at once and giving ideas and you're trying to read the room, it certainly can't do that as things stand. And even if AI gets clever and we can use webcams to read body language and interpret tone, it still wouldn't be able to manage that group.

It's multiple people. So I think that's another downside: at the moment it's just one-to-one. That's why developing these skills, and the things we're creating with the group stuff, is just so key to keeping that, to use your word, humanity.

Mark Kargela: As I've seen AI develop, this idea of our course and community to help clinicians deliver this type of care, this humanity-driven [00:19:00] care, has got me extra motivated. Again, I think it's a huge need now: really leaning in heavily on that, not losing sight of it, and not letting AI get to where we further strip humanity out of healthcare. There have been difficulties with some of the systemic challenges a lot of healthcare systems face, where, with the demands on physicians, the ability to hear a story, validate a story, and co-create a narrative with somebody is just not easy.

So yeah, in our course we're gonna teach folks how to do it individually, but also, more importantly, with groups. I just think there's a group dynamic; we're in the process of getting ours launched at our university, and having talked with and seen folks who are doing group things, I know how powerful that dynamic is.

And I agree, I don't see AI being able to manage that dynamic: having a group work themselves through some difficult things, teaching each other in a social learning format, and doing the things that we're gonna be teaching clinicians to do.

So again, I [00:20:00] think there's so much opportunity for clinicians to do things that, to me, provide more safety in their jobs if they start really leaning on those skills; you further insulate yourself from AI snatching your job. And I'll be curious; I'm just waiting to see the first company that really goes heavy on AI, just to see how it works. I think we need to be ready to hear some things we may not want to hear, but maybe we'll be validated that, yep, in fact, people want breathing, caring humans on the other side of their treatment interactions, not an AI bot.

I'm sure there might be situations where AI is sufficient, but again, in our line of work with persistent pain, I don't think so. So for those of you who are interested in leaning in on those skills, don't hesitate to reach out. We'll have a way to get on the waiting list for our program, where we're gonna help you really develop these skills and be able to deliver them in group formats, but also obviously with individuals.

With individuals, again, I still don't [00:21:00] see AI understanding so much of the complexity, the emotion, the body language, and all the things that we help clinicians work with there. So if you're interested in that, check out the show notes; we'll have a link if you wanna jump on our waiting list.

We're not too far out from launching the program. It's been a long bit of work, but I'm excited for us to release it, hopefully here in the next few weeks or month or so. So, anything else you wanna leave folks with, recommendation-wise, as they're wrestling with where AI fits in the healthcare world?

Ben Whybrow: I think actually, to sum up what you said there: embrace it, 'cause it can be useful in many ways. Embrace it to save time, to be more productive, et cetera. But then also learn to focus on and harness the things it can't help you with, 'cause there are some things it ultimately won't be able to do. That's what people seem to be doing successfully in society with it.

They use it to their advantage, and then they excel at the things it can't do. I think if we can do that with everything we've discussed here, then that will help a [00:22:00] lot of people.

Mark Kargela: Definitely agree. Definitely agree. Well put. We're gonna leave it there this week. For all of you who are listening, we'd love it if you could subscribe wherever you listen to your podcasts, or if you're watching on YouTube, if you could subscribe and share the podcast with somebody who might get some benefit out of it, we'd greatly appreciate it.

That's the only way the show grows. I don't have any paid ads on the show; I'm resisting the monetization thing, 'cause it drives me bonkers when I see two minutes of ads on some of my past favorite podcasts that I no longer listen to, probably because of that. So we'd love it if you could share the episode so we can help grow and spread the word to other clinicians.

We will leave it there this week. We'll talk to y'all next week.

 


Ben Whybrow

Clinical Communication Skills Facilitator for University of East Anglia.
Specialist Physio for Pain Clinic in NHS (UK).
Reasonably nice guy.