Data from Mass General Brigham reveals that ChatGPT achieved a 72% success rate in clinical decision-making. Notably, that success rate was achieved only in textbook cases, not in critical illnesses. Yahoo Finance Live breaks down how AI could be used in a medical setting.
Video Transcript
– There was another story we were tracking today too, out of Mass General Brigham, saying that they tested doctors– or basically an AI doctor, as if you were to go see a physician or go to your normal doctor's appointment. And they said it was 72% successful. I don't know if that's good or bad necessarily. I think I'm out on an AI doctor.
– I would add that it's for non-urgent, really basic stuff. If it seems like a human's talking to you, like it does in the case of this answer thing, fine, go for it. But if you're in an accident, you don't want that person– this robot–
– You press that button and there's a robot.
– It's like, I'm just picturing myself going agent, agent, agent, which I already do, pressing the 0.
– And a red button. The red button's for emergencies– that's a real person. The blue button's for anything else.
– OK. That's good, that's good to know that distinction.
– Oh, there's two buttons.
– When you're talking about the AI doctor, now I think it's important to note that 72% is for the textbook cases. So when things get a little more complicated, you probably don't want AI on your side solely. But in terms of medicine– we always think about how scary this is– I do think AI could be a supplement to help human doctors make decisions.
– They get bogged down with a lot of nonsense.
– Yeah. And maybe– do you know any doctors out there for us?
– AI is [INAUDIBLE].
– I've heard.
– [INAUDIBLE] always talks about getting bogged down by nonsense, so AI is actually just a solution to free up some work time. Fair enough. No, I mean, it is. I think for a lot of people, it would maybe make things more efficient.
– And AI is already in our world.
– Make your visit more productive and quicker.
– It's already in our world. And I think a lot of these headlines make it seem a little scary. Like oh, AI is going to diagnose you. That's not necessarily true, it could just help.
– I'll still take the human doctor, though.
– I did see the Journal had stories today about how JP Morgan is saying, don't use ChatGPT for work product. Don't put anything in ChatGPT that's work product, because they don't want– it's like a security issue.
– Oh, interesting.
– So the drawback, or the flip side, is corporate America saying, no, no, no, don't go too far just quite yet. So we might see that side of it.
– But it's like in math class when they make you take a test and say you can't use a calculator. And we're like, why not? Why can't I use this calculator?
Originally published on Yahoo.com