“(I)f somebody is going to hold themselves out as a health care professional they actually have to be a health care professional. But in this situation, AI is stepping in and not disclosing that they’re not a person. And they’re advising people on their health care, their behavioral health. And we’re going to put a stop to that.”
I might ask why behavioral health advice should have to come from a licensed health care professional. What if a friend tells me I should be more polite so as not to upset my friends, or that I should get a haircut or shower more often, or that I should relax about politics and stop watching the news so incessantly and just do the sudoku and KenKen puzzles in the paper? Isn't that behavioral health advice? Would my friend be violating the law Rep. Morgan proposes if he or she is not a social worker, psychiatrist, or psychologist?
Kyle Hillman, a lobbyist for social workers, also pipes up to say:
“I’m sure this (AI-based advice on mental health) is something that individuals that just aren’t ready to make that call might look to. But it’s just not something that’s safe. We would never consider this as an option for physical health. Like, ‘hey, I have a laceration on my leg. I’m going to call an AI chat doctor on how to put stitches in my leg.’ … It’s not something we would do.”
I think AI is a much better and much safer way to go for behavioral health advice than calling any random social worker, Ph.D. psychologist, or M.D. psychiatrist! It's far less expensive (which is probably why most people consider it). But even more importantly, behavioral health cannot pretend to be objective science. If medications are part of it, you certainly want somebody on hand (a pharmacist?) who is familiar with drug actions, reactions, interactions, and so on. But that information is all available online too; it just lacks the perspective relevant to a particular individual in a particular situation.
Anyone who has a doctor (or a priest or coach...) who knows them and has given them good advice in the past is certainly being reasonable to choose that health care professional over an un-person AI program. I just don't believe the decision should be enforced by law. Maybe that's only because I almost never go to doctors myself. And even though I don't admit to any scary symptoms at this moment, maybe I'll die tomorrow of something that could have been prevented. But I am 73, and for the last 50 years I've virtually never bothered with, or worried about, tedious and unpleasant medical errands like proctology or cardiology or cholesterol tests. I never check my own blood pressure. Maybe it's a fair exchange.
Modern medicine does not represent the best prospect for personal salvation or social progress. The field even brags, I think, about being a system: an objective, mechanical process for evaluation and decision making that can make everyone happier. That describes a machine, or at least something that aspires (ideally) to be a mere machine. What we need is creativity.
Our problem with so-called "artificial intelligence" is merely that we are unwilling to be alive, creative, and responsible.
And it's far worse in so-called "mental health."
Bob Morgan and Kyle Hillman are wrong.