We shouldn’t rely on artificial intelligence (AI) for accurate and safe information about medications, because some of the information AI provides can be wrong or potentially harmful, according to German and Belgian researchers. They asked Bing Copilot - Microsoft's search engine and chatbot - 10 frequently asked questions about America's 50 most commonly prescribed drugs, generating 500 answers.
Only 54% of answers agreed with the scientific consensus, the experts say. In terms of potential harm to patients, 42% of AI answers were considered to lead to moderate or mild harm, and 22% to death or severe harm.
But, y'know, great progress, corporate responsibility etc etc
For starters, the comparison should be against how well a GP does. Secondly, a GP-style AI shouldn't be trained on random US websites and internet searches.
Not that it's a fair test; the point is rather that if your publicly facing service, with no apparent guardrails, provides "advice" that would cause death or serious harm in 22% of its outcomes, perhaps it shouldn't be public facing yet.
https://www.scimex.org/newsfeed/dont-ditch-your-human-gp-for-dr-chatbot-quite-yet