  • We shouldn’t rely on artificial intelligence (AI) for accurate and safe information about medications, because some of the information AI provides can be wrong or potentially harmful, according to German and Belgian researchers. They asked Bing Copilot - Microsoft's search engine and chatbot - 10 frequently asked questions about America's 50 most commonly prescribed drugs, generating 500 answers.

    Only 54% of answers agreed with the scientific consensus, the experts say. In terms of potential harm to patients, 42% of AI answers were considered to lead to moderate or mild harm, and 22% to death or severe harm.

    But, y'know, great progress, corporate responsibility etc etc

    https://www.scimex.org/newsfeed/dont-ditch-your-human-gp-for-dr-chatbot-quite-yet

  • Is that a fair test?

    For starters, the comparison should be against how well a GP does. Secondly, a GP AI shouldn't be trained on random US websites and internet searches.

  • A fair test would be against somebody "doing their own 'research'", surely.

    Is that a fair test?

    It's not really about whether the test is fair; rather, if your publicly facing service, with no apparent guardrails, provides "advice" that would cause death or serious harm in 22% of cases, perhaps it shouldn't be public-facing yet.
