AI in Medicine: Convenience or Compromise?
As artificial intelligence becomes more ingrained in everyday life, the line between helpful tool and risky shortcut is becoming increasingly blurred.
OpenAI recently announced ChatGPT Health, a new feature on the AI chatbot that asks users to upload their medical records and connect health apps. The announcement quickly brought the ethics of using AI in medical contexts to center stage.
Designed to help users understand and organize their personal health information, ChatGPT Health will generate more personalized responses to medical questions, which OpenAI says account for more than 5 percent of all messages from users on the ChatGPT platform.
Illinois Tech Professor of Philosophy Elisabeth Hildt, the director of the university's Center for the Study of Ethics in the Professions, is skeptical about ChatGPT Health. She says there are fundamental questions yet to be answered about ChatGPT Health's accountability, reliability, and privacy.
"It's not a medical tool; it doesn't align with medical standards," says Hildt, noting that there are no doctors, institutions, or companies clearly responsible for any consequences of using ChatGPT Health. "There's no one to be made accountable."
AI outputs are only as good as the data provided, so Hildt warns that there is always the risk that responses may simply be wrong, or even worse, hallucinated based on incomplete or biased information. That uncertainty could lead users to make health decisions based on misleading outputs rather than guidance from qualified medical professionals.
Privacy concerns also loom large. While OpenAI claims that "conversations in [ChatGPT] Health are not used to train our foundation models," Hildt questions how it can function without relying on large volumes of data.
"They have to train the model somehow," she says. "In order to train the model, they need a lot of data. Where's the data coming from?"
Even with enhanced safeguards, Hildt says questions remain about how health data may be stored, used, or potentially exposed in the event of a breach.
While she acknowledges the appeal of AI tools, particularly in areas with long wait times or limited access to doctors, Hildt cautions users against viewing ChatGPT Health as a reliable or safe substitute for medical care.
"I wouldn't go so far as to say, 'Oh, never, never, ever trust,'" says Hildt. "But on the other hand, I would doubt whether this is really a useful alternative in the long run."