Prescription problems: ChatGPT's errors in understanding medicine
New research has found that ChatGPT, an artificial intelligence language model, makes frequent errors when answering medical questions. In a study presented at the American Society of Health-System Pharmacists Midyear Clinical Meeting, nearly 75% of ChatGPT's responses to drug-related questions were incomplete or inaccurate. Such responses could put patients' health at risk.