As artificial intelligence continues to expand its role in healthcare, large language models such as ChatGPT are increasingly being consulted for health-related information. In dentistry, where patient education is essential and online misinformation is on the rise, evaluating the reliability of such tools is particularly important. Given the well-established impact of tobacco use on oral health, a new study has assessed how effectively ChatGPT responds to public questions on this topic.
The researchers – from the University of Jordan and the Kuwait Ministry of Health in Sulaibikhat – used online tools to generate a pool of commonly asked questions. These questions covered areas such as periodontal conditions, teeth and general oral health, soft-tissue health, oral surgery, and oral hygiene- and breath-related concerns. Responses were generated with ChatGPT 3.5 and assessed by the researchers against five parameters: usefulness, readability, quality, reliability and actionability.
Most responses were judged to be either very useful (36.1%) or useful (42.0%), and of moderate (41.2%) to good quality (37.0%). However, just 23.5% of responses scored highly in terms of actionability, and 35.3% were found to be only moderately easy to read.
Responses to questions on more specialised topics, such as the effects of tobacco use on oral soft tissue and on oral surgery, were judged to be less useful.
The authors concluded that ChatGPT is a valuable tool for providing general information on the effects of tobacco use on oral health, but that it faces challenges regarding readability, consistent actionability and the quality of responses on specialised topics: “While ChatGPT can effectively supplement healthcare education, it should not replace professional dental advice”.
The study was published online in BMC Oral Health.
