OpenAI's chatbot ChatGPT has impressed medical practitioners with its ability to answer medical questions on the spot, without any prior research.
The use cases for ChatGPT are far-reaching, and every week a new industry or niche emerges in which the AI chatbot can assist.
The healthcare industry has worked with chatbots before, as a front-facing service for patients to book appointments and check for common illnesses, but ChatGPT could be deployed for far more useful applications, such as diagnosis.
That’s according to a study conducted by medical professionals, which quizzed ChatGPT on its medical knowledge using questions that might appear in the medical licensing exam. The study asked questions indexed outside ChatGPT’s training window, meaning it could not have known the answers beforehand. The exam is also known for being difficult to cheat on, as the questions are not straightforward and cannot easily be Googled.
The three main pillars of the test are basic science, medical knowledge, and case management, but it also evaluates a person’s reasoning, ethics, critical thinking, and problem-solving skills.
“We were just so impressed and truly flabbergasted by the eloquence and sort of fluidity of its response that we decided that we should actually bring this into our formal evaluation process and start testing it against the benchmark for medical knowledge,” said Dr. Victor Tseng, medical director at Ansible Health.
ChatGPT managed to provide an accurate answer about 60 percent of the time, which would equate to a passing grade. Of course, this comes with the caveat that ChatGPT is simply predicting what the correct answer is; it has not studied the literature and would not be aware if it had been inaccurate.
“I think this technology is really exciting,” said Dr. Alex Mechaber, vice president of the US medical licensing examination board. “We were also pretty aware and vigilant about the risks that large language models bring in terms of the potential for misinformation, and also potentially having harmful stereotypes and bias.”
While it may be a cause for concern for some in the profession, it would not be surprising if some practices decided to test-drive ChatGPT for low-level diagnosis. OpenAI offers a professional version of its service, alongside business access to the more robust GPT-3.
Sam Altman, the CEO and co-founder of OpenAI, has said that ChatGPT should not be used for anything serious at the moment, but he is incredibly bullish on the future usefulness of AI across all industries. The healthcare industry is often skittish about deploying new technology, but AI and analytics have already been used with considerable success to improve diagnosis and speed up the analysis of complex scans.
Another worry with deploying chatbot technology such as ChatGPT in clinics and other healthcare services is the opacity of how the AI arrives at a given decision. OpenAI is a private company and, like Google and Facebook, does not appear willing to open up its algorithms to outsiders. While you can ask ChatGPT why it answered a question a certain way, you cannot get to the heart of its reasoning.
This may put off some healthcare professionals, who are used to being in command of their data and to being able to trace every opinion and piece of evidence that led to a given decision.