The Pan-American Journal of Ophthalmology

ORIGINAL ARTICLE
Year: 2023  |  Volume: 5  |  Issue: 1  |  Page: 17

Performance of ChatGPT-3.5 answering questions from the Brazilian Council of Ophthalmology Board Examination


Mauro C Gobira1, Rodrigo C Moreira1, Luis F Nakayama2, Caio V. S. Regatieri4, Eric Andrade4, Rubens Belfort Jr4
1 Vision Institute, Instituto Paulista de Estudos e Pesquisas em Oftalmologia, São Paulo, SP, Brazil
2 Vision Institute, Instituto Paulista de Estudos e Pesquisas em Oftalmologia; Department of Ophthalmology, Federal University of São Paulo, São Paulo, SP, Brazil; Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA

Correspondence Address:
Luis F Nakayama
Rua Botucatu, 821, Vila Clementino, 04023-062, São Paulo

Importance: The ability of large language models to pass medical board examinations highlights challenges for healthcare education, test design, and the deployment of chatbots.

Objective: The objective of this study was to evaluate the performance of ChatGPT-3.5 in answering questions from the Brazilian Council of Ophthalmology Board Examination.

Materials and Methods: Two independent ophthalmologists inputted all questions into ChatGPT-3.5 and evaluated the responses for correctness, adjudicating disagreements. We compared the performance of ChatGPT across tests, ophthalmological themes, and mathematical questions. The included test was the 2022 Brazilian Council of Ophthalmology Board Examination, which consists of theoretical tests I and II and a theoretical–practical test.

Results: ChatGPT-3.5 answered 68 questions (41.46%) correctly, answered 88 (53.66%) incorrectly, and gave undetermined responses to 8 (4.88%). On questions involving mathematical concepts, the artificial intelligence answered 23.8% correctly. On theoretical examinations I and II, it answered 43.18% and 40.83% correctly, respectively. There was no statistically significant difference in correct answers between the two theoretical tests (odds ratio 1.101, 95% confidence interval 0.548–2.215, P = 0.787) or across test themes (P = 0.646).

Conclusion and Relevance: Our study shows that ChatGPT-3.5 would not pass the Brazilian ophthalmological board examination, a specialist-level test, and that it struggled with mathematical questions. ChatGPT's poor performance may be explained by a lack of adequate clinical data in training and by problems in question formulation; caution is recommended when deploying chatbots in ophthalmology.


How to cite this article:
Gobira MC, Moreira RC, Nakayama LF, Regatieri CV, Andrade E, Belfort R Jr. Performance of ChatGPT-3.5 answering questions from the Brazilian Council of Ophthalmology Board Examination. Pan Am J Ophthalmol 2023;5:17.


How to cite this URL:
Gobira MC, Moreira RC, Nakayama LF, Regatieri CV, Andrade E, Belfort R Jr. Performance of ChatGPT-3.5 answering questions from the Brazilian Council of Ophthalmology Board Examination. Pan Am J Ophthalmol [serial online] 2023 [cited 2023 Jun 7];5:17.
Available from: https://www.thepajo.org/article.asp?issn=2666-4909;year=2023;volume=5;issue=1;spage=17;epage=17;aulast=Gobira;type=0