ORIGINAL ARTICLE
Year: 2023  |  Volume: 5  |  Issue: 1  |  Page: 17

Performance of ChatGPT-3.5 answering questions from the Brazilian Council of Ophthalmology Board Examination


1 Vision Institute, Instituto Paulista de Estudos e Pesquisas em Oftalmologia, São Paulo, SP, Brazil
2 Vision Institute, Instituto Paulista de Estudos e Pesquisas em Oftalmologia; Department of Ophthalmology, Federal University of São Paulo, São Paulo, SP, Brazil; Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA
3 Department of Ophthalmology, Federal University of São Paulo, São Paulo, SP, Brazil
4 Vision Institute, Instituto Paulista de Estudos e Pesquisas em Oftalmologia; Department of Ophthalmology, Federal University of São Paulo, São Paulo, SP, Brazil

Correspondence Address:
Luis F Nakayama
Rua Botucatu, 821, Vila Clementino, 04023-062, São Paulo


Source of Support: None, Conflict of Interest: None


DOI: 10.4103/pajo.pajo_21_23


Importance: Large language models passing medical board examinations highlights problems and challenges for health-care education, test improvement, and the deployment of chatbots.

Objective: The objective of this study was to evaluate the performance of ChatGPT-3.5 in answering the Brazilian Council of Ophthalmology Board Examination.

Material and Methods: Two independent ophthalmologists inputted all questions into ChatGPT-3.5 and evaluated the responses for correctness, adjudicating disagreements. We compared ChatGPT's performance across tests, ophthalmological themes, and mathematical questions. The included test was the 2022 Brazilian Council of Ophthalmology Board Examination, which consists of theoretical tests I and II and a theoretical-practical test.

Results: ChatGPT-3.5 answered 68 (41.46%) questions correctly, 88 (53.66%) incorrectly, and 8 (4.88%) indeterminately. In questions involving mathematical concepts, the model answered 23.8% correctly. In theoretical examinations I and II, it answered 43.18% and 40.83% correctly, respectively. There was no statistically significant difference in correct answers between tests (odds ratio 1.101, 95% confidence interval 0.548-2.215, P = 0.787) or within test themes (P = 0.646).

Conclusion and Relevance: Our study shows that ChatGPT-3.5 would not pass the Brazilian ophthalmological board examination, a specialist-level test, and that it struggles with mathematical questions. Its poor performance may be explained by a lack of adequate clinical data in training and by problems in question formulation; caution is recommended when deploying chatbots for ophthalmology.
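As a rough check on the reported statistics, the odds ratio, its 95% confidence interval, and the chi-square P-value can be reproduced from a 2 x 2 contingency table of correct and incorrect answers per test. The short Python sketch below (assuming the scipy package is available) uses counts inferred from the reported percentages and test sizes (19/44 correct on theoretical test I and 49/120 on theoretical test II); these counts are an illustrative assumption, not figures taken from the article's tables.

# Minimal sketch (not the authors' analysis script): reproduce the reported
# odds ratio, Wald 95% CI, and chi-square P-value from assumed 2x2 counts.
import math
from scipy.stats import chi2_contingency

# Rows: theoretical test I, theoretical test II; columns: correct, incorrect.
# Counts inferred from the abstract's percentages (assumption for illustration).
table = [[19, 25],
         [49, 71]]
(a, b), (c, d) = table

# Odds ratio with a Wald 95% confidence interval on the log-odds scale
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

# Pearson chi-square test without continuity correction
chi2, p_value, dof, expected = chi2_contingency(table, correction=False)

print(f"OR = {odds_ratio:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f}), P = {p_value:.3f}")
# Under these assumed counts this prints approximately:
# OR = 1.101 (95% CI 0.548-2.215), P = 0.787, matching the Results above.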

