Ophthalmology and Artificial Intelligence: did we crack the code?

Artificial intelligence (AI) is a fast-growing branch of computer science that is being increasingly explored in medical research across various fields. Of particular interest is the concept of machine learning (ML), which refers to the capacity of a ‘machine’ to acquire its own programming for completing a task, learning from data rather than following a pre-written set of instructions. Impressively, these ‘machines’ can adapt (and, if necessary, correct) their algorithms as new data arrive, and then apply these programs to pattern recognition, determinations and predictions. Furthermore, technological advances in computational tools and in data acquisition, storage and transfer are dramatically expanding the volume and speed of multi-modal data processing and communication. Ophthalmology, being an image-based speciality, lends itself to benefit from these digital ecosystems. 
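
To make the idea concrete, here is a minimal, purely illustrative sketch (in Python, assuming the scikit-learn library) of what ‘learning from data’ means in practice: the decision rule is not written by the programmer but inferred from labelled examples. The numbers below are invented for illustration only.

```python
# Minimal sketch: a model infers its own decision rule from labelled examples
# instead of being given explicit, hand-written instructions.
# Assumes scikit-learn is installed; the data below are invented.
from sklearn.linear_model import LogisticRegression

# Hypothetical cases described by two made-up numerical measurements,
# each labelled 1 (disease) or 0 (healthy).
X = [[55, 24], [62, 28], [40, 14], [35, 12], [70, 30], [50, 16]]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)                    # the 'programming' is learned from the data
print(model.predict([[65, 26]]))   # pattern recognition on a new, unseen case
```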

The recent global health crisis has incentivised the maturation of virtual health delivery infrastructures and the assimilation of technology into restructured clinical workflows. While there has been a boom in AI applications across ophthalmology subspecialties (which I encourage interested readers to explore), glaucoma, diabetic retinopathy and age-related macular degeneration have perhaps received the most attention, as they are prevalent causes of global blindness with high health needs and follow-up requirements. 

For background, AI has been a topic in glaucoma assessment for over 20 years. Glaucoma presents a substantial health burden and is characterised by thinning of the retinal nerve fibre layer (RNFL) and neurodegeneration of the optic nerve. While these changes may be secondary to increased intraocular pressure (IOP), they can also be seen with normal IOP in normotensive glaucoma, exemplifying the heterogeneity of the underlying disease process and clinical presentation. Glaucoma has been estimated to affect 76 million people globally, and its prevalence increases with age. The disease process is initially asymptomatic but can be chronic and progressive, culminating in increasing disability, care needs and poor quality of life. Early diagnosis and treatment are essential for controlling disease progression and preventing irreversible vision loss. However, the demand for relevant specialist expertise (e.g. ophthalmologists, eye care services) and specific imaging modalities (e.g. optical coherence tomography (OCT)) surpasses supply, and care services are often oversubscribed or inaccessible to some groups. 

ML can be, and has been, used for screening, diagnosing, and classifying glaucoma, with some studies reporting high accuracy, sensitivity, and specificity, as well as high agreement with clinician assessments. These ML models are generally developed using large training sets that combine multi-modal parameters such as demographics, clinical data and imaging, including fundus photography and OCT, which can visualise and quantify thinning of the RNFL that reflects loss of ganglion cell axons and glaucomatous disease progression. Integrating both clinical and imaging data is important because structural changes often precede visual dysfunction, which allows a model to predict the likelihood of developing the disease as well as the disease trajectory. This information can then be used to guide diagnosis, prognostication, and treatment. 
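
As a rough, hypothetical illustration of the kind of pipeline described above (not a validated clinical model), the sketch below trains a classifier on simulated multi-modal features, such as age, IOP and an OCT-derived mean RNFL thickness, and reports sensitivity and specificity on a held-out split. All data, feature choices and thresholds are invented; NumPy and scikit-learn are assumed.

```python
# Illustrative sketch only: a hypothetical glaucoma classifier trained on
# combined clinical and imaging-derived features. Everything here is simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 200

# Hypothetical multi-modal features per eye:
# age (years), IOP (mmHg), mean RNFL thickness from OCT (microns)
age = rng.normal(60, 10, n)
iop = rng.normal(18, 4, n)
rnfl = rng.normal(90, 12, n)
X = np.column_stack([age, iop, rnfl])

# Hypothetical labels: thinner RNFL and higher IOP make "glaucoma" more likely
y = ((rnfl < 85) & (iop > 18)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print("sensitivity:", tp / (tp + fn))   # proportion of true cases detected
print("specificity:", tn / (tn + fp))   # proportion of healthy eyes correctly cleared
```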

However, there are challenges for machine learning in ophthalmology. Firstly, ML models are only as good as the datasets used to train, test and validate them. For training, there are important considerations regarding the size and composition of the training set; variation in the disease and in the equipment used to acquire the training data (particularly when pooling datasets from multiple sites); the relevance, clinical utility and weighting of different data types with respect to the target outcome or classification criteria; and, finally, the external validity of the training set. The validation dataset may be biased by subjective clinician assessment in the absence of more objective tests, and labelling the data in the training and validation datasets can be tedious and time-consuming. The benefits of larger and more multi-dimensional datasets may also be offset by increased algorithmic complexity and computation time. 
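
One of these pitfalls, limited external validity when the training data come from a single site or device, can be sketched with simulated data. In this toy example, ‘site B’ differs only by a hypothetical calibration offset in its imaging measurements; the gap between internal and external accuracy is the point, not the specific numbers.

```python
# Toy illustration of limited external validity: a model evaluated only on data
# from its own site can look better than it does on data from another site.
# All values are simulated; the "calibration offset" is a made-up device effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_site(n, offset=0.0):
    true_rnfl = rng.normal(90, 12, n)      # underlying anatomy (simulated)
    measured = true_rnfl + offset          # device-dependent measurement shift
    y = (true_rnfl < 85).astype(int)       # 'true' status, unaffected by the device
    return measured.reshape(-1, 1), y

X_a, y_a = make_site(400)                  # training / internal validation site
X_b, y_b = make_site(400, offset=8.0)      # external site with different equipment

model = LogisticRegression().fit(X_a[:300], y_a[:300])
print("internal accuracy:", round(model.score(X_a[300:], y_a[300:]), 2))
print("external accuracy:", round(model.score(X_b, y_b), 2))
```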

Overall, while the use of machine learning in ophthalmology may allow for the semi-automation and streamlining of clinical assessments, there are limitations that need to be addressed before its widespread implementation. 


Written by Nada Alfahad




Images merged and modified with Pixlr

Image of code by Ali Shah Lakhani from Unsplash [Free to use under the Unsplash License] https://unsplash.com/photos/sp1BZ1atp7M 

Image of eye by Marc Schulte from Unsplash [Free to use under the Unsplash License] https://unsplash.com/photos/KJCyvlA_aAQ 

