How Machine Learning Can Aid the Visually Impaired
Maureen, a retired worker, walked carefully toward her mahogany front door, eager for a morning walk in the sunshine. But the moment she opened the door, her foot caught on a shoe. Moments later she was lying on the floor, stunned by the pain.
This is the plight of the more than 295 million people who live with moderate-to-severe visual impairment. A study published in Investigative Ophthalmology & Visual Science found that individuals with visual impairments face a higher risk of unintentional injury. Before the pandemic, passersby would often guide visually impaired walkers across the street. During COVID, however, Terence Page, who is blind, explained that "there are less people who want to help you or even touch you." A viable tool to aid the visually impaired has become increasingly imperative.
Many mechanisms have been developed to assist the visually impaired. Assistive tools such as Braille and read-aloud software already help with reading, yet no comparably widespread tool exists for physical navigation.
The Relation to Machine Learning
Recently, however, rapid advances in machine learning have allowed researchers to turn seemingly abstract techniques into real-life solutions. Machine learning, the branch of artificial intelligence concerned with learning patterns from large amounts of data, can seem daunting and irrelevant, but researchers have shown it to be a promising aid for the visually impaired.
In 2018, machine learning researchers Kedar Potdar, Chinmay D. Pai, and Sukrut Akolkar trained a Convolutional Neural Network (CNN), a type of machine learning model, on the large ImageNet dataset and paired it with a computer's camera to detect and recognize common objects around users. ImageNet contains roughly 14 million labeled pictures spanning animals, plants, vehicles, food, kitchen items, and more. The goal of training a machine learning model is for it to learn much as humans do; for image classification, that means recognizing the significant features of an object after seeing many versions of the same kind of object during training. CNNs contain kernels: small matrices of numbers, serving as weights, that determine which parts of the data matter most. A kernel slides across an image, and at each position it multiplies its weights element-wise with the pixels beneath it and sums the result, producing a map of the most important features. These feature maps are ultimately combined into a classification result: an image category such as table or chair.
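The kernel operation described above can be sketched in a few lines of plain Python. This is an illustrative toy, not the authors' implementation: a 3x3 vertical-edge kernel slides over a tiny 5x5 "image" whose left half is dark and right half is bright, and the resulting feature map lights up exactly where the brightness changes.

```python
def convolve2d(image, kernel):
    """Slide the kernel over the image; at each position, multiply the
    kernel's weights element-wise with the pixels beneath it and sum
    the products (a 'valid' 2-D convolution)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            s = sum(image[r + i][c + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(s)
        out.append(row)
    return out

# A tiny "image": dark (0) on the left, bright (1) on the right.
image = [
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]

# A vertical-edge kernel: responds where brightness rises left to right.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

feature_map = convolve2d(image, kernel)
print(feature_map)  # → [[3, 3, 0], [3, 3, 0], [3, 3, 0]]
```

The high values in the first two columns of the feature map mark the dark-to-bright boundary. In a real CNN, the kernel weights are not hand-picked like this; they are learned during training, and many such feature maps are stacked and combined before the final classification.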
CNNs excel at identifying objects they have been trained on, such as shoes, traffic-light colors, and cars. Integrated into a mobile app, a CNN could leverage the phone's camera to recognize common objects and use the phone's speaker to tell users what lies in front of them. This dual mechanism could help users navigate inside a house or on busy streets, preventing trips and falls, and even stopping someone from stepping into a crosswalk at the wrong traffic signal, potentially saving lives. A CNN model in a mobile app could thus become a feasible and accessible tool for the visually impaired.
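The camera-to-speaker loop of such an app might look like the following minimal sketch. Everything here is a hypothetical stand-in: `classify_frame` would really run a camera frame through the trained CNN, and `speak` would really call the phone's text-to-speech engine; neither is a real API.

```python
def classify_frame(frame):
    """Hypothetical stand-in for the trained CNN: returns a
    (label, confidence) pair for the current camera frame."""
    return ("chair", 0.92)

def speak(message):
    """Hypothetical stand-in for the phone's text-to-speech output."""
    print(message)

def announce(frame, threshold=0.8):
    """Announce an object only when the model is confident enough,
    so the user is not flooded with uncertain guesses."""
    label, confidence = classify_frame(frame)
    if confidence >= threshold:
        speak(f"{label} ahead")
        return label
    return None

announce(frame=None)  # with the stub model above, says "chair ahead"
```

The confidence threshold is one plausible design choice: for a safety tool, staying silent is often better than announcing a wrong object.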
Although machine learning is still being refined, researchers continue to develop accurate, accessible, and robust models for many areas, from cancer detection to hurricane prediction. Artificial intelligence is not as abstract as once thought; it is already capable of helping solve real-world problems across many sectors.
 J. M. Wood et al., "Risk of falls, injurious falls, and other injuries resulting from visual impairment among older adults with age-related macular degeneration," Investigative Ophthalmology & Visual Science, vol. 52, pp. 5088–5092, 2011. Available: https://doi.org/10.1167/iovs.10-6644
 ScienceDaily, “We’re watching the World Go Blind, researchers say,” December 2020. [Online]. Available: https://www.sciencedaily.com/releases/2020/12/201207124118.htm [Accessed 4 August 2022].
 Curing Retinal Blindness Foundation, “Tools for the visually impaired: Devices & resources.” May 2020. [Online]. Available: https://crb1.org/for-families/resources/tools/#:~:text=Braille,impaired%20to%20read%20and%20write [Accessed 4 August 2022].
 K. Potdar et al., “A convolutional neural network based live object recognition system as blind aid,” arXiv preprint arXiv:1811.10399, 2018. Available: https://doi.org/10.48550/arXiv.1811.10399
 J. Deng et al., “Papers with code — imagenet dataset.” [Online]. Available: https://paperswithcode.com/dataset/imagenet [Accessed 4 August 2022].
 P. Mcgeehan, “Why the pandemic has made streets more dangerous for blind people,” December 2020. [Online]. Available: https://www.nytimes.com/2020/12/01/nyregion/nyc-blind-pedestrian-signals.html [Accessed 4 August 2022].