According to the International Agency for the Prevention of Blindness, an estimated 36
million people are blind and an additional 217 million have moderate to severe visual
impairment. In total, 253 million people must rely on their four remaining senses, primarily touch, to perceive their surroundings. Touch has an inherent limitation: an object must be within arm's (or cane's) reach before the individual can identify it. Iris addresses this by providing a mobile application with
a voice-controlled user interface that improves environmental awareness for the blind. With
Iris, users take a picture and have the application describe it aloud, with additional
information on specific entities in the picture available on request. Through
accessibility features and machine learning, Iris acts as an extension of the user's
senses.
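The capture-and-describe interaction outlined above could be sketched as follows. This is a minimal illustration, not the actual Iris implementation: every function name and the canned model outputs are hypothetical stand-ins for a real image-captioning model, object detector, and text-to-speech layer.

```python
# Sketch of the two-stage interaction: an overall scene description first,
# then entity-level detail on request. Model calls are stubbed with fixed
# outputs; in a real app they would invoke ML models (names are hypothetical).

from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    detail: str

def caption_image(image_bytes: bytes) -> str:
    """Stub for an image-captioning model (hypothetical placeholder)."""
    return "a person crossing a street at a crosswalk"

def detect_entities(image_bytes: bytes) -> dict:
    """Stub for an object detector returning per-entity details (hypothetical)."""
    return {
        "person": Entity("person", "about three meters ahead, walking left"),
        "crosswalk": Entity("crosswalk", "directly in front of the camera"),
    }

def describe(image_bytes: bytes) -> str:
    """First response spoken to the user: an overall scene description."""
    return caption_image(image_bytes)

def elaborate(image_bytes: bytes, entity_name: str) -> str:
    """Follow-up response when the user asks about a specific entity."""
    entity = detect_entities(image_bytes).get(entity_name)
    return entity.detail if entity else f"no {entity_name} found in the picture"

if __name__ == "__main__":
    photo = b"<jpeg bytes>"          # would come from the device camera
    print(describe(photo))           # spoken via text-to-speech in the app
    print(elaborate(photo, "person"))
```

The two-step design mirrors the voice interface described above: a short initial caption keeps the spoken response brief, and the user drills into specific entities only when they ask.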