Developing wearable devices to assist visually impaired people

8 May 2023

How does a visually impaired person negotiate a congested intersection with different types of vehicles and pedestrians coming from every direction? Traditional aids such as guide dogs and white canes come with considerable challenges and responsibilities – “canes can’t give information on oncoming cars and dogs are expensive and require care”. Iso Lomso fellow Achille Melingui of the Department of Electrical and Telecommunication Engineering at the University of Yaoundé 1, who is in his first residence at STIAS, therefore hopes to use deep learning and fuzzy logic to design and develop smart devices that can assist visually impaired people.

STIAS Iso Lomso Fellow Achille Melingui during his seminar on 2 May 2023

“Statistics from the World Health Organization, released in 2017, estimated that more than 285 million people worldwide had a visual impairment, with 39 million blind,” said Melingui. “Sight plays a vital role in human movement, including reaching a desired destination, finding the right path in known/unknown indoor environments, or navigating outdoor scenes.”

“Blind or visually impaired people face many challenges when performing such tasks and often feel disoriented or even intimidated,” he added. “Lack of vision can affect personal, professional and environmental relationships, and hinder performing everyday tasks.”

Existing free apps, including Be My Eyes and Aira, although extremely helpful, generally work by connecting the visually impaired person to another person (usually a volunteer) and therefore don’t afford complete independence.

Considerable work has been done, particularly over the last decade, to develop wearable assistive devices for visually impaired people – devices intended to improve the user’s cognition when navigating known/unknown indoor/outdoor environments and, ultimately, their quality of life. However, Melingui explained that the existing devices are not only expensive, but also not always adapted to all environments.

“Although these tools are trendy, they still cannot provide blind people with all the information and functionality for safe mobility available to sighted people,” he said.

Melingui’s previous work in autonomous navigation of mobile robots, which can move autonomously from one point to another while accurately avoiding obstacles, provided the inspiration for this project. “Robots don’t have eyes; they need to use something else to sense the environment and move to the target,” he said. “I wondered whether you could do the same for visually impaired people.”

Existing assistive devices fall into three types – vision replacement, vision enhancement and vision substitution. In vision substitution, visual information is translated into vibrations, audio or both – relying on hearing or touch, senses a blind person can control.

Vision substitution devices fall into three categories – positional locator devices that use GPS; electronic orientation aids that give pedestrians guidance and instructions about the path ahead; and electronic travel aids that sense and collect data about the environment or surrounding surfaces and relay them, via sensor, laser or sonar, to the user or to a server.

These can identify surrounding obstacles, provide information about surfaces and detect distances to objects – all of which allows a visually impaired person to orientate themselves and build a mental map of their surroundings. They work by using sensors to collect and process information about the environment, recognising objects and giving feedback to the user. Ideal operational features include processing speed, usability, robustness, distance coverage, obstacle detection, portability and user-friendliness.
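
As a rough illustration of that sense-and-feedback principle, the sketch below (Python) maps an obstacle distance to a vibration intensity; read_distance_cm and set_vibration are hypothetical stand-ins for real sensor and actuator drivers, not part of any particular device.

```python
import time

def read_distance_cm() -> float:
    """Hypothetical driver call: distance to the nearest obstacle, in centimetres."""
    raise NotImplementedError("replace with a real laser / sonar / ultrasonic read")

def set_vibration(intensity: float) -> None:
    """Hypothetical driver call: drive a haptic motor with an intensity in [0, 1]."""
    raise NotImplementedError("replace with a real haptic actuator driver")

def distance_to_intensity(distance_cm: float, max_range_cm: float = 300.0) -> float:
    """Closer obstacles produce stronger vibration; nothing beyond the sensor range."""
    if distance_cm >= max_range_cm:
        return 0.0
    return 1.0 - distance_cm / max_range_cm

def feedback_loop(period_s: float = 0.1) -> None:
    """Continuously sense the environment and translate it into haptic feedback."""
    while True:
        set_vibration(distance_to_intensity(read_distance_cm()))
        time.sleep(period_s)
```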

The current state of the art in these devices includes monocular camera-based systems, stereo camera-based systems and RGB-D camera-based systems.

But, as Melingui pointed out, these are still not ideal for tasks such as night-time navigation or detailed work like identifying the right pill to take – and this is where a deep-learning approach is needed.

“This project aims to propose an intelligent system for visually impaired people using computer vision techniques, deep-learning models, fuzzy logic, and audio or haptic (based on touch) feedback to facilitate independent movement in known/unknown environments. The proposed solution comprises an object recognition and audio-feedback model, a salient object-detection model, and a speech-synthesis model,” he said.
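
He did not go into implementation detail, but a pipeline of that shape could be sketched roughly as follows (Python); detect_objects, select_salient and synthesise_speech are placeholders standing in for the project’s models, not the actual implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str         # e.g. "car", "person"
    score: float       # detector confidence in [0, 1]
    distance_m: float  # estimated distance to the object in metres

def detect_objects(frame) -> List[Detection]:
    """Placeholder for the object-recognition model (e.g. a deep CNN detector)."""
    raise NotImplementedError

def select_salient(detections: List[Detection], max_items: int = 3) -> List[Detection]:
    """Placeholder salient-object step: keep the nearest, most confident objects."""
    return sorted(detections, key=lambda d: (d.distance_m, -d.score))[:max_items]

def synthesise_speech(message: str) -> None:
    """Placeholder for the speech-synthesis / audio-feedback stage."""
    raise NotImplementedError

def describe_scene(frame) -> None:
    """One camera frame in, one short spoken description out."""
    salient = select_salient(detect_objects(frame))
    if salient:
        message = "; ".join(
            f"{d.label} about {d.distance_m:.0f} metres ahead" for d in salient
        )
        synthesise_speech(message)
```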

“Fuzzy logic is about multi-value rather than binary logic,” he explained. “Fuzzy logic implements a set of rules or a mathematical means of representing vagueness and uncertainty which tries to take into account the fact that people are often required to make decisions without having all the information.”
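
As a toy illustration of what multi-valued means in practice – simplified here to ordinary (type-1) fuzzy sets, whereas the project targets type-2 fuzzy logic – the sketch below gives an obstacle distance partial membership in overlapping “near”, “medium” and “far” categories rather than forcing a yes/no answer.

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Membership degree (0 to 1) in a fuzzy set rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def obstacle_memberships(distance_m: float) -> dict:
    """An obstacle can be partly 'near' and partly 'medium' at the same time."""
    return {
        "near": triangular(distance_m, 0.0, 0.5, 2.0),
        "medium": triangular(distance_m, 1.0, 2.5, 4.0),
        "far": triangular(distance_m, 3.0, 5.0, 7.0),
    }

# A 1.5 m obstacle is roughly one-third 'near' and one-third 'medium',
# rather than simply one or the other.
print(obstacle_memberships(1.5))
```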

“The aim is to design simple, low-cost, and efficient deep learning-based solutions including a faster region-based convolutional neural network which will be used for object identification and face recognition, while type-2 fuzzy logic will be implemented for autonomous navigation. The novelty of this approach lies in the presentation of a complete solution integrating deep-learning structures in portable, low-cost, reliable and high-performance hardware.”
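
Melingui did not name a software framework; as one common way to experiment with a faster region-based convolutional neural network, the sketch below runs torchvision’s pretrained Faster R-CNN on a single image. It uses COCO classes and a recent torchvision release, and is not the project’s own model.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained Faster R-CNN with a ResNet-50 FPN backbone, trained on COCO
# (the weights="DEFAULT" argument assumes a reasonably recent torchvision).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("street.jpg").convert("RGB")  # any test photograph

with torch.no_grad():
    # The model takes a list of CHW float tensors and returns one dict per image
    # with 'boxes', 'labels' (COCO category ids) and 'scores'.
    prediction = model([to_tensor(image)])[0]

for label, score, box in zip(prediction["labels"], prediction["scores"], prediction["boxes"]):
    if score > 0.7:  # keep confident detections only
        print(f"COCO class {label.item()}: score {score:.2f}, box {box.tolist()}")
```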

“But it’s all still a long way away – we first need to design something that works. You can’t develop novel algorithms without a basic prototype,” he concluded. “It’s a long-term project which we are thinking about from the context of the Global South and the specific challenges that poses.”

 

Michelle Galloway: Part-time media officer at STIAS
Photograph: Noloyiso Mtembu