Real-time Object Detection and Auditory Feedback for the Visually Impaired

I am excited to share a project idea that I believe could significantly benefit the visually impaired community using BeagleBone technology. The project involves implementing a machine learning model based on YOLOv4-tiny and TensorFlow Lite. The model would analyze camera input in real time, identify objects within the frame, and establish relationships between them. The primary goal is to assist visually impaired individuals by describing their surroundings through audio feedback.
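To make the pipeline concrete, here is a minimal sketch of the detection stage: loading a YOLOv4-tiny model converted to TensorFlow Lite and running it on camera frames. The model filename, input size, and camera index are assumptions for illustration, not decisions I have made yet, and the raw output would still need YOLO box decoding and non-max suppression.

```python
# Minimal sketch: run a YOLOv4-tiny TFLite model on camera frames.
# "yolov4-tiny-416.tflite" and camera index 0 are placeholders.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter

interpreter = Interpreter(model_path="yolov4-tiny-416.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, height, width, _ = input_details[0]["shape"]

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and normalize the frame to the model's expected input.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (width, height))
    input_tensor = np.expand_dims(resized.astype(np.float32) / 255.0, axis=0)
    interpreter.set_tensor(input_details[0]["index"], input_tensor)
    interpreter.invoke()
    # Raw YOLO output: boxes and class scores still need decoding and NMS.
    predictions = interpreter.get_tensor(output_details[0]["index"])
    print("output shape:", predictions.shape)
```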

Object Detection: run YOLOv4-tiny (converted to TensorFlow Lite) for efficient, accurate real-time object detection on the BeagleBone, as in the inference sketch above.
Relationship Identification: derive simple spatial relationships between detected objects, e.g. which object is to the left of, right of, or near another (see the first sketch after this list).
Auditory Feedback: drive a speaker attached to the BeagleBone with a text-to-speech engine that reads out the detected objects and their relationships (second sketch below).
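For relationship identification, my starting point would be simple geometric rules over the detected bounding boxes. The helper below is hypothetical and only compares horizontal box centers; a real implementation would also handle depth cues, overlap, and distance.

```python
# Hypothetical helper: describe the spatial relation between two detected
# objects given their bounding boxes (x_min, y_min, x_max, y_max in pixels).
def describe_relation(label_a, box_a, label_b, box_b):
    cx_a = (box_a[0] + box_a[2]) / 2  # horizontal center of object A
    cx_b = (box_b[0] + box_b[2]) / 2  # horizontal center of object B
    side = "left" if cx_a < cx_b else "right"
    return f"{label_a} is to the {side} of the {label_b}"

print(describe_relation("cup", (50, 80, 120, 160), "laptop", (300, 60, 600, 400)))
# -> "cup is to the left of the laptop"
```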
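For the auditory feedback stage, one option is to shell out to an existing text-to-speech engine such as espeak-ng, which is packaged for the Debian images BeagleBone boards typically run. This is a sketch under that assumption; a dedicated TTS library or audio cape could be substituted.

```python
# Sketch of audio feedback via the espeak-ng CLI (assumes the espeak-ng
# package is installed and a speaker is the default ALSA output device).
import subprocess

def speak(text: str) -> None:
    # Blocks until the sentence has been spoken.
    subprocess.run(["espeak-ng", text], check=True)

speak("cup is to the left of the laptop")
```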

Could anyone kindly offer insights and suggestions on whether this project could be included as an idea for GSoC 2024?