International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 11 Issue: 01 | Jan 2024 www.irjet.net p-ISSN: 2395-0072
© 2024, IRJET | Impact Factor value: 8.226 | ISO 9001:2008 Certified Journal | Page 127
Object Detection and Localization for Visually Impaired People using
CNN
Sony Tirkey1, Anmol Ratan Tirkey2, Cazal Tirkey3
1Student, CHRIST (Deemed-To-Be University), Bengaluru, Karnataka, India
2 Student, JAIN (Deemed-To-Be University), Bengaluru, Karnataka, India
3Researcher, Bengaluru, Karnataka, India
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - Visually impaired people constitute a significant portion of the global population, with both permanent and temporary disabilities. The WHO estimates that around 390 lakh (39 million) individuals are completely blind and 2,850 lakh (285 million) are partially sighted or otherwise visually impaired. To aid their daily navigation, numerous supporting systems have been developed, each with its own disadvantages. Our main objective is to create an auto-assistance system for the visually impaired. Using CNNs (Convolutional Neural Networks), a widely used deep-learning approach, our system achieves over 95% accuracy in object detection based on camera images. Identified objects are conveyed through voice messages, making it a valuable prototype for assisting the visually impaired.
Key Words: Visually Impaired, Object Detection, CNN, Deep Learning, Assistance.
1. INTRODUCTION
Visually impaired people constitute a significant portion of the population, with tens of millions estimated to exist globally. Their integration into society is an essential and ongoing aim. A great deal of work has gone into health care for them, and many guiding-system approaches have been created to help visually impaired people live a normal life. These systems are frequently created only for certain activities; nevertheless, they can significantly improve such people's mobility and security.
The advancement of cutting-edge guiding systems to assist visually impaired persons is tightly linked to advances in image processing and computer vision, as well as the speed of the devices and processing units. Regardless of the technology used, the application must work in real time with quick actions and decisions, as speed is crucial for taking action.
Choosing the best possible outcome is essentially a trade-off between the performance of the software component and the hardware capabilities; the parameters must be tuned to their optimum. One of the primary goals of the assistive system during a visually impaired person's indoor movement is to automatically identify and recognize objects or obstacles, followed by an auditory alert.
The image-processing vision module described in this system is an integral aspect of the platform dedicated to assisting visually impaired people. Furthermore, the provided module can be used independently of the integrated platform. The proposed vision-based guidance system is created, built, and tested through experiments and iteratively optimized. The module follows the principle of producing a high-performance device that is also cost-effective for practical use. The module employs disruptive technology and permits updates and the addition of new functionality.
2. EXISTING SYSTEMS
Convolutional Neural Networks (CNN), speech recognition,
smartphone camera, and object personalization were all
used in existing systems. The purpose is to help visually
impaired people navigate indoor surroundings, recognize
items, and avoid obstacles.
By using facial recognition for authentication, the Facial
Identification and Authentication System provides a secure
and personalized user experience while also ensuring that
only authorized users may access the system and its
features. Nonetheless, it is dependent on the accuracy of facial recognition technology, which can be influenced by lighting, changes in appearance, and other factors.
The Object Detection System (General Object Detection -
Model 1) employs a pre-trained CNN model for general
object detection, allowing the system to identify a wide
range of items and providing real-time object recognition,
which improves the user's comprehension of their
surroundings. However, it is restricted to the items and
categories in the pre-trained model. Objects that are not part of the model's training data may not be detected accurately.
The Customized Object Detection System (Model 2) allows
users to personalize the system by adding their own
detection objects, increasing the system's versatility and
usability for visually impaired users with special needs.
However, users must take and label photographs, which can be time-consuming. Accuracy may also vary depending on the quality of photographs captured by users. Distance measurements may contain errors due to reliance on a good camera and the assumption of a fixed focal length.
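The fixed-focal-length assumption mentioned here corresponds to the simple pinhole-camera model, where distance is inferred from an object's known real-world width and its apparent width in pixels. A minimal sketch of that model (the function names and all numbers below are illustrative, not taken from the paper):

```python
def calibrate_focal_length_px(known_distance_cm, known_width_cm, perceived_width_px):
    """One-shot calibration: f = (w * D) / W, using a reference photo of an
    object of known width taken at a known distance."""
    return (perceived_width_px * known_distance_cm) / known_width_cm


def estimate_distance_cm(known_width_cm, focal_length_px, perceived_width_px):
    """Pinhole-camera distance estimate: D = (W * f) / w.

    known_width_cm: real-world width of the object (must be known in advance).
    focal_length_px: camera focal length in pixels (assumed fixed/calibrated).
    perceived_width_px: width of the object's bounding box in the image.
    """
    if perceived_width_px <= 0:
        raise ValueError("perceived width must be positive")
    return (known_width_cm * focal_length_px) / perceived_width_px
```

For example, a 20 cm wide object photographed at 100 cm that appears 200 px wide gives a focal length of 1000 px; when the same object later appears 100 px wide, the estimated distance is 200 cm. The estimate degrades exactly as the paper warns: any zoom, refocus, or poor-quality optics invalidates the calibrated focal length.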
The detection of obstacles and navigation guidance improves the user's safety when travelling through indoor spaces. The system delivers real-time alerts and directions to help users avoid obstacles and reach their destinations. However, obstacle detection and navigation guidance may not be totally foolproof, and users must still exercise caution; overreliance on the system may result in unanticipated outcomes.
Text-to-Speech Interaction and Speech Recognition offer natural and convenient interaction between the user and the smartphone. The system recognizes spoken commands and delivers navigation instructions, making it more user-friendly. However, the accuracy of speech recognition can vary with the user's speech patterns, accent, and background noise, so commands may be misinterpreted.
The system supports personalized object detection, allowing users to add objects tailored to their individual requirements. Cloud training improves the precision of personalized models, ensuring accurate object recognition. However, it may introduce latency and reliance on network connectivity, and privacy concerns should be considered when uploading personal data to the cloud.
The experimental results and evaluations show promising
accuracy in object detection and distance measurement,
confirming the proposed approach. However, experimental
results can differ depending on the testing environment and
the quality of the data acquired. The accuracy percentages
stated may not be achievable in all real-world conditions.
While the suggested systems provide essential features and
benefits for visually impaired individuals, they also have
limitations and possible issues that must be addressed. The
careful examination of these benefits and drawbacks is
critical for the successful deployment and enhancement of
the overall system.
3. PROPOSED SYSTEMS
This research addresses the difficulties that visually impaired people face when traversing indoor surroundings. The system aims to enable real-time object recognition and localization by exploiting the capabilities of CNNs, while also providing users with complete and intuitive audio or haptic feedback.
Fig -1: Deep Learning Steps
The following are the aims of the presented research paper:
Object Detection and Recognition: Use of CNN for precise
object recognition, improving awareness of the indoor
environment.
Object Localization: The use of a system to calculate or point
out the location of an object in the frame of any image or
video input.
Speech Interaction and Communication: Allows users to
interact with the system by using voice commands and
hearing instructions.
Personalized Object Detection: Entails developing
customized object detection models and providing support
for user-specific items.
Affordability and Accessibility: Develop a low-cost solution by utilizing widely available cellphones with integrated sensors and functionalities.
Improved Freedom and Safety: Improve the freedom and safety of visually impaired people by making indoor navigation and object recognition easier.
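The object-localization aim above can, in its simplest form, be reduced to mapping a detection's bounding-box centre to a coarse region of the frame, which is then easy to phrase as an audio cue. A hedged sketch (the thirds-based thresholds and names are our illustration, not the paper's exact method):

```python
def localize(bbox, frame_width):
    """Map a bounding box (x, y, w, h) to a coarse direction in the frame.

    Splits the frame into left / centre / right thirds and reports which
    third contains the box centre -- enough to phrase a cue such as
    "chair on your left".
    """
    x, y, w, h = bbox
    center_x = x + w / 2.0
    if center_x < frame_width / 3.0:
        return "left"
    elif center_x < 2.0 * frame_width / 3.0:
        return "center"
    else:
        return "right"
```

In a 640-px-wide frame, a box centred at x = 50 maps to "left", one at x = 320 to "center", and one at x = 550 to "right".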
4. IMPLEMENTATION
A camera is used to capture the footage, which is then
separated into frames. CNN classifiers are used for object
detection, and pyttsx3 is used for text-to-speech conversion.
Fig -2: Workflow of the object detection algorithm.
For every person's movement in the indoor environment, the process image acquisition > image processing > acoustic notification is looped. The total processing time is calculated by adding the three processing periods, which determines the acquisition rate for the input image frames. The process must be quick enough that possible obstacles can be avoided in time.
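The timing constraint described here is simple arithmetic: the three stage durations sum to the per-frame latency, whose reciprocal bounds the frame acquisition rate. A small illustrative helper (the stage times and the 0.5 s reaction budget below are hypothetical, not measured by the authors):

```python
def acquisition_rate_hz(t_acquire_s, t_process_s, t_notify_s):
    """Total per-frame latency is the sum of the three loop stages;
    its reciprocal is the maximum sustainable frame acquisition rate."""
    total = t_acquire_s + t_process_s + t_notify_s
    return 1.0 / total


def fast_enough(t_acquire_s, t_process_s, t_notify_s, max_reaction_s=0.5):
    """True if one full acquisition -> processing -> notification cycle
    completes within the allowed reaction time."""
    return (t_acquire_s + t_process_s + t_notify_s) <= max_reaction_s
```

For instance, stage times of 50 ms, 150 ms and 50 ms give a 250 ms cycle, i.e. at most 4 frames per second, which still fits a 0.5 s reaction budget; a 700 ms cycle would not.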
The image-processing method is used to detect specific objects, more specifically for traffic sign recognition. We made use of OpenCV's built-in functions through its cv2 Python module.
Image acquisition, pre-processing, the CNN model, object detection and localization, feedback generation, and the user interface are all components of the implemented system architecture. Image acquisition is the process of capturing photographs of the indoor environment using a camera module (e.g., web camera, depth sensor). Image pre-processing techniques are used to improve image quality and eliminate noise. Design and training are required for the Convolutional Neural Network (CNN) model for object detection and localization. The input photos are then processed through the trained CNN to detect and localize objects, barriers, and landmarks.
The user receives auditory or haptic feedback regarding the detected objects and their positions. To convey information to the user, a user-friendly interface is created, which may be a mobile application or a wearable device.
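For the auditory channel, the pyttsx3 engine named earlier can voice a short phrase built from a detection result. A sketch of that feedback step (the phrase format and the guarded import are our illustration, not the paper's exact code; pyttsx3's `init`/`say`/`runAndWait` calls are its standard API):

```python
def feedback_phrase(label, direction, distance_cm=None):
    """Build the sentence to be spoken for one detection.
    `direction` is a coarse frame position: "left", "center", or "right"."""
    where = "ahead" if direction == "center" else f"on your {direction}"
    phrase = f"{label} {where}"
    if distance_cm is not None:
        phrase += f", about {int(distance_cm)} centimeters away"
    return phrase


def speak(phrase):
    """Voice the phrase offline with pyttsx3; the import is guarded so the
    helper degrades to a silent no-op where no TTS backend is installed."""
    try:
        import pyttsx3
    except ImportError:
        return
    engine = pyttsx3.init()
    engine.say(phrase)
    engine.runAndWait()
```

For example, `speak(feedback_phrase("chair", "left"))` voices "chair on your left".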
CNN Model Development:
The first stage is Dataset Collection, which requires amassing a broad indoor dataset that includes various settings, items, barriers, and landmarks. Precise annotations serve as the foundation for training and evaluation. In the Model Architecture phase, a Convolutional Neural Network (CNN) is carefully designed for real-time object detection, harnessing spatial features for accurate indoor object identification and positioning. The model's efficiency is improved through rigorous training on the acquired dataset and accuracy optimization. The third step, Integration, incorporates the CNN model into the system architecture, which improves real-time object recognition and localization. This improves overall functionality, allowing for more informed decisions in indoor environments.
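The spatial-feature extraction that the CNN phase relies on reduces, at its core, to 2-D convolution: sliding a small kernel over the image and summing element-wise products at each position. A dependency-free sketch of one valid-mode pass (real CNN layers stack many learned kernels, channels, and nonlinearities on top of this):

```python
def conv2d_valid(image, kernel):
    """Valid-mode 2-D convolution (technically cross-correlation, as in most
    deep-learning frameworks): slide `kernel` over `image`, taking the sum of
    element-wise products at each position. Inputs are 2-D lists of numbers."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):          # every valid vertical offset
        row = []
        for j in range(iw - kw + 1):      # every valid horizontal offset
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out
```

A horizontal-difference kernel `[[1, -1]]` applied to an image with a vertical intensity step responds only at the step, which is exactly the kind of spatial feature a trained CNN layer learns to pick out.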
Hardware Integration:
A suitable camera module was chosen and integrated into
the system to facilitate image acquisition. To effectively
manage the tasks of image processing, Convolutional Neural
Network (CNN) inference, and feedback generation, a
microcontroller or processor was selected. To convey the
outcomes of the analysis to the user, auditory and haptic
output devices were incorporated. The integration of these components resulted in the development of a functional and robust system.
5. RESULTS
The result indicates that the CNN (Convolutional Neural Network) program for object recognition was implemented effectively. By detecting obstacles or objects and assisting users accordingly, the aim is to help people who are visually impaired improve their quality of life. According to the proposed paradigm, this application can be used to differentiate between objects and assist those with disabilities.
6. CONCLUSIONS
A system-based aiding network has been proposed to help partially and fully blind people. Template matching, developed through experimentation with OpenCV, provides a successful multiscale and useful method for the applications employed within the environment. Finally, the identified items are output as an auditory message containing the object's name. The clarity of the image obtained by the user determines the accuracy. The real-time implementation of the technology is promising, providing real-world benefits for visually impaired persons traversing indoor settings. With user-centric input and incremental enhancements, our technology has the potential to become an indispensable tool, enabling greater freedom and inclusivity for visually impaired individuals. This project shows the positive impact of cutting-edge technology and opens the way for future advances in assistive systems, making the world more accessible to all.
IRJET Journal
 
AR Application: Homewise VisionMs. Vaishali Rane, Om Awadhoot, Bhargav Gajare...
AR Application: Homewise VisionMs. Vaishali Rane, Om Awadhoot, Bhargav Gajare...AR Application: Homewise VisionMs. Vaishali Rane, Om Awadhoot, Bhargav Gajare...
AR Application: Homewise VisionMs. Vaishali Rane, Om Awadhoot, Bhargav Gajare...
IRJET Journal
 
Explainable AI(XAI) using LIME and Disease Detection in Mango Leaf by Transfe...
Explainable AI(XAI) using LIME and Disease Detection in Mango Leaf by Transfe...Explainable AI(XAI) using LIME and Disease Detection in Mango Leaf by Transfe...
Explainable AI(XAI) using LIME and Disease Detection in Mango Leaf by Transfe...
IRJET Journal
 
BRAIN TUMOUR DETECTION AND CLASSIFICATION
BRAIN TUMOUR DETECTION AND CLASSIFICATIONBRAIN TUMOUR DETECTION AND CLASSIFICATION
BRAIN TUMOUR DETECTION AND CLASSIFICATION
IRJET Journal
 
The Project Manager as an ambassador of the contract. The case of NEC4 ECC co...
The Project Manager as an ambassador of the contract. The case of NEC4 ECC co...The Project Manager as an ambassador of the contract. The case of NEC4 ECC co...
The Project Manager as an ambassador of the contract. The case of NEC4 ECC co...
IRJET Journal
 
"Enhanced Heat Transfer Performance in Shell and Tube Heat Exchangers: A CFD ...
"Enhanced Heat Transfer Performance in Shell and Tube Heat Exchangers: A CFD ..."Enhanced Heat Transfer Performance in Shell and Tube Heat Exchangers: A CFD ...
"Enhanced Heat Transfer Performance in Shell and Tube Heat Exchangers: A CFD ...
IRJET Journal
 
Advancements in CFD Analysis of Shell and Tube Heat Exchangers with Nanofluid...
Advancements in CFD Analysis of Shell and Tube Heat Exchangers with Nanofluid...Advancements in CFD Analysis of Shell and Tube Heat Exchangers with Nanofluid...
Advancements in CFD Analysis of Shell and Tube Heat Exchangers with Nanofluid...
IRJET Journal
 
Breast Cancer Detection using Computer Vision
Breast Cancer Detection using Computer VisionBreast Cancer Detection using Computer Vision
Breast Cancer Detection using Computer Vision
IRJET Journal
 
Auto-Charging E-Vehicle with its battery Management.
Auto-Charging E-Vehicle with its battery Management.Auto-Charging E-Vehicle with its battery Management.
Auto-Charging E-Vehicle with its battery Management.
IRJET Journal
 
Analysis of high energy charge particle in the Heliosphere
Analysis of high energy charge particle in the HeliosphereAnalysis of high energy charge particle in the Heliosphere
Analysis of high energy charge particle in the Heliosphere
IRJET Journal
 
A Novel System for Recommending Agricultural Crops Using Machine Learning App...
A Novel System for Recommending Agricultural Crops Using Machine Learning App...A Novel System for Recommending Agricultural Crops Using Machine Learning App...
A Novel System for Recommending Agricultural Crops Using Machine Learning App...
IRJET Journal
 
Auto-Charging E-Vehicle with its battery Management.
Auto-Charging E-Vehicle with its battery Management.Auto-Charging E-Vehicle with its battery Management.
Auto-Charging E-Vehicle with its battery Management.
IRJET Journal
 
Analysis of high energy charge particle in the Heliosphere
Analysis of high energy charge particle in the HeliosphereAnalysis of high energy charge particle in the Heliosphere
Analysis of high energy charge particle in the Heliosphere
IRJET Journal
 
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
IRJET Journal
 
FIR filter-based Sample Rate Convertors and its use in NR PRACH
FIR filter-based Sample Rate Convertors and its use in NR PRACHFIR filter-based Sample Rate Convertors and its use in NR PRACH
FIR filter-based Sample Rate Convertors and its use in NR PRACH
IRJET Journal
 
Kiona – A Smart Society Automation Project
Kiona – A Smart Society Automation ProjectKiona – A Smart Society Automation Project
Kiona – A Smart Society Automation Project
IRJET Journal
 
Utilizing Biomedical Waste for Sustainable Brick Manufacturing: A Novel Appro...
Utilizing Biomedical Waste for Sustainable Brick Manufacturing: A Novel Appro...Utilizing Biomedical Waste for Sustainable Brick Manufacturing: A Novel Appro...
Utilizing Biomedical Waste for Sustainable Brick Manufacturing: A Novel Appro...
IRJET Journal
 
A Review on Influence of Fluid Viscous Damper on The Behaviour of Multi-store...
A Review on Influence of Fluid Viscous Damper on The Behaviour of Multi-store...A Review on Influence of Fluid Viscous Damper on The Behaviour of Multi-store...
A Review on Influence of Fluid Viscous Damper on The Behaviour of Multi-store...
IRJET Journal
 
Invest in Innovation: Empowering Ideas through Blockchain Based Crowdfunding
Invest in Innovation: Empowering Ideas through Blockchain Based CrowdfundingInvest in Innovation: Empowering Ideas through Blockchain Based Crowdfunding
Invest in Innovation: Empowering Ideas through Blockchain Based Crowdfunding
IRJET Journal
 
DESIGN AND DEVELOPMENT OF BATTERY THERMAL MANAGEMENT SYSTEM USING PHASE CHANG...
DESIGN AND DEVELOPMENT OF BATTERY THERMAL MANAGEMENT SYSTEM USING PHASE CHANG...DESIGN AND DEVELOPMENT OF BATTERY THERMAL MANAGEMENT SYSTEM USING PHASE CHANG...
DESIGN AND DEVELOPMENT OF BATTERY THERMAL MANAGEMENT SYSTEM USING PHASE CHANG...
IRJET Journal
 
SPACE WATCH YOUR REAL-TIME SPACE INFORMATION HUB
SPACE WATCH YOUR REAL-TIME SPACE INFORMATION HUBSPACE WATCH YOUR REAL-TIME SPACE INFORMATION HUB
SPACE WATCH YOUR REAL-TIME SPACE INFORMATION HUB
IRJET Journal
 
AR Application: Homewise VisionMs. Vaishali Rane, Om Awadhoot, Bhargav Gajare...
AR Application: Homewise VisionMs. Vaishali Rane, Om Awadhoot, Bhargav Gajare...AR Application: Homewise VisionMs. Vaishali Rane, Om Awadhoot, Bhargav Gajare...
AR Application: Homewise VisionMs. Vaishali Rane, Om Awadhoot, Bhargav Gajare...
IRJET Journal
 
Ad

Recently uploaded (20)

Control Methods of Noise Pollutions.pptx
Control Methods of Noise Pollutions.pptxControl Methods of Noise Pollutions.pptx
Control Methods of Noise Pollutions.pptx
vvsasane
 
Water Industry Process Automation & Control Monthly May 2025
Water Industry Process Automation & Control Monthly May 2025Water Industry Process Automation & Control Monthly May 2025
Water Industry Process Automation & Control Monthly May 2025
Water Industry Process Automation & Control
 
hypermedia_system_revisit_roy_fielding .
hypermedia_system_revisit_roy_fielding .hypermedia_system_revisit_roy_fielding .
hypermedia_system_revisit_roy_fielding .
NABLAS株式会社
 
Working with USDOT UTCs: From Conception to Implementation
Working with USDOT UTCs: From Conception to ImplementationWorking with USDOT UTCs: From Conception to Implementation
Working with USDOT UTCs: From Conception to Implementation
Alabama Transportation Assistance Program
 
Little Known Ways To 3 Best sites to Buy Linkedin Accounts.pdf
Little Known Ways To 3 Best sites to Buy Linkedin Accounts.pdfLittle Known Ways To 3 Best sites to Buy Linkedin Accounts.pdf
Little Known Ways To 3 Best sites to Buy Linkedin Accounts.pdf
gori42199
 
Artificial intelligence and machine learning.pptx
Artificial intelligence and machine learning.pptxArtificial intelligence and machine learning.pptx
Artificial intelligence and machine learning.pptx
rakshanatarajan005
 
Automatic Quality Assessment for Speech and Beyond
Automatic Quality Assessment for Speech and BeyondAutomatic Quality Assessment for Speech and Beyond
Automatic Quality Assessment for Speech and Beyond
NU_I_TODALAB
 
2.3 Genetically Modified Organisms (1).ppt
2.3 Genetically Modified Organisms (1).ppt2.3 Genetically Modified Organisms (1).ppt
2.3 Genetically Modified Organisms (1).ppt
rakshaiya16
 
Generative AI & Large Language Models Agents
Generative AI & Large Language Models AgentsGenerative AI & Large Language Models Agents
Generative AI & Large Language Models Agents
aasgharbee22seecs
 
Slide share PPT of SOx control technologies.pptx
Slide share PPT of SOx control technologies.pptxSlide share PPT of SOx control technologies.pptx
Slide share PPT of SOx control technologies.pptx
vvsasane
 
acid base ppt and their specific application in food
acid base ppt and their specific application in foodacid base ppt and their specific application in food
acid base ppt and their specific application in food
Fatehatun Noor
 
JRR Tolkien’s Lord of the Rings: Was It Influenced by Nordic Mythology, Homer...
JRR Tolkien’s Lord of the Rings: Was It Influenced by Nordic Mythology, Homer...JRR Tolkien’s Lord of the Rings: Was It Influenced by Nordic Mythology, Homer...
JRR Tolkien’s Lord of the Rings: Was It Influenced by Nordic Mythology, Homer...
Reflections on Morality, Philosophy, and History
 
Machine foundation notes for civil engineering students
Machine foundation notes for civil engineering studentsMachine foundation notes for civil engineering students
Machine foundation notes for civil engineering students
DYPCET
 
Slide share PPT of NOx control technologies.pptx
Slide share PPT of  NOx control technologies.pptxSlide share PPT of  NOx control technologies.pptx
Slide share PPT of NOx control technologies.pptx
vvsasane
 
Frontend Architecture Diagram/Guide For Frontend Engineers
Frontend Architecture Diagram/Guide For Frontend EngineersFrontend Architecture Diagram/Guide For Frontend Engineers
Frontend Architecture Diagram/Guide For Frontend Engineers
Michael Hertzberg
 
Empowering Electric Vehicle Charging Infrastructure with Renewable Energy Int...
Empowering Electric Vehicle Charging Infrastructure with Renewable Energy Int...Empowering Electric Vehicle Charging Infrastructure with Renewable Energy Int...
Empowering Electric Vehicle Charging Infrastructure with Renewable Energy Int...
AI Publications
 
Lecture - 7 Canals of the topic of the civil engineering
Lecture - 7  Canals of the topic of the civil engineeringLecture - 7  Canals of the topic of the civil engineering
Lecture - 7 Canals of the topic of the civil engineering
MJawadkhan1
 
Design of Variable Depth Single-Span Post.pdf
Design of Variable Depth Single-Span Post.pdfDesign of Variable Depth Single-Span Post.pdf
Design of Variable Depth Single-Span Post.pdf
Kamel Farid
 
Applications of Centroid in Structural Engineering
Applications of Centroid in Structural EngineeringApplications of Centroid in Structural Engineering
Applications of Centroid in Structural Engineering
suvrojyotihalder2006
 
twin tower attack 2001 new york city
twin  tower  attack  2001 new  york citytwin  tower  attack  2001 new  york city
twin tower attack 2001 new york city
harishreemavs
 
Control Methods of Noise Pollutions.pptx
Control Methods of Noise Pollutions.pptxControl Methods of Noise Pollutions.pptx
Control Methods of Noise Pollutions.pptx
vvsasane
 
hypermedia_system_revisit_roy_fielding .
hypermedia_system_revisit_roy_fielding .hypermedia_system_revisit_roy_fielding .
hypermedia_system_revisit_roy_fielding .
NABLAS株式会社
 
Little Known Ways To 3 Best sites to Buy Linkedin Accounts.pdf
Little Known Ways To 3 Best sites to Buy Linkedin Accounts.pdfLittle Known Ways To 3 Best sites to Buy Linkedin Accounts.pdf
Little Known Ways To 3 Best sites to Buy Linkedin Accounts.pdf
gori42199
 
Artificial intelligence and machine learning.pptx
Artificial intelligence and machine learning.pptxArtificial intelligence and machine learning.pptx
Artificial intelligence and machine learning.pptx
rakshanatarajan005
 
Automatic Quality Assessment for Speech and Beyond
Automatic Quality Assessment for Speech and BeyondAutomatic Quality Assessment for Speech and Beyond
Automatic Quality Assessment for Speech and Beyond
NU_I_TODALAB
 
2.3 Genetically Modified Organisms (1).ppt
2.3 Genetically Modified Organisms (1).ppt2.3 Genetically Modified Organisms (1).ppt
2.3 Genetically Modified Organisms (1).ppt
rakshaiya16
 
Generative AI & Large Language Models Agents
Generative AI & Large Language Models AgentsGenerative AI & Large Language Models Agents
Generative AI & Large Language Models Agents
aasgharbee22seecs
 
Slide share PPT of SOx control technologies.pptx
Slide share PPT of SOx control technologies.pptxSlide share PPT of SOx control technologies.pptx
Slide share PPT of SOx control technologies.pptx
vvsasane
 
acid base ppt and their specific application in food
acid base ppt and their specific application in foodacid base ppt and their specific application in food
acid base ppt and their specific application in food
Fatehatun Noor
 
Machine foundation notes for civil engineering students
Machine foundation notes for civil engineering studentsMachine foundation notes for civil engineering students
Machine foundation notes for civil engineering students
DYPCET
 
Slide share PPT of NOx control technologies.pptx
Slide share PPT of  NOx control technologies.pptxSlide share PPT of  NOx control technologies.pptx
Slide share PPT of NOx control technologies.pptx
vvsasane
 
Frontend Architecture Diagram/Guide For Frontend Engineers
Frontend Architecture Diagram/Guide For Frontend EngineersFrontend Architecture Diagram/Guide For Frontend Engineers
Frontend Architecture Diagram/Guide For Frontend Engineers
Michael Hertzberg
 
Empowering Electric Vehicle Charging Infrastructure with Renewable Energy Int...
Empowering Electric Vehicle Charging Infrastructure with Renewable Energy Int...Empowering Electric Vehicle Charging Infrastructure with Renewable Energy Int...
Empowering Electric Vehicle Charging Infrastructure with Renewable Energy Int...
AI Publications
 
Lecture - 7 Canals of the topic of the civil engineering
Lecture - 7  Canals of the topic of the civil engineeringLecture - 7  Canals of the topic of the civil engineering
Lecture - 7 Canals of the topic of the civil engineering
MJawadkhan1
 
Design of Variable Depth Single-Span Post.pdf
Design of Variable Depth Single-Span Post.pdfDesign of Variable Depth Single-Span Post.pdf
Design of Variable Depth Single-Span Post.pdf
Kamel Farid
 
Applications of Centroid in Structural Engineering
Applications of Centroid in Structural EngineeringApplications of Centroid in Structural Engineering
Applications of Centroid in Structural Engineering
suvrojyotihalder2006
 
twin tower attack 2001 new york city
twin  tower  attack  2001 new  york citytwin  tower  attack  2001 new  york city
twin tower attack 2001 new york city
harishreemavs
 
Ad

Object Detection and Localization for Visually Impaired People using CNN

International Research Journal of Engineering and Technology (IRJET) | e-ISSN: 2395-0056 | p-ISSN: 2395-0072
Volume: 11, Issue: 01 | Jan 2024 | www.irjet.net
© 2024, IRJET | Impact Factor value: 8.226 | ISO 9001:2008 Certified Journal | Page 127

Sony Tirkey (1), Anmol Ratan Tirkey (2), Cazal Tirkey (3)
(1) Student, CHRIST (Deemed-To-Be University), Bengaluru, Karnataka, India
(2) Student, JAIN (Deemed-To-Be University), Bengaluru, Karnataka, India
(3) Researcher, Bengaluru, Karnataka, India

Abstract - Visually impaired people constitute a significant portion of the global population, with both permanent and temporary disabilities. The WHO estimates that around 390 lakh (39 million) individuals are completely blind and 2850 lakh (285 million) are purblind or otherwise visually impaired. To aid their daily navigation, numerous supporting systems have been developed, each with its own disadvantages. Our main objective is to create an auto-assistance system for the visually impaired. Using CNNs (Convolutional Neural Networks), a widely used approach in deep learning, our system achieves over 95% accuracy in object detection from camera images. Identified objects are conveyed through voice messages, making the system a valuable prototype for assisting the visually impaired.

Key Words: Visually Impaired, Object Detection, CNN, Deep Learning, Assistance

1. INTRODUCTION

Visually impaired people constitute a significant portion of the population, with tens of millions predicted to exist globally. Their integration into society is an essential and ongoing aim. A great deal of work has gone into health care for them, and to help visually impaired people live a normal life, many guiding-system approaches have been created. These systems are frequently created only for certain activities.
However, these solutions can significantly improve such people's mobility and security. The advancement of cutting-edge guiding systems for visually impaired persons is tightly linked to advances in image processing and computer vision, as well as to the speed of the devices and processing units. Regardless of the technology used, the application must work in real time with quick actions and decisions, since speed is crucial for taking action. Choosing the best possible outcome is essentially a trade-off between the performance of the software component and the hardware capabilities, and the parameters must be tuned to an optimum. One of the primary goals of the assistive system during a visually impaired person's indoor movement is to automatically identify and recognize objects or obstacles, followed by an auditory alert.

The image-processing vision module described in this system is an integrated part of a platform dedicated to assisting visually impaired people. Furthermore, the module can be used independently of the integrated platform. The proposed vision-based guidance system is designed, built, and tested through experiments and iteratively optimized. The module follows the principle of producing a high-performance device that is also cost-effective for practical use. It employs disruptive technology and permits updates and the addition of new functionality.

WORK DONE

Downloaded the project's model file.

2. EXISTING SYSTEMS

Convolutional Neural Networks (CNNs), speech recognition, smartphone cameras, and object personalization have all been used in existing systems. The purpose is to help visually impaired people navigate indoor surroundings, recognize items, and avoid obstacles. By using facial recognition for authentication, the Facial Identification and Authentication System provides a secure and personalized user experience while ensuring that only authorized users may access the system and its features.
Nonetheless, it depends on the accuracy of facial recognition technology, which can be affected by lighting, changes in appearance, and other factors.

The Object Detection System (General Object Detection, Model 1) employs a pre-trained CNN model for general object detection, allowing the system to identify a wide range of items and providing real-time object recognition, which improves the user's comprehension of their surroundings. However, it is restricted to the items and categories in the pre-trained model; objects that are not part of the model's training data may not be detected accurately.

The Customized Object Detection System (Model 2) allows users to personalize the system by adding their own detection objects, increasing the system's versatility and usability for visually impaired users with special needs. However, users must capture and label photographs, which can be time-consuming, and accuracy may vary with the quality of the photographs users capture. Distance measurements may contain errors because they rely on a good camera and assume a fixed focal length.

The detection of obstacles and navigation directions improves the user's safety when travelling through indoor spaces. The system delivers real-time alerts and guidance to help the user avoid obstacles and reach a destination. However, obstacle detection and navigation guidance may not be totally foolproof, and users must still exercise caution; overreliance on the system may result in unanticipated outcomes.

Text-to-Speech Interaction and Speech Recognition offer natural and convenient interaction between the user and the smartphone. The system recognizes spoken commands and delivers navigation instructions, making it more user-friendly. However, the accuracy of speech recognition can vary with the user's speech patterns, accent, and background noise, so commands may be misinterpreted.

The system supports personalized object detection, allowing users to add objects tailored to their individual requirements. Cloud training improves the precision of personalized models, ensuring accurate object recognition. However, it may introduce latency and a reliance on network connectivity, and privacy concerns should be considered when uploading personal data to the cloud.

The experimental results and evaluations show promising accuracy in object detection and distance measurement, confirming the proposed approach. However, experimental results can differ depending on the testing environment and the quality of the data acquired; the stated accuracy percentages may not be achievable in all real-world conditions.
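The fixed-focal-length distance estimate whose failure modes are noted above can be sketched as the standard pinhole-camera approximation. This is an illustrative sketch, not code from the systems reviewed; the function names and calibration values are hypothetical:

```python
def calibrate_focal_length_px(known_width_cm, known_distance_cm, measured_pixel_width):
    """Estimate the camera's focal length (in pixels) from one reference
    photo of an object of known width taken at a known distance."""
    return (measured_pixel_width * known_distance_cm) / known_width_cm

def estimate_distance_cm(known_width_cm, focal_length_px, pixel_width):
    """Pinhole-camera estimate: distance = (real width * focal length) / pixel width.
    Breaks down if the focal length changes (zoom/autofocus) or the object
    is not roughly parallel to the image plane."""
    return (known_width_cm * focal_length_px) / pixel_width

# A 20 cm wide object photographed at 100 cm appears 100 px wide:
f = calibrate_focal_length_px(20, 100, 100)   # -> 500.0 px
# The same object later appears 50 px wide, so it is about 200 cm away:
d = estimate_distance_cm(20, f, 50)           # -> 200.0 cm
```

Because the focal length is calibrated once and assumed constant, any refocusing or camera change invalidates the estimate, which is exactly the limitation the evaluation above flags.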
While the suggested systems provide essential features and benefits for visually impaired individuals, they also have limitations and open issues that must be addressed. A careful examination of these benefits and drawbacks is critical for the successful deployment and enhancement of the overall system.

3. PROPOSED SYSTEMS

This research addresses the difficulties that visually impaired people face when traversing indoor surroundings. The system aims to enable real-time object recognition and localization by exploiting the capabilities of CNNs, while providing users with complete and intuitive audio or haptic feedback.

Fig -1: Deep Learning Steps

The aims of the presented research are as follows:

Object Detection and Recognition: Use a CNN for precise object recognition, improving awareness of the indoor environment.

Object Localization: Calculate or point out the location of an object within the frame of any image or video input.

Speech Interaction and Communication: Allow users to interact with the system through voice commands and spoken instructions.

Personalized Object Detection: Develop customized object detection models and provide support for user-specific items.

Affordability and Accessibility: Develop an inexpensive solution by utilizing widely available smartphones with integrated sensors and functionalities.

Improved Freedom and Safety: Improve the freedom and safety of visually impaired people by making indoor navigation and object recognition easier.

4. IMPLEMENTATION

A camera is used to capture the footage, which is then separated into frames. CNN classifiers are used for object detection, and pyttsx3 is used for text-to-speech conversion.

Fig -2: Workflow of the object detection algorithm.
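The capture → detect → announce loop built from these pieces can be sketched as follows. This is a minimal illustration, not the authors' code: the CNN classifier is left abstract as a `detect_objects` callback, and `describe_detections`/`run_assistant` are hypothetical helper names. The cv2 and pyttsx3 calls (`VideoCapture`, `read`, `release`, `init`, `say`, `runAndWait`) are standard usage of those libraries:

```python
def describe_detections(labels):
    """Turn a list of detected object labels into one spoken sentence."""
    if not labels:
        return "No objects detected."
    return "Detected: " + ", ".join(labels) + "."

def run_assistant(detect_objects, frame_stride=10):
    """Loop: grab a camera frame, run the detector on every Nth frame,
    and speak the result. `detect_objects(frame) -> list[str]` is the
    CNN classifier, left abstract here."""
    import cv2       # OpenCV: frame capture (only needed for the live loop)
    import pyttsx3   # offline text-to-speech

    engine = pyttsx3.init()
    cap = cv2.VideoCapture(0)          # default camera
    n = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            n += 1
            if n % frame_stride:       # skip frames to keep the loop real-time
                continue
            message = describe_detections(detect_objects(frame))
            engine.say(message)
            engine.runAndWait()        # block until the announcement finishes
    finally:
        cap.release()
```

Skipping frames with `frame_stride` trades detection granularity for loop speed, reflecting the requirement that the whole acquisition-processing-notification cycle stay fast enough for timely obstacle warnings.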
For every movement of the person in the indoor environment, the process image acquisition > image processing > acoustic notification is looped. The total processing time is the sum of the three processing periods, and it determines the acquisition rate for the input image frames. The process must be quick enough that possible obstacles can be avoided in time. The image-processing method is used to detect specific objects, more specifically for traffic-sign recognition, using OpenCV's built-in cv2 functions.

Image acquisition, pre-processing, the CNN model, object detection and localization, feedback generation, and the user interface are all components of the implemented system architecture. Image acquisition captures photographs of the indoor environment using a camera module (e.g., web camera, depth sensor). Image pre-processing techniques are used to improve image quality and eliminate noise. The Convolutional Neural Network (CNN) model for object detection and localization must be designed and trained. The input photos are then processed through the trained CNN to detect and localize objects, barriers, and landmarks.

The user receives auditory or haptic feedback about the detected objects and their positions. To convey this information, a user-friendly interface is created, which may be a mobile application or a wearable device.

CNN Model Development:

The first stage is Dataset Collection, which requires amassing a broad indoor dataset that includes various settings, items, barriers, and landmarks. Precise annotations serve as the foundation for training and evaluation.
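The spatial-feature extraction that the CNN stages rely on can be illustrated with a minimal forward pass: convolution → ReLU → max pooling → dense classification. This is a from-scratch NumPy sketch, not the model trained in this work; the layer sizes and filter counts are arbitrary, and the weights are random, so only the shapes and data flow are representative:

```python
import numpy as np

def conv2d(image, kernels):
    """'Valid' cross-correlation of a 2-D image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    h, w = image.shape
    out = np.zeros((len(kernels), h - kh + 1, w - kw + 1))
    for k, ker in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * ker)
    return out

def maxpool2(maps):
    """2x2 max pooling over each feature map (channels first)."""
    c, h, w = maps.shape
    return maps[:, : h // 2 * 2, : w // 2 * 2].reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def softmax(z):
    """Turn raw scores into class probabilities that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((8, 8))                     # stand-in for a pre-processed grayscale frame
kernels = rng.standard_normal((4, 3, 3))       # 4 (untrained) 3x3 filters
weights = rng.standard_normal((4 * 3 * 3, 5))  # dense layer onto 5 object classes

features = np.maximum(conv2d(image, kernels), 0)  # ReLU after convolution -> (4, 6, 6)
pooled = maxpool2(features)                       # -> (4, 3, 3)
probs = softmax(pooled.ravel() @ weights)         # 5 class probabilities
```

In the trained system, the kernels and dense weights come from gradient descent on the annotated indoor dataset described above; the pipeline shape, however, is the same.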
In the Model Architecture phase, a Convolutional Neural Network (CNN) is painstakingly developed to solve real-time itemdetection, harnessing spatial features for accurate indoor object identification and positioning. The model's efficiency is improved through rigorous training using the acquired dataset and accuracy optimization. The third step, Integration, incorporates the CNN model into the system architecture, which improves real-time object recognition and localization. This improves overall functionality, allowing for more informed decisions in interior environments. Hardware Integration: A suitable camera module was chosen and integrated into the system to facilitate image acquisition. To effectively manage the tasks of image processing, Convolutional Neural Network (CNN) inference, and feedback generation, a microcontroller or processor was selected. To convey the outcomes of the analysis to the user, auditory and haptic output devices were incorporated. The integration of these components resulted in development of functional and robust systems. User Testing and Validation: To facilitate picture acquisition, a suitable camera module was selected and put into the system. A microcontroller or CPU was chosen to successfully manage the activities of image processing, Convolutional Neural Network (CNN) inference, and feedback production. Auditory and haptic output devices were used to communicate the results of the analysis to the user. The combination of these elements led in the creation of functional and robust systems. 5. RESULTS The result indicates that the CNN (Convolutional Neural Network) program for object recognition was implemented effectively. By detecting and assistingthem withtheobstacle or object found, the aim is to help persons who are purblind improve their quality of life. This application can be used to differentiate between objects and assist those with disabilities, according to the proposed paradigm. 6. 
CONCLUSIONS A system-based aiding network has been proposed to help purblind and fully blind people. The template that matches the procedures completed by experimenting with OpenCV has developed a successful multiscale and useful method for the applications employed within the environment. Finally, the identified items are output as an auditory message with the object's name. The clarity of the image obtained by the user will determine the accuracy. The real-time implementation of the technology is promising, providing real-world benefits for visually impaired persons traversing indoor settings. Our technology has the potential to become an indispensable tool, enabling greater freedom and inclusivity for visually impaired individuals, with user- centric input and incremental enhancements. This project shows the positive impact of cutting-edge technology and opens the way for future advances in assistive systems, making the world more accessible to all. REFERENCES [1] Samkit Shah, Jayraj Bandariya, Garima Jain, Mayur Ghevariya, Sarosh Dastoor, “CNN basedAuto-Assistance System as a Boon for Directing Visually Impaired Person”, 2019 3rd International Conference on Trends in Electronics and Informatics(ICOEI)23-25April 2019, DOI: 10.1109/ICOEI.2019.8862699. [2] Bin Jiang, Jiachen Yang, Zhihan Lv, Houbing Song, “Wearable Vision Assistance SystemBasedonBinocular Sensors for Visually Impaired Users”, IEEE Internet of Things Journal (Volume: 6, Issue: 2, 31 May 2018), DOI: 10.1109/JIOT.2018.2842229.
[3] Md. Milon Islam, Muhammad Sheikh Sadi, and Thomas Bräunl, "Automated Walking Guide to Enhance the Mobility of Visually Impaired People", IEEE Transactions on Medical Robotics and Bionics (Volume: 2, Issue: 3, August 2020), DOI: 10.1109/TMRB.2020.3011501.

[4] Devashish Pradeep Khairnar, Rushikesh Balasaheb Karad, Apurva Kapse, Geetanjali Kale, Prathamesh Jadhav, "PARTHA: A Visually Impaired Assistance System", 2020 3rd International Conference on Communication System, Computing and IT Applications (CSCITA), DOI: 10.1109/CSCITA47329.2020.9137791.

[5] Ajinkya Badave, Rathin Jagtap, Rizina Kaovasia, Shivani Rahatwad, Saroja Kulkarni, "Android Based Object Detection System for Visually Impaired", 2020 International Conference on Industry 4.0 Technology (I4Tech), DOI: 10.1109/I4Tech48345.2020.9102694.

[6] Ruiqi Cheng, Kaiwei Wang, Longqing Lin, and Kailun Yang, "Visual Localization of Key Positions for Visually Impaired People", 2018 24th International Conference on Pattern Recognition (ICPR), DOI: 10.1109/ICPR.2018.8545141.

[7] Payal T. Mahida, Seyed Shahrestani, Hon Cheung, "Localization Techniques in Indoor Navigation System for Visually Impaired People", 2017 17th International Symposium on Communications and Information Technologies (ISCIT), DOI: 10.1109/ISCIT.2017.8261229.

[8] Vidula V. Meshram, Kailas Patil, Vishal A. Meshram, and Felix Che Shu, "An Astute Assistive Device for Mobility and Object Recognition for Visually Impaired People", IEEE Transactions on Human-Machine Systems (Volume: 49, Issue: 5, October 2019), DOI: 10.1109/THMS.2019.2931745.

[9] Vidya N. Murali, James M. Coughlan, "Smartphone-based Crosswalk Detection and Localization for Visually Impaired Pedestrians", 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), DOI: 10.1109/ICMEW.2013.6618432.

[10] Jawaid Nasreen, Warsi Arif, Asad Ali Shaikh, Yahya Muhammad, Monaisha Abdullah, "Object Detection and Narrator for Visually Impaired People", 2019 6th IEEE International Conference on Engineering Technologies and Applied Sciences (ICETAS), DOI: 10.1109/ICETAS48360.2019.9117405.

[11] Cang Ye, Xiangfei Qian, "3D Object Recognition of a Robotic Navigation Aid for the Visually Impaired", IEEE Transactions on Neural Systems and Rehabilitation Engineering (Volume: 26, Issue: 2, February 2018), DOI: 10.1109/TNSRE.2017.2748419.

[12] Akshaya Kesarimangalam Srinivasan, Shwetha Sridharan, Rajeswari Sridhar, "Object Localization and Navigation Assistant for the Visually Challenged", 2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC), DOI: 10.1109/ICCMC48092.2020.ICCMC-00061.

[13] Joel A. Hesch, Stergios I. Roumeliotis, "An Indoor Localization Aid for the Visually Impaired", Proceedings 2007 IEEE International Conference on Robotics and Automation, DOI: 10.1109/ROBOT.2007.364021.

[14] K. Matusiak, P. Skulimowski, P. Strumiłło, "Object Recognition in a Mobile Phone Application for Visually Impaired Users", 2013 6th International Conference on Human System Interactions (HSI), DOI: 10.1109/HSI.2013.6577868.

[15] Nouran Khaled, Shehab Mohsen, Kareem Emad El-Din, Sherif Akram, Haytham Metawie, Ammar Mohamed, "In-Door Assistant Mobile Application Using CNN and TensorFlow", 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), DOI: 10.1109/ICECCE49384.2020.9179386.

[16] Hsueh-Cheng Wang, Robert K. Katzschmann, Santani Teng, Brandon Araki, Laura Giarré, Daniela Rus, "Enabling Independent Navigation for Visually Impaired People through a Wearable Vision-Based Feedback System", 2017 IEEE International Conference on Robotics and Automation (ICRA), DOI: 10.1109/ICRA.2017.7989772.