
SDG 10 – Research Activities

1. OBJECT IDENTIFICATION USING MACHINE LEARNING FOR VISUALLY IMPAIRED

Abstract: Computer Vision is a field of computer science and software engineering focused on recognizing and interpreting images and scenes. It encompasses tasks such as image recognition, object detection, image generation, and image super-resolution. Object detection, a crucial aspect of Computer Vision, finds applications in face detection, vehicle detection, pedestrian counting, web image analysis, security systems, and autonomous vehicles. This project utilizes advanced object detection algorithms such as R-CNN, Fast R-CNN, Faster R-CNN, RetinaNet, SSD, and YOLO. These algorithms, powered by deep learning techniques, require a solid understanding of mathematical principles and of frameworks and libraries such as TensorFlow, OpenCV, and ImageAI. By leveraging these methods, the project accurately detects objects within images, highlights them with rectangular bounding boxes, and assigns a tag to each identified object. Additionally, the project evaluates the accuracy of each detection method.
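As an illustration of the detection pipeline this abstract describes, the following is a minimal sketch of YOLO-style object detection through OpenCV's DNN module, one of the libraries named above. The model files (yolov3.cfg, yolov3.weights, coco.names), the image paths, and the confidence/NMS thresholds are placeholder assumptions, not values from the project.

```python
# Minimal sketch: YOLO object detection via OpenCV's DNN module.
# File paths and thresholds below are illustrative assumptions.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().strip().split("\n")

img = cv2.imread("input.jpg")
h, w = img.shape[:2]

# Resize and normalize the image into a network-ready blob (BGR -> RGB).
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, confidences, class_ids = [], [], []
for output in outputs:
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = float(scores[class_id])
        if conf > 0.5:
            # YOLO emits centre-x, centre-y, width, height as image fractions.
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(conf)
            class_ids.append(class_id)

# Non-maximum suppression drops overlapping duplicate detections; each
# surviving box is then drawn and tagged, as the abstract describes.
for i in np.array(cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)).flatten():
    x, y, bw, bh = boxes[i]
    cv2.rectangle(img, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
    cv2.putText(img, classes[class_ids[i]], (x, y - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
cv2.imwrite("output.jpg", img)
```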
2. Exploring Sign Language Recognition Using Convolutional Neural Network

Abstract: Sign language recognition plays a crucial role in enabling communication for the Deaf and Hard of Hearing community. With advances in computer vision and deep learning, Convolutional Neural Networks (CNNs) have shown promising results in understanding and interpreting sign language gestures. This paper presents a comprehensive exploration of sign language recognition using CNN architectures. The research begins by collecting a diverse dataset of sign language gestures encompassing various hand shapes, movements, and facial expressions. A CNN architecture tailored for sign language recognition is then designed and trained on the collected dataset, and fine-tuned through rigorous experimentation to achieve optimal performance in accurately identifying and classifying gestures. The study further investigates the impact of data augmentation, transfer learning, and hyperparameter optimization on the CNN's performance, and conducts comparative analyses of different CNN architectures and training strategies. Additionally, the research examines the challenges encountered in sign language recognition, such as variations in lighting conditions, occlusions, and complex hand movements, and explores mitigation strategies including the incorporation of temporal information and attention mechanisms into the CNN architecture. Through extensive experimentation and evaluation, the proposed CNN-based approach demonstrates significant advancements in sign language recognition accuracy and robustness.
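As a sketch of the kind of CNN this abstract refers to, the following Keras (TensorFlow) model classifies fixed-size gesture images and includes light augmentation layers of the sort the study evaluates. The 64x64 input size and the 26-class fingerspelling alphabet are illustrative assumptions, not the project's actual configuration.

```python
# Minimal sketch: a small Keras CNN for static sign-gesture classification.
# Input size, depth, and class count are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # e.g. one class per fingerspelled letter (assumption)

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    # Light data augmentation, active only during training.
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would come from, e.g.,
# tf.keras.utils.image_dataset_from_directory on the collected gesture data:
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```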
3. Signspeak: Audio to sign language converter

Abstract: This project converts received audio signals to text using a speech-to-text API. Speech-to-text conversion spans small-, medium-, and large-vocabulary systems: such systems accept spoken input and convert it to the corresponding text. This paper gives a comparative analysis of the technologies used in small-, medium-, and large-vocabulary speech recognition systems, determining the benefits and limitations of the approaches so far. The experiments show the role of the language model in improving the accuracy of speech-to-text conversion. We experiment with speech data containing noisy sentences and incomplete words; the results are noticeably better for randomly chosen sentences than for a sequential set of sentences. The project focuses on building an effective means of communication for specially abled people through the display of graphical hand gestures, utilizing the major principles of natural language processing (NLP) to make this project a reality.
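The audio-to-text front end could look like the sketch below, which uses the widely available SpeechRecognition Python library wrapping Google's free web speech API. The file name is a placeholder, and the specific API the project uses is not stated in the abstract.

```python
# Minimal sketch: the audio-to-text step, assuming the SpeechRecognition
# library and Google's free web API; the .wav path is a placeholder.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("utterance.wav") as source:
    recognizer.adjust_for_ambient_noise(source)  # helps with noisy input
    audio = recognizer.record(source)

try:
    text = recognizer.recognize_google(audio)
    print("Recognised text:", text)
    # The recognised text would next be mapped to gesture animations.
except sr.UnknownValueError:
    print("Speech was unintelligible (e.g. noise or incomplete words).")
except sr.RequestError as err:
    print("Speech API unavailable:", err)
```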
4. VISUAL GESTURAL COMMUNICATION DECIPHERING: A COMPUTER VISION APPROACH

Abstract: Sign language is often the only means of communication for people who can neither speak nor hear, and it is a boon that allows them to express their thoughts and emotions. In this work, a novel scheme of sign language recognition is proposed for identifying the alphabets and gestures of sign language. With the help of computer vision and neural networks, the system detects signs and produces the corresponding text output. The main purpose of this technology is to create algorithms and software that can accurately recognize and interpret hand gestures: computer vision is used to capture gesture movements and convert them to text or speech, while machine learning algorithms recognize patterns in hand gestures and translate them into meaningful language. The accuracy of such a system is critical, as it directly affects communication between deaf and hearing people. Sign interpretation using computer vision has many applications, including education, medicine, and entertainment, and it has the potential to bridge the communication gap between deaf and hearing people by providing easier communication solutions. As research in this area continues, we can expect major advances in translation technology in the future.
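One common way to implement the gesture-capture step described above is to extract hand landmarks with MediaPipe and feed them to a classifier. The sketch below shows only the landmark-extraction stage; the image path and the single-hand assumption are illustrative, and the downstream classifier is not shown.

```python
# Minimal sketch: hand-landmark extraction with MediaPipe as the
# computer-vision front end for gesture classification.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def extract_landmarks(image_path):
    """Return a flat list of 21 (x, y, z) hand landmarks, or None."""
    image = cv2.imread(image_path)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(rgb)
    if not results.multi_hand_landmarks:
        return None
    hand = results.multi_hand_landmarks[0]
    return [coord for lm in hand.landmark for coord in (lm.x, lm.y, lm.z)]

features = extract_landmarks("gesture.jpg")  # placeholder image path
if features is not None:
    # A trained classifier (e.g. a CNN or SVM) would map this
    # 63-dimensional vector to a letter or word.
    print(f"{len(features)}-dim feature vector ready for classification")
```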
5. A Machine learning approach to human trafficking prediction

Abstract: This study introduces a comprehensive method for identifying and predicting human trafficking using machine learning. Given the urgent need for more efficient prevention and intervention techniques in addressing this pervasive crime, and because conventional manual approaches are time-consuming, the proposed method automates the identification and prediction processes by leveraging various machine learning techniques. It analyzes extensive data, including social media posts, individual demographics, and internet activity, to pinpoint potential victims and forecast their likelihood of involvement in human trafficking. Methods such as decision trees, support vector machines, and neural networks enhance the system's effectiveness, while cross-validation, model evaluation, and feature selection further boost its accuracy. This technique offers a substantial improvement in accuracy, aiding law enforcement organizations in their efforts to combat this heinous crime.
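The model-comparison and validation workflow the abstract outlines (decision trees, SVMs, neural networks, feature selection, cross-validation) might be sketched with scikit-learn as below. The synthetic dataset stands in for the real, sensitive data, and all hyperparameters are illustrative choices, not the study's.

```python
# Minimal sketch: comparing the three classifier families named in the
# abstract with feature selection and 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Synthetic, imbalanced stand-in for the real dataset (assumption).
X, y = make_classification(n_samples=1000, n_features=30, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "svm": SVC(kernel="rbf", class_weight="balanced"),
    "neural_net": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                random_state=0),
}

for name, clf in models.items():
    # Feature selection + scaling + classifier, scored with 5-fold CV.
    pipe = make_pipeline(SelectKBest(f_classif, k=15), StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```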
6. INDOOR NAVIGATION SYSTEM FOR VISUALLY IMPAIRED PEOPLE USING LIFI AND DEEP LEARNING

Abstract: Indoor navigation poses significant challenges for visually impaired individuals, hindering their autonomy and mobility. This project presents an indoor navigation system for visually impaired individuals that utilizes deep learning techniques and visible light communication (LiFi) technology to provide real-time guidance through complex indoor environments. The proposed system encompasses preprocessing, model selection, training, evaluation, and testing phases; through iterative improvement and user feedback, it achieves enhanced performance and usability. Integration into user-friendly applications enables seamless deployment in various indoor settings, empowering visually impaired individuals to navigate independently and safely. Throughout the process, factors such as robustness, adaptability to different indoor environments, real-time performance, and accessible user interface design are considered, and ensuring the privacy and security of user data is crucial when developing and deploying such a system. This work contributes to advancing assistive technologies and promoting accessibility for individuals with visual impairments.
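To make the LiFi positioning idea concrete, the sketch below shows how a decoded visible-light beacon ID could be mapped to a spoken location cue. The beacon IDs, room map, and bitstring encoding are entirely hypothetical, and the real system's demodulation hardware and deep-learning components are not shown.

```python
# Minimal sketch: the positioning step of a LiFi-based navigation system.
# Each ceiling LED is assumed to broadcast a unique beacon ID over
# visible light; the map and encoding below are placeholders.
BEACON_MAP = {
    0x01: "Entrance lobby",
    0x02: "Corridor, 10 m from lobby",
    0x03: "Library door",
}

def decode_beacon(bits: str) -> int:
    """Turn a demodulated on-off-keyed bitstring into a beacon ID."""
    return int(bits, 2)

def announce_location(bits: str) -> None:
    beacon_id = decode_beacon(bits)
    place = BEACON_MAP.get(beacon_id, "unknown area")
    # In the full system this string would be sent to a text-to-speech
    # engine rather than printed.
    print(f"You are at: {place}")

announce_location("00000010")  # simulated photodiode output -> corridor
```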