TNEA Code 1399

SDG 04 – Research Activities

S.No Name of the project Abstract
1 ATTENDANCE TRACKING WITH FACIAL RECOGNITION This project explores the development and implementation of a face recognition attendance system for improved efficiency and accuracy in attendance tracking. The system leverages facial recognition technology, a form of biometric identification, to automatically identify individuals and mark their attendance. This approach eliminates the need for manual attendance processes, which are time-consuming and prone to human error. The system works by capturing facial images upon entry to designated attendance zones. These captured images are then compared against a pre-registered database of authorized individuals. Upon successful recognition, the system marks the individual as present and records the corresponding timestamp in a designated storage mechanism, such as an Excel spreadsheet. The core advantages of this system lie in its ability to streamline the attendance tracking process and enhance data integrity. By automating attendance recording, the system eliminates the time required for manual processes and mitigates the risk of errors associated with them. In conclusion, this face recognition attendance system offers a convenient, reliable, and efficient solution for attendance tracking across various settings. It streamlines processes, minimizes errors, and supports informed decision-making through detailed attendance reports.
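The abstract does not name specific libraries, so the following is only a minimal sketch of the described pipeline, assuming the open-source face_recognition package, OpenCV for camera capture, and a CSV log standing in for the Excel spreadsheet; the reference photos, camera index, and file names are placeholders.

```python
# Hypothetical sketch: one webcam frame is matched against pre-registered
# face encodings and a timestamp is appended to an attendance log.
import csv
from datetime import datetime

import cv2
import face_recognition

# Pre-registered database: one reference photo per authorized person (placeholders).
known_people = {"alice": "alice.jpg", "bob": "bob.jpg"}
known_names, known_encodings = [], []
for name, path in known_people.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:
        known_names.append(name)
        known_encodings.append(encodings[0])

# Capture a single frame at the attendance zone.
camera = cv2.VideoCapture(0)
ok, frame = camera.read()
camera.release()

if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for encoding in face_recognition.face_encodings(rgb):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        if True in matches:
            name = known_names[matches.index(True)]
            # Record "present" with a timestamp in the attendance log.
            with open("attendance.csv", "a", newline="") as log:
                csv.writer(log).writerow([name, datetime.now().isoformat()])
```

In a deployed system the reference encodings would be computed once and cached, and the matching loop would run continuously on the video stream rather than on a single frame.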
2 DOCUMENT VISUALIZATION This is a review report on research performed and a project built in the field of Information Technology. We introduce Document Visualization, a visualization and information retrieval technique aimed at text documents. A word tree is a graphical version of the traditional “keyword-in-context” method and enables rapid querying and exploration of bodies of text. This paper describes the integration of the Google Charts word tree into our website, which provides a window onto the ways in which users obtain value from the visualization. In this digital age, managing vast document datasets demands a solution that transcends traditional methods. This document visualization project addresses that need by transforming unstructured data into interactive visual representations. Leveraging web technology and visualization techniques, our system enhances document accessibility, allowing users to intuitively explore complex information. The project aspires to redefine how knowledge is extracted, providing a user-centric approach to uncover hidden insights and empower informed decision-making in diverse domains.
3 COLLEGE XPLORER College Xplorer is a revolutionary mobile application tailored to enhance the student experience within college campuses. Designed to streamline access to essential services, the app digitalizes stationery shops and food stalls, offering students a convenient platform to order food and purchase stationery products with ease. One of the standout features of College Xplorer is its note-sharing facility, which facilitates seamless collaboration between teachers and students. Teachers can create accounts and upload lecture notes and study materials, empowering students to access these resources at their convenience. Furthermore, students can create accounts to post attendance, ensuring accurate and efficient record-keeping. The integration of stationery shops and food stalls into the app revolutionizes the way students interact with campus amenities. No longer constrained by physical queues or limited opening hours, students can easily browse through a diverse range of products and place orders from anywhere on campus. This not only saves valuable time but also enhances overall convenience and accessibility.
4 Ultimate Q&A Large language model chat application The Ultimate Q&A LLM chat app represents a novel approach to interacting with PDF documents through a chat interface. Leveraging natural language processing and machine learning technologies, this application allows users to query multiple PDFs simultaneously, obtaining relevant information and responses based on the content of the documents. This paper outlines the development, functionality, and potential applications of the Ultimate Q&A LLM chat app, emphasizing its significance in enhancing document interaction and information retrieval.
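As a sketch of the retrieval step such a chat app might rely on, the example below extracts text from several PDFs with pypdf, ranks chunks by TF-IDF similarity to the question, and hands the best-matching context to a language model. The file names are hypothetical, TF-IDF stands in for whatever embedding-based retrieval the project actually uses, and call_llm is a stub for any chat-model API.

```python
# Hypothetical sketch: retrieve the PDF chunks most relevant to a question
# and build a context-grounded prompt for an LLM.
from pypdf import PdfReader
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def call_llm(prompt: str) -> str:
    """Placeholder: substitute the project's actual chat-model API here."""
    raise NotImplementedError("wire this to an LLM backend")


def load_chunks(paths, chunk_size=1000):
    """Split the text of every PDF into fixed-size character chunks."""
    chunks = []
    for path in paths:
        text = "".join(page.extract_text() or "" for page in PdfReader(path).pages)
        chunks += [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return chunks


def answer(question, chunks, top_k=3):
    # Rank chunks by similarity to the question and keep the best few.
    vectorizer = TfidfVectorizer().fit(chunks + [question])
    scores = cosine_similarity(vectorizer.transform([question]),
                               vectorizer.transform(chunks))[0]
    context = "\n".join(chunks[i] for i in scores.argsort()[-top_k:])
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

# Usage (hypothetical files):
# chunks = load_chunks(["report1.pdf", "report2.pdf"])
# print(answer("What were the main findings?", chunks))
```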
5 Adaptive learning for autistic children: Mood-based music therapy integration Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that involves difficulties in social communication. Previous research has demonstrated that these difficulties are apparent in the way ASD children speak, indicating that it may be possible to estimate ASD severity using quantitative features of speech. Here, we extracted a variety of prosodic, acoustic, and conversational features from speech recordings of Hebrew-speaking children who completed an Autism Diagnostic Observation Schedule (ADOS) assessment. Sixty features were extracted from the recordings of 72 children, and 21 of the features were significantly correlated with the children’s ADOS scores. Positive correlations were found with pitch variability and Zero Crossing Rate (ZCR), while negative correlations were found with the speed and number of vocal responses to the clinician, and the overall number of vocalizations. Using these features, we built several Deep Neural Network (DNN) algorithms to estimate ADOS scores and compared their performance with Linear Regression and Support Vector Regression (SVR) models. We found that a Convolutional Neural Network (CNN) yielded the best results. This algorithm predicted ADOS scores with a mean RMSE of 4.65 and a mean correlation of 0.72 with the true ADOS scores when trained and tested on different sub-samples of the available data. Automated algorithms with the ability to predict ASD severity in a reliable and sensitive manner have the potential of revolutionizing early ASD identification, quantification of symptom severity, and assessment of treatment efficacy.
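As an illustration of the feature-extraction-plus-regression pipeline, the sketch below computes two of the features named above (pitch variability and zero-crossing rate) with librosa and fits a Support Vector Regression model. The audio file names and scores are placeholders, and the study's best-performing model was a CNN; an SVR is used here only to keep the example short.

```python
# Hypothetical sketch: speech features (pitch variability, ZCR) feed a
# regression model that estimates an ADOS score.
import numpy as np
import librosa
from sklearn.svm import SVR


def speech_features(path):
    y, sr = librosa.load(path, sr=None)
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    f0 = librosa.yin(y, fmin=80, fmax=400, sr=sr)   # frame-wise pitch estimate
    pitch_variability = np.nanstd(f0)
    return [pitch_variability, zcr]


# Placeholder recordings and labels standing in for the real dataset.
train_files = ["child_01.wav", "child_02.wav", "child_03.wav"]
ados_scores = [12.0, 7.0, 18.0]

X = np.array([speech_features(f) for f in train_files])
model = SVR(kernel="rbf").fit(X, ados_scores)
print(model.predict([speech_features("child_new.wav")]))
```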
6 Student attendance system using face recognition and machine learning This paper presents a novel, real-time student attendance system leveraging face recognition technology and machine learning algorithms. Traditional attendance tracking methods in educational institutions are plagued by inefficiencies such as manual roll calls, potential for errors, and administrative burdens. To address these shortcomings, the proposed system automates attendance monitoring through advanced face recognition techniques. By integrating machine learning algorithms, the system continuously improves its accuracy and reliability for optimal performance. The system offers a user-friendly interface and integrates seamlessly with existing infrastructure, promoting convenience for both students and staff. It features automatic attendance marking, real-time data updates, and comprehensive reporting, fostering efficiency and transparency in attendance management. Additionally, its scalability allows for effortless deployment across diverse educational settings. This project aims to revolutionize student attendance management by providing a robust, efficient, and technologically advanced solution tailored to modern educational environments. Furthermore, the system eliminates the need for physical attendance registers or ID cards, mitigating potential fraud. It utilizes deep learning models for robust identification even in challenging lighting conditions, ensuring reliable attendance tracking across various scenarios. Prioritizing data privacy and security, the system implements encryption protocols and access controls to effectively safeguard sensitive student information. By generating comprehensive attendance reports and analytics, the system empowers educational institutions to make informed decisions based on data-driven insights into student attendance patterns, further enhancing operational efficiency and strategic planning. Overall, this work signifies a significant advancement in attendance management, offering a seamless, accurate, and secure solution that caters to the evolving needs of modern educational institutions while upholding the integrity and confidentiality of student data.
7 ENHANCED GESTURE CONFERRAL PROCESSING LEVERAGING OPENCV In recent years, there has been an increase in the use of IoT devices for home automation, shopping malls, and other public places. However, for individuals who are mute or bedridden, accessing these devices can be difficult, especially when they are voice-activated. To address this issue, hand gesture recognition technology has been developed to allow individuals to control these devices through simple hand movements. Image processing and pattern recognition are crucial for accurately detecting these hand gestures, and platforms such as OpenCV, Python, PyCharm, and MediaPipe are commonly used in software development to achieve this. This technology has the potential to help people with physical, sensory, or intellectual disabilities to participate fully in all activities in society and enjoy equal opportunities. By using hand gestures to communicate with IoT devices, individuals who are deaf can also benefit from this technology. Ultimately, this technology has the potential to create a human-computer interaction that is accessible to all, making it a valuable addition to the field of assistive technology. Furthermore, hand gesture recognition technology is an excellent example of the potential of IoT devices to facilitate a more connected and automated world. However, it is important to note that with any new technology, there are also concerns around data privacy and security. As such, it is essential that developers prioritize ethical considerations and robust security protocols when designing these systems. Moreover, hand gesture recognition technology can be further improved through the use of artificial intelligence and machine learning. These technologies can help improve the accuracy of the recognition system and provide a more personalized experience for users. This system is highly reliable and user-friendly, and does not require any physical contact, which makes it highly suitable for disabled people. Furthermore, the development of new sensor technologies can also help increase the reliability and efficiency of the hand gesture recognition system. Overall, the development of hand gesture recognition technology is an exciting and innovative area of research that has the potential to improve the lives of many individuals, particularly those with physical or sensory disabilities. With continued advancements in technology, we can expect to see more sophisticated and accessible hand gesture recognition systems that will help create a more inclusive and accessible society.
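As a minimal sketch of the recognition step, the example below uses the MediaPipe Hands solution and OpenCV named above to extract hand landmarks from a single webcam frame and reduces them to a simple open-palm/fist gesture; the IoT command is only a printed placeholder, and the finger-counting rule is an illustrative heuristic rather than the project's actual classifier.

```python
# Hypothetical sketch: MediaPipe hand landmarks from one webcam frame are
# mapped to an "open palm vs. fist" gesture that could toggle an IoT device.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands


def detect_gesture(frame_bgr):
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    # A fingertip above its middle joint (smaller y) counts as "raised".
    tips, pips = [8, 12, 16, 20], [6, 10, 14, 18]
    raised = sum(lm[t].y < lm[p].y for t, p in zip(tips, pips))
    return "open_palm" if raised >= 4 else "fist"


camera = cv2.VideoCapture(0)
ok, frame = camera.read()
camera.release()
if ok:
    gesture = detect_gesture(frame)
    if gesture == "open_palm":
        print("toggle_light(on)")    # placeholder for the real IoT command
    elif gesture == "fist":
        print("toggle_light(off)")
```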
8 COMPUTER ASSISTANCE USING LARGE LANGUAGE MODEL In the realm of computing, the integration of large language models (LLMs) has spurred revolutionary advancements in natural language understanding and processing. This project proposes an innovative approach to streamline system-wide operations through the utilization of LLM capabilities. The objective is to empower users with intuitive and efficient means to perform tasks such as file creation, deletion, and other system manipulations using natural language commands. This project aims to develop a robust framework that harnesses the power of LLMs to interpret and execute user instructions seamlessly across various computing platforms. By leveraging the contextual understanding and semantic comprehension capabilities of LLMs, the proposed system seeks to enhance user productivity and convenience by eliminating the need for traditional command-based interactions.
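A minimal sketch of the idea follows, assuming the LLM is asked to return a structured JSON action that is then mapped onto a small whitelist of file-system operations; query_llm is a stub for whatever model API the project uses, and the whitelist, keys, and example command are illustrative.

```python
# Hypothetical sketch: natural-language command -> structured LLM plan ->
# whitelisted file-system operation.
import json
import os

ALLOWED_ACTIONS = {"create_file", "delete_file", "list_dir"}


def query_llm(prompt: str) -> str:
    """Placeholder: substitute the project's actual LLM API here."""
    raise NotImplementedError("wire this to an LLM backend")


def execute(command: str) -> str:
    prompt = (
        "Convert this request to JSON with keys 'action' and 'path'. "
        f"Allowed actions: {sorted(ALLOWED_ACTIONS)}.\nRequest: {command}"
    )
    plan = json.loads(query_llm(prompt))          # LLM returns a structured plan
    action, path = plan["action"], plan["path"]
    if action not in ALLOWED_ACTIONS:
        return f"Refused unknown action: {action}"
    if action == "create_file":
        open(path, "a").close()
        return f"Created {path}"
    if action == "delete_file":
        os.remove(path)
        return f"Deleted {path}"
    return "\n".join(os.listdir(path))

# Usage (hypothetical): execute("make an empty file called notes.txt here")
```

Restricting execution to a whitelist keeps the model's output advisory: the LLM proposes an action, but only pre-approved operations are ever run.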
9 Iterative deepening chess engine with alpha-beta pruning The chess engine undergoes iterative development and enhancement. Testing, bug fixing, and optimizations are used to improve performance. The program is tested against itself to assess improvements. Random test positions and critical bug fixes enhance reliability and accuracy. Search capabilities are bolstered, and search extensions refine decision-making. Analysis helps track progress and identify areas for improvement.
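The abstract does not include code, so the sketch below only shows the general shape of iterative deepening wrapped around a negamax formulation of alpha-beta pruning, assuming the python-chess library for move generation and a bare material count as the evaluation function; a real engine's evaluation, move ordering, and search extensions would be far richer.

```python
# Hypothetical sketch: iterative-deepening negamax with alpha-beta pruning.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}


def evaluate(board: chess.Board) -> int:
    """Material balance from the side to move's point of view."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, board.turn))
        score -= value * len(board.pieces(piece_type, not board.turn))
    return score


def negamax(board, depth, alpha, beta):
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -float("inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1, -beta, -alpha))
        board.pop()
        alpha = max(alpha, best)
        if alpha >= beta:        # beta cutoff: the opponent avoids this line
            break
    return best


def best_move(board, max_depth=4):
    """Iterative deepening: search depth 1, 2, ... so a move is always ready."""
    choice = None
    for depth in range(1, max_depth + 1):
        scored = []
        for move in board.legal_moves:
            board.push(move)
            scored.append((-negamax(board, depth - 1, -float("inf"), float("inf")), move))
            board.pop()
        choice = max(scored, key=lambda item: item[0])[1]
    return choice


print(best_move(chess.Board()))
```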
10 SUBJECTIVE ANSWER EVALUATION USING MACHINE LEARNING AND NATURAL LANGUAGE PROCESSING This paper presents an innovative approach for the automated evaluation of subjective answers leveraging the power of machine learning (ML) and natural language processing (NLP) techniques. Traditional methods of assessing subjective responses often rely on manual grading, which can be time-consuming and prone to subjectivity. Our proposed system aims to streamline this process by employing advanced ML algorithms and NLP models to objectively evaluate and score subjective answers. We explore various methodologies for feature extraction, sentiment analysis, semantic understanding, and contextual comprehension to develop a robust evaluation framework. Furthermore, we discuss the integration of these techniques into an end-to-end system capable of handling diverse types of subjective responses. Experimental results demonstrate the effectiveness and efficiency of our approach, showcasing its potential to revolutionize the evaluation of subjective answers in various educational and professional settings.
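As one simple instance of the similarity component such a system might start from, the sketch below scores a student response against a reference answer using TF-IDF cosine similarity; the reference text, response, and mark scale are illustrative, and the full framework described above would layer richer semantic and contextual models on top of this baseline.

```python
# Hypothetical sketch: score a subjective answer by its TF-IDF cosine
# similarity to a reference answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def score_answer(reference: str, response: str, max_marks: float = 10.0) -> float:
    vectors = TfidfVectorizer(stop_words="english").fit_transform([reference, response])
    similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
    return round(similarity * max_marks, 1)


reference = "Photosynthesis converts light energy into chemical energy in plants."
response = "Plants use light energy and turn it into chemical energy."
print(score_answer(reference, response))   # higher similarity -> more marks
```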
11 USING LARGE LANGUAGE MODEL CHAT WITH YOUR DOCUMENTS LOCALLY WITH PRIVATE GPT With growing concerns surrounding privacy in conversational AI systems, there is a pressing need for innovative approaches that mitigate the risks associated with data exposure and misuse, and Private GPT plays a crucial role in meeting that need. Private GPT emerges as a promising solution to these challenges by incorporating privacy-preserving mechanisms into the architecture of the renowned GPT models. This paper provides an in-depth analysis of Private GPT, focusing on its design principles, functionality, and efficacy in safeguarding user privacy. By integrating techniques such as federated learning, differential privacy, and secure multiparty computation, Private GPT offers robust protection against unauthorized access to sensitive user data while maintaining high-quality language generation capabilities. Through a comprehensive review of existing literature and empirical studies, this paper evaluates the strengths and limitations of Private GPT in various real-world scenarios. Additionally, it discusses potential avenues for future research and development to further enhance the privacy and utility of conversational AI systems.
12 ASSISTIVE TOOL FOR ONLINE-BASED EXAMINATIONS FOR BLIND PEOPLE In the context of advancing living standards and the prevalence of a more digitized society, computers play a pivotal role in achieving efficiency and optimal methods for various tasks. Online examinations stand out as a prominent method in contemporary education; however, individuals with visual impairments face challenges in independently participating in these exams. Currently, visually impaired individuals rely on Braille or the assistance of a writer to navigate exams. This project proposes a solution centered around Speech Synthesis (text-to-speech conversion) to facilitate a seamless examination experience for blind candidates. Questions are dictated to candidates through speech, and their responses are recorded using Speech-to-Text conversion. The project aims to compare response results based on metrics such as response time, providing visually impaired candidates with the ability to re-listen to questions and modify answers before submission. Results are recorded in a database for reference. The primary goal is to design and implement a user-friendly interface specifically tailored for visually impaired students, providing them with the autonomy to independently take online exams at their preferred time. As a result, the project envisions a significant increase in the number of visually impaired individuals who can navigate exams independently, thereby fostering greater accessibility and inclusivity in the realm of education.
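A minimal sketch of the text-to-speech / speech-to-text loop described above is shown below, assuming the pyttsx3 and speech_recognition packages; the question text and time limit are placeholders, and the real system would add re-listen, answer editing, and database storage around this loop.

```python
# Hypothetical sketch: read a question aloud, then transcribe the spoken answer.
import pyttsx3
import speech_recognition as sr


def ask(question: str) -> str:
    engine = pyttsx3.init()
    engine.say(question)              # dictate the question to the candidate
    engine.runAndWait()

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source, phrase_time_limit=15)
    try:
        return recognizer.recognize_google(audio)   # speech-to-text
    except sr.UnknownValueError:
        return ""                     # candidate can re-listen and try again


answer = ask("Question 1: What year did India gain independence?")
print("Recorded answer:", answer)
```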
13 E-LEARNING PLATFORM FOR FULL STACK WEB DEVELOPMENT This report details the development of an e-learning platform tailored specifically for full-stack developers. The platform integrates features such as YouTube video tutorials, interactive coding exercises, a playground IDE, and a user contribution interface. A login/sign-up system is also implemented for user authentication and personalized experiences. The report outlines the platform’s architecture, functionality, user interaction, and concludes with its potential impact on software development education.
14 Code synchronization: a code learning platform CodeSync is a web-based online coding platform designed as a learning system in which students and teachers can learn collaboratively in real time. Collaboration is supported through a video-calling facility and real-time code editing within the online coding platform. The system is aimed at beginners learning to code online, with the main objective of letting them learn without losing time to minor bugs encountered along the way. Although the application is developed primarily for learning to code, the same approach can also be applied to other editors and whiteboards to support collaborative learning.
15 Conversion of images to multilingual text The use of language and visuals together is increasingly important in the writings we read nowadays. It is possible to use this combination of language and imagery to encourage literacy. In this research study, an innovative application has been developed with the aim of interpreting text embedded within images, thereby enhancing visual literacy. Moreover, various techniques for multilingual translation from images to text have been extensively examined. A refined methodology is proposed based on collaborative findings derived from a comprehensive review of existing literature. This paper’s main goal is to describe an approach for text extraction from photos, including both clean and cluttered visuals, using an optical character recognition (OCR) engine. To improve the quality of the input photos, the suggested method integrates image preprocessing techniques including image binarization and noise reduction. The OCR engine is then used to extract the embedded text from the preprocessed photos. A post-processing step is used to correct any flaws in the retrieved text in order to improve the OCR engine’s accuracy. The effectiveness of the proposed method is evaluated through experimental analysis, which demonstrates its capability to accurately extract text from both clear and noisy images. The results affirm the high accuracy achieved by the proposed approach.
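The sketch below illustrates the preprocessing steps named above (noise reduction and binarization) with OpenCV before Tesseract extracts the text, assuming the pytesseract package with the relevant language packs installed; the input image and the chosen language codes are placeholders.

```python
# Hypothetical sketch: denoise and binarize an image, then run OCR on it.
import cv2
import pytesseract


def image_to_text(path: str, languages: str = "eng+hin+tam") -> str:
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    denoised = cv2.fastNlMeansDenoising(image, h=10)                  # noise reduction
    _, binary = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)    # binarization
    return pytesseract.image_to_string(binary, lang=languages)


print(image_to_text("sign_board.jpg"))   # hypothetical input image
```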
16 ACADEMIC PROGRESS FORECASTING USING MACHINE LEARNING IN PYTHON This paper proposes a machine learning-based approach for forecasting academic progress, aiming to assist educators in identifying students at risk of underperformance. Leveraging data from student demographics, educational background, and classroom engagement metrics, our methodology employs various supervised learning algorithms, including decision trees, random forests, perceptron, logistic regression, and neural networks. We evaluate the performance of these models using real-world student performance data, comparing their accuracy in predicting academic outcomes. The results demonstrate the effectiveness of the proposed approach in accurately forecasting student progress, thereby enabling proactive interventions to support at-risk students and improve overall educational outcomes.
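A minimal sketch of the model comparison follows, assuming scikit-learn and a tiny placeholder table standing in for the real demographic and engagement data (1 = at risk); the feature names, values, and cross-validation setting are illustrative only.

```python
# Hypothetical sketch: compare a few of the supervised models named above
# with cross-validation on placeholder student-performance data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

data = pd.DataFrame({
    "attendance_pct":   [95, 60, 80, 45, 88, 70, 55, 92],
    "assignment_score": [85, 40, 70, 35, 90, 65, 50, 78],
    "prior_gpa":        [8.5, 5.0, 7.0, 4.5, 9.0, 6.5, 5.5, 8.0],
    "at_risk":          [0, 1, 0, 1, 0, 0, 1, 0],
})
X, y = data.drop(columns="at_risk"), data["at_risk"]

models = {
    "decision_tree": DecisionTreeClassifier(max_depth=3),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "logistic_regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    accuracy = cross_val_score(model, X, y, cv=3).mean()   # mean CV accuracy
    print(f"{name}: {accuracy:.2f}")
```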
17 Signspeak: Audio to sign language converter This project is based on converting received audio signals to text using a speech-to-text API. Speech-to-text conversion comprises small, medium, and large vocabulary conversions. Such systems accept speech input, which is then converted to the corresponding text. This paper gives a comparative analysis of the technologies used in small, medium, and large vocabulary speech recognition systems. The comparative study determines the benefits and liabilities of the approaches so far. The experiment shows the role of the language model in improving the accuracy of the speech-to-text conversion system. We experiment with speech data containing noisy sentences and incomplete words. The results are notably better for randomly chosen sentences than for a sequential set of sentences. This project focuses on building an effective means of communication for specially abled people through the implementation of graphical hand gestures. We utilize the major principles of NLP (natural language processing) to make this project a reality.
18 VISUAL GESTURAL COMMUNICATION DECIPHERING: A COMPUTER VISION APPROACH Sign language is the only tool of communication for people who are unable to speak or hear. Sign language is a boon for physically challenged people to express their thoughts and emotions. In this work, a novel scheme of sign language recognition has been proposed for identifying the alphabets and gestures in sign language. With the help of computer vision and neural networks, we can detect the signs and give the respective text output. The main purpose of this technology is to create algorithms and software that can accurately recognize and interpret hand gestures. Computer technology is used to capture the movements of gestures and convert them to text or speech. The system uses machine learning algorithms to recognize patterns in hand gestures and translate them into meaningful language. The accuracy of this system is very important, as it directly affects communication between the deaf and the hearing. Sign interpretation using computer vision has many applications, including education, medicine, and entertainment. This technology has the potential to bridge the communication gap between deaf and hearing people and provide easier communication solutions. As research in this area continues, we can expect to see major advances in translation technology in the future.
19 TRUTH TRACK: HARNESSING RNNS AND NLP FOR NEWS VERIFICATION WITH CHATBOT SUPPORT Research delved into the pervasive issue of fake news and limited information literacy through a novel AI system. The system, which utilized Natural Language Processing (NLP) techniques and Recurrent Neural Networks (RNNs), offered the following key functionalities. An RNN model, trained on a comprehensive dataset of labelled real and fake news articles, was used to analyse news content using NLP. The likelihood of an article being fake news was then predicted by the model. Additionally, legitimate news was categorized into relevant categories (politics, sports, business) using NLP techniques like topic modelling. To address user queries arising from news content, an NLP-powered chatbot was integrated into the project. User questions were understood, and the most relevant and reliable information was provided by the chatbot, leveraging machine learning. The news analysis performed by the first component was drawn upon by the chatbot, guiding users towards trustworthy sources and offering explanations to combat potential biases. The primary objective of the AI system was to empower users to become more discerning consumers of information. Users could readily identify fake news and gained a deeper understanding of legitimate news content. Information literacy was further enhanced by the chatbot, which provided context and facilitated user queries.
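The sketch below shows the general shape of such a recurrent fake-news classifier in Keras, with an LSTM standing in for the RNN and two made-up headlines standing in for the labelled dataset; the vocabulary size, sequence length, and labels are assumptions for illustration only.

```python
# Hypothetical sketch: a small recurrent text classifier labelling news
# text as real (0) or fake (1).
import tensorflow as tf
from tensorflow.keras import layers

texts = ["government announces new education budget for public schools",
         "scientists confirm chocolate cures all known diseases overnight"]
labels = [0, 1]   # 0 = real, 1 = fake (placeholder labels)

vectorizer = layers.TextVectorization(max_tokens=10000, output_sequence_length=50)
vectorizer.adapt(texts)

model = tf.keras.Sequential([
    vectorizer,
    layers.Embedding(input_dim=10000, output_dim=64),
    layers.LSTM(64),                       # recurrent layer over the word sequence
    layers.Dense(1, activation="sigmoid"), # probability the article is fake
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(texts), tf.constant(labels), epochs=3, verbose=0)

print(model.predict(tf.constant(["miracle pill makes people fly, doctors say"])))
```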
20 COLLABORATIVE CODE EDITOR WITH VIDEO CONFERENCING AND FACE DETECTION FOR INTERVIEWS In the ever-evolving times of software development, the adoption of effective coding practices and collaborative techniques stands as a cornerstone for seamless progress. The ability to work harmoniously and exchange ideas in real-time not only enhances teamwork but also fosters innovation within teams. Moreover, the integration of instant feedback mechanisms ensures a continuous cycle of improvement, thereby maintaining the standards and quality of code throughout the development lifecycle. Against this backdrop of technological advancement, recent innovations have yielded powerful tools that hold promise for constructing robust solutions. Among these, CodeMirror provides an intuitive platform for code editing, while Peer.js harnesses the capabilities of WebRTC to facilitate peer-to-peer connections. Additionally, advancements in facial recognition technology, such as FaceAPI, offer opportunities for real-time monitoring and analysis.
21 CUTTING EDGE FPGA BASED APPROACHES FOR LANGUAGE TRANSCRIPTION WITH ADVANCED NEURAL NETWORK ARCHITECTURE Edge computing, particularly in embedded systems and the Internet of Things, has gained significant traction in recent times. Deep learning, with its wide-ranging applications, has become increasingly prevalent in this technological landscape. Leveraging application-specific hardware, such as Field-Programmable Gate Arrays (FPGAs), offers a cost-effective approach to deploying highly efficient deep learning models in edge computing scenarios. In countries like India, characterized by linguistic diversity, the development of a system capable of recognizing handwritten characters across multiple languages holds considerable significance. However, the implementation of large neural networks poses challenges due to their resource-intensive nature. In this study, a cascading methodology for neural network implementation is proposed with the aim of enhancing resource efficiency. The focus is on efficiently recognizing handwritten characters from three languages: Hindi, Tamil, and English. This approach involves initially classifying input data into one of the three languages using a smaller neural network, followed by routing the data to language-specific neural networks for character recognition. The performance of this cascading method is compared with that of a single neural network, which directly classifies input into respective characters. The results of the proposed work indicate an improvement in efficiency while maintaining accuracy. This approach to multilingual handwritten character recognition demonstrates its potential for practical deployment in real-world applications. Additionally, the findings reveal that the cascaded network utilizes 29 fewer neurons than the combined network, representing a reduction of 3.545% in neuron count compared to the combined CNN model, while achieving more than 90% accuracy, similar to the combined CNN.
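The sketch below illustrates the cascade described above in software terms: a small "router" network first predicts the script (Hindi / Tamil / English), then a language-specific recognizer predicts the character. The Keras models, 32x32 grayscale inputs, layer sizes, and per-script class counts are assumptions for illustration; the actual work targets quantized networks deployed on an FPGA.

```python
# Hypothetical sketch: two-stage cascade for multilingual handwritten
# character recognition (language router -> per-language recognizer).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers


def small_cnn(num_classes):
    return tf.keras.Sequential([
        layers.Input(shape=(32, 32, 1)),
        layers.Conv2D(8, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(num_classes, activation="softmax"),
    ])


LANGUAGES = ["hindi", "tamil", "english"]
router = small_cnn(num_classes=3)                    # stage 1: which script?
recognizers = {"hindi": small_cnn(num_classes=46),   # per-script class counts
               "tamil": small_cnn(num_classes=156),  # are illustrative only
               "english": small_cnn(num_classes=26)}


def recognize(image):
    """Route a single 32x32 character image through the cascade."""
    batch = image[np.newaxis, ..., np.newaxis].astype("float32") / 255.0
    language = LANGUAGES[int(np.argmax(router.predict(batch, verbose=0)))]
    character_id = int(np.argmax(recognizers[language].predict(batch, verbose=0)))
    return language, character_id


print(recognize(np.random.randint(0, 256, size=(32, 32))))  # untrained demo
```

Because only the router and one language-specific recognizer run per input, the cascade avoids carrying the full combined network for every character, which is the source of the neuron savings reported above.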
22 Prostate Cancer Detection and Prediction System Using Power BI Develop an automated system using convolutional neural network (CNN) algorithms for detecting lung diseases from chest X-rays. Achieve accurate identification of various lung conditions such as pneumonia, bacterial pneumonia, and tuberculosis. Assist healthcare professionals in timely diagnosis and treatment planning, enhancing clinical decision-making. Utilize deep learning techniques and transfer learning for efficient feature extraction and classification. Optimize model performance through rigorous training, validation, and performance evaluation.
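A minimal sketch of the transfer-learning setup mentioned above follows, assuming a MobileNetV2 backbone pre-trained on ImageNet and a directory of labelled X-ray images; the directory path, class count, image size, and training settings are placeholders rather than the project's actual configuration.

```python
# Hypothetical sketch: transfer learning for chest X-ray classification.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 4   # e.g. normal, viral pneumonia, bacterial pneumonia, tuberculosis

train_ds = tf.keras.utils.image_dataset_from_directory(
    "xrays/train", image_size=(224, 224), batch_size=32)   # hypothetical folder

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False   # keep pre-trained features, train only the new head

model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```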