I build intelligent, efficient, and scalable solutions at the intersection of AI, Web Technologies, and Robotics. I love integrating machine learning into web and mobile applications to create seamless, cutting-edge user experiences. Whether I'm developing full-stack applications, leveraging AI for automation, or working with embedded systems and robotics, I'm always exploring innovative ways to blend AI with software engineering.
Aug 2024 - May 2025
May 2024 - Aug 2024
Jan 2024 - May 2024
Jan 2023 - July 2023
May 2021 - Dec 2021
June 2021 - Sept 2021
Developed a dynamic, block-grained scaling system enabling adaptive DNN selection and deployment on edge devices while maintaining optimal accuracy-latency trade-offs under fluctuating compute and memory constraints.
Project Link
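A minimal sketch of the kind of budget-aware model selection this involves, assuming a profiled table of model variants; all names, latency, and memory figures below are illustrative placeholders, not the actual system:

```python
# Hypothetical sketch: pick the most accurate DNN variant that fits the current
# compute/memory budget, illustrating an accuracy-latency trade-off policy.
from dataclasses import dataclass

@dataclass
class Variant:
    name: str          # placeholder identifier
    accuracy: float    # offline-measured accuracy
    latency_ms: float  # profiled on-device latency
    memory_mb: float   # peak runtime memory

# Placeholder profile table; a real system would profile each block configuration.
VARIANTS = [
    Variant("tiny",   0.71,  8.0,  35.0),
    Variant("small",  0.76, 15.0,  60.0),
    Variant("medium", 0.80, 28.0, 110.0),
    Variant("large",  0.84, 55.0, 210.0),
]

def select_variant(latency_budget_ms: float, memory_budget_mb: float) -> Variant:
    """Return the most accurate variant satisfying both budgets."""
    feasible = [v for v in VARIANTS
                if v.latency_ms <= latency_budget_ms and v.memory_mb <= memory_budget_mb]
    if not feasible:                       # degrade gracefully to the smallest model
        return min(VARIANTS, key=lambda v: v.latency_ms)
    return max(feasible, key=lambda v: v.accuracy)

print(select_variant(latency_budget_ms=30.0, memory_budget_mb=128.0).name)  # -> "medium"
```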
In collaboration with CU Anschutz, developed a state-of-the-art offline reinforcement learning model that uses historical patient data from Medtronic pacemakers to optimize pacemaker parameters for improved cardiac rhythm pacing.
Project Link
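A hedged sketch of the offline setting: a fitted-Q style update trained purely on a fixed batch of logged transitions, with synthetic data standing in for the private patient dataset; the actual model, state/action design, and any conservatism terms are not reproduced here:

```python
# Generic offline RL sketch (not the actual pacemaker model): fitted-Q updates
# on a static dataset of (state, action, reward, next_state) tuples.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 8, 4, 0.99   # placeholder dimensions

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# Synthetic stand-in for the historical dataset; no environment interaction
# happens during training, which is what makes the setting "offline".
N = 256
s  = torch.randn(N, STATE_DIM)
a  = torch.randint(0, N_ACTIONS, (N, 1))
r  = torch.randn(N, 1)
s2 = torch.randn(N, STATE_DIM)

for step in range(100):
    with torch.no_grad():
        target = r + GAMMA * target_net(s2).max(dim=1, keepdim=True).values
    q_sa = q_net(s).gather(1, a)
    loss = nn.functional.mse_loss(q_sa, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 20 == 0:          # periodically sync the target network
        target_net.load_state_dict(q_net.state_dict())
```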
Developed an autonomous vehicle controller using ROS2, implementing LIDAR-based SLAM for navigation, reverse parking, obstacle avoidance, and stop sign detection; our team was recognized as one of the fastest to navigate a closed indoor course, completing it in under 2 minutes.
Project Link
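A minimal rclpy sketch of the reactive obstacle-avoidance piece, assuming standard /scan and /cmd_vel topics; the thresholds and speeds are illustrative, not the competition tuning:

```python
# Minimal ROS2 (rclpy) node: stop and turn when the LIDAR sees something close
# ahead, otherwise drive forward.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class AvoidanceNode(Node):
    def __init__(self):
        super().__init__('lidar_avoidance')
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, scan: LaserScan):
        cmd = Twist()
        front = scan.ranges[len(scan.ranges) // 2]  # distance straight ahead
        if front < 0.5:               # obstacle closer than 0.5 m: stop and turn
            cmd.linear.x = 0.0
            cmd.angular.z = 0.6
        else:                         # clear path: drive forward
            cmd.linear.x = 0.4
            cmd.angular.z = 0.0
        self.cmd_pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(AvoidanceNode())

if __name__ == '__main__':
    main()
```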
Led a team of 4 to develop an app for real-time, bidirectional Indian Sign Language (ISL) translation, training a Siamese Neural Network on a self-collected dataset of 34,000+ images, now released as one of the largest open-source ISL datasets.
Project Link
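A hedged sketch of the Siamese idea behind the translator: twin encoders with a contrastive loss over image pairs. The architecture, embedding size, and data below are placeholders, not the deployed model:

```python
# Siamese embedding network with a contrastive loss: pull embeddings of the
# same sign together, push embeddings of different signs apart.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 64),              # 64-d embedding (placeholder size)
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(e1, e2, same_label, margin=1.0):
    dist = nn.functional.pairwise_distance(e1, e2)
    return torch.mean(same_label * dist.pow(2) +
                      (1 - same_label) * torch.clamp(margin - dist, min=0).pow(2))

encoder = Encoder()
x1, x2 = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)  # dummy image pairs
same = torch.randint(0, 2, (8,)).float()                        # 1 = same sign
loss = contrastive_loss(encoder(x1), encoder(x2), same)
loss.backward()
```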
Km-IFC is a novel framework that estimates the road course from radar detections. It combines K-means clustering, Isolation Forest, and the least-squares method to match the vehicle-driven path to within an average difference of 0.365 meters, with a confidence coefficient of 0.88. This efficient, self-contained solution processes raw sensor data without relying on external outputs.
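An illustrative end-to-end sketch of a Km-IFC-style pipeline on synthetic radar detections, using scikit-learn and NumPy; the cluster count, contamination rate, and polynomial degree are assumptions, not the paper's settings:

```python
# Synthetic radar detections -> outlier removal -> clustering -> least-squares
# fit of a polynomial road-course model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
x = rng.uniform(0, 50, 400)                                # longitudinal positions (m)
y = 0.002 * x**2 + 0.1 * x + rng.normal(0, 0.4, x.size)    # noisy lateral offsets (m)
points = np.column_stack([x, y])
points = np.vstack([points, rng.uniform([0, -10], [50, 10], (40, 2))])  # clutter

# 1) Isolation Forest filters clutter from the raw detections.
mask = IsolationForest(contamination=0.1, random_state=0).fit_predict(points) == 1
clean = points[mask]

# 2) K-means compresses the remaining detections into ordered cluster centres.
centers = KMeans(n_clusters=8, n_init=10, random_state=0).fit(clean).cluster_centers_
centers = centers[np.argsort(centers[:, 0])]

# 3) Least squares fits a polynomial road-course model through the centres.
coeffs = np.polyfit(centers[:, 0], centers[:, 1], deg=2)
print("estimated course y(x) =", np.poly1d(coeffs))
```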
The RtTSLC framework for real-time translation of Indian Sign Language (ISL) finger-spelled English alphabet gestures employs VGG16, EfficientNet, and AlexNet architectures, achieving accuracies of 92%, 99%, and 89%, respectively, and demonstrating its potential to aid the hard-of-hearing community.
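A brief sketch of the transfer-learning setup for one of the backbones (VGG16), with its classifier head replaced for the 26 finger-spelled letters; the training step shown is illustrative only:

```python
# Replace VGG16's final classifier layer for 26 classes and run one dummy
# fine-tuning step on a fake batch.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 26                                 # A-Z finger-spelled gestures

model = models.vgg16(weights=None)               # or pretrained ImageNet weights
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

images = torch.randn(4, 3, 224, 224)             # dummy gesture images
labels = torch.randint(0, NUM_CLASSES, (4,))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = nn.functional.cross_entropy(model(images), labels)
opt.zero_grad()
loss.backward()
opt.step()
```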
Conducted a systematic literature review on sign language translation systems, identifying the state-of-the-art techniques, challenges, and future research directions in the field.
Developed a deep learning model to detect cardiac anomalies in ECG signals, achieving an accuracy of 99.5% and a sensitivity of 99.6% on the MIT-BIH Arrhythmia Database. The model was deployed on a wearable device to provide real-time feedback to users.
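A hedged sketch of a small 1-D CNN over fixed-length ECG windows, illustrating the general approach; the window length, layer sizes, and class count are placeholders rather than the deployed model:

```python
# 1-D CNN for beat-level ECG anomaly detection on fixed-length windows.
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    def __init__(self, n_classes=2, window=360):   # e.g. one beat at 360 Hz
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * (window // 4), n_classes))

    def forward(self, x):                           # x: (batch, 1, window)
        return self.head(self.features(x))

model = ECGNet()
dummy_beats = torch.randn(8, 1, 360)                # stand-in for ECG windows
print(model(dummy_beats).shape)                     # torch.Size([8, 2])
```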