ASL Translator

Machine Learning - Python, CNN

Overview

A real-time American Sign Language (ASL) translator leveraging Convolutional Neural Networks (CNNs) to interpret hand gestures captured via webcam and convert them into corresponding English text.

This project aims to bridge communication gaps for the deaf and hard-of-hearing community by translating ASL gestures into readable English text. Using computer vision and deep learning techniques, the system processes live webcam video to recognize signs of the ASL alphabet.
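The live recognition loop amounts to grabbing a frame, normalizing it to the network's input shape, and classifying it. A minimal preprocessing sketch is below; the 64×64 grayscale input size and the channel weights are illustrative assumptions, not the project's confirmed pipeline:

```python
import numpy as np

# Assumed model input size (64x64 grayscale) -- an illustration, not the project's actual config.
IMG_SIZE = 64

def preprocess_frame(frame: np.ndarray) -> np.ndarray:
    """Convert a BGR webcam frame to a (1, IMG_SIZE, IMG_SIZE, 1) float tensor in [0, 1]."""
    # Grayscale via a weighted channel sum (standard BGR luminance weights).
    gray = frame @ np.array([0.114, 0.587, 0.299])
    # Nearest-neighbour downsample to IMG_SIZE x IMG_SIZE.
    rows = np.linspace(0, gray.shape[0] - 1, IMG_SIZE).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, IMG_SIZE).astype(int)
    small = gray[np.ix_(rows, cols)]
    # Scale to [0, 1] and add batch and channel axes for the CNN.
    return (small / 255.0).reshape(1, IMG_SIZE, IMG_SIZE, 1)

# A dummy 480x640 BGR frame standing in for a webcam capture.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
x = preprocess_frame(frame)  # x.shape == (1, 64, 64, 1)
```

In the real system the tensor `x` would be fed to the trained CNN (e.g. `model.predict(x)`), with the arg-max class mapped back to a letter or digit.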

The system achieved 94.831% accuracy on a test set of 2,520 images covering the 26 letters of the English alphabet and the 10 ASL digits.
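Test accuracy here is simply the fraction of held-out images whose predicted class matches the ground-truth label. A small sketch of that metric over the 36 classes (26 letters + 10 digits); the label encoding 0-35 is an assumption for illustration:

```python
import numpy as np

def accuracy(y_true, y_pred) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

# Toy example: 100 samples over 36 classes, with 5 predictions corrupted.
rng = np.random.default_rng(0)
labels = rng.integers(0, 36, size=100)
preds = labels.copy()
preds[:5] = (preds[:5] + 1) % 36  # shifting by 1 mod 36 always changes the label
print(accuracy(labels, preds))  # → 0.95
```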

© 2025 Naman Chandok. All Rights Reserved.