A machine learning-based application designed to help users learn and practice sign language through real-time detection and feedback.
- Real-time sign language detection and recognition
- Interactive learning mode
- Practice mode with scoring system
- Model training capabilities
- Data collection tools
- Save and load trained models
- Performance testing mode
- Text-to-speech feedback
- Graphical interface built with Tkinter
- Python 3.7+
- Webcam or camera device
Dependencies (`requirements.txt`):

```
opencv-python   # Computer vision and image processing
mediapipe       # Hand and pose detection framework
numpy           # Numerical computing and array operations
pyttsx3         # Text-to-speech conversion
Pillow          # Image processing library
```

Note: `pickle` (model serialization) and `tkinter` (the GUI framework) are part of the Python standard library, so they are used by the application but don't need to be installed via pip; Tkinter usually ships with Python.
- Clone the repository:

  ```
  git clone https://github.com/yourusername/sign-language-assistant.git
  cd sign-language-assistant
  ```

- Install the required dependencies:

  ```
  pip install -r requirements.txt
  ```
The application offers several key functionalities through its intuitive interface:
- Click "Collect Data" to start gathering training data
- Follow the on-screen instructions to record signs
- Ensure proper lighting and camera positioning (a minimal collection sketch follows below)
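The collection code itself isn't shown in this README; the sketch below illustrates one plausible approach, assuming each sample is a flattened vector of MediaPipe hand-landmark coordinates paired with a label. `collect_samples` and its parameters are illustrative names, not the project's actual API.

```python
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

def collect_samples(label, num_samples=100):
    """Capture hand-landmark feature vectors from the webcam for one sign."""
    samples = []
    cap = cv2.VideoCapture(0)
    with mp_hands.Hands(max_num_hands=1) as hands:
        while len(samples) < num_samples:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures BGR
            result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_hand_landmarks:
                landmarks = result.multi_hand_landmarks[0].landmark
                features = np.array([[p.x, p.y, p.z] for p in landmarks]).ravel()
                samples.append((features, label))
    cap.release()
    return samples
```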
- Use "Train Model" to begin the training process
- Monitor the training progress in the status window
- Wait for the completion notification (one possible training approach is sketched below)
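This README doesn't specify the model architecture. As one minimal possibility consistent with the numpy/pickle dependencies, the hypothetical sketch below trains a nearest-centroid classifier over the collected landmark features:

```python
import numpy as np

def train_model(samples):
    """Average the feature vectors per label into one centroid each."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: np.mean(vecs, axis=0) for label, vecs in by_label.items()}

def predict(model, features):
    """Return the label whose centroid is closest to the feature vector."""
    return min(model, key=lambda label: np.linalg.norm(model[label] - features))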
- Click "Start Detection" to begin real-time sign language recognition
- Position yourself in front of the camera
- Perform signs to see instant recognition results (a loop sketch follows below)
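Putting the pieces together, a real-time loop could look like the following sketch. It reuses the hypothetical `collect_samples`, `train_model`, and `predict` helpers from the sketches above; the actual application renders into its Tkinter window rather than a raw OpenCV one.

```python
# Collect a couple of signs and train, reusing the earlier sketches
model = train_model(collect_samples("hello", 50) + collect_samples("thanks", 50))

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1) as hands:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            landmarks = result.multi_hand_landmarks[0].landmark
            features = np.array([[p.x, p.y, p.z] for p in landmarks]).ravel()
            # Overlay the predicted sign on the video feed
            cv2.putText(frame, predict(model, features), (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("Sign detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break
cap.release()
cv2.destroyAllWindows()
```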
- Use "Start Test Mode" to evaluate your sign language skills
- Follow the prompts to perform specific signs
- Receive immediate feedback and scoring (the spoken feedback is sketched below)
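The spoken part of that feedback presumably goes through pyttsx3; a minimal, illustrative helper (not the app's actual function names) might look like this:

```python
import pyttsx3

engine = pyttsx3.init()

def speak_feedback(expected, predicted):
    """Announce whether the performed sign matched the prompted one."""
    if predicted == expected:
        engine.say(f"Correct! That was {expected}.")
    else:
        engine.say(f"Not quite. I saw {predicted}; try {expected} again.")
    engine.runAndWait()  # blocks until speech finishes
```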
- Save your trained models using "Save Model"
- Load previously trained models using "Load Model" (a pickle round-trip is sketched after this list)
- Stop the current session using "Stop"
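Since pickle is the listed serializer, "Save Model" / "Load Model" are presumably a straightforward serialization round-trip; a minimal sketch with an illustrative file name:

```python
import pickle

def save_model(model, path="sign_model.pkl"):
    with open(path, "wb") as f:
        pickle.dump(model, f)

def load_model(path="sign_model.pkl"):
    with open(path, "rb") as f:
        return pickle.load(f)
```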
- Real-time prediction display
- Accuracy scoring system
- Recent predictions history (one possible implementation is sketched below)
- Status monitoring
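One simple way to keep a bounded "recent predictions" history is a `collections.deque`; this is an illustrative sketch, not necessarily the app's actual data structure:

```python
from collections import deque

recent = deque(maxlen=10)  # silently drops the oldest entry past 10

def record_prediction(label, confidence):
    recent.appendleft((label, confidence))
```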
The application utilizes several key technologies:
- OpenCV (cv2): Handles video capture and image processing
- MediaPipe: Provides hand tracking and gesture recognition
- NumPy: Manages numerical operations and data processing
- Tkinter: Creates the graphical user interface
- pyttsx3: Enables text-to-speech feedback
- Threading: Keeps the GUI responsive during processing (see the pattern sketch after this list)
- Pickle: Handles model serialization and deserialization
- PIL (Pillow): Processes images for the GUI display
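Because Tkinter is not thread-safe, the usual pattern (and plausibly what "keeps the GUI responsive" refers to here) is to run heavy work in a background thread and marshal widget updates back onto the main loop with `after()`. A minimal sketch with illustrative names:

```python
import threading
import tkinter as tk

root = tk.Tk()
status = tk.StringVar(value="Idle")
tk.Label(root, textvariable=status).pack()

def long_task():
    # ...detection or training work would run here...
    root.after(0, status.set, "Done")  # update the widget from the main thread

tk.Button(
    root, text="Start",
    command=lambda: threading.Thread(target=long_task, daemon=True).start(),
).pack()
root.mainloop()
```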
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
For questions, feedback, or support, please open an issue in the GitHub repository.