This project demonstrates a novel way to control the Temple Run game using body gestures captured via a webcam. Using computer vision and machine learning techniques, the script detects specific poses and translates them into keyboard inputs that control the game.
The program captures video from the webcam, detects specific poses with the MediaPipe library, and converts each recognized pose into the corresponding keystroke, as sketched below. This hands-free method of interaction provides a fun and engaging way to play Temple Run.
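In outline, the control loop looks something like the following minimal sketch (illustrative only, not the project's exact `main.py`; the jump rule shown is an assumed heuristic):

```python
import cv2
import mediapipe as mp
from pynput.keyboard import Controller, Key

keyboard = Controller()
mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)  # index 0 = default webcam
with mp_pose.Pose(min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            P = mp_pose.PoseLandmark
            # Example rule: both wrists above the shoulders -> jump.
            # (Coordinates are normalized; y grows downward in image space.
            # A real controller would debounce so a held pose fires once.)
            if (lm[P.LEFT_WRIST].y < lm[P.LEFT_SHOULDER].y
                    and lm[P.RIGHT_WRIST].y < lm[P.RIGHT_SHOULDER].y):
                keyboard.press(Key.up)
                keyboard.release(Key.up)
        cv2.imshow("Temple Run Controller", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
cap.release()
cv2.destroyAllWindows()
```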
- Python: The programming language used for the script.
- OpenCV: For video capture and image processing.
- MediaPipe: For pose detection and tracking.
- pynput: To simulate keyboard inputs.
- TensorFlow Lite: Used by MediaPipe under the hood for model inference (the source of the TFLite warnings you may see at startup); no separate installation is needed.
- pygetwindow: To manage the game window focus.
- Python 3.x
- Webcam (a quick capture check is sketched after this list)
- Temple Run game installed on your computer
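Before wiring everything up, it is worth confirming that OpenCV can actually read from your webcam (camera index 0 is an assumption; some machines expose the camera at a different index):

```python
import cv2

cap = cv2.VideoCapture(0)  # 0 = default camera; try 1, 2, ... if this fails
ok, _ = cap.read()
cap.release()
print("Webcam OK" if ok else "No frame captured - check index/permissions")
```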
- Clone the repository:

  ```bash
  git clone https://github.com/your_username/temple-run-controller.git
  cd temple-run-controller
  ```
- Create and activate a virtual environment:

  ```bash
  python -m venv venv
  venv\Scripts\activate       # On Windows
  source venv/bin/activate    # On macOS/Linux
  ```
- Install the required libraries:

  ```bash
  pip install -r requirements.txt
  ```
- Ensure your `requirements.txt` includes the following dependencies:

  ```
  opencv-python
  mediapipe
  pynput
  numpy<2.0.0
  pygetwindow
  ```
- Start the Temple Run game.
- Run the script:

  ```bash
  python main.py
  ```
- Make sure the game window is active (focused) so it receives the simulated keystrokes; one way to do this from the script is sketched after this list.
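Keeping focus on the game matters because pynput sends keystrokes to whichever window is in the foreground. Here is a minimal sketch using pygetwindow, assuming the window title contains "Temple Run" (adjust for your setup; `activate()` behaves most reliably on Windows):

```python
import pygetwindow as gw

# Find the game window by (partial) title - "Temple Run" is an assumption;
# use gw.getAllTitles() to discover the exact title on your machine.
windows = gw.getWindowsWithTitle("Temple Run")
if windows:
    windows[0].activate()  # bring the window to the foreground for keystrokes
else:
    print("Temple Run window not found - start the game first.")
```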
- Up Gesture: Raise both arms to simulate jumping.
- Down Gesture: Crouch or bend down to simulate sliding.
- Left Gesture: Move your left hand to the left to turn left.
- Right Gesture: Move your right hand to the right to turn right.
The script will detect these gestures and translate them into corresponding keyboard inputs to control the character in the Temple Run game.
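The exact landmark rules are an implementation detail; the sketch below shows one plausible mapping from MediaPipe's normalized pose landmarks to these four actions (the thresholds and landmark choices here are illustrative assumptions, not the project's exact logic):

```python
import mediapipe as mp

P = mp.solutions.pose.PoseLandmark

def classify_gesture(lm):
    """Map MediaPipe pose landmarks to a Temple Run action (or None).

    Assumes an unmirrored camera frame: the player's anatomical left side
    appears on the image's right, x grows rightward, y grows downward, and
    all coordinates are normalized to [0, 1].
    """
    # Both wrists above the shoulders -> jump
    if (lm[P.LEFT_WRIST].y < lm[P.LEFT_SHOULDER].y
            and lm[P.RIGHT_WRIST].y < lm[P.RIGHT_SHOULDER].y):
        return "up"
    # Head low in the frame -> slide
    if lm[P.NOSE].y > 0.7:
        return "down"
    # Left hand extended outward (toward larger x in an unmirrored frame) -> turn left
    if lm[P.LEFT_WRIST].x > lm[P.LEFT_SHOULDER].x + 0.2:
        return "left"
    # Right hand extended outward (toward smaller x) -> turn right
    if lm[P.RIGHT_WRIST].x < lm[P.RIGHT_SHOULDER].x - 0.2:
        return "right"
    return None
```

The returned string can then be mapped to pynput's `Key.up`, `Key.down`, `Key.left`, and `Key.right` in the main loop.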
- MediaPipe by Google for providing the pose detection solution.
- OpenCV for the powerful computer vision functions.
- pynput for the easy-to-use keyboard input simulation.
This project is licensed under the MIT License - see the LICENSE file for details.