HTM-Teacher is an educational tool designed to demonstrate the fundamental workings of Hierarchical Temporal Memory (HTM) using minimalistic and easy-to-understand Python code. This project aims to help learners grasp the core concepts of HTM models, including Sparse Distributed Representations (SDRs), Spatial Pooling, and Temporal Memory, through visualization and animation.
Hierarchical Temporal Memory is a biologically inspired machine learning model that mimics the structure and function of the neocortex. It is capable of learning time-based patterns and making predictions. HTM-Teacher provides a simplified implementation of an HTM model from scratch, focusing on educational clarity rather than performance or scalability.
The project includes:
- A Random Distributed Scalar Encoder (RDSE) for converting scalar inputs into Sparse Distributed Representations.
- A Spatial Pooler (SP) that learns to recognize spatial patterns in the input data.
- A Temporal Memory (TM) that learns sequences of patterns over time.
- Minimalistic Code: The code is written to be as concise and readable as possible, making it accessible to beginners.
- Visualization: Uses `matplotlib` to visualize SDRs, input values, active columns, and prediction accuracy.
- Animation: Animates the entire HTM processing pipeline, showing how the model learns and predicts over time.
- Educational Comments: Thoroughly commented code explains the purpose and functionality of each component and step.
- No External Dependencies: Apart from common libraries (`numpy`, `matplotlib`), the code does not rely on any external HTM frameworks.
- Clone the Repository:

  ```bash
  git clone https://github.com/NQevxvEtg/htm-teacher.git
  cd htm-teacher
  ```

- Create a Virtual Environment (Optional):

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows, use venv\Scripts\activate
  ```

- Install Dependencies:

  ```bash
  pip install numpy matplotlib
  ```
To run the HTM simulation and view the animation:
- Use Jupyter Notebook: open `htm-teacher.ipynb` and run the cells to generate the animation as a video.
- Use HTML for interactive mode: open `htm-teacher-interactive.html` in a web browser.

Note: Ensure that you have `ffmpeg` or `imagemagick` installed if you want to save the animation as a video or GIF file.
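If you do want to write an animation to a file, the general matplotlib pattern looks like the self-contained sketch below; the figure contents and output file names are placeholders, not what the notebook actually produces.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Placeholder animation: a moving sine wave stands in for the HTM plots.
fig, ax = plt.subplots()
line, = ax.plot([], [])
ax.set_xlim(0, 2 * np.pi)
ax.set_ylim(-1, 1)

def update(frame):
    x = np.linspace(0, 2 * np.pi, 200)
    line.set_data(x, np.sin(x + 0.1 * frame))
    return line,

ani = FuncAnimation(fig, update, frames=100, interval=50)
ani.save("demo.mp4", writer="ffmpeg", fps=20)   # requires ffmpeg on your PATH
# ani.save("demo.gif", writer="imagemagick")    # or save a GIF instead
```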
The program will display an animated visualization with four subplots:
- Input Value Over Time: Shows the input scalar values being fed into the model.
- Prediction Accuracy Over Time: Displays how the model's prediction accuracy evolves.
- Encoded Input SDR: Visualizes the Sparse Distributed Representation of the current input.
- Active Columns Over Time: Illustrates which columns in the Spatial Pooler are active at each time step.
The animation is divided into training and testing phases, mimicking typical machine learning workflows.
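As a rough, illustrative sketch (the notebook may size, arrange, and style the panels differently), the 2x2 layout described above can be set up like this:

```python
import matplotlib.pyplot as plt

# Four-panel layout matching the subplots described above (illustrative only).
fig, axes = plt.subplots(2, 2, figsize=(10, 6))
axes[0, 0].set_title("Input Value Over Time")
axes[0, 1].set_title("Prediction Accuracy Over Time")
axes[1, 0].set_title("Encoded Input SDR")
axes[1, 1].set_title("Active Columns Over Time")
for ax in (axes[0, 0], axes[0, 1], axes[1, 1]):
    ax.set_xlabel("Time step")
fig.tight_layout()
plt.show()
```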
- Python 3.11 or higher
- NumPy
- Matplotlib
- `htm-teacher.ipynb`: The main Jupyter Notebook containing the HTM implementation and animation code.
- `htm-teacher-interactive.html`: An interactive, in-browser version of the animation.
- `README.md`: Project description and usage instructions.
- `LICENSE`: The project's license.
Contributions are welcome! If you have ideas for improvements or new features, feel free to open an issue or submit a pull request.
- Fork the project.
- Create your feature branch (`git checkout -b feature/YourFeature`).
- Commit your changes (`git commit -m 'Add your feature'`).
- Push to the branch (`git push origin feature/YourFeature`).
- Open a Pull Request.
This project is licensed under the AGPL 3.0 License - see the LICENSE file for details.
Disclaimer: This project is intended for educational purposes to illustrate the basic workings of HTM models. It is not optimized for performance and may not represent the most efficient or scalable implementation of HTM.
Acknowledgments: This project was inspired by the desire to make complex machine learning concepts more accessible through minimalistic and well-commented code.
The RDSE converts scalar input values into high-dimensional, sparse binary vectors (SDRs). It ensures that similar input values produce SDRs with overlapping active bits, capturing the similarity in the input space.
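For illustration only (this is not the notebook's exact implementation), the sketch below encodes a scalar by hashing a run of consecutive bucket indices into bit positions, so adjacent buckets share almost all of their active bits; the function name and parameters (`size`, `active_bits`, `resolution`) are assumptions.

```python
import hashlib
import numpy as np

def rdse_encode(value, size=400, active_bits=20, resolution=1.0, seed=42):
    """Encode a scalar into a sparse, high-dimensional binary vector (SDR)."""
    bucket = int(round(value / resolution))
    sdr = np.zeros(size, dtype=np.int8)
    # Each bucket activates bits derived from `active_bits` consecutive bucket
    # indices; adjacent buckets therefore share all but one index, so similar
    # scalars yield overlapping SDRs. (Hash collisions may make the actual
    # number of ON bits slightly smaller than `active_bits`.)
    for i in range(active_bits):
        h = hashlib.sha256(f"{seed}-{bucket + i}".encode()).hexdigest()
        sdr[int(h, 16) % size] = 1
    return sdr

# Nearby values overlap heavily; distant values share (almost) no bits.
a, b, c = rdse_encode(10.0), rdse_encode(11.0), rdse_encode(50.0)
print(int(np.sum(a & b)), int(np.sum(a & c)))
```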
The Spatial Pooler processes the SDRs from the encoder and produces a new set of SDRs representing the spatial patterns in the input data. It uses inhibition to maintain sparsity and learns to recognize frequently occurring patterns by adjusting synapse permanences.
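Here is a minimal single-step sketch of that idea, using global inhibition and illustrative parameter values rather than the notebook's actual ones:

```python
import numpy as np

rng = np.random.default_rng(0)
INPUT_SIZE, NUM_COLUMNS, ACTIVE_COLUMNS = 400, 128, 5
CONNECTED_THRESHOLD, PERM_INC, PERM_DEC = 0.5, 0.05, 0.02

# Every column has a potential synapse to every input bit, each with a random
# permanence; a synapse is "connected" once its permanence crosses the threshold.
permanences = rng.uniform(0.3, 0.7, size=(NUM_COLUMNS, INPUT_SIZE))

def spatial_pooler_step(input_sdr, learn=True):
    # 1. Overlap: count connected synapses that land on active input bits.
    connected = (permanences >= CONNECTED_THRESHOLD).astype(int)
    overlaps = connected @ input_sdr
    # 2. Global inhibition: only the k best-matching columns become active,
    #    which keeps the output sparse.
    active = np.argsort(overlaps)[-ACTIVE_COLUMNS:]
    if learn:
        # 3. Hebbian-style learning: active columns strengthen synapses to ON
        #    input bits and weaken synapses to OFF bits.
        for col in active:
            permanences[col] += np.where(input_sdr == 1, PERM_INC, -PERM_DEC)
        np.clip(permanences, 0.0, 1.0, out=permanences)
    return active

# Example: feed a random sparse input through the pooler.
example_input = (rng.random(INPUT_SIZE) < 0.05).astype(np.int8)
print(spatial_pooler_step(example_input))
```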
The Temporal Memory models sequences by connecting cells that become active in order. It learns temporal patterns and makes predictions by activating cells that anticipate future inputs based on learned sequences.
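The heavily simplified sketch below captures the core loop: predicted cells fire alone, unpredicted columns burst, and learning cells connect to the cells that were active on the previous step. It uses one distal segment per cell and binary connections, which is far simpler than a full TM; the class and parameter names are illustrative, not the notebook's code.

```python
from collections import defaultdict

class TinyTemporalMemory:
    def __init__(self, cells_per_column=4, activation_threshold=2):
        self.cells_per_column = cells_per_column
        self.activation_threshold = activation_threshold
        self.segments = defaultdict(set)   # cell id -> presynaptic cell ids
        self.prev_active_cells = set()
        self.predictive_cells = set()

    def _cells(self, column):
        base = column * self.cells_per_column
        return [base + i for i in range(self.cells_per_column)]

    def compute(self, active_columns, learn=True):
        active_cells, learning_cells = set(), set()
        for col in active_columns:
            predicted = [c for c in self._cells(col) if c in self.predictive_cells]
            if predicted:
                # Correctly predicted column: only the predicted cells fire.
                active_cells.update(predicted)
                learning_cells.update(predicted)
            else:
                # Unpredicted column: all cells fire ("bursting"); the least
                # used cell is picked to learn this new temporal context.
                active_cells.update(self._cells(col))
                learning_cells.add(min(self._cells(col),
                                       key=lambda c: len(self.segments[c])))
        if learn:
            # Learning cells connect to the cells active one step earlier, so
            # the same context will put them in the predictive state next time.
            for c in learning_cells:
                self.segments[c].update(self.prev_active_cells)
        # A cell predicts the next input if enough of its presynaptic cells fire now.
        self.predictive_cells = {
            c for c in self.segments
            if len(self.segments[c] & active_cells) >= self.activation_threshold}
        self.prev_active_cells = active_cells
        return active_cells, self.predictive_cells

# A repeating sequence of column sets stops bursting once it has been learned.
tm = TinyTemporalMemory()
for rep in range(3):
    for cols in ([1, 2, 3], [4, 5, 6], [7, 8, 9]):
        _, predicted = tm.compute(cols)
    print(f"repetition {rep}: {len(predicted)} cells predictive")
```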
The animation provides a dynamic view of how the HTM model processes data over time, making it easier to understand the temporal aspects of learning and prediction.
- Interactive Exploration: Pause, rewind, or step through the animation to examine specific time steps.
- Customization: Modify parameters like the number of iterations, number of columns, or input sequences to observe different behaviors.
- Animation Not Displaying: Ensure that you're running the script in an environment that supports GUI operations. If using SSH or a headless server, you may need to configure X11 forwarding or use a virtual display.
- FFmpeg Not Found: If you encounter issues saving the animation, make sure FFmpeg is installed and accessible. You can specify the path to FFmpeg in the script if necessary:

  ```python
  import matplotlib as mpl
  mpl.rcParams['animation.ffmpeg_path'] = r'/path/to/ffmpeg'
  ```

- Performance Issues: If the animation is slow or unresponsive, consider reducing `NUM_ITERATIONS` or the complexity of the model parameters.
- Add More Encoders: Implement additional encoders (e.g., categorical, multi-dimensional) to explore how different data types are processed (a starting-point sketch follows this list).
- Enhance Visualization: Include more detailed plots or 3D visualizations to delve deeper into the model's internal states.
- Integrate Real Data: Use real-world datasets to test the model's ability to learn and predict complex patterns.
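As a possible starting point for the encoder idea above, here is a hedged sketch of a categorical encoder in the same spirit as the RDSE sketch earlier; the function name and parameters are illustrative.

```python
import hashlib
import numpy as np

def category_encode(category, size=400, active_bits=20):
    """Give each category its own (nearly) non-overlapping set of active bits."""
    sdr = np.zeros(size, dtype=np.int8)
    for i in range(active_bits):
        h = hashlib.sha256(f"{category}-{i}".encode()).hexdigest()
        sdr[int(h, 16) % size] = 1
    return sdr

# Unlike nearby scalars in the RDSE, distinct categories share almost no bits.
print(int(np.sum(category_encode("cat") & category_encode("dog"))))
```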
Thank you for your interest in HTM-Teacher! We hope this tool enhances your understanding of Hierarchical Temporal Memory models. If you have any questions or feedback, please don't hesitate to reach out.