This repository demonstrates how to use Meta's Ax platform for efficient, scalable hyperparameter optimization of reinforcement learning (RL) models. It focuses on tuning key parameters across a range of RL algorithms, including Rainbow and FQF, with support for discrete action spaces.
- Comprehensive Hyperparameter Tuning: Customize and tune key hyperparameters such as learning rates, discount factors, network architectures, and exploration rates for optimal performance.
- Support for Multiple RL Algorithms: Predefined configurations for a range of popular RL models.
- Meta Ax Integration: Fully integrated with Ax.dev for Bayesian optimization and adaptive experimentation.
- Sample Configurations: Includes predefined hyperparameter search spaces to accelerate setup (see the example after this list).
- Scalable and Flexible: Compatible with discrete action spaces and adaptable to custom RL environments.
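As an illustration of what such a predefined search space can look like, here is a minimal sketch using Ax's parameter-dictionary format. The parameter names, bounds, and choices below are assumptions for illustration, not the repository's actual configuration.

```python
# Hypothetical search space for a value-based agent (e.g. Rainbow);
# names and bounds are illustrative, not the repository's actual config.
search_space = [
    {
        "name": "learning_rate",
        "type": "range",
        "bounds": [1e-5, 1e-2],
        "value_type": "float",
        "log_scale": True,  # learning rates are usually searched on a log scale
    },
    {
        "name": "gamma",  # discount factor
        "type": "range",
        "bounds": [0.90, 0.999],
        "value_type": "float",
    },
    {
        "name": "epsilon_final",  # final exploration rate
        "type": "range",
        "bounds": [0.01, 0.2],
        "value_type": "float",
    },
    {
        "name": "hidden_size",  # width of the Q-network's hidden layers
        "type": "choice",
        "values": [64, 128, 256, 512],
        "value_type": "int",
        "is_ordered": True,
    },
]
```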
- Define your RL environment and model setup.
- Configure the search space for hyperparameters.
- Run experiments to optimize parameters, leveraging Ax’s efficient sampling and optimization strategies.
- Analyze the results to identify the best-performing hyperparameters (a minimal end-to-end sketch follows this list).
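A minimal sketch of that loop, using Ax's Service API (`AxClient`), is shown below. The experiment name, objective name, and `train_and_evaluate` function are placeholders, not the repository's actual code; in practice you would plug in your own training and evaluation routine.

```python
import random

from ax.service.ax_client import AxClient, ObjectiveProperties


def train_and_evaluate(params: dict) -> float:
    # Hypothetical placeholder: swap in the actual routine that trains the
    # agent with `params` and returns its mean episodic return.
    return random.random()


ax_client = AxClient()
ax_client.create_experiment(
    name="rainbow_hparam_search",  # illustrative experiment name
    parameters=[  # e.g. the search space sketched above, shortened here
        {"name": "learning_rate", "type": "range", "bounds": [1e-5, 1e-2], "log_scale": True},
        {"name": "gamma", "type": "range", "bounds": [0.90, 0.999]},
    ],
    objectives={"mean_return": ObjectiveProperties(minimize=False)},
)

# Ask/tell loop: Ax proposes hyperparameters, we report back the resulting score.
for _ in range(25):
    params, trial_index = ax_client.get_next_trial()
    score = train_and_evaluate(params)
    ax_client.complete_trial(trial_index=trial_index, raw_data=score)

best_params, (means, _covariances) = ax_client.get_best_parameters()
print("Best hyperparameters:", best_params)
print("Predicted mean return:", means["mean_return"])
```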
- Clone this repository:
gh repo clone CollinsJnr-001/Meta-Ax-RL-Hyperparameter-Optimization