# UNCERT: Semi-Model-Based RL with Uncertainty

This project was developed by Simon Lund, Sophia Sigethy, Georg Staber, and Malte Wilhelm for the Applied Reinforcement Learning SS 21 course at LMU.


## 📒 Index

- Deliverables
- Videos
- Installation
- How to run
- Configuration
- Sources

## 📝 Deliverables

As part of the course, we created an extensive report as well as a final presentation of the project.

## 📹 Videos

The RL agent swings up using either side.

`cartpole_75k_cos.mp4`

The RL agent avoids the noisy section on the left and swings up on the right side.

`cartpole_75k_cos_uncert.mp4`

## ⚙️ Installation

```bash
git clone https://github.com/github-throwaway/ARL-Model-RL-Unsicherheit.git
cd ARL-Model-RL-Unsicherheit/
pip install -r requirements.txt  # or python setup.py install
```

## How to run

### 🙂 Simple

Uses the preconfigured system with a trained model and an agent with the default configuration.

```bash
cd src/
python main.py
```

### 🏆 Advanced

For the sake of usability, we implemented an argument parser. By passing predefined arguments to the Python program call, it is possible to start different routines and to change the hyperparameters used by the algorithms. This lets the user run multiple experiments with different values without modifying the code, which is especially helpful when fine-tuning hyperparameters for reinforcement learning algorithms such as PPO. For an overview of all available arguments and how to use them, call `python main.py --help`.
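For orientation, the snippet below is a minimal sketch of what such an argument parser can look like. The flag names, choices, and defaults are illustrative assumptions, not the actual options exposed by `main.py` (use `python main.py --help` for those).

```python
# Illustrative sketch only: flag names and defaults are assumptions,
# not the project's actual CLI (see `python main.py --help`).
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="Semi-model-based RL with uncertainty")
    # Hypothetical routine selection
    parser.add_argument("--mode", choices=["train", "evaluate"], default="train",
                        help="which routine to run")
    # Hypothetical hyperparameters for the PPO agent and the dynamics model
    parser.add_argument("--reward-function", default="cos",
                        help="reward function, e.g. simple, centered, cos")
    parser.add_argument("--epochs", type=int, default=100,
                        help="training epochs for the neural network model")
    parser.add_argument("--timesteps", type=int, default=75_000,
                        help="number of PPO training steps")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```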

## 🛠️ Configuration

The project was evaluated using the following parameters.

1. **Training data environment**
   - noisy sector = 0 – π
   - noise offset = 0.5
   - observation space = discrete
   - action space = 10 actions
2. **Neural network settings**
   - epochs = 100
   - time steps = 4
3. **RL policy**
   - reward function = [simple, centered, right, boundaries, best, cos, xpos_theta_uncert]
   - RL algorithm = PPO
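
As a compact summary, the evaluation setup above could be written as a single configuration dictionary. The key names below are assumptions chosen for readability and do not correspond one-to-one to identifiers in the project's source code.

```python
import math

# Illustrative summary of the evaluation parameters listed above;
# key names are assumptions, not identifiers from the project's code.
EVALUATION_CONFIG = {
    "training_data_env": {
        "noisy_sector": (0.0, math.pi),   # angular range with added noise
        "noise_offset": 0.5,
        "observation_space": "discrete",
        "action_space_size": 10,
    },
    "neural_network": {
        "epochs": 100,
        "time_steps": 4,
    },
    "rl_policy": {
        "reward_functions": ["simple", "centered", "right", "boundaries",
                             "best", "cos", "xpos_theta_uncert"],
        "algorithm": "PPO",
    },
}
```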

## 📚 Sources