decision transformer gym hopper medium
Introduction
The Decision Transformer is a reinforcement learning model that casts decision-making as sequence modeling with a transformer architecture. This checkpoint was trained on medium-quality trajectories from the Gym Hopper environment, a continuous control task.
Architecture
The model uses a GPT-style transformer adapted for reinforcement learning: it consumes sequences of returns-to-go, states, and actions and autoregressively predicts the next action, which makes it suitable for continuous control environments such as Gym Hopper.
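As a rough illustration of this interface, the sketch below loads the checkpoint through the Hugging Face transformers library and runs a forward pass on zero-filled dummy tensors; the 20-step context window and the tensor contents are illustrative assumptions, not values taken from this card.

```python
# Minimal sketch of the sequence interface; the context length and the
# zero-filled inputs are illustrative assumptions only.
import torch
from transformers import DecisionTransformerModel

model = DecisionTransformerModel.from_pretrained(
    "edbeeching/decision-transformer-gym-hopper-medium"
).eval()

batch, context = 1, 20  # Hopper: 11-d observations, 3-d actions
states = torch.zeros(batch, context, 11)         # normalized observations
actions = torch.zeros(batch, context, 3)         # previously taken actions
rewards = torch.zeros(batch, context, 1)         # per-step rewards
returns_to_go = torch.zeros(batch, context, 1)   # target-return conditioning
timesteps = torch.arange(context).reshape(1, context)
attention_mask = torch.ones(batch, context)

with torch.no_grad():
    state_preds, action_preds, return_preds = model(
        states=states,
        actions=actions,
        rewards=rewards,
        returns_to_go=returns_to_go,
        timesteps=timesteps,
        attention_mask=attention_mask,
        return_dict=False,
    )

# The action to execute next is the prediction at the final context position.
next_action = action_preds[0, -1]
```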
Training
The model was trained on medium trajectories sampled from the Gym Hopper environment. Observations must be normalized with the following per-dimension statistics before being passed to the model:
- Mean:
[1.311279, -0.08469521, -0.5382719, -0.07201576, 0.04932366, 2.1066856, -0.15017354, 0.00878345, -0.2848186, -0.18540096, -0.28461286]
- Standard Deviation:
[0.17790751, 0.05444621, 0.21297139, 0.14530419, 0.6124444, 0.85174465, 1.4515252, 0.6751696, 1.536239, 1.6160746, 5.6072536]
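For illustration, a small helper like the following (not part of the released code) can apply these statistics to raw Hopper observations:

```python
import numpy as np

# Normalization statistics copied from the model card above.
STATE_MEAN = np.array([1.311279, -0.08469521, -0.5382719, -0.07201576, 0.04932366,
                       2.1066856, -0.15017354, 0.00878345, -0.2848186, -0.18540096,
                       -0.28461286])
STATE_STD = np.array([0.17790751, 0.05444621, 0.21297139, 0.14530419, 0.6124444,
                      0.85174465, 1.4515252, 0.6751696, 1.536239, 1.6160746,
                      5.6072536])

def normalize_observation(obs: np.ndarray) -> np.ndarray:
    """Scale a raw 11-d Hopper observation to the distribution the model was trained on."""
    return (obs - STATE_MEAN) / STATE_STD
```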
Guide: Running Locally
- Clone the repository: clone the model repository from the Hugging Face Hub:
  git clone https://huggingface.co/edbeeching/decision-transformer-gym-hopper-medium
- Install dependencies: install the required Python packages, in particular PyTorch and Hugging Face Transformers; Gym with a MuJoCo backend is also needed to run the Hopper environment itself.
- Set up the environment: normalize observations with the mean and standard deviation listed in the Training section before feeding them to the model.
- Run the script: use the example evaluation script from Hugging Face's GitHub repository to test the model locally, or start from the sketch shown after this list.
- Cloud options: for faster rollouts, consider GPU-backed cloud instances such as AWS EC2, Google Cloud, or Azure.
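Putting the steps together, the following sketch shows one way to query the model for an action in a Gym Hopper environment. It assumes the classic Gym API with a MuJoCo backend, the environment id "Hopper-v3", and an illustrative target return; the full rollout loop, which appends each new state, action, and return-to-go to the context, is only outlined in the comments.

```python
import gym
import numpy as np
import torch
from transformers import DecisionTransformerModel

# Normalization statistics from the Training section.
STATE_MEAN = np.array([1.311279, -0.08469521, -0.5382719, -0.07201576, 0.04932366,
                       2.1066856, -0.15017354, 0.00878345, -0.2848186, -0.18540096,
                       -0.28461286])
STATE_STD = np.array([0.17790751, 0.05444621, 0.21297139, 0.14530419, 0.6124444,
                      0.85174465, 1.4515252, 0.6751696, 1.536239, 1.6160746,
                      5.6072536])

model = DecisionTransformerModel.from_pretrained(
    "edbeeching/decision-transformer-gym-hopper-medium"
).eval()

env = gym.make("Hopper-v3")  # assumed environment id; requires a MuJoCo backend
state_dim, act_dim = 11, 3

obs = env.reset()  # classic Gym API assumed (reset returns only the observation)
norm_obs = (obs - STATE_MEAN) / STATE_STD

# Single-element context for the very first step. A full rollout would append the
# new normalized state, the executed action, and the updated return-to-go to these
# tensors after every env.step() call, trimming to the model's context length.
states = torch.tensor(norm_obs, dtype=torch.float32).reshape(1, 1, state_dim)
actions = torch.zeros(1, 1, act_dim)
rewards = torch.zeros(1, 1, 1)
returns_to_go = torch.full((1, 1, 1), 3.6)  # assumed (scaled) target return
timesteps = torch.zeros(1, 1, dtype=torch.long)
attention_mask = torch.ones(1, 1)

with torch.no_grad():
    _, action_preds, _ = model(
        states=states,
        actions=actions,
        rewards=rewards,
        returns_to_go=returns_to_go,
        timesteps=timesteps,
        attention_mask=attention_mask,
        return_dict=False,
    )

obs, reward, done, info = env.step(action_preds[0, -1].numpy())
```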
License
The license for the model and code is specified in the repository on the Hugging Face Hub. Refer to the repository's model card and license file for the specific terms and conditions before use.