TQC Agent Playing FetchPickAndPlace-v1

Published by the sb3 organization.

Introduction

This document describes a trained TQC (Truncated Quantile Critics) agent that plays the FetchPickAndPlace-v1 environment. The agent is built with the Stable-Baselines3 library and trained with the RL Zoo framework for reinforcement learning.
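FetchPickAndPlace is a goal-conditioned task: observations are dictionaries with "observation", "achieved_goal", and "desired_goal" entries, and the reward is sparse. As a minimal sketch of that reward convention, the snippet below assumes the standard Fetch distance threshold of 0.05 m; treat the exact value as an assumption rather than something stated by this model card.

```python
import numpy as np

def sparse_goal_reward(achieved_goal, desired_goal, distance_threshold=0.05):
    """Return 0.0 when the achieved goal is within `distance_threshold`
    of the desired goal, else -1.0 (the sparse Fetch reward convention)."""
    d = np.linalg.norm(np.asarray(achieved_goal) - np.asarray(desired_goal), axis=-1)
    return np.where(d > distance_threshold, -1.0, 0.0)

# An object 1 cm from the goal counts as placed; one 20 cm away does not.
print(sparse_goal_reward([0.0, 0.0, 0.01], [0.0, 0.0, 0.0]))
print(sparse_goal_reward([0.2, 0.0, 0.0], [0.0, 0.0, 0.0]))
```

Because success only flips the reward from -1 to 0, agents on this task are typically trained with goal-relabeling techniques such as HER on top of the base algorithm.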

Architecture

The TQC agent is implemented with the Stable-Baselines3 ecosystem (the TQC algorithm itself ships in the sb3-contrib package), which provides a collection of reinforcement learning algorithms. RL Zoo is the training framework on top of it, providing training and evaluation scripts, hyperparameter optimization, and a collection of pre-trained agents.
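TQC's core idea is to train an ensemble of quantile critics and, when forming the target value, pool all quantile estimates, sort them, and drop the largest few before averaging, which curbs value overestimation. The sketch below illustrates just that truncation step in NumPy; the sizes (2 critics, 25 quantile atoms, 2 atoms dropped per critic) follow sb3-contrib's defaults, but the values themselves are random illustrative data.

```python
import numpy as np

def truncated_target(quantiles, drop_per_net):
    """Pool quantile estimates from all critics, sort them, and drop the
    largest `drop_per_net * n_nets` atoms before averaging (TQC's key step)."""
    n_nets = quantiles.shape[0]
    pooled = np.sort(quantiles.reshape(-1))               # pool and sort all atoms
    kept = pooled[: len(pooled) - drop_per_net * n_nets]  # truncate the top
    return kept.mean()

rng = np.random.default_rng(0)
q = rng.normal(loc=10.0, scale=2.0, size=(2, 25))  # 2 critics, 25 atoms each
print("untruncated mean:", q.mean())
print("truncated mean:  ", truncated_target(q, drop_per_net=2))
```

Dropping the largest atoms can only lower the average, so the truncated target is always at or below the plain mean; the number of dropped quantiles is itself a tunable hyperparameter.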

Training

To train the TQC agent within the RL Zoo framework, use the following command:

python train.py --algo tqc --env FetchPickAndPlace-v1 -f logs/

This command trains a new model with the hyperparameters registered for this algorithm/environment pair and saves logs and checkpoints under logs/. A trained model can then be pushed to the Hugging Face Hub, with a replay video generated when applicable.
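RL Zoo can also tune hyperparameters (it uses Optuna under the hood). The toy random search below only illustrates the shape of that loop: the search space mirrors common TQC knobs, and dummy_objective is a hypothetical stand-in for an actual training run that would return the agent's mean reward.

```python
import random

def dummy_objective(params):
    # Hypothetical score; real tuning would train an agent and evaluate it.
    return -abs(params["learning_rate"] - 3e-4) - abs(params["tau"] - 0.005)

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "learning_rate": 10 ** rng.uniform(-5, -3),
            "tau": rng.uniform(0.001, 0.05),
            "top_quantiles_to_drop_per_net": rng.randint(0, 5),
        }
        score = dummy_objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search(n_trials=50)
print("best params:", best)
```

In practice each trial is expensive (a full or truncated training run), which is why RL Zoo pairs the sampler with pruning of unpromising trials.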

Guide: Running Locally

  1. Set up Environment: Clone the RL Zoo repository and ensure all dependencies are installed.
  2. Download Model: Use the following command to download and save the model into the logs/ folder:
    python -m rl_zoo3.load_from_hub --algo tqc --env FetchPickAndPlace-v1 -orga sb3 -f logs/
    
  3. Run the Model: Execute the model using:
    python enjoy.py --algo tqc --env FetchPickAndPlace-v1 -f logs/
    
  4. Cloud GPUs: For enhanced performance and faster training, consider using cloud-based GPUs from providers like AWS, Google Cloud, or Azure.
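Under the hood, the enjoy.py step loads the saved policy and repeatedly calls predict/step until the episode ends. The sketch below shows that rollout loop; DummyModel and DummyEnv are hypothetical stand-ins so it runs without stable-baselines3 or MuJoCo installed, and they mimic the goal-conditioned observation dict the Fetch environments use.

```python
class DummyModel:
    def predict(self, obs, deterministic=True):
        # A real TQC model returns (action, state); this stand-in moves toward the goal.
        return obs["desired_goal"] - obs["achieved_goal"], None

class DummyEnv:
    def reset(self):
        self.pos, self.goal, self.steps = 0.0, 1.0, 0
        return {"achieved_goal": self.pos, "desired_goal": self.goal}

    def step(self, action):
        self.pos += 0.5 * action  # imperfect actuation
        self.steps += 1
        obs = {"achieved_goal": self.pos, "desired_goal": self.goal}
        success = abs(self.goal - self.pos) < 0.05
        done = success or self.steps >= 50
        reward = 0.0 if success else -1.0  # sparse reward, as in Fetch tasks
        return obs, reward, done, {}

model, env = DummyModel(), DummyEnv()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```

The real script differs mainly in scale: it wraps a MuJoCo-backed environment, renders frames, and loads the actual TQC policy weights downloaded in step 2.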

License

The model and associated tools are released under the licenses of their respective repositories, such as the stable-baselines3 library and RL Zoo. Review the license files in those repositories to ensure compliance.
