facebook/metamotivo-S-1

Introduction

Meta Motivo is a behavioral foundation model developed by Meta, designed to control virtual humanoid agents using a novel unsupervised reinforcement learning algorithm (FB-CPR). It can perform tasks such as motion tracking and reward optimization zero-shot, without any additional training or fine-tuning. The model is based on the research paper "Zero-shot Whole-Body Humanoid Control via Behavioral Foundation Models."

Architecture

Meta Motivo comprises multiple networks:

  • Forward Net (F): Processes state, action, and latent variables.
  • Backward Net (B): Simple MLP for processing states.
  • Actor Net (π): Outputs the mean of a Gaussian distribution.
  • Discriminator Net (D): Evaluates states and latent variables with a sigmoidal output.
  • Critic Net (Q): Outputs a scalar evaluation of the action.

These networks are built from multilayer perceptrons (MLPs) with ReLU activations, layer normalization, and tanh nonlinearities. The architecture also uses embedding layers, and the forward and critic networks are implemented as ensembles.
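
To make this description concrete, below is an illustrative PyTorch sketch of the kind of MLP block, ensemble, and tanh-squashed actor described above; the layer sizes, module names, and overall structure are assumptions made for illustration, not the actual Meta Motivo implementation.

    import torch
    import torch.nn as nn

    # Illustrative only: a generic MLP block of the kind described above
    # (linear layers, layer normalization, ReLU/tanh activations).
    def mlp_block(in_dim, hidden_dim, out_dim):
        return nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.LayerNorm(hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    class ForwardNet(nn.Module):
        """Hypothetical forward net F(s, a, z): concatenates state, action, and
        latent variable and predicts an embedding, using a small ensemble."""
        def __init__(self, state_dim, action_dim, z_dim, hidden_dim=1024, n_ensemble=2):
            super().__init__()
            self.members = nn.ModuleList(
                [mlp_block(state_dim + action_dim + z_dim, hidden_dim, z_dim)
                 for _ in range(n_ensemble)]
            )

        def forward(self, state, action, z):
            x = torch.cat([state, action, z], dim=-1)
            return torch.stack([m(x) for m in self.members])  # (n_ensemble, batch, z_dim)

    class ActorNet(nn.Module):
        """Hypothetical actor pi(s, z): outputs the mean of a Gaussian over
        actions, squashed with tanh (an assumption about where tanh is used)."""
        def __init__(self, state_dim, z_dim, action_dim, hidden_dim=1024):
            super().__init__()
            self.net = mlp_block(state_dim + z_dim, hidden_dim, action_dim)

        def forward(self, state, z):
            return torch.tanh(self.net(torch.cat([state, z], dim=-1)))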

Training

The model is pre-trained with an unsupervised learning algorithm and is designed to function out-of-the-box for various tasks without further training. Details on training specifics can be found in the config.json file within the repository.
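
As a sketch of how to inspect that file, the snippet below downloads config.json from the model's Hugging Face repository using hf_hub_download; no assumptions are made here about which keys it contains.

    import json
    from huggingface_hub import hf_hub_download

    # Fetch the model's config.json from the Hugging Face Hub and print it.
    path = hf_hub_download(repo_id="facebook/metamotivo-S-1", filename="config.json")
    with open(path) as f:
        print(json.dumps(json.load(f), indent=2))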

Guide: Running Locally

To run Meta Motivo locally, follow these steps:

  1. Install the necessary package:

    pip install "metamotivo[all] @ git+https://github.com/facebookresearch/metamotivo.git"
    
  2. Load the model in your Python environment:

    from metamotivo.fb_cpr.huggingface import FBcprModel
    
    model = FBcprModel.from_pretrained("facebook/metamotivo-S-1")
    

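Once the model is loaded, a minimal usage sketch looks like the following; the sample_z and act methods and the observation size follow the examples in the metamotivo repository but should be treated as assumptions to verify against its README.

    import torch
    from metamotivo.fb_cpr.huggingface import FBcprModel

    model = FBcprModel.from_pretrained("facebook/metamotivo-S-1")

    # Sample a latent context z that conditions the policy on a behavior,
    # then query the actor for an action given an observation.
    z = model.sample_z(1)
    obs_dim = 358  # assumed observation size of the humanoid environment; adjust as needed
    obs = torch.zeros(1, obs_dim)  # placeholder; use a real observation from the environment
    action = model.act(obs, z, mean=True)
    print(action.shape)
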
For optimal performance, a GPU is recommended; cloud GPUs such as those offered by AWS or Google Cloud work well.

License

Meta Motivo is released under the CC-BY-NC 4.0 license, which permits use with attribution and prohibits commercial use.
