mutopia_guitar_mmm
Introduction
The mutopia_guitar_mmm model by juancopi81 is designed for music generation, treating music with a language-model approach similar to the one used in text generation. It is a fine-tuned version of GPT-2, trained on the Mutopia Guitar Dataset, which consists of solo guitar pieces by western classical composers. The primary goal of the model is educational: demonstrating how Hugging Face tools can be applied to music generation.
Architecture
The model uses the GPT-2 architecture, specifically the GPT2LMHeadModel from Hugging Face. Key features include a context size of 256 and a vocabulary size of 588. The model employs a WhitespaceSplit pre-tokenizer, and its tokenizer is available on the Hugging Face hub.
Training
The model was trained on the Mutopia Guitar Dataset, which features compositions by classical guitar composers. Early epochs used transposed versions of the notes, while later epochs trained on untransposed data to improve authenticity. Multiple training rounds with varying hyperparameters were run to optimize performance. Training used the AdamWeightDecay optimizer with a learning-rate schedule that combines a warm-up phase with polynomial decay.
Guide: Running Locally
To run the model locally:
- Clone the Repository: Clone the project repository from GitHub.
- Install Dependencies: Ensure Python, TensorFlow, and Hugging Face Transformers are installed.
- Download Dataset: Obtain the Mutopia Guitar Dataset from the Hugging Face dataset hub.
- Run the Notebook: Use the provided Jupyter notebook to interact with the model and generate music (a minimal generation sketch follows this list).
- Use Cloud GPUs: For more intensive training, consider using cloud GPUs from providers like Google Colab or AWS.
License
The model and its associated code are available under the same terms as the original GPT-2 model. Users should consult the licensing terms in the Hugging Face repository for details.