HuggingTweets

Introduction

HuggingTweets is a project that lets users create a personalized AI bot from the tweets of a specific Twitter user. It uses Hugging Face's transformers library, specifically a fine-tuned version of GPT-2, to generate text in the style of that user's tweets.

Architecture

The model architecture is the pre-trained GPT-2 model, fine-tuned on tweets from the user @porns_xx. Fine-tuning adjusts the model's weights so that the generated text reflects the style and content of the original tweets. The pipeline used for this process is illustrated in the project's documentation.
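As a rough illustration of the data-preparation step (this is a sketch, not the project's actual code), GPT-2 fine-tuning corpora are commonly built by joining individual documents with GPT-2's end-of-text token, so the model learns where one tweet ends and the next begins:

```python
# Illustrative sketch, not the project's exact pipeline: join cleaned tweets
# into one training string, separated by GPT-2's end-of-text special token.

EOS = "<|endoftext|>"  # GPT-2's end-of-text token

def build_training_text(tweets):
    """Join cleaned tweets into a single training corpus string."""
    return EOS.join(t.strip() for t in tweets) + EOS

tweets = ["My dream is big", "Hello world"]
corpus = build_training_text(tweets)
# corpus == "My dream is big<|endoftext|>Hello world<|endoftext|>"
```

The separator matters because GPT-2 has no notion of tweet boundaries on its own; the end-of-text token is what lets the fine-tuned model produce self-contained, tweet-length outputs.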

Training

The training data consists of tweets from the user PORN HUB 🔞; 1,392 tweets were retained after filtering. The training procedure fine-tunes the GPT-2 model on this dataset. Hyperparameters and metrics are recorded with Weights & Biases (W&B) for transparency and reproducibility, and the final model is versioned and logged when training completes.
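The exact filtering criteria are not documented here, but a typical tweet cleanup (sketched below as an assumption, not the project's actual filter) drops retweets, strips URLs, and discards tweets that are left too short to be useful:

```python
import re

# Illustrative filter (the real criteria aren't specified in this document):
# drop retweets, strip URLs, and discard near-empty tweets.

URL_RE = re.compile(r"https?://\S+")

def keep_tweet(text, min_chars=10):
    """Return the cleaned tweet text, or None if it should be dropped."""
    if text.startswith("RT @"):                 # drop retweets
        return None
    cleaned = URL_RE.sub("", text).strip()      # strip URLs
    if len(cleaned) < min_chars:                # drop near-empty tweets
        return None
    return cleaned

raw = ["RT @someone: hi", "https://t.co/abc", "Fine-tuning GPT-2 on tweets!"]
kept = [t for t in (keep_tweet(r) for r in raw) if t is not None]
# kept == ["Fine-tuning GPT-2 on tweets!"]
```

A filter along these lines explains why only a subset of the downloaded tweets survives into the training set.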

Guide: Running Locally

To use the HuggingTweets model for text generation, follow these steps:

  1. Install the transformers library from Hugging Face (`pip install transformers`).

  2. Use the following Python code to run the model:

    from transformers import pipeline

    # Load the fine-tuned model from the Hugging Face Hub
    generator = pipeline('text-generation', model='huggingtweets/porns_xx')

    # Generate five completions for the given prompt
    generator("My dream is", num_return_sequences=5)
    
  3. For enhanced performance, it is recommended to use cloud GPUs such as those available on Google Colab, AWS, or Azure.

License

The HuggingTweets project follows the licensing terms of Hugging Face and the respective repositories and datasets it uses. Ensure compliance with these licenses when using the model or its outputs.
