Llama 3.2 1B Instruct Encrypted

nesaorg

Introduction

The Llama-3.2-1B-Instruct-Encrypted model is hosted by nesaorg, with its tokenizer available on Hugging Face. The model is designed for secure encrypted inference, in which the model's weights remain on Nesa's secure server rather than being distributed to users.

Architecture

The model architecture is based on the Llama series, tailored for instruction-following tasks with a focus on encrypted data handling. The tokenizer is accessible through Hugging Face, allowing users to encode and decode text locally before sending token IDs for inference.
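The split described above, tokenization on the client and inference on Nesa's server, can be illustrated with a toy vocabulary. The sketch below is purely conceptual: the dictionary stands in for the real Llama tokenizer and is not its actual vocabulary.

```python
# Illustrative sketch: the client maps text to integer token IDs locally,
# so only IDs (never raw text or model weights) cross the network.
# The toy vocabulary below is a hypothetical stand-in for the real tokenizer.
vocab = {"hello": 0, "nesa": 1, "world": 2, "<unk>": 3}
inv_vocab = {i: t for t, i in vocab.items()}

def encode(text: str) -> list[int]:
    """Map whitespace-split tokens to IDs on the client side."""
    return [vocab.get(tok, vocab["<unk>"]) for tok in text.lower().split()]

def decode(ids: list[int]) -> str:
    """Map IDs returned by the server back to text on the client side."""
    return " ".join(inv_vocab[i] for i in ids)

ids = encode("Hello Nesa world")
print(ids)            # these IDs are all that would leave the machine
print(decode(ids))
```

The real tokenizer follows the same pattern through `tokenizer.encode` and `tokenizer.decode`, as shown in the guide below.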

Training

Details of the training process for Llama-3.2-1B-Instruct-Encrypted are not explicitly provided, but the model is built on the Llama framework, suggesting a similar training methodology focused on instruction following and secure data processing.

Guide: Running Locally

  1. Obtain an access token: Sign up or log in to Hugging Face to get your access token.

  2. Load the Tokenizer:

    from transformers import AutoTokenizer
    
    hf_token = "<HF TOKEN>"  # Replace with your token
    model_id = "nesaorg/Llama-3.2-1B-Instruct-Encrypted"
    tokenizer = AutoTokenizer.from_pretrained(model_id, token=hf_token, local_files_only=False)
    
  3. Tokenize and Decode Text:

    text = "I'm super excited to join Nesa's Equivariant Encryption initiative!"
    
    # Encode text into token IDs
    token_ids = tokenizer.encode(text)
    print("Token IDs:", token_ids)
    
    # Decode token IDs back to text
    decoded_text = tokenizer.decode(token_ids)
    print("Decoded Text:", decoded_text)
    
  4. Inference: Submit the tokenized data for inference through the Nesa network.
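Nesa's client API is not documented in this card, so the sketch below only shows how tokenized input might be packaged for submission. The endpoint, field names, and parameters are assumptions for illustration, not the actual Nesa request schema.

```python
import json

def build_inference_request(token_ids: list[int], model_id: str) -> str:
    """Package locally produced token IDs into a JSON request body.

    The field names here are hypothetical; consult Nesa's documentation
    for the real request schema and submission endpoint.
    """
    payload = {
        "model": model_id,
        "input_ids": token_ids,      # IDs from the local tokenizer
        "max_new_tokens": 64,        # assumed generation parameter
    }
    return json.dumps(payload)

body = build_inference_request(
    [9906, 1917], "nesaorg/Llama-3.2-1B-Instruct-Encrypted"
)
print(body)  # this JSON body would then be POSTed to the Nesa network
```

The key point is that the request carries token IDs produced locally, consistent with the weights never leaving Nesa's server.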

Cloud GPUs

For optimized processing, consider using cloud GPU services like AWS, Google Cloud, or Azure to handle large-scale inference tasks efficiently.

License

The license details for Llama-3.2-1B-Instruct-Encrypted are not specified in the documentation. Please refer to the project repository or contact nesaorg for specific licensing information.
