EXAONE 3.5 7.8B Instruct Llamafied
Introduction
The EXAONE-3.5-7.8B-Instruct-Llamafied model is a text generation model published by beomi. It is a LLaMA-compatible ("llamafied") conversion of the LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct model, an instruction-tuned model for English and Korean.
Architecture
This model uses the LLaMA architecture, which is designed for efficient and effective text generation. Because the weights have been converted to this layout, the model can be loaded with standard LLaMA tooling in the transformers library, with weights stored in the safetensors format. The model supports both English and Korean, catering to a wide range of text generation applications.
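One distinguishing detail of LLaMA-style blocks (not spelled out above, but a well-known property of the architecture) is that they use RMSNorm in place of standard LayerNorm, alongside rotary position embeddings. A minimal pure-Python sketch of RMSNorm, for illustration only:

```python
import math

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm, as used in LLaMA blocks: scale each element by the
    # reciprocal root-mean-square of the vector (no mean subtraction,
    # no bias), then apply a learned per-element gain.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

# Example: a 2-element vector with unit gains.
print(rms_norm([3.0, 4.0], [1.0, 1.0]))  # ≈ [0.8485, 1.1314]
```

In the real model this runs per hidden vector inside every transformer layer; the sketch only shows the arithmetic.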
Training
The model was trained using an extensive dataset that supports multilingual capabilities, focusing on conversational and instructive text generation. It is part of the broader EXAONE family, known for its robust natural language processing abilities.
Guide: Running Locally
To run the EXAONE-3.5-7.8B-Instruct-Llamafied model locally, follow these steps:
- Install Dependencies: Ensure Python and pip are installed, then install the transformers library:

```shell
pip install transformers
```
- Download the Model: Load the model and tokenizer from the Hugging Face model hub:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "beomi/EXAONE-3.5-7.8B-Instruct-Llamafied"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
- Run Inference: Use the model for text generation:

```python
input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
- GPU Recommendation: For optimal performance, especially with large models, consider using cloud GPU services like AWS, Google Cloud, or Azure.
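Since this is an instruction-tuned model, inference generally works best with a chat-formatted prompt rather than raw text. The reliable route is `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`, which uses the template bundled with the checkpoint. Purely for illustration, here is a hypothetical sketch of assembling such a prompt by hand; the `[|user|]` / `[|assistant|]`-style markers are an assumption, not taken from this model card:

```python
def build_chat_prompt(messages):
    # Hypothetical chat-prompt builder. The turn markers below are an
    # assumed EXAONE-style format; in practice, always prefer
    # tokenizer.apply_chat_template, which reads the real template
    # shipped with the checkpoint.
    parts = []
    for m in messages:
        parts.append(f"[|{m['role']}|]{m['content']}\n[|endofturn|]\n")
    # Leave an open assistant turn so generation continues from here.
    parts.append("[|assistant|]")
    return "".join(parts)

prompt = build_chat_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain RMSNorm briefly."},
])
print(prompt)
```

The resulting string would then be tokenized and passed to `model.generate` exactly as in the inference step above.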
License
The model is released under a custom license named "exaone." For more details, refer to the license document in the model repository.