Code Llama 7B Instruct (hf)
Introduction
Code Llama is a family of generative text models for code synthesis and understanding, available at scales ranging from 7 billion to 70 billion parameters. This repository hosts the 7B instruct-tuned version in the Hugging Face Transformers format, designed for general code tasks including code completion, infilling, and instruction following; Python-specialized variants are published separately.
Architecture
Code Llama is an auto-regressive language model built on an optimized transformer architecture. It ships in three variants for different tasks: base models for general code synthesis, Python-specialized models, and Instruct models fine-tuned to follow instructions and support safer deployment. All variants take text as input and generate text as output.
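As an illustration of what "auto-regressive" means here (a toy sketch, not Code Llama's actual implementation), each new token is predicted from all tokens generated so far:

```python
from typing import Callable, List

def autoregressive_decode(
    next_token: Callable[[List[int]], int],
    prompt: List[int],
    max_new_tokens: int,
    eos_id: int,
) -> List[int]:
    """Greedy auto-regressive loop: every new token is conditioned on
    the full sequence produced so far, then appended to it."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        tok = next_token(tokens)  # in the real model: argmax over the LM head
        tokens.append(tok)
        if tok == eos_id:  # stop early on end-of-sequence
            break
    return tokens

# Toy "model" that always predicts the previous token plus one.
demo = autoregressive_decode(lambda t: t[-1] + 1, prompt=[1], max_new_tokens=3, eos_id=99)
# demo == [1, 2, 3, 4]
```

The real model replaces the `next_token` callable with a transformer forward pass over the token sequence; the loop structure is the same.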
Training
The models were developed by Meta and trained between January 2023 and July 2023 using custom training libraries on Meta’s Research Super Cluster. Training consumed 400K GPU hours on A100-80GB hardware, with an estimated total carbon emission of 65.3 tCO2eq, all of which Meta has offset. The training data is the same as that used for the Llama 2 models, but with different dataset weights.
Guide: Running Locally
To run the Code Llama model locally, follow these steps:
- Install the required libraries:
pip install transformers accelerate
- Load and run the model using the Hugging Face Transformers library.
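The steps above can be sketched as follows. This is a minimal example, assuming access to the `codellama/CodeLlama-7b-Instruct-hf` checkpoint on the Hugging Face Hub and a GPU with enough memory; the instruct variant expects prompts wrapped in `[INST] ... [/INST]` tags, which the small helper below produces:

```python
MODEL_ID = "codellama/CodeLlama-7b-Instruct-hf"

def build_instruct_prompt(instruction: str) -> str:
    """Wrap a user instruction in the [INST] tags the instruct tuning expects.
    The tokenizer adds the leading BOS token itself."""
    return f"[INST] {instruction.strip()} [/INST]"

if __name__ == "__main__":
    # Heavy imports and the ~13 GB download happen only when run as a script.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # halves memory use; needs a GPU
        device_map="auto",          # requires the accelerate package
    )
    prompt = build_instruct_prompt("Write a Python function that reverses a string.")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

`device_map="auto"` lets accelerate place layers across available GPUs (and CPU, if needed) automatically, which is why accelerate is listed alongside transformers above.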
For optimal performance, consider using cloud-based GPUs, such as NVIDIA A100, available through platforms like AWS, Google Cloud, or Azure.
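The infilling capability mentioned in the introduction is exposed in the Transformers integration through a `<FILL_ME>` placeholder in the prompt (worth verifying against the current transformers documentation): the tokenizer splits the prompt into prefix and suffix, and the model generates the missing middle. A sketch, with the same checkpoint assumed as above:

```python
def make_infill_prompt(prefix: str, suffix: str) -> str:
    """Join the code before and after the gap with the <FILL_ME>
    sentinel that the Code Llama tokenizer recognizes."""
    return f"{prefix}<FILL_ME>{suffix}"

if __name__ == "__main__":
    # Model download and inference are kept out of the importable code.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="codellama/CodeLlama-7b-Instruct-hf",
        device_map="auto",
    )
    prompt = make_infill_prompt(
        'def remove_non_ascii(s: str) -> str:\n    """',
        '\n    return result\n',
    )
    print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```

Here the model is asked to fill in the docstring and body between the function signature and the `return` statement.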
License
Use of the Code Llama models is governed by a custom commercial license from Meta. Details and the license agreement are available on Meta's resources page.