HuatuoGPT-o1-7B-GGUF (QuantFactory)
Introduction
HuatuoGPT-o1 is a medical large language model (LLM) designed for advanced medical reasoning. It follows a "thinks-before-it-answers" approach, generating an explicit reasoning process before delivering its final response. The model supports reasoning in both English and Chinese.
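Because the model emits its reasoning before the answer, downstream code often wants to separate the two. The sketch below assumes the output uses "## Thinking" / "## Final Response" section markers; that format is an assumption for illustration, not a documented guarantee of this release.

```python
# Sketch: split a "thinks-before-it-answers" completion into its
# reasoning trace and final answer. The "## Thinking" and
# "## Final Response" markers are assumed, not guaranteed.
def split_reasoning(text: str) -> tuple[str, str]:
    """Return (thinking, final_response); fall back to ("", text)."""
    marker = "## Final Response"
    if marker in text:
        head, _, tail = text.partition(marker)
        thinking = head.replace("## Thinking", "", 1).strip()
        return thinking, tail.strip()
    return "", text.strip()

sample = "## Thinking\nCoughs are often viral...\n## Final Response\nStay hydrated and rest."
thinking, answer = split_reasoning(sample)
```

If the marker is absent, the whole completion is returned as the answer, so the helper degrades gracefully on unformatted output.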
Architecture
HuatuoGPT-o1-7B is built on the Qwen2.5-7B architecture and handles both English and Chinese text. This release is a quantized GGUF variant created with llama.cpp, derived from FreedomIntelligence's HuatuoGPT-o1-7B.
Training
The model is trained using datasets focused on medical reasoning and verifiable problems. These datasets include:
FreedomIntelligence/medical-o1-reasoning-SFT
FreedomIntelligence/medical-o1-verifiable-problem
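For readers who want to inspect these datasets, a minimal sketch using the Hugging Face `datasets` library is shown below. It assumes the package is installed and network access is available; the field names of each example are not assumed here, only printed.

```python
# Sketch: peek at the first training example of one of the datasets
# listed above (assumes `pip install datasets` and network access).
def first_example(name: str, split: str = "train"):
    from datasets import load_dataset  # lazy import; package assumed installed
    ds = load_dataset(name, split=split)
    return ds[0]

if __name__ == "__main__":
    ex = first_example("FreedomIntelligence/medical-o1-reasoning-SFT")
    print(sorted(ex.keys()))  # inspect which fields the dataset provides
```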
Guide: Running Locally
To run HuatuoGPT-o1-7B locally, follow these steps:
- Environment Setup: Ensure you have Python installed, along with the transformers library.
- Model and Tokenizer Loading:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "FreedomIntelligence/HuatuoGPT-o1-7B",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/HuatuoGPT-o1-7B")
```
- Inference Example:
```python
input_text = "How to stop a cough?"
messages = [{"role": "user", "content": input_text}]
inputs = tokenizer(
    tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True),
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
- Execution: Run the script on a machine equipped with a GPU for optimal performance. Cloud GPU providers such as AWS, Google Cloud, or Azure can be used if local resources are insufficient.
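The steps above use the full-precision model via transformers. Since this card describes a GGUF quantization, a CPU-friendly alternative is the `llama-cpp-python` bindings. The sketch below assumes that package is installed and that a quantized file has been downloaded; the file name is hypothetical and depends on which quant you fetch.

```python
# Sketch: running the GGUF quantized variant with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a downloaded .gguf file;
# the file name below is a hypothetical quant, adjust to the one you have.
from pathlib import Path

MODEL_PATH = Path("HuatuoGPT-o1-7B.Q4_K_M.gguf")  # hypothetical file name

def build_messages(question: str) -> list[dict]:
    """Build the chat messages list expected by create_chat_completion."""
    return [{"role": "user", "content": question}]

if MODEL_PATH.exists():
    from llama_cpp import Llama

    llm = Llama(model_path=str(MODEL_PATH), n_ctx=4096)
    out = llm.create_chat_completion(
        messages=build_messages("How to stop a cough?"),
        max_tokens=2048,
    )
    print(out["choices"][0]["message"]["content"])
```

The chat-completion API applies the model's chat template internally, so no manual `apply_chat_template` call is needed on this path.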
License
HuatuoGPT-o1-7B is released under the Apache-2.0 License, which allows for both personal and commercial use, modification, and distribution.