HuatuoGPT-o1-7B
FreedomIntelligence

Introduction
HuatuoGPT-o1 is a medical large language model (LLM) designed for advanced medical reasoning. Before producing a final response, it generates a complex thought process in which it reflects on and refines its reasoning. The model supports both English and Chinese.
Architecture
HuatuoGPT-o1-7B is based on the Qwen2.5-7B architecture. It is part of a series that includes models like HuatuoGPT-o1-8B and HuatuoGPT-o1-70B, which are based on LLaMA-3.1.
Training
The model was trained using datasets such as FreedomIntelligence/medical-o1-reasoning-SFT and FreedomIntelligence/medical-o1-verifiable-problem. This training facilitates its advanced reasoning capabilities in the medical domain.
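Reasoning-SFT data of this kind pairs each medical question with a long reasoning trace and a final answer. The sketch below shows one plausible way such a record could be folded into chat-format training messages; the field names (`Question`, `Complex_CoT`, `Response`), the `## Thinking`/`## Final Response` markers, and the sample record are illustrative assumptions, not a guaranteed match for the dataset's actual schema:

```python
# Sketch: fold a reasoning-SFT record into chat messages for supervised
# fine-tuning. NOTE: field names and the sample record are assumptions
# for illustration; check the dataset's real schema before use.

def record_to_messages(record):
    """Combine the reasoning trace and final answer into one assistant turn."""
    assistant_text = (
        "## Thinking\n\n" + record["Complex_CoT"].strip()
        + "\n\n## Final Response\n\n" + record["Response"].strip()
    )
    return [
        {"role": "user", "content": record["Question"]},
        {"role": "assistant", "content": assistant_text},
    ]

# Hypothetical record, shaped like a question / reasoning / answer triple.
sample = {
    "Question": "How to stop a cough?",
    "Complex_CoT": "A cough is usually a symptom, so first identify the cause.",
    "Response": "Stay hydrated, rest, and see a doctor if it persists.",
}

messages = record_to_messages(sample)
```

Keeping the reasoning trace inside the assistant turn is what teaches the model to emit its thought process before the final answer.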
Guide: Running Locally
To run HuatuoGPT-o1-7B locally, you can follow these steps:
- Install Required Libraries: Install the transformers library from Hugging Face.

```shell
pip install transformers
```
- Load the Model and Tokenizer: Use the transformers library to load the model and tokenizer.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "FreedomIntelligence/HuatuoGPT-o1-7B",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/HuatuoGPT-o1-7B")
```
- Prepare Input and Generate Output: Format the input with the chat template, generate the output, and decode it.

```python
input_text = "How to stop a cough?"
messages = [{"role": "user", "content": input_text}]
inputs = tokenizer(
    tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True),
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
- Use Cloud GPUs: For efficient inference, especially with the larger models in the series, consider cloud GPU services such as AWS, Google Cloud, or Azure.
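Because the model emits its reasoning before the answer, it is often useful to separate the two parts of the decoded output. A minimal sketch, assuming the response uses `## Thinking` and `## Final Response` section markers (the marker strings are an assumption; verify them against real model output and adjust):

```python
# Split a decoded response into its reasoning trace and final answer.
# NOTE: the "## Thinking" / "## Final Response" markers are assumed,
# not confirmed; inspect actual model output before relying on them.

def split_response(text, marker="## Final Response"):
    """Return (thinking, final_answer); if no marker, treat all text as the answer."""
    if marker in text:
        thinking, final = text.split(marker, 1)
        return thinking.replace("## Thinking", "", 1).strip(), final.strip()
    return "", text.strip()

decoded = (
    "## Thinking\n\nA cough is usually a symptom, so identify the cause first.\n\n"
    "## Final Response\n\nStay hydrated, rest, and see a doctor if it persists."
)
thinking, answer = split_response(decoded)
print(answer)  # Stay hydrated, rest, and see a doctor if it persists.
```

Falling back to treating the whole text as the answer keeps the helper safe when generation is truncated before the final-response marker appears.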
License
HuatuoGPT-o1-7B is licensed under the Apache 2.0 License, which allows for commercial use, modification, distribution, and private use.