HuatuoGPT-o1-70B

Introduction

HuatuoGPT-o1 is a medical large language model (LLM) tailored for advanced medical reasoning. It emphasizes a detailed thought process, reflecting on and refining its reasoning before delivering a final response. For further details, visit the GitHub repository.

Architecture

HuatuoGPT-o1-70B builds on the LLaMA-3.1-70B architecture and supports English. It follows a "thinks-before-it-answers" methodology: each generation systematically presents a reasoning process followed by a final response.
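
Because the reasoning and the answer appear in the same generated text, it is often useful to separate them. A minimal sketch follows; the "## Final Response" delimiter is an assumption about the output format, so verify it against an actual generation:

    # Split a completion into its reasoning and final answer.
    # The "## Final Response" marker is an assumed delimiter; check a
    # real model output to confirm the exact format.
    def split_response(text: str) -> tuple[str, str]:
        marker = "## Final Response"
        if marker in text:
            thinking, answer = text.split(marker, 1)
            return thinking.strip(), answer.strip()
        return "", text.strip()  # no marker found: treat it all as the answer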

Training

The model was trained on FreedomIntelligence datasets, namely medical-o1-reasoning-SFT and medical-o1-verifiable-problem. The verifiable problems allow the model's reasoning to be checked against known answers, so it can be refined and validated during training.
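
Both datasets are published on the Hugging Face Hub and can be inspected with the datasets library. A minimal sketch; the "en" configuration name is an assumption, so check the dataset card for the available configs:

    from datasets import load_dataset
    
    # Load the SFT reasoning data ("en" config name is an assumption).
    sft = load_dataset("FreedomIntelligence/medical-o1-reasoning-SFT", "en")
    print(sft["train"][0])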

Guide: Running Locally

  1. Environment Setup: Ensure Python and the necessary libraries are installed, such as transformers, torch, and accelerate (required for device_map="auto").

  2. Model Deployment: Use the following code to load the model and run inference:

    from transformers import AutoModelForCausalLM, AutoTokenizer
    
    # Load the checkpoint across available GPUs; "auto" keeps the checkpoint dtype.
    model = AutoModelForCausalLM.from_pretrained(
        "FreedomIntelligence/HuatuoGPT-o1-70B",
        torch_dtype="auto",
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/HuatuoGPT-o1-70B")
    
    # Wrap the question in the chat format the model was trained on.
    input_text = "How to stop a cough?"
    messages = [{"role": "user", "content": input_text}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    
    # Generate; the model emits its reasoning before the final answer.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=2048)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    
  3. Cloud GPUs: In 16-bit precision the 70B weights alone occupy roughly 140 GB, so consider cloud-based GPU services from providers like AWS, Google Cloud, or Azure for inference; a quantized-loading sketch that reduces this footprint follows this list.
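
If full-precision weights do not fit on the available hardware, 4-bit quantization via bitsandbytes can shrink the footprint to roughly a quarter. A minimal sketch; the quantization settings are illustrative, not part of the official instructions:

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    
    # Illustrative 4-bit NF4 quantization: cuts ~140 GB of 16-bit weights
    # to roughly 35-40 GB, at some cost in output quality.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        "FreedomIntelligence/HuatuoGPT-o1-70B",
        quantization_config=bnb_config,
        device_map="auto",
    )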

License

HuatuoGPT-o1-70B is released under the Apache-2.0 License, permitting use, distribution, and modification.
