Qwen2-VL-72B-Instruct-GPTQ-Int4
Introduction
Qwen2-VL is the latest iteration of the Qwen-VL model series, designed to offer state-of-the-art performance in visual understanding and multimodal processing. It supports various languages and can handle complex reasoning tasks, making it suitable for integration into devices like mobile phones and robots.
Architecture
Qwen2-VL introduces several architectural updates:
- Naive Dynamic Resolution: Supports arbitrary image resolutions by mapping them into a dynamic number of visual tokens.
- Multimodal Rotary Position Embedding (M-ROPE): Captures 1D textual, 2D visual, and 3D video positional information to enhance multimodal processing capabilities.
The model is available in three sizes: 2, 7, and 72 billion parameters, with the current repository containing the quantized 72B version.
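In practice, the dynamic-resolution behaviour can be bounded by giving the processor a pixel budget per image. The snippet below is a minimal sketch; the min_pixels/max_pixels values are illustrative choices, not requirements, and should be tuned to your memory budget.

```python
from transformers import AutoProcessor

# Cap the number of visual tokens per image by bounding its pixel budget.
# Each visual token corresponds to a 28x28 pixel patch, so the range below
# allows roughly 256 to 1280 tokens per image (illustrative values).
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int4",
    min_pixels=min_pixels,
    max_pixels=max_pixels,
)
```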
Training
Qwen2-VL achieves state-of-the-art results on visual-understanding benchmarks such as MathVista and DocVQA. It can understand videos over 20 minutes long and supports multilingual text understanding within images. Quantization techniques such as GPTQ-Int4 and GPTQ-Int8 are used to reduce memory and compute requirements at inference time.
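A quantized checkpoint can be loaded directly with transformers, since the GPTQ-Int4 weights are already baked into the repository. The sketch below assumes a recent transformers release with Qwen2-VL support and that the GPTQ runtime dependencies (e.g. optimum and auto-gptq) are installed.

```python
from transformers import Qwen2VLForConditionalGeneration

# The GPTQ-Int4 weights are pre-quantized, so no extra quantization config
# is needed at load time; device_map="auto" spreads layers across GPUs.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int4",
    torch_dtype="auto",
    device_map="auto",
)

# Rough check of how much memory the quantized weights occupy.
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.1f} GB")
```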
Guide: Running Locally
To run Qwen2-VL locally, follow these steps:
- Install Dependencies: Ensure you have the latest version of `transformers` by running `pip install git+https://github.com/huggingface/transformers`.
- Install Utility: Install the `qwen-vl-utils` toolkit with `pip install qwen-vl-utils`.
- Load the Model: Initialize the model and processor in Python (see the sketch after this list).
- Inference: Prepare input messages and perform inference using the model's `generate` method.
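As a reference for the last two steps, here is a minimal end-to-end sketch following the usual transformers + qwen-vl-utils workflow. The image URL, prompt, and generation settings are placeholders for illustration.

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the quantized model and its processor.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int4", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int4")

# A single user turn with one image and a text prompt (the URL is a placeholder).
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build the chat-formatted prompt and extract the vision inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate, then strip the prompt tokens before decoding the answer.
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```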
For optimal performance, especially when dealing with large models or batch processing, it is recommended to use cloud GPUs such as NVIDIA A100.
License
The Qwen2-VL model is licensed under the "tongyi-qianwen" license. For more details, refer to the license file.