Florence-2 Base
Introduction
Florence-2 is an advanced vision foundation model developed by Microsoft, designed to handle a variety of vision and vision-language tasks using a prompt-based approach. It excels in tasks such as captioning, object detection, and segmentation, utilizing the extensive FLD-5B dataset comprising 5.4 billion annotations across 126 million images. The model's sequence-to-sequence architecture makes it proficient in both zero-shot and fine-tuned settings.
Architecture
Florence-2 employs a sequence-to-sequence architecture, allowing it to process and generate visual and textual data. It is part of Microsoft's initiative to create a unified representation for multiple vision tasks. The model supports various operations by interpreting simple text prompts, enhancing its versatility in handling diverse vision tasks.
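The prompt-based interface is concrete: a short task token at the start of the prompt selects the task and its output format. As a quick reference, here is a minimal sketch listing task tokens documented on the model card (the dictionary itself is illustrative, not part of the API):
# Task tokens documented for Florence-2. For most tasks the token alone is the
# full prompt; grounding-style tasks append free-form text after the token.
TASK_TOKENS = {
    "caption": "<CAPTION>",
    "detailed caption": "<DETAILED_CAPTION>",
    "object detection": "<OD>",
    "dense region caption": "<DENSE_REGION_CAPTION>",
    "OCR": "<OCR>",
    "OCR with boxes": "<OCR_WITH_REGION>",
    "phrase grounding": "<CAPTION_TO_PHRASE_GROUNDING>",  # expects extra text, e.g. "a green car"
}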
Training
Florence-2 models are pretrained on the FLD-5B dataset. The pretrained checkpoints work zero-shot and can be fine-tuned for specific downstream tasks. Base and large variants are available, each distributed with float16 weights to reduce memory footprint and improve computational efficiency.
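For orientation, Microsoft publishes both the pretrained checkpoints and fine-tuned (-ft) counterparts on Hugging Face. A minimal loading sketch, assuming float16 on GPU (the CHECKPOINTS list is illustrative):
import torch
from transformers import AutoModelForCausalLM

# Checkpoints published by Microsoft: pretrained (base/large) and
# task-fine-tuned (-ft) variants.
CHECKPOINTS = [
    "microsoft/Florence-2-base",
    "microsoft/Florence-2-large",
    "microsoft/Florence-2-base-ft",
    "microsoft/Florence-2-large-ft",
]

model = AutoModelForCausalLM.from_pretrained(
    CHECKPOINTS[0],
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    trust_remote_code=True,
)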
Guide: Running Locally
Installation and Setup:
- Ensure you have Python and pip installed.
- Install the required libraries (the example below also uses requests; the model's remote code additionally depends on einops and timm):
pip install transformers torch Pillow requests einops timm
Load the Model:
- Use the following Python code to load and run the model:
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-base", torch_dtype=torch_dtype, trust_remote_code=True
).to(device)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-base", trust_remote_code=True)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)

prompt = "<OD>"  # object detection task token
inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype)

generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    do_sample=False,
    num_beams=3,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(
    generated_text, task="<OD>", image_size=(image.width, image.height)
)
print(parsed_answer)
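Switching tasks only requires swapping the task token. A minimal sketch reusing model, processor, image, device, and torch_dtype from the snippet above (run_task is a hypothetical helper, not part of the library; the task tokens are the ones documented on the model card):
def run_task(task_token, text_input=None):
    # Grounding-style tasks take extra free-form text after the task token.
    prompt = task_token if text_input is None else task_token + text_input
    inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype)
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
        do_sample=False,
        num_beams=3,
    )
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    return processor.post_process_generation(
        generated_text, task=task_token, image_size=(image.width, image.height)
    )

print(run_task("<CAPTION>"))
print(run_task("<CAPTION_TO_PHRASE_GROUNDING>", "a green car"))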
Run on Cloud GPUs:
- For optimal performance, consider running the model on cloud-based GPUs such as those offered by AWS, Google Cloud, or Azure.
License
The Florence-2 model is released under the MIT License. For details, see the license in the Microsoft model repository on Hugging Face.