MediaPipe Face Detection
Introduction
MediaPipe-Face-Detection is a real-time face detection model optimized for mobile deployment, capable of detecting faces and their features in video and image streams. The model is designed for sub-millisecond processing and is available in various formats including PyTorch, TF Lite, and ONNX.
Architecture
- Model Type: Object detection
- Input Resolution: 256x256
- Output Classes: 6 (e.g., left eye, right eye, etc.)
- MediaPipeFaceDetector:
  - Parameters: 135K
  - Size: 565 KB
- MediaPipeFaceLandmarkDetector:
  - Parameters: 603K
  - Size: 2.34 MB
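As a rough sanity check on the figures above, the parameter counts and file sizes are consistent with fp32 weights. This is a sketch assuming 4 bytes per parameter; real serialized files are slightly larger because they also carry graph structure and metadata.

```python
# Back-of-envelope check: parameter count -> approximate fp32 weight size.
# Assumes 4 bytes per parameter; the stated sizes exceed this lower bound
# because serialized models also include graph structure and metadata.

def approx_size_kb(num_params: int, bytes_per_param: int = 4) -> float:
    """Approximate serialized weight size in KB for fp32 parameters."""
    return num_params * bytes_per_param / 1024

detector_kb = approx_size_kb(135_000)  # MediaPipeFaceDetector: 135K params
landmark_kb = approx_size_kb(603_000)  # MediaPipeFaceLandmarkDetector: 603K params

print(f"detector ~{detector_kb:.0f} KB (stated: 565 KB)")
print(f"landmark ~{landmark_kb / 1024:.2f} MB (stated: 2.34 MB)")
```

Both estimates land just below the stated sizes, as expected for fp32 weights plus metadata.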
Performance
The model is an implementation of MediaPipe-Face-Detection optimized to run efficiently across multiple Qualcomm® devices. It supports a range of chipsets and runtimes, with on-device inference times as low as 0.123 ms and minimal memory usage.
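To put the 0.123 ms figure in perspective, it can be converted into a theoretical frame-rate ceiling. This is a sketch only; an end-to-end pipeline adds capture, preprocessing, and postprocessing overhead, so real throughput is lower.

```python
# Convert a per-inference latency into a theoretical frame-rate ceiling.
# 0.123 ms is the best-case on-device figure quoted above; end-to-end
# throughput will be lower once capture and postprocessing are included.

latency_ms = 0.123
max_fps = 1000 / latency_ms  # inferences per second if the model ran back-to-back

print(f"theoretical ceiling: ~{max_fps:.0f} inferences/s")
```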
Guide: Running Locally
- Installation:
  - Install the model via pip:

    pip install qai-hub-models
- Configuration:
  - Sign in to Qualcomm® AI Hub and obtain an API token.
  - Configure your environment:

    qai-hub configure --api_token API_TOKEN
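The configuration step can also be scripted. A minimal sketch, assuming the token is kept in an environment variable; `QAI_HUB_API_TOKEN` is a name chosen for this example, not a convention the CLI requires.

```python
import os
import subprocess

# Read the API token from the environment rather than hard-coding it.
# QAI_HUB_API_TOKEN is an arbitrary name chosen for this sketch.
token = os.environ.get("QAI_HUB_API_TOKEN", "API_TOKEN")

# Same invocation as the manual step: qai-hub configure --api_token <token>
cmd = ["qai-hub", "configure", "--api_token", token]

# subprocess.run(cmd, check=True)  # uncomment once qai-hub-models is installed
print(" ".join(cmd[:3]), "<redacted>")
```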
- Running the Demo:
  - Execute the model demo locally:

    python -m qai_hub_models.models.mediapipe_face.demo
- Cloud Execution:
  - Use Qualcomm® cloud-hosted devices for performance and accuracy checks. Run the export script for deployment compatibility.
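The accuracy half of that check amounts to comparing on-device outputs against a local reference run. A minimal illustration in pure Python, assuming both outputs are available as flat lists of floats (the actual export tooling operates on tensors; the values below are illustrative only).

```python
# Compare a cloud/on-device output against a local reference run by
# computing the maximum absolute elementwise difference.

def max_abs_diff(reference, on_device):
    """Largest elementwise deviation between two equal-length float sequences."""
    if len(reference) != len(on_device):
        raise ValueError("output shapes differ")
    return max(abs(r - d) for r, d in zip(reference, on_device))

# Illustrative values only, not real model outputs.
local = [0.91, 0.10, 0.43]
device = [0.90, 0.11, 0.43]

tolerance = 0.05  # arbitrary threshold for this sketch
print("within tolerance:", max_abs_diff(local, device) <= tolerance)
```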
- Deploying to Android:
  - Deploy using TensorFlow Lite or QNN runtimes. Follow the specific Android deployment guide for each format.
Suggested cloud hardware: Use Qualcomm® AI Hub for cloud-hosted device execution and profiling.
License
- MediaPipe-Face-Detection: Licensed under Apache-2.0.
- Compiled Assets: Proprietary license for on-device deployment.
- For more details, refer to the original implementation license and Qualcomm® AI Hub license.