Deep Fake Detector Model
Introduction
The Deep-Fake-Detector-Model, developed by prithivMLmods, is a machine learning model for image classification that detects deep fake images. It is built on the transformers library and implemented in PyTorch, with its weights distributed in the safetensors format for safe, fast loading.
Architecture
The model uses the Vision Transformer (ViT) architecture, which is well established for image classification tasks. This architecture lets the model analyze the subtle, fine-grained patterns that differentiate real images from fake ones.
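A ViT works by splitting an image into fixed-size patches and treating each patch as a token. The arithmetic below assumes standard ViT-Base defaults (224×224 input, 16×16 patches); the model card does not state this model's exact configuration, so treat these numbers as an illustration:

```python
# Patch-embedding arithmetic for a standard ViT.
# Assumed ViT-Base defaults (224x224 input, 16x16 patches) -- not
# confirmed by the model card, shown only to illustrate the architecture.
image_size = 224
patch_size = 16

patches_per_side = image_size // patch_size   # 14 patches along each axis
num_patches = patches_per_side ** 2           # 196 patch tokens per image
seq_len = num_patches + 1                     # +1 for the [CLS] token

print(num_patches, seq_len)  # 196 197
```

The classification head reads the final [CLS] token representation, which is what ultimately separates "real" from "fake".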
Training
The model's classification report on its evaluation dataset shows precision and recall of 0.9935 each. The dataset is nearly evenly split between the two classes, with 4,761 real and 4,760 fake samples. Overall accuracy on this dataset is 99.35%, indicating strong reliability in distinguishing deep fakes from real images.
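These figures follow from the standard metric definitions. The sketch below uses a toy confusion matrix chosen to be consistent with the quoted numbers; the actual per-class error breakdown is not published in the model card:

```python
# Toy confusion-matrix counts (illustrative only -- the real error
# breakdown is not published). "Fake" is treated as the positive class,
# and the totals match the dataset sizes above (4,760 fake / 4,761 real).
tp, fn = 4729, 31   # fake images: correctly / incorrectly classified
tn, fp = 4730, 31   # real images: correctly / incorrectly classified

precision = tp / (tp + fp)
recall    = tp / (tp + fn)
accuracy  = (tp + tn) / (tp + fp + fn + tn)

print(round(precision, 4), round(recall, 4), round(accuracy, 4))
# 0.9935 0.9935 0.9935
```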
Guide: Running Locally
To run the Deep-Fake-Detector-Model locally, follow these steps:
- Clone the Repository: Get the model files from the Hugging Face repository.
- Set Up Environment: Install the necessary libraries, such as PyTorch and transformers.
- Load the Model: Use the transformers library to load the model for inference.
- Prepare Data: Ensure your images are preprocessed according to the input requirements of the ViT model.
- Run Inference: Pass the images through the model to get predictions.
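The steps above can be sketched as follows. The Hugging Face repo id is inferred from the developer and model names and may differ; the model-loading calls are commented out so the post-processing logic runs without downloading weights, and the label names are an assumption:

```python
import torch

# Hypothetical repo id, inferred from the model card -- verify the actual
# path on Hugging Face before use.
MODEL_ID = "prithivMLmods/Deep-Fake-Detector-Model"

# Real loading/inference flow (commented out to avoid a network download):
# from transformers import AutoImageProcessor, AutoModelForImageClassification
# processor = AutoImageProcessor.from_pretrained(MODEL_ID)
# model = AutoModelForImageClassification.from_pretrained(MODEL_ID)
# inputs = processor(images=image, return_tensors="pt")
# with torch.no_grad():
#     logits = model(**inputs).logits

def predict_label(logits: torch.Tensor, id2label: dict) -> str:
    """Map raw classifier logits to a label via softmax + argmax."""
    probs = torch.softmax(logits, dim=-1)
    return id2label[int(probs.argmax(dim=-1))]

# Dummy logits standing in for a real forward pass; label names assumed.
dummy_logits = torch.tensor([[2.3, -1.1]])
print(predict_label(dummy_logits, {0: "Real", 1: "Fake"}))  # Real
```

With the commented lines enabled, `logits` from the real forward pass would replace `dummy_logits`, and `model.config.id2label` would supply the actual label mapping.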
For optimal performance, cloud GPUs such as those offered by AWS, Google Cloud, or Azure are recommended, especially for large datasets or latency-sensitive inference.
License
The Deep-Fake-Detector-Model is licensed under the CreativeML OpenRAIL-M license, allowing for open and flexible usage while protecting the rights of creators and users.