ccip
by deepghs

Introduction
CCIP (Contrastive Anime Character Image Pre-Training) is a model designed to calculate the visual similarity between anime characters in two images, specifically for images containing a single anime character. The model provides a higher score for more visually similar characters.
Architecture
The model leverages a contrastive learning approach to differentiate the visual features of anime characters. It utilizes datasets such as deepghs/character_similarity and deepghs/character_index, and is evaluated using metrics such as F1 score and adjusted Rand score.
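The adjusted Rand score measures how well a predicted clustering of images agrees with the ground-truth character identities, corrected for chance. As a self-contained illustration of the metric itself (not tied to any CCIP code), it can be computed from the pair-counting contingency table:

```python
from collections import Counter
from math import comb

def adjusted_rand_score(labels_true, labels_pred):
    """Adjusted Rand index via the pair-counting contingency table."""
    n = len(labels_true)
    contingency = Counter(zip(labels_true, labels_pred))
    a = Counter(labels_true)   # row sums (true cluster sizes)
    b = Counter(labels_pred)   # column sums (predicted cluster sizes)
    sum_comb = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_comb - expected) / (max_index - expected)

# Perfect agreement (up to label renaming) scores 1.0
print(adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0]))  # → 1.0
```

Because the score is chance-corrected, random labelings hover around 0 and adversarial splits can go negative, which makes it a stricter check than raw pair accuracy.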
Training
The performance of the CCIP model is evaluated across several configurations, each achieving different levels of F1 score, precision, and recall. A key aspect of training involves tuning the parameters of clustering algorithms such as DBSCAN and OPTICS to obtain well-separated character clusters. The training results show varying scores across model sizes and configurations.
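As an illustration of the clustering step (not the actual training code): DBSCAN with min_samples=1 over a precomputed difference matrix reduces to grouping images whose pairwise difference stays below eps, i.e. connected components. A minimal pure-Python sketch with hypothetical difference values:

```python
def cluster_by_difference(diff_matrix, eps):
    """Group indices whose pairwise difference is below eps
    (connected components; equivalent to DBSCAN with min_samples=1
    on a precomputed distance matrix)."""
    n = len(diff_matrix)
    labels = [-1] * n
    cluster = 0
    for start in range(n):
        if labels[start] != -1:
            continue
        stack = [start]
        while stack:
            i = stack.pop()
            if labels[i] != -1:
                continue
            labels[i] = cluster
            stack.extend(j for j in range(n)
                         if labels[j] == -1 and diff_matrix[i][j] < eps)
        cluster += 1
    return labels

# Hypothetical 4x4 difference matrix: images 0/1 match, 2/3 match.
diffs = [
    [0.00, 0.10, 0.80, 0.85],
    [0.10, 0.00, 0.82, 0.79],
    [0.80, 0.82, 0.00, 0.12],
    [0.85, 0.79, 0.12, 0.00],
]
print(cluster_by_difference(diffs, eps=0.5))  # → [0, 0, 1, 1]
```

Tuning eps (and, in real DBSCAN/OPTICS, min_samples) is exactly the trade-off the training evaluation measures: too small fragments one character into many clusters, too large merges distinct characters.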
Guide: Running Locally
- Install Dependencies: Follow the installation guide for imgutils.
- Prepare Data: Ensure you have image files ready for comparison.
- Run Model: Use the following code snippet to compute pairwise character differences:

  from imgutils.metrics import ccip_batch_differences

  ccip_batch_differences(['ccip/1.jpg', 'ccip/2.jpg', 'ccip/6.jpg', 'ccip/7.jpg'])
- View Results: The output is a pairwise difference matrix, where lower values indicate more visually similar characters.
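To turn the difference matrix into same-character decisions, compare each entry against a threshold (imgutils ships a tuned default per model; the value below is a hypothetical stand-in for illustration only):

```python
def same_character_matrix(diff_matrix, threshold):
    """Convert a pairwise difference matrix into booleans:
    True where two images likely show the same character."""
    return [[d < threshold for d in row] for row in diff_matrix]

# Hypothetical differences for 3 images; images 0 and 1 share a character.
diffs = [
    [0.00, 0.12, 0.77],
    [0.12, 0.00, 0.81],
    [0.77, 0.81, 0.00],
]
# Hypothetical threshold -- in practice, use the library's tuned default.
print(same_character_matrix(diffs, threshold=0.5))
```

The diagonal is always True (each image trivially matches itself), and the matrix is symmetric, so only the upper triangle carries new information.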
Note: For optimal performance, consider using cloud GPUs from providers like AWS, Google Cloud, or Azure.
License
The CCIP model is released under the OpenRAIL license, ensuring open access to the model and its usage.