FashionCLIP is a domain-adapted CLIP model fine-tuned for the fashion industry, enabling zero-shot classification and retrieval of fashion products. Developed by Patrick John Chia and collaborators, it builds on the CLIP ViT-B/32 architecture and was trained on over 800K image-text pairs from the Farfetch dataset. The model aligns product images with descriptive text via contrastive learning, which lets it perform well across fashion-related tasks without task-specific supervision. FashionCLIP 2.0, the latest version, starts from the laion/CLIP-ViT-B-32-laion2B-s34B-b79K checkpoint and achieves higher F1 scores across multiple benchmarks than earlier versions. It was trained on English captions and works best with clean, product-style images on white backgrounds. Typical applications include product search, recommendation systems, and visual tagging in e-commerce platforms.
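At inference time, the contrastive alignment described above reduces to cosine similarity between L2-normalized image and text embeddings, with a softmax over candidate labels for zero-shot classification. A minimal NumPy sketch of that scoring step (the random embeddings here are stand-ins for what the actual model would produce, and 512 matches ViT-B/32's embedding width):

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs, temperature=100.0):
    """Score one image embedding against a set of label embeddings.

    Mirrors CLIP-style scoring: L2-normalize both sides, take dot
    products (cosine similarity), scale by a temperature, and apply
    a numerically stable softmax to get per-label probabilities.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (txt @ img)       # one logit per candidate label
    exp = np.exp(logits - logits.max())      # subtract max for stability
    return exp / exp.sum()

rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)             # stand-in for a product image embedding
text_embs = rng.normal(size=(3, 512))        # e.g. "a dress", "a shoe", "a handbag"
probs = zero_shot_scores(image_emb, text_embs)
print(probs.argmax(), probs.sum())
```

In the real pipeline, the embeddings would come from the model's image and text encoders; the scoring step itself is exactly this normalize-dot-softmax computation.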

Features

  • Fine-tuned on 800K+ fashion product image-text pairs
  • Built on CLIP ViT-B/32 architecture for vision-language alignment
  • Supports zero-shot fashion classification and retrieval
  • Improved accuracy in FashionCLIP 2.0 using a stronger pretrained checkpoint
  • Trained on English captions with fashion-specific descriptions
  • Optimized for clean, centered product images with white backgrounds
  • Outputs similarity scores for cross-modal (image-text) input pairs
  • Compatible with Hugging Face Transformers and ONNX for deployment
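The retrieval feature above amounts to ranking a catalog of precomputed image embeddings against a text-query embedding. A small sketch of that ranking step, again with random stand-in embeddings in place of real encoder outputs:

```python
import numpy as np

def rank_products(query_emb, product_embs, top_k=3):
    """Rank catalog items by cosine similarity to a text-query embedding.

    Returns (index, similarity) pairs, best match first.
    """
    q = query_emb / np.linalg.norm(query_emb)
    p = product_embs / np.linalg.norm(product_embs, axis=1, keepdims=True)
    sims = p @ q                             # cosine similarity per catalog item
    order = np.argsort(-sims)[:top_k]        # indices of the top_k highest scores
    return list(zip(order.tolist(), sims[order].tolist()))

rng = np.random.default_rng(1)
catalog = rng.normal(size=(100, 512))        # stand-in product image embeddings
query = rng.normal(size=512)                 # stand-in embedding of a text query
print(rank_products(query, catalog))
```

In production, the catalog embeddings would be computed once with the image encoder and cached (or stored in a vector index), so each search only needs one text-encoder forward pass plus this similarity ranking.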


Categories

AI Models


Additional Project Details

Registered

2025-07-02