While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on ImageNet classification have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new dataset of human perceptual similarity judgments. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins on our dataset.

Features

  • Learned Perceptual Image Patch Similarity (LPIPS) metric
  • Berkeley-Adobe Perceptual Patch Similarity (BAPPS) dataset
  • Usable as a "perceptual loss" during training
  • Evaluates the distance between image patches
  • Example scripts to compute the distance between two specific images
  • Iterative optimization using the metric
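The core idea behind the LPIPS metric is to compare two patches in deep feature space: activations from several network layers are unit-normalized along the channel dimension, scaled by learned per-channel weights, and the squared differences are averaged spatially and summed across layers. The following is a minimal NumPy sketch of that computation, using stand-in feature maps rather than real VGG/AlexNet activations; the function name and uniform default weights are illustrative, not part of the released package.

```python
import numpy as np

def lpips_style_distance(feats0, feats1, weights=None):
    """Illustrative LPIPS-style distance between two stacks of feature maps.

    feats0, feats1: lists of arrays shaped (C, H, W), one per network layer
                    (stand-ins for real pretrained-network activations).
    weights: optional list of per-layer arrays shaped (C,) of learned channel
             weights; uniform weights are assumed if omitted.
    """
    total = 0.0
    for l, (f0, f1) in enumerate(zip(feats0, feats1)):
        # Unit-normalize each spatial feature vector along the channel axis.
        n0 = f0 / (np.linalg.norm(f0, axis=0, keepdims=True) + 1e-10)
        n1 = f1 / (np.linalg.norm(f1, axis=0, keepdims=True) + 1e-10)
        w = np.ones(f0.shape[0]) if weights is None else weights[l]
        # Per-channel weighted squared difference, averaged over positions.
        diff = (w[:, None, None] * (n0 - n1)) ** 2
        total += diff.sum(axis=0).mean()
    return total
```

The released `lpips` Python package computes the same quantity from real network features; typical usage is `loss_fn = lpips.LPIPS(net='alex')` followed by `d = loss_fn(img0, img1)`, with image tensors scaled to the range [-1, 1].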

License

BSD License



Additional Project Details

Programming Language: Python
Related Categories: Python Machine Learning Software, Python Deep Learning Frameworks
Registered: 2022-08-08