The Learning Interpretability Tool (LIT, formerly known as the Language Interpretability Tool) is a visual, interactive ML model-understanding tool that supports text, image, and tabular data. It can be run as a standalone server or inside notebook environments such as Colab, Jupyter, and Google Cloud Vertex AI notebooks.
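
As a rough sketch (not an official quick-start), launching LIT from a notebook looks roughly like the following. The `models` and `datasets` dicts are placeholders for LIT Model and Dataset wrappers you have already built, and the height value is arbitrary:

    # Sketch: launch LIT inside a notebook cell (Colab, Jupyter, or Vertex AI).
    # Assumes the lit-nlp package is installed, and that `models` and `datasets`
    # are placeholder dicts mapping names to LIT Model and Dataset wrappers.
    from lit_nlp import notebook

    widget = notebook.LitWidget(models, datasets)
    widget.render(height=600)  # embeds the LIT UI in the notebook output cell

    # Or serve the same models as a standalone server instead of a widget:
    # from lit_nlp import dev_server, server_flags
    # dev_server.Server(models, datasets, **server_flags.get_flags()).serve()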

Features

  • Documentation available
  • Local explanations via salience maps and rich visualization of model predictions
  • Aggregate analysis including custom metrics, slicing and binning, and visualization of embedding spaces
  • Counterfactual generation via manual edits or generator plug-ins to dynamically create and evaluate new examples
  • Side-by-side mode to compare two or more models, or one model on a pair of examples
  • Framework-agnostic and compatible with TensorFlow, PyTorch, and more (see the wrapper sketch after this list)
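
Because LIT is framework-agnostic, models and data reach the tool through small Python wrapper classes rather than framework-specific hooks. The sketch below shows that pattern for a binary text classifier; the `classifier` callable, class names, and label set are placeholders, and exact base-class method names (predict vs. predict_minibatch) have varied between LIT releases:

    # Sketch of wrapping an arbitrary text classifier for LIT. `classifier`
    # stands for any callable (TF, PyTorch, scikit-learn, ...) that maps a
    # list of strings to a list of [negative, positive] probability pairs.
    from lit_nlp.api import dataset as lit_dataset
    from lit_nlp.api import model as lit_model
    from lit_nlp.api import types as lit_types

    LABELS = ["negative", "positive"]

    class MyDataset(lit_dataset.Dataset):
      """Holds labeled examples as a list of plain dicts."""

      def __init__(self, texts, labels):
        self._examples = [{"text": t, "label": l} for t, l in zip(texts, labels)]

      def spec(self):
        return {
            "text": lit_types.TextSegment(),
            "label": lit_types.CategoryLabel(vocab=LABELS),
        }

    class MyModel(lit_model.Model):
      """Adapts the placeholder classifier to LIT's input/output specs."""

      def __init__(self, classifier):
        self._classifier = classifier

      def input_spec(self):
        return {"text": lit_types.TextSegment()}

      def output_spec(self):
        return {"probas": lit_types.MulticlassPreds(vocab=LABELS, parent="label")}

      def predict(self, inputs, **kw):  # older LIT releases use predict_minibatch
        texts = [ex["text"] for ex in inputs]
        return [{"probas": probs} for probs in self._classifier(texts)]

Instances of these wrappers are what go into the `models` and `datasets` dicts passed to the notebook widget or standalone server above.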

Categories

Machine Learning

License

Apache License 2.0

Additional Project Details

Operating Systems

Linux, Mac, Windows

Programming Language

TypeScript

Related Categories

TypeScript Machine Learning Software

Registered

2024-08-05