This package implements interpretability methods for black-box models, with a focus on local explanations and attribution maps in input space. It is comparable to Captum and Zennit for PyTorch and to iNNvestigate for Keras models. Most of the implemented methods only require the model to be differentiable with Zygote; Layerwise Relevance Propagation (LRP) is additionally implemented for use with Flux.jl models.
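As a minimal sketch of the LRP workflow described above, assuming the package's documented `LRP` analyzer and `analyze` function (the model and input here are illustrative toy values, not part of the original text):

```julia
using ExplainableAI
using Flux

# Toy fully connected Flux Chain; LRP in this package expects Chain-based models.
model = Chain(Dense(784, 100, relu), Dense(100, 10))

# A single input sample (features × batch).
input = rand(Float32, 784, 1)

# Layerwise Relevance Propagation with default rules.
analyzer = LRP(model)
expl = analyze(input, analyzer)

# `expl.val` holds the relevance attributions, with the same shape as the input.
@assert size(expl.val) == size(input)
```

Other analyzers follow the same two-step pattern: construct an analyzer from the model, then call `analyze` on an input.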
Features
- Explainable AI in Julia
- Supports Julia ≥ 1.6; install from the Julia REPL with `using Pkg; Pkg.add("ExplainableAI")`
- Examples available
- Documentation available
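The Zygote-based differentiability requirement above means gradient-style attribution works on any differentiable model, not just Chains. A minimal sketch, assuming the package's `Gradient` analyzer (the model and input are illustrative):

```julia
using ExplainableAI
using Flux

# Any model differentiable with Zygote will do; a small Chain keeps this runnable.
model = Chain(Dense(10, 5, relu), Dense(5, 2))
input = rand(Float32, 10, 1)

# Plain input-gradient attribution.
analyzer = Gradient(model)
expl = analyze(input, analyzer)

# Attributions live in input space: one score per input feature.
@assert size(expl.val) == size(input)
```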
Categories: Data Visualization
License: MIT License