...Model explainability for Diffusers: get explanations for the images you generate. diffusers-interpret installs directly from PyPI. To analyze how each token in the input prompt influenced the generation, you can inspect the token attribution scores, and for image-to-image pipelines the pixel attributions of the input image can be visualized as a saliency map. diffusers-interpret can also restrict these token/pixel attributions to the generation of a particular part of the image, and it lets you check every image the diffusion process produced at the end of each step. Gradient checkpointing further reduces GPU memory usage, at the cost of somewhat slower computation. The sketches below walk through each of these features.
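A minimal end-to-end sketch of generating an image through the explainer and reading the token attributions. The class and attribute names (`StableDiffusionPipelineExplainer`, `output.token_attributions`, `output.normalized_token_attributions`) follow the project's README at the time of writing; treat them as assumptions if your installed version differs.

```python
# Install from PyPI first: pip install diffusers-interpret
import torch
from diffusers import StableDiffusionPipeline
from diffusers_interpret import StableDiffusionPipelineExplainer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a Stable Diffusion pipeline and wrap it with the explainer
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(device)
explainer = StableDiffusionPipelineExplainer(pipe)

# Calling the explainer runs the pipeline and collects attribution data
prompt = "A cute corgi with the Eiffel Tower in the background"
output = explainer(prompt, num_inference_steps=15)

output.image                                 # the final generated image
print(output.token_attributions)             # per-token contribution scores
print(output.normalized_token_attributions)  # same scores, normalized to percentages
```

Because attributions require gradients through the generation, expect a call through the explainer to be slower and more memory-hungry than a plain pipeline call.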
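To compute attributions for a particular part of the image rather than the whole output, the README documents a bounding-box argument; the name `explanation_2d_bounding_box`, the coordinate convention, and the example box below are taken from it and may differ in other versions.

```python
# Restrict token/pixel attributions to a rectangular region of the output,
# given as the (upper-left, lower-right) pixel corners of the box
output = explainer(
    prompt,
    num_inference_steps=15,
    explanation_2d_bounding_box=((70, 180), (400, 435)),
)
print(output.normalized_token_attributions)  # scores now explain only the boxed region
```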
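For image-to-image pipelines, the input image's pixel attributions can be rendered as a saliency map. A sketch, assuming the img2img explainer mirrors the text-to-image one: `StableDiffusionImg2ImgPipelineExplainer`, `pixel_attributions`, and `input_saliency_map` follow the README, and `init_image` matches the diffusers img2img signature of the same era.

```python
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers_interpret import StableDiffusionImg2ImgPipelineExplainer

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(device)
explainer = StableDiffusionImg2ImgPipelineExplainer(pipe)

init_image = Image.open("input.png").convert("RGB").resize((448, 448))
output = explainer("A fantasy landscape", init_image=init_image)

print(output.pixel_attributions)  # contribution of each input pixel
output.input_saliency_map.show()  # visualize the attributions as a saliency map
```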
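The image produced at the end of every diffusion step is also kept, so you can replay the whole denoising trajectory. The flag and attribute names below (`get_images_for_all_inference_steps`, `all_images_during_generation`) are from the README; treat them as assumptions for your version.

```python
# Keep the image produced at the end of each diffusion step
output = explainer(
    prompt,
    num_inference_steps=15,
    get_images_for_all_inference_steps=True,
)

# Step through the images from every denoising step
output.all_images_during_generation.show()
```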
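Finally, a sketch of enabling gradient checkpointing, which recomputes intermediate activations during the backward pass instead of storing them, so attribution fits in less GPU memory but each call takes longer. The `gradient_checkpointing` constructor flag follows the README.

```python
# Trade compute for memory: activations are recomputed in the backward pass
# rather than stored, so attribution needs less GPU memory but runs slower
explainer = StableDiffusionPipelineExplainer(pipe, gradient_checkpointing=True)
```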