BART-Large-MNLI is a fine-tuned version of Facebook's BART-Large model, trained on the Multi-Genre Natural Language Inference (MultiNLI) dataset for natural language understanding tasks. Leveraging a textual entailment formulation, it enables powerful zero-shot classification by comparing a given input (premise) against multiple candidate labels phrased as hypotheses. The model estimates how likely it is that the premise entails each hypothesis, effectively ranking or scoring labels by entailment probability. This approach lets users classify any sequence into user-defined categories without task-specific fine-tuning. The model supports both single-label and multi-label classification through Hugging Face’s pipeline("zero-shot-classification"). It is implemented in PyTorch, with additional support for JAX and Rust, and is available under the MIT license. With 407 million parameters, it offers strong performance across a range of general-purpose text classification tasks.
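A minimal usage sketch, assuming the transformers library is installed and the model is pulled from the Hugging Face Hub as facebook/bart-large-mnli; the example sentence and candidate labels are purely illustrative:

```python
# Single-label zero-shot classification via the Hugging Face pipeline.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

sequence = "one day I will see the world"          # input text (premise)
candidate_labels = ["travel", "cooking", "dancing"]  # user-defined categories

result = classifier(sequence, candidate_labels)
print(result["labels"])  # labels sorted by score, best match first
print(result["scores"])  # scores normalized to sum to 1 across labels
```

Under the hood, the pipeline turns each label into a hypothesis (by default using the template "This example is {}.") and scores it against the input with the NLI model, so no task-specific training data is needed.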
Features
- Zero-shot classification using natural language inference
- Based on BART-Large pretrained transformer architecture
- Trained on the MultiNLI dataset
- Supports both single-label and multi-label classification (see the multi-label sketch after this list)
- Simple integration with Hugging Face pipelines
- Fine-tuned for entailment-based label matching
- Compatible with PyTorch, JAX, and Rust
- MIT-licensed and widely used for fast deployment
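To illustrate the multi-label mode noted above, here is a minimal sketch under the same assumptions (transformers installed, model fetched as facebook/bart-large-mnli); the example text and labels are illustrative. With multi_label=True, each label is scored independently against the input, so several labels can score highly at once and the scores no longer sum to one.

```python
# Multi-label zero-shot classification: labels are scored independently.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

sequence = "The new phone has a great camera but poor battery life"
candidate_labels = ["camera quality", "battery life", "price", "screen"]

result = classifier(sequence, candidate_labels, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")  # each score is an independent probability
```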