
Python Natural Language Processing (NLP) Tools


Browse free open source Python Natural Language Processing (NLP) Tools and projects below. Use the toggles on the left to filter open source Python Natural Language Processing (NLP) Tools by OS, license, language, programming language, and project status.

  • 1
    Detoxify

    Trained models & code to predict toxic comments

    Detoxify is a deep learning-based tool for detecting and filtering toxic language in online conversations, leveraging Transformer models for high accuracy.
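
    A minimal usage sketch, assuming the pip-installable detoxify package and the "original" pretrained checkpoint named in the project's README; adjust names if your version differs.

        from detoxify import Detoxify

        # "original" is one of the pretrained checkpoints; the README also lists
        # others such as "unbiased" and "multilingual".
        scores = Detoxify("original").predict("you are a wonderful person")
        print(scores)  # dict of per-category toxicity scores, e.g. {"toxicity": ...}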
  • 2
    Ecco

    Explain, analyze, and visualize NLP language models

    Ecco is an interpretability tool for transformers that helps visualize and analyze how language models generate text, making model behavior more transparent.
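
    A minimal sketch based on the project's README; function and argument names are assumptions if your Ecco version differs, and the model name is just an example.

        import ecco

        # Load a small causal LM with activation capture enabled.
        lm = ecco.from_pretrained("distilgpt2")
        output = lm.generate("The countries of the EU are:", generate=15, do_sample=True)
        output.saliency()  # renders token-importance visualizations in a notebook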
  • 3
    Emb-GAM

    An interpretable and efficient predictor using pre-trained models

    Deep learning models have achieved impressive prediction performance but often sacrifice interpretability, a critical consideration in high-stakes domains such as healthcare or policymaking. In contrast, generalized additive models (GAMs) can maintain interpretability but often suffer from poor prediction performance due to their inability to effectively capture feature interactions. In this work, we aim to bridge this gap by using pre-trained neural language models to extract embeddings for each input before learning a linear model in the embedding space. The final model (which we call Emb-GAM) is a transparent, linear function of its input features and feature interactions. Leveraging the language model allows Emb-GAM to learn far fewer linear coefficients, model larger interactions, and generalize well to novel inputs. Across a variety of natural-language-processing datasets, Emb-GAM achieves strong prediction performance without sacrificing interpretability.
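
    The recipe described above can be illustrated with a generic sketch (this is not the repository's own API): embed each input with a frozen pre-trained language model, then fit a linear classifier in the embedding space.

        import numpy as np
        import torch
        from sklearn.linear_model import LogisticRegression
        from transformers import AutoModel, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("bert-base-uncased")
        lm = AutoModel.from_pretrained("bert-base-uncased")

        def embed(texts):
            # Mean-pool the last hidden states of the frozen pre-trained LM.
            with torch.no_grad():
                enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
                return lm(**enc).last_hidden_state.mean(dim=1).numpy()

        X = embed(["an enjoyable, heartfelt film", "a dull and lifeless plot"])
        clf = LogisticRegression().fit(X, np.array([1, 0]))
        print(clf.predict(embed(["what a great film"])))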
  • 4

    Eng2BN CSV Translator

    Translate English to Bangla from CSV files, one row range at a time.

    Eng2BN CSV Translator is a user-friendly Python tool that enables efficient translation of English text to Bangla within CSV files. The application supports large datasets and allows users to translate specific row ranges, making it ideal for batch processing (see the sketch below).
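
    A rough sketch of the range-wise workflow described above; translate_en_to_bn() and the column names are hypothetical placeholders, not part of the actual tool.

        import pandas as pd

        def translate_en_to_bn(text):
            # Hypothetical placeholder: plug in a real English-to-Bangla
            # translation backend here; for now it returns the input unchanged.
            return text

        df = pd.read_csv("input.csv")
        start, end = 100, 200  # translate only this row range (batch processing)
        df.loc[start:end, "bangla"] = df.loc[start:end, "english"].apply(translate_en_to_bn)
        df.to_csv("output.csv", index=False)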
  • 5
    ExtractThinker

    ExtractThinker is a Document Intelligence library for LLMs

    ExtractThinker is a document intelligence library that uses LLMs to extract structured information from documents and other data sources, aiding in document processing and knowledge discovery.
  • 6
    FARM

    Fast & easy transfer learning for NLP

    FARM makes Transfer Learning with BERT & Co simple, fast and enterprise-ready. It's built upon transformers and provides additional features to simplify the life of developers: Parallelized preprocessing, highly modular design, multi-task learning, experiment tracking, easy debugging and close integration with AWS SageMaker. With FARM you can build fast proofs-of-concept for tasks like text classification, NER or question answering and transfer them easily into production. Easy fine-tuning of language models to your task and domain language. AMP optimizers (~35% faster) and parallel preprocessing (16 CPU cores => ~16x faster). Modular design of language models and prediction heads. Switch between heads or combine them for multitask learning. Full Compatibility with HuggingFace Transformers' models and model hub. Smooth upgrading to newer language models. Integration of custom datasets via Processor class. Powerful experiment tracking & execution.
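
    A minimal inference sketch, assuming FARM's Inferencer API as shown in the project's documentation; the model name is just an example checkpoint from the Hugging Face hub.

        from farm.infer import Inferencer

        model = Inferencer.load("distilbert-base-uncased-finetuned-sst-2-english",
                                task_type="text_classification")
        result = model.inference_from_dicts(dicts=[{"text": "FARM made this easy."}])
        print(result)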
  • 7
    FastRAG

    Efficient Retrieval Augmentation and Generation Framework

    fastRAG is a research framework for efficient and optimized retrieval augmented generative pipelines, incorporating state-of-the-art LLMs and Information Retrieval. fastRAG is designed to empower researchers and developers with a comprehensive tool set for advancing retrieval augmented generation.
  • 8
    GT NLP Class

    Course materials for Georgia Tech CS 4650 and 7650

    This repository contains lecture notes, slides, assignments, and code for a university-level Natural Language Processing course. It spans core NLP topics such as language modeling, sequence tagging, parsing, semantics, and discourse, alongside modern machine learning methods used to solve them. Students work through programming exercises and problem sets that build intuition for both classical algorithms (like HMMs and CRFs) and neural approaches (like word embeddings and sequence models). The materials emphasize theory grounded in practical experimentation, often via Python notebooks or scripts that visualize results and encourage ablation studies. Clear organization and self-contained examples make it possible to follow along outside the classroom, using the repo as a self-study resource. For learners and instructors alike, the course provides a coherent path from foundational linguistics to current techniques, with reproducible code that makes concepts concrete.
  • 9
    GluonNLP

    NLP made easy

    GluonNLP is a toolkit that helps you solve NLP problems. It provides easy-to-use tools that help you load text data, process it, and train models. To support both engineers and researchers, it provides command-line toolkits for downloading and processing NLP datasets. GluonNLP also makes it easy to evaluate and train word embeddings: there are examples for evaluating the pre-trained embeddings included in the toolkit as well as example scripts for training embeddings on custom datasets. fastText models trained with Facebook Research's library are exported in both a text and a binary format. Unlike the text format, the binary format preserves information about subword units and consequently supports computing word vectors for words unseen during training (and not included in the text format). Besides training new fastText embeddings with GluonNLP, it is also possible to load the binary format into a Block provided by the toolkit.
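
    A small sketch of loading pre-trained word embeddings, with names following the GluonNLP 0.x docs; treat them as assumptions if your version differs.

        import gluonnlp as nlp

        # Attach pre-trained GloVe vectors to a vocabulary built from token counts.
        glove = nlp.embedding.create("glove", source="glove.6B.50d")
        vocab = nlp.Vocab(nlp.data.count_tokens("nlp made easy".split()))
        vocab.set_embedding(glove)
        print(vocab.embedding["nlp"].shape)  # (50,)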
  • 10
    InferSent

    InferSent sentence embeddings

    InferSent is a supervised sentence embedding method that learns universal representations from Natural Language Inference data and transfers well to many downstream tasks. It uses a BiLSTM encoder with max-pooling to produce fixed-length sentence vectors that capture semantics beyond bag-of-words statistics. Trained on large NLI datasets, the embeddings generalize across tasks like sentiment analysis, entailment, paraphrase detection, and semantic similarity with simple linear classifiers. The repository provides pretrained vectors, training scripts, and clear examples for evaluating transfer on a wide suite of benchmarks. Because the encoder is compact and language-agnostic at the interface level, it’s easy to drop into production pipelines that need robust semantic features. InferSent helped popularize the idea that supervised objectives (like NLI) can yield strong general-purpose sentence encoders, and it remains a reliable baseline against which to compare newer models.
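
    A condensed sketch of the encoding workflow from the repository's README; file paths and hyperparameters mirror the README's defaults and should be treated as assumptions.

        import torch
        from models import InferSent  # models.py ships with the InferSent repository

        params = {"bsize": 64, "word_emb_dim": 300, "enc_lstm_dim": 2048,
                  "pool_type": "max", "dpout_model": 0.0, "version": 2}
        model = InferSent(params)
        model.load_state_dict(torch.load("encoder/infersent2.pkl"))
        model.set_w2v_path("crawl-300d-2M.vec")  # fastText vectors for version 2

        sentences = ["A man is playing a guitar.", "Someone plays music."]
        model.build_vocab(sentences, tokenize=True)
        embeddings = model.encode(sentences, tokenize=True)  # fixed-length vectors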
  • 11
    Keras Hub

    Pretrained model hub for Keras 3

    Keras Hub is a repository of pre-trained models for Keras 3, offering a collection of ready-to-use models for various machine-learning tasks. KerasHub is an extension of the core Keras API; KerasHub components are provided as Layer and Model implementations. If you are familiar with Keras, congratulations. You already understand most of KerasHub.
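
    A minimal sketch, assuming the keras_hub package and a GPT-2 preset as listed in the KerasHub (formerly KerasNLP) docs; class and preset names are assumptions if your version differs.

        import keras_hub

        # Load a pre-trained causal LM from a preset and generate text.
        gpt2 = keras_hub.models.GPT2CausalLM.from_preset("gpt2_base_en")
        print(gpt2.generate("NLP with Keras is", max_length=30))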
  • 12
    KoGPT

    KakaoBrain KoGPT (Korean Generative Pre-trained Transformer)

    KoGPT is a Korean language model based on OpenAI’s GPT architecture, designed for various natural language processing (NLP) tasks such as text generation, summarization, and dialogue systems.
  • 13
    KoNLPy

    Python package for Korean natural language processing

    KoNLPy is a natural language processing (NLP) library for the Korean language, offering tokenization, morphological analysis, and named entity recognition.
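
    A minimal example with the Okt tagger, one of several analyzers KoNLPy wraps (a Java runtime is required):

        from konlpy.tag import Okt

        okt = Okt()
        text = "자연어 처리는 정말 재미있다"
        print(okt.morphs(text))  # morpheme segmentation
        print(okt.pos(text))     # part-of-speech tagging
        print(okt.nouns(text))   # noun extraction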
  • 14
    Lexifinder

    A tool to create the analytical index of a manuscript

    Lexifinder is a free and open source tool that automates the creation of an analytical index for a manuscript, based on a natural language processing model. First, convert your Docx or ODT file into a PDF. Choose the output text file, set the similarity index, and choose your desired keywords. Lexifinder will include in the index all words whose significance resembles that of at least one keyword. The similarity index ranges from 1 to 100 and expresses the degree of resemblance required for a noun to be included.
  • 15
    LightAutoML

    Fast and customizable framework for automatic ML model creation

    LightAutoML is an automated machine learning (AutoML) framework optimized for efficient model training and hyperparameter tuning, focusing on both tabular and text data.
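
    A minimal tabular sketch following the project's README; file and column names are examples, not requirements.

        import pandas as pd
        from lightautoml.automl.presets.tabular_presets import TabularAutoML
        from lightautoml.tasks import Task

        train = pd.read_csv("train.csv")
        automl = TabularAutoML(task=Task("binary"), timeout=300)
        oof_pred = automl.fit_predict(train, roles={"target": "label"})
        test_pred = automl.predict(pd.read_csv("test.csv"))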
  • 16
    Lingua-Py

    The most accurate natural language detection library for Python

    Its task is simple: It tells you which language some text is written in. This is very useful as a preprocessing step for linguistic data in natural language processing applications such as text classification and spell checking. Other use cases, for instance, might include routing e-mails to the right geographically located customer service department, based on the e-mails' languages. Language detection is often done as part of large machine learning frameworks or natural language processing applications. In cases where you don't need the full-fledged functionality of those systems or don't want to learn the ropes of those, a small flexible library comes in handy.
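
    A minimal example with the lingua package; restricting the candidate language set improves both speed and accuracy.

        from lingua import Language, LanguageDetectorBuilder

        detector = LanguageDetectorBuilder.from_languages(
            Language.ENGLISH, Language.FRENCH, Language.GERMAN, Language.SPANISH
        ).build()
        print(detector.detect_language_of("languages are awesome"))  # Language.ENGLISH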
  • 17

    MITRE Annotation Toolkit

    A toolkit for managing and manipulating text annotations

    The MITRE Annotation Toolkit (MAT) is a suite of tools which can be used for automated and human tagging of annotations. Annotation is a process, used mostly by researchers in natural language processing, of enhancing documents with information about the various phrase types the documents contain. MAT supports both UI interaction and command-line interaction, and provides various levels of control over the overall annotation process. It can be customized for specific tasks (e.g., named entity identification, de-identification of medical records). The goal of MAT is not to help you configure your training engine (in the default case, the Carafe CRF system) to achieve the best possible performance on your data. MAT is for "everything else": all the tools you end up wishing you had.
  • 18
    MutationFinder

    Extract mentions of point mutations from biomedical free text

    MutationFinder is a biomedical natural language processing (NLP) system for extracting mentions of point mutations from free text. As an information extraction system it achieves high performance (99% precision, 81% recall) on blind test data.
  • 19
    NLG-Eval

    Evaluation code for various unsupervised automated metrics

    NLG-Eval is a toolkit for evaluating the quality of natural language generation (NLG) outputs using multiple automated metrics such as BLEU, METEOR, and ROUGE.
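
    A small sketch following the nlg-eval README; the constructor flags that skip the heavier embedding-based metrics are assumptions if your version differs.

        from nlgeval import NLGEval

        nlgeval = NLGEval(no_skipthoughts=True, no_glove=True)
        scores = nlgeval.compute_individual_metrics(
            ref=["the cat sat on the mat", "a cat was sitting on the mat"],
            hyp="the cat is sitting on the mat",
        )
        print(scores)  # e.g. Bleu_1..4, METEOR, ROUGE_L, CIDEr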
  • 20
    NLP Architect

    A model library for exploring state-of-the-art deep learning

    NLP Architect is an open-source Python library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing and Natural Language Understanding neural networks. The library includes our past and ongoing NLP research and development efforts as part of Intel AI Lab. NLP Architect is designed to be flexible for adding new models, neural network components, data handling methods, and for easy training and running models. NLP Architect is a model-oriented library designed to showcase novel and different neural network optimizations. The library contains NLP/NLU-related models per task, different neural network topologies (which are used in models), procedures for simplifying workflows in the library, pre-defined data processors and dataset loaders and misc utilities. The library is designed to be a tool for model development: data pre-processing, build model, train, validate, infer, save or load a model.
  • 21
    NLP Best Practices

    Natural Language Processing Best Practices & Examples

    In recent years, natural language processing (NLP) has seen quick growth in quality and usability, and this has helped to drive business adoption of artificial intelligence (AI) solutions. In the last few years, researchers have been applying newer deep learning methods to NLP. Data scientists started moving from traditional methods to state-of-the-art (SOTA) deep neural network (DNN) algorithms which use language models pretrained on large text corpora. This repository contains examples and best practices for building NLP systems, provided as Jupyter notebooks and utility functions. The focus of the repository is on state-of-the-art methods and common scenarios that are popular among researchers and practitioners working on problems involving text and language. The goal of this repository is to build a comprehensive set of tools and examples that leverage recent advances in NLP algorithms, neural architectures, and distributed machine learning systems.
  • 22
    NNCF

    Neural Network Compression Framework for enhanced OpenVINO

    NNCF (Neural Network Compression Framework) is an optimization toolkit for deep learning models, designed to apply quantization, pruning, and other techniques to improve inference efficiency.
  • 23
    NVIDIA NeMo

    Toolkit for conversational AI

    NVIDIA NeMo, part of the NVIDIA AI platform, is a toolkit for building new state-of-the-art conversational AI models. NeMo has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models. Each collection consists of prebuilt modules that include everything needed to train on your data. Every module can easily be customized, extended, and composed to create new conversational AI model architectures. Conversational AI architectures are typically large and require a lot of data and compute for training. NeMo uses PyTorch Lightning for easy and performant multi-GPU/multi-node mixed-precision training. Supported models include Jasper, QuartzNet, CitriNet, Conformer-CTC, Conformer-Transducer, Squeezeformer-CTC, Squeezeformer-Transducer, ContextNet, LSTM-Transducer (RNNT), and LSTM-CTC. A collection of pre-trained speech processing models is available on NGC.
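
    A minimal ASR sketch; the model name is a pre-trained checkpoint listed in the NeMo/NGC catalog and the audio path is an example, so treat both as assumptions.

        import nemo.collections.asr as nemo_asr

        # Download a pre-trained English ASR model and transcribe audio files.
        asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(
            model_name="QuartzNet15x5Base-En"
        )
        print(asr_model.transcribe(["sample.wav"]))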
  • 24
    Collection of Python scripts providing interactions between the sociological investigation platform and webservices (such as NLP, search engine, web database).
  • 25
    NeuroNER

    Named-entity recognition using neural networks

    Named-entity recognition (NER) aims at identifying entities of interest in text, such as locations, organizations, and temporal expressions. Identified entities can be used in downstream applications such as patient note de-identification and information extraction systems, or as features for machine learning systems on other natural language processing tasks. NeuroNER leverages the state-of-the-art prediction capabilities of neural networks (a.k.a. "deep learning"); it is cross-platform, open source, freely available, and straightforward to use. It enables users to create or modify annotations for a new or existing corpus, train the neural network that performs the NER (with monitoring of the network during training), and evaluate the quality of the predictions: performance metrics can be calculated and plotted by comparing the predicted labels with the gold labels.