Showing 440 open source projects for "python language"

  • 1
    Guardrails

    Adding guardrails to large language models

    Guardrails is a Python package that lets a user add structure, type, and quality guarantees to the outputs of large language models (LLMs). At the heart of Guardrails is the RAIL spec. RAIL is intended to be a language-agnostic, human-readable format for specifying structure and type information, validators, and corrective actions over LLM outputs. We create a RAIL spec to describe the expected structure and types of the LLM output, the quality criteria for the output to be considered valid...
    Downloads: 10 This Week
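
    A minimal sketch, in Python, of the RAIL workflow this entry describes: define a spec, build a Guard from it, and validate raw LLM output against it. The spec contents, the summary field, and the Guard.from_rail_string/guard.parse entry points are assumptions that may differ across Guardrails releases.

    ```python
    import guardrails as gd

    # Illustrative RAIL spec (not taken verbatim from the Guardrails docs):
    # the <output> block declares the expected structure and types, and the
    # <prompt> block holds the instruction sent to the LLM.
    rail_spec = """
    <rail version="0.1">
    <output>
        <string name="summary" description="One-sentence summary of the input text"/>
    </output>
    <prompt>
    Summarize the given text and return JSON with a single "summary" key.
    </prompt>
    </rail>
    """

    guard = gd.Guard.from_rail_string(rail_spec)

    # parse() re-checks a raw LLM response against the declared structure and
    # types, applying any configured corrective actions.
    outcome = guard.parse('{"summary": "Guardrails adds checks to LLM output."}')
    print(outcome)
    ```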
  • 2
    LLaMA Efficient Tuning

    Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, and more)

    Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM2)
    Downloads: 11 This Week
  • 3
    FastRAG

    Efficient Retrieval Augmentation and Generation Framework

    fastRAG is a research framework for efficient and optimized retrieval augmented generative pipelines, incorporating state-of-the-art LLMs and Information Retrieval. fastRAG is designed to empower researchers and developers with a comprehensive tool set for advancing retrieval augmented generation.
    Downloads: 12 This Week
  • 4
    doccano

    Open source annotation tool for machine learning practitioners

    doccano is an open-source text annotation tool for humans. It provides annotation features for text classification, sequence labeling and sequence-to-sequence tasks. So, you can create labeled data for sentiment analysis, named entity recognition, text summarization and so on. Just create a project, upload data and start annotating. You can build a dataset in hours.
    Downloads: 14 This Week
  • 5
    LangChain

    ⚡ Building applications with LLMs through composability ⚡

    Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app; the real power comes when you can combine them with other sources of computation or knowledge. This library is aimed at assisting in the development of those types of applications.
    Downloads: 14 This Week
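
    As a minimal sketch of the composability idea in this entry, the snippet below wires a prompt template to an LLM using the classic PromptTemplate/LLMChain interface; newer LangChain releases favor the LCEL pipe syntax, and the model settings here are illustrative.

    ```python
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    # A reusable prompt combined with an LLM into a single callable chain.
    prompt = PromptTemplate(
        input_variables=["product"],
        template="Suggest a name for a company that makes {product}.",
    )
    llm = OpenAI(temperature=0.7)          # reads OPENAI_API_KEY from the environment
    chain = LLMChain(llm=llm, prompt=prompt)

    print(chain.run(product="open source annotation tools"))
    ```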
  • 6
    Llama Stack

    Composable building blocks to build Llama Apps

    Llama Stack is an open-source framework designed to facilitate the deployment and fine-tuning of large language models (LLMs) for various natural language processing tasks.
    Downloads: 11 This Week
  • 7
    Chroma MCP

    A Model Context Protocol (MCP) server implementation

    Chroma MCP Server is an implementation of the Model Context Protocol (MCP) designed to integrate large language model (LLM) applications with external data sources or tools. It offers a standardized framework to seamlessly provide LLMs with the context they require for effective operation.
    Downloads: 13 This Week
  • 8
    ShellGPT

    A command-line productivity tool powered by AI large language models

    A command-line productivity tool powered by AI large language models (LLMs). It offers streamlined generation of shell commands, code snippets, and documentation, eliminating the need for external resources (like Google search). It supports Linux, macOS, and Windows and is compatible with all major shells, such as PowerShell, CMD, Bash, and Zsh. By default, ShellGPT uses OpenAI's API and the GPT-4 model, so you'll need an API key. You will be prompted for your...
    Downloads: 13 This Week
  • 9
    UniEM

    Unified embedding model

    UniEM is a unified embedding model designed to create high-quality text embeddings for various natural language processing tasks.
    Downloads: 9 This Week
  • 10
    Nexa SDK

    Nexa SDK is a comprehensive toolkit for supporting ONNX and GGML models

    Nexa SDK is a comprehensive toolkit for supporting ONNX and GGML models. It supports text generation, image generation, vision-language models (VLM), speech-to-text (ASR), and text-to-speech (TTS) capabilities. Additionally, it offers an OpenAI-compatible API server with JSON schema mode for function calling and streaming support, and a user-friendly Streamlit UI. Users can run Nexa SDK on any device with a Python environment, and GPU acceleration is supported, including CUDA, Metal, and ROCm...
    Downloads: 13 This Week
  • 11
    OpenLLM

    Operating LLMs in production

    An open platform for operating large language models (LLMs) in production. Fine-tune, serve, deploy, and monitor any LLMs with ease. With OpenLLM, you can run inference with any open-source large language model, deploy to the cloud or on-premises, and build powerful AI apps. It has built-in support for a wide range of open-source LLMs and model runtimes, including Llama 2, StableLM, Falcon, Dolly, Flan-T5, ChatGLM, StarCoder, and more. Serve LLMs over a RESTful API or gRPC with one command, query via WebUI...
    Downloads: 9 This Week
  • 12
    FlashInfer

    FlashInfer: Kernel Library for LLM Serving

    FlashInfer is a kernel library designed to enhance the serving of Large Language Models (LLMs) by optimizing inference performance. It provides a high-performance framework that integrates seamlessly with existing systems, aiming to reduce latency and improve efficiency in LLM deployments. FlashInfer supports various hardware architectures and is built to scale with the demands of production environments.
    Downloads: 12 This Week
  • 13
    Notte

    Open-source browser framework for web-based AI agents

    Notte is an open-source browser framework that enables the development and deployment of web-based AI agents. It introduces a perception layer that transforms web pages into structured, navigable maps described in natural language, allowing agents to interact with the internet more effectively. Notte is designed for building scalable and efficient browser-based AI applications.
    Downloads: 10 This Week
  • 14
    Curated Transformers

    PyTorch library of curated Transformer models and their components

    State-of-the-art transformers, brick by brick. Curated Transformers is a transformer library for PyTorch. It provides state-of-the-art models that are composed of a set of reusable components. Supports state-of-the-art transformer models, including LLMs such as Falcon, Llama, and Dolly v2. Implementing a feature or bugfix benefits all models. For example, all models support 4/8-bit inference through the bitsandbytes library and each model can use the PyTorch meta device to avoid unnecessary...
    Downloads: 10 This Week
  • 15
    MedicalGPT

    MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training

    MedicalGPT trains a medical GPT model with the ChatGPT training pipeline, implementing pretraining, supervised fine-tuning, reward modeling, and reinforcement learning. It trains large medical models through the full sequence of secondary pre-training, supervised fine-tuning, reward modeling, and reinforcement learning.
    Downloads: 12 This Week
  • 16
    LLM CLI

    Access large language models from the command-line

    A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine.
    Downloads: 5 This Week
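
    A minimal sketch of the Python-library side of LLM CLI; the model alias is an assumption, and an API key is expected to have been configured beforehand (for example via the tool's key-management commands).

    ```python
    import llm

    # Look up a model by name and prompt it; text() returns the completion string.
    model = llm.get_model("gpt-4o-mini")   # model alias is illustrative
    response = model.prompt("Suggest five names for an open source CLI tool")
    print(response.text())
    ```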
  • 17
    BertViz

    BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)

    BertViz is an interactive tool for visualizing attention in Transformer language models such as BERT, GPT2, or T5. It can be run inside a Jupyter or Colab notebook through a simple Python API that supports most Huggingface models. BertViz extends the Tensor2Tensor visualization tool by Llion Jones, providing multiple views that each offer a unique lens into the attention mechanism. The head view visualizes attention for one or more attention heads in the same layer. It is based on the excellent...
    Downloads: 10 This Week
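
    A minimal sketch of the notebook workflow this entry describes: load a Hugging Face model with attentions enabled, then hand the attention tensors and tokens to the head view. The checkpoint name and input sentence are illustrative; the call renders an interactive widget inside Jupyter or Colab.

    ```python
    from transformers import AutoTokenizer, AutoModel
    from bertviz import head_view

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

    inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
    attention = model(**inputs).attentions                      # one tensor per layer
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

    head_view(attention, tokens)   # interactive attention-head view in the notebook
    ```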
  • 18
    Text Generation Inference

    Large Language Model Text Generation Inference

    Text Generation Inference is a high-performance inference server for text generation models, optimized for Hugging Face's Transformers. It is designed to serve large language models efficiently with optimizations for performance and scalability.
    Downloads: 7 This Week
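
    A minimal sketch of querying a running Text Generation Inference server over HTTP; the local URL and generation parameters are assumptions, while the /generate request and response shape follow TGI's documented API.

    ```python
    import requests

    payload = {
        "inputs": "What is retrieval augmented generation?",
        "parameters": {"max_new_tokens": 64, "temperature": 0.7},
    }
    resp = requests.post("http://localhost:8080/generate", json=payload, timeout=60)
    resp.raise_for_status()
    print(resp.json()["generated_text"])   # the generated continuation only
    ```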
  • 19
    DSPy

    DSPy: The framework for programming—not prompting—language models

    Developed by the Stanford NLP Group, DSPy (Declarative Self-improving Python) is a framework that enables developers to program language models through compositional Python code rather than relying solely on prompt engineering. It facilitates the construction of modular AI systems and provides algorithms for optimizing prompts and weights, enhancing the quality and reliability of language model outputs.
    Downloads: 6 This Week
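
    A minimal sketch of the "programming, not prompting" style this entry describes, using DSPy's string signatures; the model identifier is an assumption.

    ```python
    import dspy

    # Configure a language model, then declare behavior as a signature; DSPy
    # builds and optimizes the underlying prompt rather than you hand-writing it.
    dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))   # model id is illustrative

    qa = dspy.ChainOfThought("question -> answer")
    prediction = qa(question="What does DSPy optimize besides prompts?")
    print(prediction.answer)
    ```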
  • 20
    NLG-Eval

    Evaluation code for various unsupervised automated metrics

    NLG-Eval is a toolkit for evaluating the quality of natural language generation (NLG) outputs using multiple automated metrics such as BLEU, METEOR, and ROUGE.
    Downloads: 6 This Week
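
    A minimal sketch of scoring a single hypothesis against a reference with NLG-Eval; the constructor flags that skip the heavier embedding-based metrics and the example sentences are assumptions.

    ```python
    from nlgeval import NLGEval

    # Keep only the n-gram metrics (BLEU, METEOR, ROUGE_L, CIDEr) for a quick check.
    nlgeval = NLGEval(no_skipthoughts=True, no_glove=True)

    scores = nlgeval.compute_individual_metrics(
        ref=["the cat sat on the mat"],          # one or more reference strings
        hyp="a cat is sitting on the mat",       # system output to score
    )
    print(scores)
    ```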
  • 21
    Petals

    Run 100B+ language models at home, BitTorrent-style

    Run 100B+ language models at home, BitTorrent‑style. Run large language models like BLOOM-176B collaboratively — you load a small part of the model, then team up with people serving the other parts to run inference or fine-tuning. Single-batch inference runs at ≈ 1 sec per step (token) — up to 10x faster than offloading, enough for chatbots and other interactive apps. Parallel inference reaches hundreds of tokens/sec. Beyond classic language model APIs — you can employ any fine-tuning...
    Downloads: 7 This Week
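
    A minimal sketch of collaborative inference over the public swarm; the class name follows recent Petals releases and the checkpoint id is illustrative, so treat both as assumptions.

    ```python
    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM

    model_name = "bigscience/bloom-7b1-petals"   # illustrative swarm-hosted checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

    # Embeddings run locally; transformer blocks are served by peers in the swarm.
    inputs = tokenizer("A poem about distributed inference:", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0]))
    ```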
  • 22
    NVIDIA NeMo

    Toolkit for conversational AI

    NVIDIA NeMo, part of the NVIDIA AI platform, is a toolkit for building new state-of-the-art conversational AI models. NeMo has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models. Each collection consists of prebuilt modules that include everything needed to train on your data. Every module can easily be customized, extended, and composed to create new conversational AI model architectures. Conversational AI...
    Downloads: 8 This Week
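
    A minimal sketch of the prebuilt-module workflow using NeMo's ASR collection; the pretrained checkpoint name follows the NeMo examples, and the audio path is a placeholder.

    ```python
    import nemo.collections.asr as nemo_asr

    # Load a pretrained CTC speech recognizer and transcribe a local WAV file
    # (16 kHz mono is the usual expectation for these checkpoints).
    asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
    transcripts = asr_model.transcribe(["sample.wav"])   # placeholder audio file
    print(transcripts[0])
    ```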
  • 23
    Skyvern

    Automate browser-based workflows with LLMs and Computer Vision

    Skyvern uses a combination of computer vision and AI to understand content on a webpage, making it adaptable to any website. Skyvern takes instructions in natural language, allowing it to execute complex objectives with simple commands. Skyvern is an API-first product. Workflows execute in the cloud, allowing it to run hundreds of workflows at the same time. Skyvern's AI decisions come with built-in explanations, providing clear summaries and justifications for every action. Support for proxies...
    Downloads: 9 This Week
  • 24
    MarkItDown

    Python tool for converting files and office documents to Markdown

    MarkItDown is a lightweight Python utility developed by Microsoft for converting various files and office documents to Markdown format. It is particularly useful for preparing documents for use with large language models and related text analysis pipelines.
    Downloads: 7 This Week
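
    A minimal sketch of the conversion API this entry describes; the input path is a placeholder.

    ```python
    from markitdown import MarkItDown

    md = MarkItDown()
    result = md.convert("quarterly_report.docx")   # placeholder input document
    print(result.text_content)                     # Markdown text ready for LLM pipelines
    ```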
  • 25
    ADX MCP Server

    A Model Context Protocol (MCP) server that enables AI assistants to query Azure Data Explorer

    The Azure Data Explorer MCP Server is a Model Context Protocol (MCP) server that enables AI assistants to query and analyze Azure Data Explorer databases through standardized interfaces. It allows the execution of Kusto Query Language (KQL) queries and exploration of data within Azure Data Explorer clusters.
    Downloads: 10 This Week