Showing 221 open source projects for "artificial intelligence python"

  • 1
    ChatGLM2-6B

    An Open Bilingual Chat LLM | Open Source Bilingual Conversation LLM

    ChatGLM2-6B is an advanced open-source bilingual dialogue model developed by THUDM. It is the second iteration of the ChatGLM series, designed to offer enhanced performance while maintaining the strengths of its predecessor, including smooth conversation flow and low deployment barriers. The model is fine-tuned for both Chinese and English languages, making it a versatile tool for various multilingual applications. ChatGLM2-6B aims to push the boundaries of natural language understanding and...
    Downloads: 0 This Week
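
    A minimal loading sketch, assuming the weights are pulled from the Hugging Face Hub under the THUDM/chatglm2-6b identifier and a CUDA GPU with enough memory is available; the chat() helper comes from the model's own remote code, hence trust_remote_code=True:

      from transformers import AutoTokenizer, AutoModel

      # ChatGLM2-6B ships its own modeling code on the Hub, so remote code must be trusted.
      tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
      model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
      model = model.eval()

      # Single-turn chat; `history` carries prior (question, answer) pairs for multi-turn use.
      response, history = model.chat(tokenizer, "Hello, what can you do?", history=[])
      print(response)
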
  • 2
    LangChain Apps on Production with Jina

    Langchain Apps on Production with Jina & FastAPI

    Jina is an open-source framework for building scalable multi-modal AI apps in production. LangChain is another open-source framework for building applications powered by LLMs. langchain-serve helps you deploy your LangChain apps on Jina AI Cloud in a matter of seconds. You can benefit from the scalability and serverless architecture of the cloud without sacrificing the ease and convenience of local development. And if you prefer, you can also deploy your LangChain apps on your own...
    Downloads: 0 This Week
  • 3
    Qwen2.5

    Open source large language model by Alibaba

    Qwen2.5 is a series of large language models developed by the Qwen team at Alibaba Cloud, designed to enhance natural language understanding and generation across multiple languages. The models are available in various sizes, including 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B parameters, catering to diverse computational requirements. Trained on a comprehensive dataset of up to 18 trillion tokens, Qwen2.5 models exhibit significant improvements in instruction following, long-text generation...
    Downloads: 25 This Week
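
    A minimal generation sketch, assuming the 7B instruct variant (Qwen/Qwen2.5-7B-Instruct on the Hugging Face Hub) and a recent transformers release; the model id, prompt, and generation settings are illustrative:

      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "Qwen/Qwen2.5-7B-Instruct"  # one of several released sizes
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

      # Build a chat-formatted prompt and generate a reply.
      messages = [{"role": "user", "content": "Summarize what Qwen2.5 is in one sentence."}]
      input_ids = tokenizer.apply_chat_template(
          messages, add_generation_prompt=True, return_tensors="pt"
      ).to(model.device)
      output_ids = model.generate(input_ids, max_new_tokens=128)
      print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
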
  • 4
    Companionem Linguae

    Ultra Large Language Model

    Companionem Linguae is an ultra-large language model in the early stages of development; it is currently being reworked, and I have uploaded part of the new training data (Latein.json). Although English is the standard for international communication, I decided to train the model on Latin and German first, because these languages have many grammatical features that English lacks but that could be useful for translations into French, Portuguese, Spanish, or other...
    Downloads: 0 This Week
  • 5
    LLaMA

    Inference code for Llama models

    “Llama” is the repository from Meta (formerly Facebook/Meta Research) containing the inference code for LLaMA (Large Language Model Meta AI) models. It provides utilities to load pre-trained LLaMA model weights, run inference (text generation, chat, completions), and work with tokenizers, along with download scripts and shell helpers to fetch model weights with the correct licensing/permissions. It includes example scripts for chat completions and text completions to show how to call the...
    Downloads: 0 This Week
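
    A minimal sketch in the spirit of the repository's example scripts, assuming Llama 2 chat weights have already been downloaded locally; the paths and generation settings are placeholders, and the official examples launch this kind of code with torchrun:

      # Typically launched as: torchrun --nproc_per_node 1 chat_sketch.py
      from llama import Llama

      generator = Llama.build(
          ckpt_dir="llama-2-7b-chat/",        # placeholder path to downloaded weights
          tokenizer_path="tokenizer.model",   # placeholder path to the tokenizer
          max_seq_len=512,
          max_batch_size=4,
      )

      # One dialog per batch entry; each dialog is a list of role/content messages.
      dialogs = [[{"role": "user", "content": "Explain what LLaMA is in one sentence."}]]
      results = generator.chat_completion(dialogs, max_gen_len=128, temperature=0.6, top_p=0.9)
      print(results[0]["generation"]["content"])
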
  • 6
    Genoss GPT

    One API for all LLMs either Private or Public

    A one-line replacement for OpenAI ChatGPT & Embeddings, powered by OSS models. Genoss is a pioneering open-source initiative that aims to offer a seamless alternative to OpenAI models such as GPT-3.5 & 4, using open-source models like GPT4ALL.
    Downloads: 0 This Week
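
    A minimal sketch of the "one-line replacement" idea, assuming Genoss exposes an OpenAI-compatible endpoint on localhost; the URL, port, and model name below are illustrative rather than taken from the project's docs:

      from openai import OpenAI

      # Point the standard OpenAI client at a local, OpenAI-compatible server instead.
      client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

      response = client.chat.completions.create(
          model="gpt4all-j",  # illustrative open-source model name
          messages=[{"role": "user", "content": "Say hello from an OSS model."}],
      )
      print(response.choices[0].message.content)
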
  • 7
    gptee

    LLMs done the UNIX-y way

    Output from a language model, using standard input as the prompt. Now supporting GPT-3.5 chat completions! gptee was designed for use within shell scripts and other programs, and it also works in interactive shells. You can compose commands and execute them in a script; proceed with caution before running arbitrary shell scripts. Using a chat completion model (like gpt-3.5-turbo), you can inject a system message with -s or --system. For davinci and other non-chat models, the output...
    Downloads: 0 This Week
  • 8
    ThoughtSource

    A central, open resource for data and tools

    ThoughtSource is a central, open resource and community centered on data and tools for chain-of-thought reasoning in large language models (Wei 2022). Our long-term goal is to enable trustworthy and robust reasoning in advanced AI systems for driving scientific research and medical practice.
    Downloads: 0 This Week
  • 9
    OpenFlamingo

    An open-source framework for training large multimodal models

    Welcome to our open source version of DeepMind's Flamingo model! In this repository, we provide a PyTorch implementation for training and evaluating OpenFlamingo models. We also provide an initial OpenFlamingo 9B model trained on a new Multimodal C4 dataset (coming soon). Please refer to our blog post for more details. This repo is still under development, and we hope to release better-performing and larger OpenFlamingo models soon. If you have any questions, please feel free to open an...
    Downloads: 0 This Week
  • 10
    GLM-130B

    GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)

    GLM-130B is an open bilingual (English and Chinese) dense language model with 130 billion parameters, released by the Tsinghua KEG Lab and collaborators as part of the General Language Model (GLM) series. It is designed for large-scale inference and supports both left-to-right generation and blank filling, making it versatile across NLP tasks. Trained on over 400 billion tokens (200B English, 200B Chinese), it achieves performance surpassing GPT-3 175B, OPT-175B, and BLOOM-176B on multiple...
    Downloads: 1 This Week
  • 11
    Gorilla CLI

    LLMs for your CLI

    Gorilla CLI powers your command-line interactions with a user-centric tool. Simply state your objective, and Gorilla CLI will generate potential commands for execution. Gorilla today supports ~1500 APIs, including Kubernetes, AWS, GCP, Azure, GitHub, Conda, Curl, Sed, and many more. No more recalling intricate CLI arguments.
    Downloads: 0 This Week
  • 12
    FastEdit

    Editing large language models within 10 seconds

    FastEdit focuses on rapid “model editing,” letting you surgically update facts or behaviors in an LLM without full fine-tuning. It implements practical editing algorithms that insert or revise knowledge with targeted parameter updates, aiming to preserve model quality outside the edited scope. This approach is valuable when you need urgent corrections—think product names, APIs, or fast-changing facts—without retraining on large corpora. The repository provides evaluation harnesses so you can...
    Downloads: 1 This Week
  • 13
    Chinese-LLaMA-Alpaca-2 v2.0

    Chinese LLaMA & Alpaca large language model + local CPU/GPU training

    This project has open-sourced the Chinese LLaMA model and the Alpaca large model with instruction fine-tuning to further promote open research on large models in the Chinese NLP community. Based on the original LLaMA, these models expand the Chinese vocabulary and use Chinese data for secondary pre-training, which further improves basic semantic understanding of Chinese. At the same time, the Chinese Alpaca model further uses Chinese instruction data for fine-tuning, which...
    Downloads: 0 This Week
  • 14
    mindflow

    AI-powered CLI git wrapper, boilerplate code generator, chat history

    AI-powered CLI git wrapper, boilerplate code generator, chat history manager, and code search engine to streamline your dev workflow. The ChatGPT-powered Swiss Army knife for the modern developer! We provide an AI-powered CLI git wrapper, boilerplate code generator, code search engine, a conversation history manager, and much more. Configure the model used for generating responses by running mf config and selecting either GPT 3.5 Turbo (default) or GPT 4. In order to use GPT 4, you'll need...
    Downloads: 0 This Week
  • 15
    Repo of Tree of Thoughts (ToT)

    Implementation of "Tree of Thoughts

    Language models are increasingly being deployed for general problem-solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of...
    Downloads: 0 This Week
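
    A schematic sketch of the idea (not the repository's actual API): a beam-style search over partial "thoughts", where a language model would normally propose continuations and score them; both steps are stubbed out here so the control flow runs on its own:

      import heapq

      def propose(state, k=3):
          # Placeholder for an LLM call that proposes k candidate next thoughts.
          return [f"{state} -> step{i}" for i in range(k)]

      def evaluate(state):
          # Placeholder for an LLM call that scores how promising a partial solution is.
          return -len(state)  # stub heuristic: prefer shorter traces

      def tree_of_thoughts(root, depth=3, beam=2):
          """Keep the `beam` most promising partial thoughts at each depth."""
          frontier = [root]
          for _ in range(depth):
              candidates = [s for state in frontier for s in propose(state)]
              frontier = heapq.nlargest(beam, candidates, key=evaluate)
          return max(frontier, key=evaluate)

      print(tree_of_thoughts("problem"))
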
  • 16
    langchain-prefect

    Tools for using Langchain with Prefect

    Large Language Models (LLMs) are interesting and useful; building apps that use them responsibly feels like a no-brainer. Tools like Langchain make it easier to build apps using LLMs. We need to know details about how our apps work, even when we want to use tools with convenient abstractions that may obfuscate those details. Prefect is built to help data people build, run, and observe event-driven workflows wherever they want. It provides a framework for creating deployments on a whole...
    Downloads: 0 This Week
  • 17
    aqueduct LLM

    Aqueduct allows you to run LLM and ML workloads on any infrastructure

    Aqueduct is an open-source MLOps framework that allows you to define and deploy machine learning and LLM workloads on any cloud infrastructure: you write code in vanilla Python, run that code on any cloud infrastructure you'd like to use, and gain visibility into the execution and performance of your models and predictions. Aqueduct's Python-native API allows you to define ML tasks in regular Python code. You can connect Aqueduct to your existing...
    Downloads: 0 This Week
  • 18
    llm-chain

    Rust crate for building chains in large language models

    We offer a collection of Rust crates packed with features that make working with Large Language Models easy and seamless. With llm-chain, you can focus on building powerful AI applications. Create reusable and easily customizable prompt templates for consistent and structured interactions with LLMs. Build powerful chains of prompts that allow you to execute more complex tasks, step by step, leveraging the full potential of LLMs. Provides seamless integration with LLaMa models, enabling...
    Downloads: 0 This Week
  • 19
    aigc

    An e-book about the real-world application of LLM

    "Building Large Language Model Applications: Application Development and Architecture Design" is an open source e-book about the real-world application of LLM. It introduces the basics and applications of large language models, as well as how to build your own models. These include writing, developing, and managing prompts, exploring what the best large language models can bring, and pattern and architecture design for LLM application development.
    Downloads: 0 This Week
  • 20
    SkyAGI

    SkyAGI: Emerging human-behavior simulation capability in LLM

    SkyAGI is a Python package that demonstrates LLMs' emerging capability to simulate believable human behaviors. Specifically, SkyAGI implements the idea of Generative Agents and delivers a role-playing game that creates a very interesting user experience. Unlike previous AI-based NPC systems, SkyAGI's NPCs generate very believable human responses. The interesting observations in this demo show a huge potential for rethinking game development in many aspects, such as NPC script...
    Downloads: 0 This Week
  • 21
    unit-minions

    AI R&D Efficiency Improvement Research: Do-It-Yourself Training LoRA

    "AI R&D Efficiency Improvement Research: Do-It-Yourself Training LoRA", including Llama (Alpaca LoRA) model, ChatGLM (ChatGLM Tuning) related Lora training. Training content: user story generation, test code generation, code-assisted generation, text to SQL, text generation code.
    Downloads: 0 This Week
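
    A minimal sketch of the LoRA setup this kind of training relies on, using the Hugging Face peft library rather than the repository's own scripts; the base model id, target modules, and hyperparameters are illustrative:

      from transformers import AutoModelForCausalLM, AutoTokenizer
      from peft import LoraConfig, get_peft_model

      base_id = "meta-llama/Llama-2-7b-hf"  # illustrative base model
      tokenizer = AutoTokenizer.from_pretrained(base_id)
      model = AutoModelForCausalLM.from_pretrained(base_id)

      # Wrap the base model with low-rank adapters; only the adapter weights get trained.
      lora_config = LoraConfig(
          r=8,
          lora_alpha=16,
          lora_dropout=0.05,
          target_modules=["q_proj", "v_proj"],  # attention projections, typical for LLaMA-style models
          task_type="CAUSAL_LM",
      )
      model = get_peft_model(model, lora_config)
      model.print_trainable_parameters()  # shows how small the trainable fraction is
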
  • 22
    react-llm

    Easy-to-use headless React Hooks to run LLMs in the browser with WebGPU

    Easy-to-use headless React Hooks to run LLMs in the browser with WebGPU. As simple as useLLM().
    Downloads: 0 This Week
  • 23
    WebLLM

    Bringing large-language models and chat to web browsers

    WebLLM is a modular, customizable JavaScript package that brings language model chat directly onto web browsers with hardware acceleration. Everything runs inside the browser with no server support and is accelerated with WebGPU. This opens up a lot of fun opportunities to build AI assistants for everyone and enables privacy while enjoying GPU acceleration. WebLLM offers a minimalist and modular interface to access the chatbot in the browser. The WebLLM package itself does not come...
    Downloads: 1 This Week
  • 24
    pyllama

    LLaMA: Open and Efficient Foundation Language Models

    📢 pyllama is a hacked version of LLaMA based on the original Facebook implementation, but more convenient to run on a single consumer-grade GPU.
    Downloads: 0 This Week
  • 25
    LLaMA.go

    llama.go is like llama.cpp in pure Golang

    llama.go is like llama.cpp, but in pure Golang. The project's code is based on Georgi Gerganov's legendary ggml.cpp framework, written in C++ with the same attitude toward performance and elegance. Both models store FP32 weights, so you'll need at least 32 GB of RAM (not VRAM or GPU RAM) for LLaMA-7B, and double that, 64 GB, for LLaMA-13B.
    Downloads: 0 This Week