42 projects for "python chatbot artificial intelligence" with 2 filters applied:

  • 1
    MiniMax-01

    Large-language-model & vision-language-model based on Linear Attention

    MiniMax-01 is the official repository for two flagship models: MiniMax-Text-01, a long-context language model, and MiniMax-VL-01, a vision-language model built on top of it. MiniMax-Text-01 uses a hybrid attention architecture that blends Lightning Attention, standard softmax attention, and Mixture-of-Experts (MoE) routing to achieve both high throughput and long-context reasoning. It has 456 billion total parameters with 45.9 billion activated per token and is trained with advanced parallel...
    Downloads: 0 This Week
    Last Update:
    See Project
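
    As a rough illustration of why the Lightning (linear) attention component mentioned above matters, the toy Python sketch below contrasts standard softmax attention, which materializes an n × n score matrix, with a kernelized linear attention that avoids it. This is a schematic of the general technique, not MiniMax-01's implementation.

        # Toy sketch: softmax attention builds an (n, n) score matrix, while a
        # kernelized linear attention only keeps small (d, d) summaries.
        # Schematic of the general technique, not MiniMax-01's implementation.
        import numpy as np

        def softmax_attention(Q, K, V):
            scores = Q @ K.T / np.sqrt(Q.shape[-1])          # (n, n), quadratic in n
            weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
            weights /= weights.sum(axis=-1, keepdims=True)
            return weights @ V

        def linear_attention(Q, K, V, eps=1e-6):
            phi = lambda x: np.maximum(x, 0.0) + eps         # simple positive feature map
            KV = phi(K).T @ V                                # (d, d_v) running summary
            norm = phi(Q) @ phi(K).sum(axis=0)               # per-query normalizer
            return (phi(Q) @ KV) / norm[:, None]

        n, d = 8, 4
        Q, K, V = np.random.default_rng(0).normal(size=(3, n, d))
        print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
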
  • 2
    Learn AI Engineering

    Learn AI and LLMs from scratch using free resources

    Learn AI Engineering is a learning path for AI engineering that consolidates high-quality, free resources across the full stack: math, Python foundations, machine learning, deep learning, LLMs, agents, tooling, and deployment. Rather than a loose bookmark list, it organizes topics into a progression so learners can start from fundamentals and move toward practical, production-oriented skills. It mixes courses, articles, code labs, and videos, emphasizing materials that teach both concepts...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 3
    OSS-Fuzz Gen

    LLM powered fuzzing via OSS-Fuzz

    OSS-Fuzz-Gen is a companion project that helps automatically create or improve fuzz targets for open-source codebases, aiming to increase coverage in OSS-Fuzz with minimal maintainer effort. It analyses a library’s APIs, examples, and tests to propose harnesses that exercise parsers, decoders, or protocol handlers—precisely the code where fuzzing pays off. The system integrates with modern LLM-assisted workflows to draft harness code and then iterates based on build errors or low coverage...
    Downloads: 0 This Week
    Last Update:
    See Project
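
    For context, the snippet below shows the general shape of a Python fuzz harness of the kind such a tool would draft, using the atheris engine that OSS-Fuzz supports for Python targets. The library mylib and its parse_header function are hypothetical placeholders, not part of OSS-Fuzz-Gen.

        # Illustrative harness shape only; `mylib` and `parse_header` are
        # hypothetical placeholders for a real library entry point.
        import sys
        import atheris

        with atheris.instrument_imports():
            import mylib  # hypothetical library under test

        def TestOneInput(data: bytes):
            fdp = atheris.FuzzedDataProvider(data)
            try:
                # Feed attacker-controlled bytes into a parser entry point.
                mylib.parse_header(fdp.ConsumeBytes(fdp.remaining_bytes()))
            except ValueError:
                # Documented error for malformed input; crashes, hangs, or other
                # exceptions are findings.
                pass

        if __name__ == "__main__":
            atheris.Setup(sys.argv, TestOneInput)
            atheris.Fuzz()
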
  • 4
    BIG-bench

    Beyond the Imitation Game collaborative benchmark for measuring

    BIG-bench (Beyond the Imitation Game Benchmark) is a large, collaborative benchmark suite designed to probe the capabilities and limitations of large language models across hundreds of diverse tasks. Rather than focusing on a single metric or domain, it aggregates many hand-authored tasks that test reasoning, commonsense, math, linguistics, ethics, and creativity. Tasks are intentionally heterogeneous: some are multiple-choice with exact scoring, others are free-form generation judged by...
    Downloads: 0 This Week
    Last Update:
    See Project
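
    To make the "multiple-choice with exact scoring" case concrete, here is a minimal Python sketch of that style of scoring. It illustrates the idea rather than BIG-bench's own harness, and the example records are invented.

        # Minimal exact-match scoring for a multiple-choice style task; an
        # illustration of the idea, not BIG-bench's own harness.
        def score_multiple_choice(model_fn, examples):
            """examples: dicts with 'input' text and 'target_scores' mapping
            each option string to 1 (correct) or 0 (incorrect)."""
            correct = 0
            for ex in examples:
                options = list(ex["target_scores"])
                prediction = model_fn(ex["input"], options)
                best = max(options, key=lambda o: ex["target_scores"][o])
                correct += int(prediction == best)
            return correct / len(examples)

        examples = [
            {"input": "2 + 2 = ?", "target_scores": {"4": 1, "5": 0}},
            {"input": "Capital of France?", "target_scores": {"Lyon": 0, "Paris": 1}},
        ]
        # A toy "model" that always picks the first option scores 50% here.
        print(score_multiple_choice(lambda q, opts: opts[0], examples))
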
  • 5
    Automated Interpretability

    Code for the “Language models can explain neurons in language models” paper

    The automated-interpretability repository implements tools and pipelines for automatically generating, simulating, and scoring explanations of neuron (or latent feature) behavior in neural networks. Instead of relying purely on manual, ad hoc interpretability probing, this repo aims to scale interpretability by using algorithmic methods that produce candidate explanations and assess their quality. It includes a “neuron explainer” component that, given a target neuron or latent feature,...
    Downloads: 0 This Week
    Last Update:
    See Project
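
    The sketch below illustrates the scoring step of that explain-simulate-score loop: an explanation is judged by how well activations simulated from it correlate with the neuron's real activations. It is a schematic of the idea, not the repository's API, and the toy neuron and simulated values are invented.

        # Schematic of the idea, not the repository's API: score an explanation
        # by correlating real activations with activations simulated from it.
        import numpy as np

        def score_explanation(real_activations, simulated_activations):
            # Higher correlation = the explanation predicts the neuron better.
            return float(np.corrcoef(real_activations, simulated_activations)[0, 1])

        # Toy neuron that fires on tokens containing a digit.
        tokens = ["the", "year", "1969", "was", "busy", "for", "Apollo", "11"]
        real = np.array([1.0 if any(c.isdigit() for c in t) else 0.0 for t in tokens])

        # Simulated activations for a candidate explanation, "fires on numbers".
        simulated = np.array([0.1, 0.0, 0.9, 0.0, 0.1, 0.0, 0.2, 0.8])
        print(round(score_explanation(real, simulated), 3))
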
  • 6
    Evals

    Evals is a framework for evaluating LLMs and LLM systems

    The openai/evals repository is a framework and registry for evaluating large language models and systems built with LLMs. It’s designed to let you define “evals” (evaluation tasks) in a structured way and run them against different models or agents, with the ability to score, compare, and analyze results. The framework supports templated YAML eval definitions, solver-based evaluations, custom metrics, and composition of multi-step evaluations. It includes utilities and APIs to plug in...
    Downloads: 0 This Week
    Last Update:
    See Project
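
    Stripped of the YAML registry, solvers, and logging, an exact-match eval reduces to the small loop sketched below; completion_fn stands in for whatever model or agent is being evaluated, and the samples are invented.

        # What an exact-match eval boils down to; the real framework layers
        # YAML-registered tasks, solvers, and logging on top of a loop like this.
        def run_exact_match_eval(completion_fn, samples):
            """samples: list of {"input": prompt, "ideal": expected answer}."""
            hits = 0
            for sample in samples:
                hits += int(completion_fn(sample["input"]).strip() == sample["ideal"])
            return {"accuracy": hits / len(samples), "n": len(samples)}

        samples = [
            {"input": "What is the capital of Japan? One word.", "ideal": "Tokyo"},
            {"input": "2 + 3 = ? One number.", "ideal": "5"},
        ]
        # Any callable mapping a prompt to a string can be plugged in.
        print(run_exact_match_eval(lambda prompt: "Tokyo", samples))
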
  • 7
    Controllable-RAG-Agent

    This repository provides an advanced RAG

    Controllable-RAG-Agent is an advanced Retrieval-Augmented Generation (RAG) system designed specifically for complex, multi-step question answering over your own documents. Instead of relying solely on simple semantic search, it builds a deterministic control graph that acts as the “brain” of the agent, orchestrating planning, retrieval, reasoning, and verification across many steps. The pipeline ingests PDFs, splits them into chapters, cleans and preprocesses text, then constructs vector...
    Downloads: 0 This Week
    Last Update:
    See Project
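
    The control flow described above (plan, retrieve, reason, verify, repeat) can be pictured with the short Python sketch below; every component is a hypothetical stand-in, not one of the repository's actual classes.

        # Schematic plan -> retrieve -> answer -> verify loop; all callables are
        # hypothetical stand-ins, not the repository's actual components.
        def controlled_rag(question, planner, retriever, answerer, verifier, max_steps=3):
            notes = []
            draft = ""
            for _ in range(max_steps):
                sub_question = planner(question, notes)        # decide what to look up next
                passages = retriever(sub_question, k=4)        # vector / keyword retrieval
                draft = answerer(question, notes + passages)   # answer grounded in evidence
                ok, feedback = verifier(question, draft, passages)
                if ok:
                    return draft                               # verifier accepts the answer
                notes.extend(passages + [f"verifier feedback: {feedback}"])
            return draft                                       # best effort after max_steps

        # Tiny smoke test with trivial stand-ins.
        answer = controlled_rag(
            "Who wrote the report?",
            planner=lambda q, notes: q,
            retriever=lambda q, k: [f"passage about: {q}"],
            answerer=lambda q, ctx: "The report was written by the audit team.",
            verifier=lambda q, draft, passages: (True, ""),
        )
        print(answer)
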
  • 8
    FinGPT

    Open-Source Financial Large Language Models!

    FinGPT is an open-source large language model tailored specifically for financial tasks. Developed by AI4Finance Foundation, it is designed to assist with various financial applications, such as forecasting, financial sentiment analysis, and portfolio management. FinGPT has been trained on a diverse range of financial datasets, making it a powerful tool for finance professionals looking to leverage AI for data-driven decision-making. The model is freely available on platforms like Hugging...
    Downloads: 18 This Week
    Last Update:
    See Project
  • 9
    Grok-1

    Open-source, high-performance Mixture-of-Experts large language model

    Grok-1 is a 314-billion-parameter Mixture-of-Experts (MoE) large language model developed by xAI. Designed to optimize computational efficiency, it activates only 25% of its weights for each input token. In March 2024, xAI released Grok-1's model weights and architecture under the Apache 2.0 license, making them openly accessible to developers. The accompanying GitHub repository provides JAX example code for loading and running the model. Due to its substantial size, utilizing Grok-1...
    Downloads: 7 This Week
    Last Update:
    See Project
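
    The sparse-activation point above (roughly 25% of weights per token, i.e. 2 of 8 experts) is illustrated by the toy router below; it is a schematic of top-k Mixture-of-Experts routing in general, not xAI's implementation.

        # Toy top-2-of-8 Mixture-of-Experts routing: each token runs through only
        # 2 of the 8 expert matrices, so most expert weights stay idle per token.
        # Schematic only, not xAI's implementation.
        import numpy as np

        rng = np.random.default_rng(0)
        d_model, n_experts, top_k = 16, 8, 2
        router_w = rng.normal(size=(d_model, n_experts))
        experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

        def moe_layer(x):                                     # x: (tokens, d_model)
            logits = x @ router_w                             # (tokens, n_experts)
            top = np.argsort(logits, axis=-1)[:, -top_k:]     # top-2 experts per token
            out = np.zeros_like(x)
            for t in range(x.shape[0]):
                gate = np.exp(logits[t, top[t]])
                gate /= gate.sum()                            # softmax over the 2 selected
                for g, e in zip(gate, top[t]):
                    out[t] += g * (x[t] @ experts[e])         # only selected experts run
            return out

        print(moe_layer(rng.normal(size=(4, d_model))).shape)  # (4, 16)
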
  • 10
    Qwen2.5-Coder

    Qwen2.5-Coder is the code version of Qwen2.5, the large language model

    Qwen2.5-Coder, developed by QwenLM, is an advanced open-source code generation model designed for developers seeking powerful and diverse coding capabilities. It includes multiple model sizes—ranging from 0.5B to 32B parameters—providing solutions for a wide array of coding needs. The model supports over 92 programming languages and offers exceptional performance in generating code, debugging, and mathematical problem-solving. Qwen2.5-Coder, with its long context length of 128K tokens, is...
    Downloads: 4 This Week
    Last Update:
    See Project
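
    A typical way to try such a model locally is via Hugging Face transformers, sketched below; the checkpoint name Qwen/Qwen2.5-Coder-7B-Instruct is assumed here (smaller variants follow the same pattern), and a GPU with enough memory is required.

        # Hedged usage sketch with Hugging Face transformers; the checkpoint name
        # below is an assumption, and sufficient GPU memory is required.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # assumed checkpoint name
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

        messages = [{"role": "user",
                     "content": "Write a Python function that checks if a string is a palindrome."}]
        inputs = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)

        output = model.generate(inputs, max_new_tokens=256)
        print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
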
  • 11
    ChatGLM2-6B

    An Open Bilingual Chat LLM | Open Source Bilingual Conversation LLM

    ChatGLM2-6B is an advanced open-source bilingual dialogue model developed by THUDM. It is the second iteration of the ChatGLM series, designed to offer enhanced performance while maintaining the strengths of its predecessor, including smooth conversation flow and low deployment barriers. The model is fine-tuned for both Chinese and English languages, making it a versatile tool for various multilingual applications. ChatGLM2-6B aims to push the boundaries of natural language understanding and...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
    GLM-130B

    GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)

    GLM-130B is an open bilingual (English and Chinese) dense language model with 130 billion parameters, released by the Tsinghua KEG Lab and collaborators as part of the General Language Model (GLM) series. It is designed for large-scale inference and supports both left-to-right generation and blank filling, making it versatile across NLP tasks. Trained on over 400 billion tokens (200B English, 200B Chinese), it achieves performance surpassing GPT-3 175B, OPT-175B, and BLOOM-176B on multiple...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 13
    FastEdit

    Editing large language models within 10 seconds

    FastEdit focuses on rapid “model editing,” letting you surgically update facts or behaviors in an LLM without full fine-tuning. It implements practical editing algorithms that insert or revise knowledge with targeted parameter updates, aiming to preserve model quality outside the edited scope. This approach is valuable when you need urgent corrections—think product names, APIs, or fast-changing facts—without retraining on large corpora. The repository provides evaluation harnesses so you can...
    Downloads: 1 This Week
    Last Update:
    See Project
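
    The numpy sketch below shows the general flavor of such targeted parameter updates: a rank-one edit that remaps one key vector to a new value while leaving orthogonal directions untouched. It illustrates the idea behind fast model editing, not FastEdit's specific algorithm.

        # Rank-one "edit" of a single weight matrix: after the update, the key
        # vector k maps to v_new, while inputs orthogonal to k are unaffected.
        # Conceptual illustration, not FastEdit's specific algorithm.
        import numpy as np

        rng = np.random.default_rng(0)
        d = 8
        W = rng.normal(size=(d, d))     # one projection inside the model
        k = rng.normal(size=d)          # key: internal representation tied to the fact
        v_new = rng.normal(size=d)      # value: representation encoding the new fact

        W_edited = W + np.outer(v_new - W @ k, k) / (k @ k)

        print(np.allclose(W_edited @ k, v_new))             # True: edited fact recalled
        x = rng.normal(size=d)
        x -= (x @ k) / (k @ k) * k                          # project out the k direction
        print(np.allclose(W_edited @ x, W @ x))             # True: unrelated behavior preserved
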
  • 14
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    This repository records EleutherAI's library for training large-scale language models on GPUs. Our current framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. We aim to make this repo a centralized and accessible place to gather techniques for training large-scale autoregressive language models, and accelerate research into large-scale training. For those looking for a TPU-centric codebase, we...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 15
    LM Human Preferences

    Code for the paper Fine-Tuning Language Models from Human Preferences

    lm-human-preferences is the official OpenAI codebase that implements the method from the paper Fine-Tuning Language Models from Human Preferences. Its purpose is to show how to align language models with human judgments by training a reward model from human comparisons and then fine-tuning a policy model using that reward signal. The repository includes scripts to train the reward model (learning to rank or score pairs of outputs), and to fine-tune a policy (a language model) with...
    Downloads: 0 This Week
    Last Update:
    See Project
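
    At the core of the reward-model step is a pairwise preference loss, sketched below in PyTorch for illustration; this is not the repository's own code, and the toy embeddings and linear reward head are invented stand-ins.

        # Pairwise preference loss for a reward model: the human-preferred
        # response should score higher than the rejected one. Illustration only,
        # not the repository's own code.
        import torch

        def preference_loss(reward_chosen, reward_rejected):
            # -log sigmoid(r_chosen - r_rejected), minimized when chosen > rejected.
            return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

        # Toy reward head over fixed response embeddings (stand-ins for a real model).
        reward_head = torch.nn.Linear(16, 1)
        chosen_emb, rejected_emb = torch.randn(4, 16), torch.randn(4, 16)

        loss = preference_loss(reward_head(chosen_emb).squeeze(-1),
                               reward_head(rejected_emb).squeeze(-1))
        loss.backward()   # this learned reward later supplies the fine-tuning signal
        print(float(loss))
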
  • 16
    Alpa

    Training and serving large-scale neural networks

    Alpa is a system for training and serving large-scale neural networks. Scaling neural networks to hundreds of billions of parameters has enabled dramatic breakthroughs such as GPT-3, but training and serving these large-scale neural networks require complicated distributed system techniques. Alpa aims to automate large-scale distributed training and serving with just a few lines of code.
    Downloads: 0 This Week
    Last Update:
    See Project
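
    The "few lines of code" claim roughly translates to wrapping an ordinary JAX training step in Alpa's parallelize decorator, as in the hedged sketch below; the decorator name follows the project's documented pattern, while the placeholder model and data are invented.

        # Hedged sketch: write a single-device JAX train step and let Alpa plan
        # the distributed execution. The @alpa.parallelize decorator follows the
        # project's documented pattern; model and data here are placeholders.
        import jax
        import jax.numpy as jnp
        import alpa

        def loss_fn(params, batch):
            preds = batch["x"] @ params["w"]                  # placeholder linear model
            return jnp.mean((preds - batch["y"]) ** 2)

        @alpa.parallelize                                     # Alpa chooses the parallelization plan
        def train_step(params, batch):
            grads = jax.grad(loss_fn)(params, batch)
            return jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)

        params = {"w": jnp.zeros((128, 1))}
        batch = {"x": jnp.ones((32, 128)), "y": jnp.zeros((32, 1))}
        params = train_step(params, batch)                    # looks single-device, runs distributed
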
  • 17
    Grade School Math

    8.5K high quality grade school math problems

    The grade-school-math repository (sometimes called GSM8K) is a curated dataset of 8,500 high-quality grade school math word problems intended for evaluating mathematical reasoning capabilities of language models. It is structured into 7,500 training problems and 1,000 test problems. These aren’t trivial exercises — many require multi-step reasoning, combining arithmetic operations, and handling intermediate steps (e.g. “If she sold half as many in May… how many in total?”). The problems are...
    Downloads: 0 This Week
    Last Update:
    See Project
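
    Answers in this dataset end with a final line of the form "#### <number>", so grading a model typically reduces to extracting that number and comparing, as in the sketch below; the sample record is invented for illustration, not taken from the dataset.

        # Grade a model answer against a GSM8K-style record: the reference
        # solution ends with "#### <number>". The record below is invented.
        import re

        def extract_final_answer(answer_text):
            match = re.search(r"####\s*(-?[\d,\.]+)", answer_text)
            return match.group(1).replace(",", "") if match else ""

        record = {
            "question": "Lisa sold 12 cookies on Monday and half as many on Tuesday. "
                        "How many did she sell in total?",
            "answer": "On Tuesday she sold 12 / 2 = 6 cookies.\n"
                      "In total she sold 12 + 6 = 18 cookies.\n#### 18",
        }

        model_answer = "18"
        print(extract_final_answer(record["answer"]) == model_answer)  # True
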