48 projects for "llms and python-pptx" with 1 filter applied:

  • 1
    Ollama Python

    Ollama Python library

    ollama-python is an open-source Python SDK that wraps the local Ollama API, allowing seamless interaction with local large language models (LLMs) managed by Ollama. Developers use it to load models, send prompts, manage sessions, and stream responses directly from Python code. It simplifies the integration of Ollama-based models into applications, supporting both synchronous and streaming modes.
    Downloads: 0 This Week
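
    A minimal usage sketch follows, assuming the ollama package is installed, a local Ollama server is running, and a model such as llama3 has already been pulled (the model name and exact response shape are assumptions and vary by version):

```python
# Assumes `pip install ollama`, a running local Ollama server, and `ollama pull llama3`.
import ollama

# One-shot chat request against a locally managed model.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize what Ollama does in one sentence."}],
)
print(response["message"]["content"])

# Streaming mode: chunks are yielded as they are generated.
for chunk in ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "List three uses for a local LLM."}],
    stream=True,
):
    print(chunk["message"]["content"], end="", flush=True)
```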
  • 2
    GPT4All

    Run Local LLMs on Any Device. Open-source

    ...This project also supports Python integrations for easy automation and customization. GPT4All is ideal for individuals and businesses seeking private, offline access to powerful LLMs.
    Downloads: 199 This Week
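
    A rough sketch of the Python bindings, assuming the gpt4all package is installed; the model filename below is an assumption and is downloaded on first use:

```python
# Assumes `pip install gpt4all`; the model file name is an example and is
# fetched automatically on first use if it is not already cached locally.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

# chat_session() keeps conversational context between generate() calls.
with model.chat_session():
    reply = model.generate("Why would someone run an LLM fully offline?", max_tokens=200)
    print(reply)
```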
  • 3
    LangChain

    ⚡ Building applications with LLMs through composability ⚡

    Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge. This library is aimed at assisting in the development of those types of applications.
    Downloads: 11 This Week
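
    A minimal composability sketch in the LangChain expression-language style, assuming the langchain-core and langchain-openai packages and an OPENAI_API_KEY in the environment (the model name is an assumption):

```python
# Assumes `pip install langchain-core langchain-openai` and OPENAI_API_KEY set.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Compose a prompt, a chat model, and an output parser into a single runnable chain.
prompt = ChatPromptTemplate.from_template("Explain {topic} to a new developer in two sentences.")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

print(chain.invoke({"topic": "retrieval-augmented generation"}))
```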
  • 4
    bitnet.cpp

    Official inference framework for 1-bit LLMs

    bitnet.cpp is the official open-source inference framework and ecosystem designed to enable ultra-efficient execution of 1-bit large language models (LLMs), which quantize most model parameters to ternary values (-1, 0, +1) while maintaining competitive performance with full-precision counterparts. At its core is bitnet.cpp, a highly optimized C++ backend that supports fast, low-memory inference on both CPUs and GPUs, enabling models such as BitNet b1.58 to run without requiring enormous...
    Downloads: 7 This Week
  • 5
    Burr

    Build applications that make decisions. Chatbots, agents, simulations

    Burr makes it easy to develop applications that make decisions (chatbots, agents, simulations, etc.) from simple Python building blocks. Burr works well for any application that uses LLMs and can integrate with any of your favorite frameworks. Burr includes a UI that can track, monitor, and trace your system in real time, along with pluggable persisters (e.g. for memory) to save and load application state.
    Downloads: 0 This Week
  • 6
    PydanticAI

    Agent Framework / shim to use Pydantic with LLMs

    When I first found FastAPI, I got it immediately. I was excited to find something so innovative and ergonomic built on Pydantic. Virtually every Agent Framework and LLM library in Python uses Pydantic, but when we began to use LLMs in Pydantic Logfire, I couldn't find anything that gave me the same feeling. PydanticAI is a Python Agent Framework designed to make it less painful to build production-grade applications with Generative AI. Built by the team behind Pydantic (the validation layer of the OpenAI SDK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor, and many more).
    Downloads: 1 This Week
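
    A small sketch of the typed-output agent pattern, assuming the pydantic-ai package and an OpenAI key; the output-type parameter and result attribute have been renamed across releases, so treat the exact names as assumptions:

```python
# Assumes `pip install pydantic-ai` and OPENAI_API_KEY set. Older releases used
# result_type= and result.data instead of output_type= and result.output.
from pydantic import BaseModel
from pydantic_ai import Agent

class CityAnswer(BaseModel):
    city: str
    country: str

agent = Agent("openai:gpt-4o-mini", output_type=CityAnswer)
result = agent.run_sync("Where were the 2012 Summer Olympics held?")
print(result.output)  # a validated CityAnswer instance
```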
  • 7
    Backtrack Sampler

    An easy-to-understand framework for LLM samplers

    Backtrack Sampler is a framework designed for experimenting with custom sampling strategies for language models (LLMs), enabling the ability to rewind and revise generated tokens. It allows developers to create and test their own token generation strategies by providing a base structure for manipulating logits and probabilities, making it a flexible tool for those interested in fine-tuning the behavior of LLMs.
    Downloads: 0 This Week
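
    The library's own strategy classes are not reproduced here; as a generic illustration of what manipulating logits and probabilities during sampling looks like, the hypothetical sketch below bans some token ids and re-normalizes before drawing a token:

```python
# Generic illustration of a custom sampling step (not Backtrack Sampler's API):
# mask banned token ids, apply temperature, then sample from the distribution.
import numpy as np

def sample_with_ban(logits: np.ndarray, banned_ids: list[int], temperature: float = 0.8) -> int:
    logits = logits.copy()
    logits[banned_ids] = -np.inf          # banned tokens get zero probability
    logits = logits / temperature         # sharpen or flatten the distribution
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

print(sample_with_ban(np.random.randn(32), banned_ids=[0, 3]))
```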
  • 8
    LLM Council

    LLM Council works together to answer your hardest questions

    LLM Council is a creative open-source web application by Andrej Karpathy that lets you consult multiple large language models together to answer questions more reliably than querying a single model. Instead of relying on one provider, this application sends your query simultaneously to several LLMs supported via OpenRouter, collects each model’s independent response, and then orchestrates a multi-stage evaluation where the models critique and rank each other’s outputs anonymously. After this...
    Downloads: 1 This Week
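
    The project's own code is not reproduced here, but the first "council" stage (fanning the same question out to several models through OpenRouter's OpenAI-compatible endpoint) looks roughly like the sketch below; the model slugs and environment variable are assumptions:

```python
# Rough sketch of the fan-out stage only (not the project's actual code).
# Assumes `pip install openai` and an OPENROUTER_API_KEY in the environment.
import os
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key=os.environ["OPENROUTER_API_KEY"])
council = ["openai/gpt-4o-mini", "anthropic/claude-3.5-haiku", "google/gemini-flash-1.5"]
question = "What are the trade-offs of running LLMs locally?"

# Collect one independent answer per council member before any cross-review stage.
answers = {
    model: client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": question}]
    ).choices[0].message.content
    for model in council
}
for model, answer in answers.items():
    print(f"--- {model} ---\n{answer}\n")
```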
  • 9
    LangExtract

    A Python library for extracting structured information

    LangExtract is a Python library developed by Google that leverages large language models (LLMs) to extract structured information from unstructured text—such as clinical notes, research papers, or literary works—based on user-defined instructions. It is designed to transform free-form text into reliable, schema-constrained data while maintaining traceability back to the source material.
    Downloads: 0 This Week
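
    A rough sketch of the prompt-plus-examples pattern the library is built around, assuming the langextract package and a Gemini API key; the function, class, and parameter names below are assumptions rather than a verified API reference:

```python
# Sketch under assumptions: `pip install langextract` plus a Gemini API key;
# the names used here (lx.extract, ExampleData, Extraction, model_id) are assumptions.
import langextract as lx

examples = [
    lx.data.ExampleData(
        text="Aspirin 81 mg daily for cardioprotection.",
        extractions=[
            lx.data.Extraction(
                extraction_class="medication",
                extraction_text="Aspirin",
                attributes={"dose": "81 mg", "frequency": "daily"},
            )
        ],
    )
]

result = lx.extract(
    text_or_documents="Patient started on metformin 500 mg twice daily.",
    prompt_description="Extract medications with their dose and frequency.",
    examples=examples,
    model_id="gemini-2.5-flash",
)
print(result.extractions)
```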
  • 10
    Learn AI Engineering

    Learn AI and LLMs from scratch using free resources

    Learn AI Engineering is a learning path for AI engineering that consolidates high-quality, free resources across the full stack: math, Python foundations, machine learning, deep learning, LLMs, agents, tooling, and deployment. Rather than a loose bookmark list, it organizes topics into a progression so learners can start from fundamentals and move toward practical, production-oriented skills. It mixes courses, articles, code labs, and videos, emphasizing materials that teach both concepts and hands-on implementation. ...
    Downloads: 0 This Week
  • 11
    Raglite

    RAGLite is a Python toolkit for Retrieval-Augmented Generation

    Raglite is a lightweight framework for building Retrieval-Augmented Generation (RAG) pipelines with minimal configuration. It connects large language models to vector databases for context-aware responses, enabling developers to prototype and deploy RAG systems quickly. Raglite focuses on simplicity and modularity for fast experimentation.
    Downloads: 0 This Week
  • 12
    Parsera

    Lightweight library for scraping websites with LLMs

    Scrape data from any website with only a link and column descriptions. Parsera is a tool designed to scrape web content, specifically handling poorly structured or messy websites.
    Downloads: 2 This Week
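
    A brief sketch of the link-plus-column-descriptions idea, assuming the parsera package and an LLM provider key; the class name, run() signature, and elements format are assumptions:

```python
# Sketch under assumptions: `pip install parsera` plus an LLM provider key;
# the Parsera class and run(url=..., elements=...) call shape are assumptions.
from parsera import Parsera

scraper = Parsera()
result = scraper.run(
    url="https://news.ycombinator.com/",
    elements={
        "Title": "Title of the story",
        "Points": "Number of points the story has",
    },
)
print(result)
```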
  • 13
    Semantix

    Non-Pydantic, Non-JSON Schema, efficient AutoPrompting

    Semantix empowers developers to infuse meaning into their code through enhanced variable typing (semantic typing). By leveraging the power of large language models (LLMs) behind the scenes, Semantix transforms ordinary functions into intelligent, context-aware operations without explicit LLM calls.
    Downloads: 0 This Week
  • 14
    Atropos

    Language Model Reinforcement Learning Environments framework

    Atropos is a comprehensive open-source framework for reinforcement learning (RL) environments tailored specifically to work with large language models (LLMs). Designed as a scalable ecosystem of environment microservices, Atropos allows researchers and developers to collect, evaluate, and manage trajectories (sequences of actions and outcomes) generated by LLMs across a variety of tasks—from static dataset benchmarks to dynamic interactive games and real-world scenario environments. It...
    Downloads: 0 This Week
  • 15
    Heretic

    Fully automatic censorship removal for language models

    Heretic is an open-source Python tool that automatically removes the built-in censorship or “safety alignment” from transformer-based language models so they respond to a broader range of prompts with fewer refusals. It works by applying directional ablation techniques and a parameter optimization strategy to adjust internal model behaviors without expensive post-training or altering the core capabilities. Designed for researchers and advanced users, Heretic makes it possible to study and...
    Downloads: 5 This Week
  • 16
    Chonkie

    The no-nonsense RAG chunking library

    Chonkie is a lightweight, fast chunking library for Retrieval-Augmented Generation (RAG) pipelines, providing chunkers that split documents into retrieval-ready pieces for embedding, indexing, and search.
    Downloads: 0 This Week
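
    A short sketch of the chunking workflow, assuming the chonkie package; the TokenChunker class and the chunk attributes shown are assumptions about the interface:

```python
# Sketch under assumptions: `pip install chonkie`; TokenChunker and the
# chunk attributes used below are assumptions about the library's interface.
from chonkie import TokenChunker

chunker = TokenChunker(chunk_size=256, chunk_overlap=32)
chunks = chunker.chunk("A long document destined for a RAG index goes here...")

for chunk in chunks:
    # Each chunk carries its text plus token accounting for downstream embedding.
    print(chunk.token_count, chunk.text[:60])
```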
  • 17
    Open Vision Agents by Stream

    Build Vision Agents quickly with any model or video provider

    Open Vision Agents by Stream is an open source framework from Stream for building real time, multimodal AI agents that watch, listen, and respond to live video streams. It focuses on combining video understanding models, such as YOLO and Roboflow based detectors, with real time large language models like OpenAI Realtime and Gemini Live to create interactive experiences. The framework uses Stream’s ultra low latency edge network so agents can join sessions quickly and maintain very low audio...
    Downloads: 3 This Week
  • 18
    Speech-AI-Forge

    Speech-AI-Forge is a project developed around TTS generation models

    Speech-AI-Forge is a full-stack project built around modern text-to-speech generation models, providing both an API server and a Gradio-based web UI for interactive use. At its core, it acts as a hub that wires together multiple speech-related capabilities, including TTS, speech-to-text and LLM-based control flows, behind a consistent interface. The system is designed to be deployed in several ways: you can try it online via hosted demos, spin it up in a one-click Colab environment, run it...
    Downloads: 3 This Week
  • 19
    rLLM

    Democratizing Reinforcement Learning for LLMs

    rLLM is an open-source framework for building and post-training language agents via reinforcement learning — that is, using reinforcement signals to fine-tune or adapt language models (LLMs) into customizable agents for real-world tasks. With rLLM, developers can define custom “agents” and “environments,” and then train those agents via reinforcement learning workflows, potentially surpassing what vanilla fine-tuning or supervised learning might provide. The project is designed to...
    Downloads: 0 This Week
  • 20
    Evals

    Evals is a framework for evaluating LLMs and LLM systems

    The openai/evals repository is a framework and registry for evaluating large language models and systems built with LLMs. It’s designed to let you define “evals” (evaluation tasks) in a structured way and run them against different models or agents, with the ability to score, compare, and analyze results. The framework supports templated YAML eval definitions, solver-based evaluations, custom metrics, and composition of multi-step evaluations. It includes utilities and APIs to plug in...
    Downloads: 0 This Week
  • 21
    Free LLM API resources

    A list of free LLM inference resources accessible via API

    The Free LLM API resources repository, curated by cheahjs, is a community-driven index of free and open API endpoints, tools, datasets, runtimes, and utilities for working with large language models (LLMs) without cost barriers. It collects a wide range of resources, including hosted free-tier LLM APIs, documentation links, public model endpoints, open datasets useful for training or evaluation, tooling integrations, and examples showing how to interact with these services in real applications....
    Downloads: 12 This Week
  • 22
    bu-agent-sdk

    An agent is just a for-loop

    The bu-agent-sdk from the Browser Use project is a minimalistic Python framework that defines an AI agent as a simple loop of tool calls, aiming to keep abstractions low so developers can build autonomous agents without unnecessary complexity. At its core, the agent loop repeatedly queries a large language model, interprets its output, and executes defined “tools” — functions annotated with task names — to perform actions, allowing the agent to complete tasks like arithmetic,...
    Downloads: 0 This Week
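
    The SDK's own decorators are not reproduced here; the generic sketch below only illustrates the "agent is just a for-loop" idea (call a model, dispatch any requested tool, feed the result back, repeat), with llm_step as a hypothetical stand-in for the model call:

```python
# Generic illustration of the "agent is just a for-loop" idea (not bu-agent-sdk's API).
# llm_step is a hypothetical stand-in for a model call that may request a tool.

def add(a: float, b: float) -> float:
    return a + b

TOOLS = {"add": add}

def llm_step(history: list[dict]) -> dict:
    # Placeholder: a real implementation would call an LLM and parse its reply.
    return {"tool": "add", "args": {"a": 2, "b": 3}} if len(history) == 1 else {"final": "2 + 3 = 5"}

def run_agent(task: str, max_turns: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):                          # the whole agent is this loop
        decision = llm_step(history)
        if "final" in decision:                         # the model chose to answer directly
            return decision["final"]
        result = TOOLS[decision["tool"]](**decision["args"])  # execute the requested tool
        history.append({"role": "tool", "content": str(result)})
    return "No answer within the turn budget."

print(run_agent("What is 2 + 3?"))
```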
  • 23
    Matrix

    Multi-Agent daTa geneRation Infra and eXperimentation framework

    Matrix is a distributed, large-scale engine for multi-agent synthetic data generation and experiments: it provides the infrastructure to run thousands of “agentic” workflows concurrently (e.g. multiple LLMs interacting, reasoning, generating content, data-processing pipelines) by leveraging distributed computing (like Ray + cluster management). The idea is to treat data generation as a “data-to-data” transformation: each input item defines a task, and the runtime orchestrates asynchronous,...
    Downloads: 0 This Week
  • 24
    AI Runner

    Offline inference engine for art, real-time voice conversations

    AI Runner is an offline inference engine designed to run a collection of AI workloads on your own machine, including image generation for art, real-time voice conversations, LLM-powered chatbots and automated workflows. It is implemented as a desktop-oriented Python application and emphasizes privacy and self-hosting, allowing users to work with text-to-speech, speech-to-text, text-to-image and multimodal models without sending data to external services. At the core of its LLM stack is a...
    Downloads: 8 This Week
  • 25
    Reader 3

    Quick illustration of how one can easily read books together with LLMs

    This project is a minimalist, self-hosted EPUB reader designed to help users browse and read EPUB books one chapter at a time through a lightweight local server, making it especially easy to extract or work with chapters in external tools like large language models. It was created primarily as a simple demonstration of how to combine local book reading with LLM workflows without heavy dependencies or complicated setup, and it runs with just a small Python script and a basic HTTP server. The...
    Downloads: 0 This Week