39 projects for "sql server for python" with 2 filters applied:

  • 1
    OAGI Python SDK

    Python SDK for the Computer Use model Lux, developed by OpenAGI

    OAGI Python SDK is a Python client library for the Lux computer-use model that turns Lux into a programmable automation layer for operating human-facing software via vision and actions. It exposes the OAGI API in an ergonomic way, letting you trigger Lux in three main modes: Tasker for precise scripted sequences, Actor for fast one-shot tasks, and Thinker for open-ended, multi-step objectives.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 2
    Anthropic SDK Python

    Provides convenient access to the Anthropic REST API from any Python 3 application

    The anthropic-sdk-python repository is the official Python client library for interacting with the Anthropic (Claude) REST API. It is designed to provide a user-friendly, type-safe, and asynchronous/synchronous capable interface for making chat/completion requests to models like Claude. The library includes definitions for all request and response parameters using Python typed objects, automatically handles serialization and deserialization, and wraps HTTP logic (timeouts, retries, error mapping) so that developers can call the API in a clean, high-level way. ...
    Downloads: 7 This Week
    Last Update:
    See Project
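    A minimal sketch of a Messages API call with this SDK; the model name is an example to substitute, and ANTHROPIC_API_KEY must be set in the environment:

    ```python
    from anthropic import Anthropic

    client = Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # example model alias; substitute a current model
        max_tokens=256,
        messages=[{"role": "user", "content": "Summarize what an SDK is in one sentence."}],
    )
    print(message.content[0].text)  # first content block of the response is the generated text
    ```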
  • 3
    ArXiv MCP Server

    A Model Context Protocol server for searching and analyzing arXiv

    arxiv-mcp-server bridges AI assistants and the arXiv repository through a clean MCP interface, enabling search, metadata retrieval, and content access without bespoke scraping. With simple tools like “search” and “fetch,” an agent can find papers, pull abstracts, and download PDFs for downstream summarization or analysis. The project includes packaging and CI to publish to PyPI, plus tests and linting for reliability. Issue threads show feature requests such as extracting embedded LaTeX and...
    Downloads: 0 This Week
    Last Update:
    See Project
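    A hedged sketch of driving an MCP server like this one from Python with the official mcp client SDK over stdio; the launch command and the "search" tool name and arguments are assumptions based on the description above:

    ```python
    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Launch command is an assumption; adjust to however arxiv-mcp-server is installed locally.
    server = StdioServerParameters(command="uv", args=["tool", "run", "arxiv-mcp-server"])

    async def main():
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                print("available tools:", [t.name for t in tools.tools])
                # Tool name and argument shape are assumptions taken from the description above.
                result = await session.call_tool("search", {"query": "retrieval augmented generation"})
                print(result.content)

    asyncio.run(main())
    ```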
  • 4
    Dash Data Agent

    Self-learning data agent that grounds its answers in layers of content

    Dash is a self-learning data agent built by the Agno AI community that generates grounded answers to English queries over structured data by synthesizing SQL and reasoning based on six layers of context, improving automatically with each run. It sidesteps common limitations of simple text-to-SQL agents by incorporating multiple context layers — including schema structure, human annotations, known query patterns, institutional knowledge from docs, machine-discovered error patterns, and live...
    Downloads: 2 This Week
    Last Update:
    See Project
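    Dash's own API is not shown here; the following is only a generic, hypothetical sketch of the layered-context text-to-SQL idea the description refers to, where schema, annotations, and known query patterns are folded into the prompt before an LLM is asked for SQL (the OpenAI client is an arbitrary choice, not part of Dash):

    ```python
    # Hypothetical illustration of layered-context text-to-SQL; not Dash's actual API.
    from openai import OpenAI

    context_layers = {
        "schema": "orders(id, customer_id, total, created_at); customers(id, name, region)",
        "annotations": "total is stored in USD cents; created_at is UTC",
        "known_patterns": "revenue questions usually GROUP BY date_trunc('month', created_at)",
    }

    def build_prompt(question: str) -> str:
        # Concatenate each context layer under a labeled header, then append the question.
        layers = "\n".join(f"[{name}]\n{text}" for name, text in context_layers.items())
        return f"{layers}\n\nWrite a single SQL query answering: {question}\nReturn only SQL."

    client = OpenAI()  # requires OPENAI_API_KEY
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": build_prompt("Monthly revenue by region in 2024?")}],
    )
    print(resp.choices[0].message.content)
    ```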
  • 5
    Memobase

    Fast backend for long-term AI user memory via structured profiles

    ...The system focuses on three principal performance metrics: high search performance, reduced large language model (LLM) costs through batch processing techniques, and low latency with minimal SQL operations. Memobase supports integration with existing LLM workflows via APIs and SDKs (including Python, Node, and Go), making it easy to adopt within diverse application stacks.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 6
    WhisperLive

    A nearly-live implementation of OpenAI's Whisper

    WhisperLive is a “nearly live” implementation of OpenAI’s Whisper model focused on real-time transcription. It runs as a server–client system in which the server hosts a Whisper backend and clients stream audio to be transcribed with very low delay. The project supports multiple inference backends, including Faster-Whisper, NVIDIA TensorRT, and OpenVINO, allowing you to target GPUs and different CPU architectures efficiently. It can handle microphone input, pre-recorded audio files, and...
    Downloads: 7 This Week
    Last Update:
    See Project
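    A hedged sketch of the client side described above, assuming a WhisperLive server is already running locally on port 9090; the class name, import path, and constructor arguments follow the project's documented pattern but should be checked against the current release:

    ```python
    from whisper_live.client import TranscriptionClient  # import path assumed from the project docs

    # Connect to a locally running WhisperLive server; host, port, and options are assumptions.
    client = TranscriptionClient("localhost", 9090, lang="en", translate=False, model="small")

    client()                 # stream from the microphone and print near-live transcripts
    # client("meeting.wav")  # or transcribe a pre-recorded audio file instead
    ```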
  • 7
    RealtimeSTT

    A robust, efficient, low-latency speech-to-text library

    RealtimeSTT is a Python-based realtime speech-to-text engine emphasizing low latency, wake-word detection, voice activity detection, and automatic speech segmentation. It provides asynchronous callbacks, nanosecond-precision timestamps, and CLI tools, making it suitable for building voice assistants, meeting transcribers, and live caption systems.
    Downloads: 3 This Week
    Last Update:
    See Project
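    A minimal usage sketch following the pattern in the project's README; the AudioToTextRecorder defaults (microphone device, Whisper model) are assumptions:

    ```python
    from RealtimeSTT import AudioToTextRecorder

    def on_text(text):
        # Called with each finalized utterance once voice activity detection decides you stopped speaking.
        print(text)

    if __name__ == "__main__":
        recorder = AudioToTextRecorder()   # default microphone and a default Whisper model
        print("Speak now...")
        while True:
            recorder.text(on_text)         # blocks until a segment is transcribed, then invokes the callback
    ```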
  • 8
    mcpo

    A simple, secure MCP-to-OpenAPI proxy server

    mcpo is a minimal bridge that exposes any MCP tool as an OpenAPI-compatible HTTP server. Instead of writing glue code, you point mcpo at an MCP server command and it generates REST endpoints and an OpenAPI spec that other systems (or LLM agent frameworks) can call immediately. This design lets you reuse a growing library of MCP servers with platforms that only understand HTTP+OpenAPI, unifying tool access across ecosystems. The project emphasizes “dead-simple” setup and pairs with Open WebUI...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    Reader 3

    Quick illustration of how one can easily read books together with LLMs

    This project is a minimalist, self-hosted EPUB reader designed to help users browse and read EPUB books one chapter at a time through a lightweight local server, making it especially easy to extract or work with chapters in external tools like large language models. It was created primarily as a simple demonstration of how to combine local book reading with LLM workflows without heavy dependencies or complicated setup, and it runs with just a small Python script and a basic HTTP server.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    MemMachine

    Universal memory layer for AI Agents

    ...Unlike ephemeral LLM prompt state, MemMachine supports distinct memory types—short-term conversational context, long-term persistent knowledge, and profile memory for personalized facts—persisted in optimized stores (e.g., graph databases for episodic lines of reasoning and SQL for user facts) to support robust, context-aware intelligence in agents. It offers flexible APIs, a Python SDK, REST interfaces, and MCP (Model Context Protocol) connectivity to integrate seamlessly with agent frameworks, receiving and storing memories over time to boost relevance, continuity, and tailored behavior.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    OpenTinker

    OpenTinker is an RL-as-a-Service infrastructure for foundation models

    OpenTinker is an open-source Reinforcement Learning-as-a-Service (RLaaS) infrastructure intended to democratize reinforcement learning for large language model (LLM) agents. Traditional RL setups can be monolithic and difficult to configure, but OpenTinker separates concerns across agent definition, environment interaction, and execution, which lets developers focus on defining the logic of agents and environments separately from how training and inference are run. It introduces a...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 12
    Pocket TTS

    A TTS that fits in your CPU (and pocket)

    Pocket TTS is a lightweight text-to-speech project designed to run efficiently on CPUs, targeting developers who want local speech generation without depending on GPUs or hosted web APIs. It is built to feel practical in everyday applications, where installation and usage should be as simple as adding a dependency and calling a function. The project focuses on keeping the runtime footprint manageable while still producing natural-sounding speech, which makes it attractive for offline tools,...
    Downloads: 8 This Week
    Last Update:
    See Project
  • 13
    FlowLens MCP

    Open-source MCP server that gives your coding agent replayable browser context

    FlowLens MCP Server is an open-source tool designed to give AI-powered coding agents (like Claude Code, Cursor, GitHub Copilot / Codex, and others) full, replayable browser context to dramatically improve debugging, bug reporting, and regression testing for web applications. It works together with a companion browser extension: when a user reproduces a bug or a complicated UI interaction, the extension captures a rich session log, including screen/video recording, network traffic, console...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    Sopro TTS

    A lightweight text-to-speech model with zero-shot voice cloning

    ...The model is designed to work with a small set of dependencies and to be accessible for developers who want offline TTS with customizable voice style, including options for streaming or non-streaming generation modes. Users can install it with standard Python tools, run a demo server locally, and experiment with CLI or Python API usage for producing synthetic speech.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    Speech-AI-Forge

    Speech-AI-Forge is a project developed around modern TTS generation models

    Speech-AI-Forge is a full-stack project built around modern text-to-speech generation models, providing both an API server and a Gradio-based web UI for interactive use. At its core, it acts as a hub that wires together multiple speech-related capabilities, including TTS, speech-to-text and LLM-based control flows, behind a consistent interface. The system is designed to be deployed in several ways: you can try it online via hosted demos, spin it up in a one-click Colab environment, run it...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 16
    LlamaIndex

    Central interface to connect your LLMs with external data

    LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. LlamaIndex is a simple, flexible interface between your external data and LLMs. It provides the following tools in an easy-to-use fashion: indices over your unstructured and structured data for use with LLMs. These indices help abstract away common boilerplate and pain points of in-context learning, such as dealing with prompt limitations (e.g. 4096 tokens for Davinci) when...
    Downloads: 2 This Week
    Last Update:
    See Project
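    A minimal sketch of the core indexing-and-querying loop; it assumes the llama-index package is installed, an OpenAI API key is available for the default LLM and embedding model, and a local ./data folder holds the documents:

    ```python
    from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

    # Load unstructured files from ./data, build a vector index over them, and query it with an LLM.
    documents = SimpleDirectoryReader("data").load_data()
    index = VectorStoreIndex.from_documents(documents)

    query_engine = index.as_query_engine()
    response = query_engine.query("What are the key findings in these documents?")
    print(response)
    ```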
  • 17
    mcp-use

    A solution to build and deploy MCP agents and applications

    ...It enables connection to multiple MCP servers, each exposing specific tool capabilities like browsing, file operations, or specialized integrations, through a unified MCPClient. Developers can create custom agents (via MCPAgent) that dynamically select the most appropriate server for each task using configurable pipelines or a built-in server manager. It simplifies authentication, access control, audit logging, observability, sandboxed runtime environments, and deployment workflows, whether self-hosted or managed, making MCP development production-ready. With integrations for popular frameworks like LangChain (Python) and LangChain.js (TypeScript), mcp-use accelerates the creation of tool-enabled AI agents.
    Downloads: 0 This Week
    Last Update:
    See Project
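    A hedged sketch using the MCPClient and MCPAgent classes named in the description; the constructor names, the example server configuration, and the LangChain model are assumptions to be checked against the project's README:

    ```python
    import asyncio
    from langchain_openai import ChatOpenAI          # any LangChain chat model should work here
    from mcp_use import MCPAgent, MCPClient          # class names taken from the description above

    # Hypothetical MCP server configuration; replace with the servers you actually run.
    config = {"mcpServers": {"playwright": {"command": "npx", "args": ["@playwright/mcp@latest"]}}}

    async def main():
        client = MCPClient.from_dict(config)         # constructor assumed
        agent = MCPAgent(llm=ChatOpenAI(model="gpt-4o"), client=client, max_steps=20)
        result = await agent.run("Open https://example.com and summarize the page")
        print(result)

    asyncio.run(main())
    ```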
  • 18
    HY-MT

    Hunyuan Translation Model Version 1.5

    HY-MT (Hunyuan Translation) is a high-quality multilingual machine translation model suite developed to support mutual translation across dozens of languages with strong performance even at smaller model scales. It ships with both a 1.8B-parameter model and a larger 7B model, the latter optimized not only for direct translation but also for formatted and contextualized output, allowing better handling of terminology and mixed-language content. The project emphasizes both speed and...
    Downloads: 2 This Week
    Last Update:
    See Project
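    The exact checkpoint names are not listed here; the sketch below shows only the standard Hugging Face Transformers chat-template pattern such a translation model would typically be loaded with, and the model id is a placeholder assumption:

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "tencent/Hunyuan-MT-7B"  # placeholder; substitute the actual HY-MT 1.5 checkpoint name

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)  # remote code may be required
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", trust_remote_code=True)

    messages = [{"role": "user", "content": "Translate the following sentence into English: 今天天气很好。"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=64)
    # Strip the prompt tokens and decode only the newly generated translation.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```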
  • 19
    Kimi Code CLI

    Kimi Code CLI is your next CLI agent

    Kimi CLI is a command-line AI agent that brings an intelligent software development assistant directly into your terminal, helping you with coding tasks, shell operations, and workflow automation without leaving your command prompt. It supports an interactive shell-like user interface where you can chat with the agent, request code edits, run shell commands, and receive contextual suggestions as you work, creating a seamless blend of AI-augmented development and traditional terminal usage....
    Downloads: 1 This Week
    Last Update:
    See Project
  • 20
    FastAPI-MCP

    Expose your FastAPI endpoints as Model Context Protocol (MCP) tools

    fastapi_mcp lets you expose existing FastAPI endpoints as Model Context Protocol (MCP) tools with minimal setup, so AI agents can call your app as first-class tools. Rather than acting as a thin converter, it’s built as a native FastAPI extension that understands dependency injection, so you can reuse Depends() for authentication and authorization across your MCP tools. The server speaks directly to your app over its ASGI interface, avoiding extra HTTP hops between the MCP layer and your...
    Downloads: 0 This Week
    Last Update:
    See Project
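    A minimal sketch of mounting MCP on an existing app, following the pattern the description outlines; the FastApiMCP class and mount() method names are taken from the project's documented usage and should be verified against the installed version, and the auth dependency is hypothetical:

    ```python
    from fastapi import Depends, FastAPI, Header, HTTPException
    from fastapi_mcp import FastApiMCP   # class name assumed from the project's documented usage

    app = FastAPI()

    def verify_token(authorization: str = Header(default="")):
        # Hypothetical auth dependency, reused unchanged by the MCP-exposed tools via Depends().
        if authorization != "Bearer secret-token":
            raise HTTPException(status_code=401, detail="unauthorized")

    @app.get("/items/{item_id}", dependencies=[Depends(verify_token)])
    def read_item(item_id: int):
        return {"item_id": item_id, "name": f"item-{item_id}"}

    mcp = FastApiMCP(app)   # wraps the existing FastAPI routes as MCP tools
    mcp.mount()             # serves the MCP endpoint from the same ASGI app, no extra HTTP hop
    ```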
  • 21
    ChatGPT Retrieval Plugin

    The ChatGPT Retrieval Plugin lets you easily find personal documents

    The chatgpt-retrieval-plugin repository implements a semantic retrieval backend that lets ChatGPT (or GPT-powered tools) access private or organizational documents in natural language by combining vector search, embedding models, and plugin infrastructure. It can serve as a custom GPT plugin or function-calling backend so that a chat session can “look up” relevant documents based on user queries, inject those results into context, and respond more knowledgeably about a private knowledge...
    Downloads: 0 This Week
    Last Update:
    See Project
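    A hedged sketch of querying a locally deployed instance over its REST interface; the /query route and request shape follow the repository's documented API, while the URL, bearer token, and response handling are placeholders to adapt:

    ```python
    import requests

    PLUGIN_URL = "http://localhost:8000"   # placeholder for your deployment
    BEARER_TOKEN = "replace-me"            # token configured when the plugin was deployed

    response = requests.post(
        f"{PLUGIN_URL}/query",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        json={"queries": [{"query": "What is our parental leave policy?", "top_k": 3}]},
        timeout=30,
    )
    response.raise_for_status()

    # Each query in the request gets back a list of matching document chunks with scores.
    for query_result in response.json()["results"]:
        for chunk in query_result["results"]:
            print(chunk["text"][:120])
    ```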
  • 22
    OuteTTS

    Interface for OuteTTS models

    ...It provides a high-level Interface API that wraps model configuration, speaker handling, and audio generation so you can focus on integrating speech into your application rather than wiring up low-level engines. The project supports multiple backends including llama.cpp (Python bindings and server), Hugging Face Transformers, ExLlamaV2, VLLM and a JavaScript interface via Transformers.js, allowing it to run on CPUs, NVIDIA CUDA GPUs, AMD ROCm, Vulkan-capable GPUs, and Apple Metal. It also includes a notion of speaker profiles: you can create a speaker from a short audio sample, save it as JSON, and reuse it for consistent voice identity across generations and sessions. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    Joget

    AI Powered Open Source Platform to Easily Build Enterprise Web Apps

    Joget offers an open-source, AI-powered platform that converges no-code/low-code development with AI to rapidly build and customize enterprise applications at scale. By combining AI with visual app builders—not raw code—Joget makes app generation faster, safer, and more accessible for everyone. With Generative AI and Agentic AI capabilities, Joget Intelligence enables organizations to automate and enhance processes while maintaining oversight and compliance. Unlike typical AI code...
    Downloads: 86 This Week
    Last Update:
    See Project
  • 24
    FastViT

    This repository contains the official implementation of the FastViT research paper

    FastViT is an efficient vision backbone family that blends convolutional inductive biases with transformer capacity to deliver strong accuracy at mobile and real-time inference budgets. Its design pursues a favorable latency-accuracy Pareto curve, targeting edge devices and server scenarios where throughput and tail latency matter. The models use lightweight attention and carefully engineered blocks to minimize token mixing costs while preserving representation power. Training and inference...
    Downloads: 0 This Week
    Last Update:
    See Project
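    FastViT backbones are commonly available through the timm library; a hedged sketch of running one pretrained variant for classification (the model name and input resolution are assumptions — check timm.list_models("fastvit*", pretrained=True)):

    ```python
    import timm
    import torch

    # Model name assumed; list available variants with timm.list_models("fastvit*", pretrained=True).
    model = timm.create_model("fastvit_t8.apple_in1k", pretrained=True).eval()

    x = torch.randn(1, 3, 256, 256)   # FastViT checkpoints are typically trained at 256x256
    with torch.no_grad():
        logits = model(x)
    print(logits.shape)               # expected: torch.Size([1, 1000]) for an ImageNet-1k head
    ```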
  • 25
    GLM-130B

    GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)

    GLM-130B is an open bilingual (English and Chinese) dense language model with 130 billion parameters, released by the Tsinghua KEG Lab and collaborators as part of the General Language Model (GLM) series. It is designed for large-scale inference and supports both left-to-right generation and blank filling, making it versatile across NLP tasks. Trained on over 400 billion tokens (200B English, 200B Chinese), it achieves performance surpassing GPT-3 175B, OPT-175B, and BLOOM-176B on multiple...
    Downloads: 6 This Week
    Last Update:
    See Project