Showing 150 open source projects for "http server python"

  • 1
    MCP ZoomEye

    A Model Context Protocol server that provides network asset info

    The ZoomEye MCP Server is a Model Context Protocol server that provides network asset information based on query conditions, allowing Large Language Models to obtain data by querying ZoomEye using dorks and other search parameters.
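
    For a rough idea of how a client talks to an MCP server like this one, here is a minimal sketch using the official mcp Python SDK. The launch command and the tool listing are illustrative assumptions, not details taken from this project's documentation.

        import asyncio

        from mcp import ClientSession, StdioServerParameters
        from mcp.client.stdio import stdio_client

        async def main() -> None:
            # Placeholder launch command; use the one from the project's README.
            params = StdioServerParameters(command="uvx", args=["mcp-server-zoomeye"])
            async with stdio_client(params) as (read, write):
                async with ClientSession(read, write) as session:
                    await session.initialize()
                    # Discover which tools the server exposes before calling any of them.
                    tools = await session.list_tools()
                    print([tool.name for tool in tools.tools])

        asyncio.run(main())
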
  • 2
    LoRAX

    Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs

    LoRAX is a multi-LoRA (Low-Rank Adaptation) inference server that scales to thousands of fine-tuned Large Language Models (LLMs). It enables efficient deployment and management of numerous fine-tuned models, facilitating scalable AI applications. LoRAX is designed to handle high concurrency and provides a robust infrastructure for serving multiple LLMs simultaneously.
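
    A minimal sketch of the per-request adapter selection that multi-LoRA serving enables, assuming a LoRAX server listening locally; the URL and adapter id are placeholders, and the payload follows LoRAX's TGI-style /generate REST API.

        import requests

        resp = requests.post(
            "http://127.0.0.1:8080/generate",
            json={
                "inputs": "Explain low-rank adaptation in one sentence.",
                # adapter_id picks which fine-tuned LoRA handles this request.
                "parameters": {"max_new_tokens": 64, "adapter_id": "my-org/my-fine-tune"},
            },
            timeout=60,
        )
        resp.raise_for_status()
        print(resp.json()["generated_text"])
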
  • 3
    Hamilton DAGWorks

    Helps scientists define testable, modular, self-documenting dataflow

    Hamilton is a lightweight Python library for directed acyclic graphs (DAGs) of data transformations. Your DAG is portable; it runs anywhere Python runs, whether it's a script, notebook, Airflow pipeline, FastAPI server, etc. Your DAG is expressive; Hamilton has extensive features to define and modify the execution of a DAG (e.g., data validation, experiment tracking, remote execution). To create a DAG, write regular Python functions that specify their dependencies with their parameters...
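
    A minimal sketch of the convention the description refers to: each function becomes a DAG node, and its parameter names declare its dependencies. The module name, column names, and Builder call are illustrative and follow recent Hamilton documentation.

        # my_dataflow.py -- each function is a node; parameter names name its inputs.
        import pandas as pd

        def signups(raw: pd.DataFrame) -> pd.Series:
            return raw["signups"]

        def spend(raw: pd.DataFrame) -> pd.Series:
            return raw["spend"]

        def spend_per_signup(spend: pd.Series, signups: pd.Series) -> pd.Series:
            return spend / signups

        # run.py -- build the DAG from the module above and request an output.
        import pandas as pd
        from hamilton import driver
        import my_dataflow

        dr = driver.Builder().with_modules(my_dataflow).build()
        raw = pd.DataFrame({"signups": [10, 20], "spend": [100.0, 150.0]})
        print(dr.execute(["spend_per_signup"], inputs={"raw": raw}))
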
  • 4
    Legion MCP

    A server that helps people access and query data in databases

    The Legion MCP Server is designed to help users access and query data in databases using the Legion Query Runner, integrated with the Model Context Protocol (MCP) Python SDK. It facilitates efficient data retrieval and analysis through standardized interfaces.
  • 5
    Appfl

    Advanced Privacy-Preserving Federated Learning framework

    APPFL (Advanced Privacy-Preserving Federated Learning) is a Python framework enabling researchers to easily build and benchmark privacy-aware federated learning solutions. It supports flexible algorithm development, differential privacy, secure communications, and runs efficiently on HPC and multi-GPU setups.
  • 6
    MCP Text Editor

    Provides line-oriented text file editing capabilities

    The MCP Text Editor Server provides line-oriented text file editing capabilities through a standardized API, optimized for integration with Large Language Models (LLMs). It enables efficient partial file access, minimizing token usage while ensuring safe concurrent editing.
  • 7
    MCP Timeplus

    Execute SQL queries and manage databases seamlessly with Timeplus

    An MCP server designed for integration with Timeplus, enabling real-time data streaming and analytics through natural language interactions.
  • 8
    Langcorn

    Serving LangChain LLM apps automagically with FastAPI

    LangCorn is an API server that makes it easy to serve LangChain models and pipelines, built on FastAPI for robustness and efficiency.
  • 9
    OpenLLM

    Operating LLMs in production

    ..., CLI, our Python/JavaScript client, or any HTTP client.
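
    As a sketch of the "any HTTP client" route, assuming a recent OpenLLM release serving an OpenAI-compatible API locally; the port, path, and model name are placeholders for whatever openllm serve reports at startup.

        import requests

        resp = requests.post(
            "http://localhost:3000/v1/chat/completions",
            json={
                "model": "my-model",  # placeholder model id
                "messages": [{"role": "user", "content": "Say hello in one sentence."}],
            },
            timeout=60,
        )
        resp.raise_for_status()
        print(resp.json()["choices"][0]["message"]["content"])
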
  • 10
    SageMaker Inference Toolkit

    Serve machine learning models within a Docker container

    ... the container is deployed. Containerizing your model and code enables fast and reliable deployment of your model. The SageMaker Inference Toolkit implements a model serving stack and can be easily added to any Docker container, making it deployable to SageMaker. This library's serving stack is built on Multi Model Server, and it can serve your own models or those you trained on SageMaker using machine learning frameworks with native SageMaker support.
  • 11
    ClearML

    Streamline your ML workflow

    ClearML is an open source platform that automates and simplifies developing and managing machine learning solutions for thousands of data science teams all over the world. It is designed as an end-to-end MLOps suite allowing you to focus on developing your ML code & automation, while ClearML ensures your work is reproducible and scalable. The ClearML Python package integrates ClearML into your existing scripts with just two lines of code, and can optionally extend your experiments...
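
    The "two lines of code" mentioned above look roughly like this; the project and task names are placeholders.

        from clearml import Task

        # Once Task.init() runs, ClearML starts tracking this script's code,
        # environment, and outputs.
        task = Task.init(project_name="examples", task_name="hello clearml")

        # ... the rest of your existing training or data script runs unchanged ...
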
  • 12
    Binary Ninja MCP

    A Binary Ninja plugin and MCP server

    The Binary Ninja MCP is a plugin and bridge that integrates Binary Ninja with Large Language Model clients via the Model Context Protocol, enhancing reverse engineering workflows with AI assistance.
  • 13
    API-for-Open-LLM

    OpenAI-style API for open large language models

    API-for-Open-LLM is a lightweight API server designed for deploying and serving open large language models (LLMs), offering a simple way to integrate LLMs into applications.
  • 14
    Text Generation Inference

    Large Language Model Text Generation Inference

    Text Generation Inference is a high-performance inference server for text generation models, optimized for Hugging Face's Transformers. It is designed to serve large language models efficiently with optimizations for performance and scalability.
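
    A minimal sketch of querying a running TGI server from Python with the huggingface_hub client; the endpoint URL is a placeholder.

        from huggingface_hub import InferenceClient

        # Point the client at the TGI endpoint (it also accepts a Hub model id).
        client = InferenceClient("http://localhost:8080")
        print(client.text_generation("Write one sentence about inference servers.",
                                     max_new_tokens=48))
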
  • 15
    dstack

    Open-source tool designed to enhance the efficiency of workloads

    dstack is an open-source tool designed to enhance the efficiency of running ML workloads in any cloud (AWS, GCP, Azure, Lambda, etc). It streamlines development and deployment, reduces cloud costs, and frees users from vendor lock-in.
  • 16
    aisuite

    Simple, unified interface to multiple Generative AI providers

    aisuite makes it easy for developers to use multiple LLMs through a standardized interface. Using an interface similar to OpenAI's, you can interact with the most popular LLMs and compare the results. It is a thin wrapper around Python client libraries and allows creators to seamlessly swap out and test responses from different LLM providers without changing their code. Today, the library is primarily focused on chat...
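
    A short sketch of that unified interface, following the client and provider:model naming shown in aisuite's README; the model ids are examples, and provider API keys are assumed to be set in the environment.

        import aisuite as ai

        client = ai.Client()
        messages = [{"role": "user", "content": "Give me one fun fact about HTTP."}]

        # Swap providers by changing only the model string.
        for model in ("openai:gpt-4o-mini", "anthropic:claude-3-5-sonnet-20240620"):
            response = client.chat.completions.create(model=model, messages=messages)
            print(model, "->", response.choices[0].message.content)
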
  • 17
    Basaran

    Basaran, an open-source alternative to the OpenAI text completion API

    .... Support both decoder-only and encoder-decoder models. Detokenizer that handles surrogates and whitespace. Multi-GPU support with optional 8-bit quantization. Real-time partial progress using server-sent events. Compatible with OpenAI API and client libraries. Comes with a fancy web-based playground. Docker images are available on Docker Hub and GitHub Packages.
  • 18
    LLMStack

    No-code multi-agent framework for building LLM agents and workflows

    LLMStack is a no-code platform for building generative AI agents, workflows and chatbots, connecting them to your data and business processes. Build tailor-made generative AI agents, applications and chatbots that cater to your unique needs by chaining multiple LLMs. Seamlessly integrate your own data, internal tools and GPT-powered models without any coding experience using LLMStack's no-code builder. Trigger your AI chains from Slack or Discord. Deploy to the cloud or on-premise.
  • 19
    Swirl

    Swirl queries any number of data sources with APIs

    ... the unified results without extracting and indexing anything. It's intended for use by developers and data scientists who want to solve multi-silo search problems, from enterprise search to new monitoring & alerting solutions that push information to users continuously. Built on the Python/Django/RabbitMQ stack, SWIRL includes connectors to Apache Solr, ChatGPT, Elastic, OpenSearch, PostgreSQL and Google BigQuery, plus generic HTTP/GET/JSON with configurations for premium services.
  • 20
    Infinity

    Low-latency REST API for serving text-embeddings

    Infinity is a high-throughput, low-latency REST API for serving vector embeddings, supporting all sentence-transformer models and frameworks. Infinity is developed under MIT License. Infinity powers inference behind Gradient.ai and other Embedding API providers.
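
    A minimal sketch of calling a locally running Infinity instance over HTTP, assuming its OpenAI-compatible embeddings route; the port and model name are placeholders.

        import requests

        resp = requests.post(
            "http://localhost:7997/embeddings",
            json={
                "model": "BAAI/bge-small-en-v1.5",  # whichever model the server loaded
                "input": ["low-latency embedding serving", "http server python"],
            },
            timeout=30,
        )
        resp.raise_for_status()
        vectors = [item["embedding"] for item in resp.json()["data"]]
        print(len(vectors), "embeddings of dimension", len(vectors[0]))
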
  • 21
    Argilla

    The open-source data curation platform for LLMs

    Argilla is a production-ready framework for building and improving datasets for NLP projects. Deploy your own Argilla Server on Spaces with a few clicks. Use embeddings to find the most similar records with the UI. This feature uses vector search combined with traditional search (keyword and filter based). Argilla is free, open-source, and 100% compatible with major NLP libraries (Hugging Face transformers, spaCy, Stanford Stanza, Flair, etc.). In fact, you can use and combine your preferred...
  • 22
    SageMaker Hugging Face Inference Toolkit

    Library for serving Transformers models on Amazon SageMaker

    SageMaker Hugging Face Inference Toolkit is an open-source library for serving Transformers models on Amazon SageMaker. This library provides default pre-processing, prediction, and post-processing for certain Transformers models and tasks. It utilizes the SageMaker Inference Toolkit for starting up the model server, which is responsible for handling inference requests. For the Dockerfiles used for building SageMaker Hugging Face Containers, see AWS Deep Learning Containers. The SageMaker Hugging...
  • 23
    Errbot

    Chatbot daemon that connects to your favorite chat services

    Errbot is a chatbot, a daemon that connects to your favorite chat service and brings your tools into the conversation. The goal of the project is to make it easy for you to write your own plugins so you can make it do whatever you want: trigger a deployment, retrieve some information online, call a tool via an API, troll a co-worker, etc. Errbot is used in a lot of different contexts: chatops (tools for devops), online gaming chatrooms like EVE, video streaming chatrooms like...
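
    A plugin is roughly this small; the class and command names are arbitrary, and Errbot also expects a small .plug metadata file next to the module.

        from errbot import BotPlugin, botcmd

        class Hello(BotPlugin):
            """Example plugin: Errbot turns decorated methods into chat commands."""

            @botcmd
            def hello(self, msg, args):
                """Respond to !hello in chat."""
                return f"Hello {msg.frm}! You said: {args or 'nothing'}"
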
  • 24
    Red Discord Bot

    A multi-function Discord bot

    Red is a fully modular bot, meaning all features and commands can be enabled/disabled to your liking, making it completely customizable. This is a self-hosted bot, meaning you will need to host and maintain your own instance. You can turn Red into an admin bot, music bot, trivia bot, new best friend or all of these together! CustomCommands allows you to create simple commands for your bot without requiring you to code your own cog for Red. If the command you attempt to create shares a name...
  • 25
    langchain-prefect

    Tools for using Langchain with Prefect

    Large Language Models (LLMs) are interesting and useful; building apps that use them responsibly feels like a no-brainer. Tools like LangChain make it easier to build apps using LLMs. We need to know details about how our apps work, even when we want to use tools with convenient abstractions that may obfuscate those details. Prefect is built to help data people build, run, and observe event-driven workflows wherever they want. It provides a framework for creating deployments on a whole...