Showing 32 open source projects for "code server"

  • 1
    Basaran

    Basaran

    Basaran, an open-source alternative to the OpenAI text completion API

    ...The open source community will eventually witness the Stable Diffusion moment for large language models (LLMs), and Basaran allows you to replace OpenAI's service with the latest open-source model to power your application without modifying a single line of code. Streaming generation using various decoding strategies. Supports both decoder-only and encoder-decoder models. Detokenizer that handles surrogates and whitespace. Multi-GPU support with optional 8-bit quantization. Real-time partial progress via server-sent events. Compatible with the OpenAI API and client libraries. Comes with a fancy web-based playground. A sketch of pointing an OpenAI-style request at a Basaran endpoint follows this entry. ...
    Downloads: 0 This Week
    Last Update:
    See Project
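
    Because Basaran exposes an OpenAI-compatible completion API, existing client code can usually be redirected to it by changing only the base URL. The sketch below assumes a locally running Basaran instance; the host, port, and model name are placeholders, not values from this listing.

      # Minimal sketch: calling a Basaran server through its OpenAI-compatible
      # completions endpoint. Host, port, and model name are placeholders.
      import requests

      BASE_URL = "http://localhost:8000/v1"  # assumed Basaran address

      resp = requests.post(
          f"{BASE_URL}/completions",
          json={
              "model": "bigscience/bloomz-560m",  # whichever model the server was started with
              "prompt": "Write a haiku about open source:",
              "max_tokens": 64,
              "stream": False,
          },
          timeout=60,
      )
      resp.raise_for_status()
      print(resp.json()["choices"][0]["text"])

    Because the response shape follows OpenAI's completion objects, the same parsing code works whether the backend is OpenAI's service or a self-hosted model.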
  • 2
    AI-Agent-Host

    AI-Agent-Host

    The AI Agent Host is a module-based development environment.

    ...The AI Agent Host is a module-based environment designed to facilitate rapid experimentation and testing. It ships as a docker-compose configuration combining QuestDB, Grafana, Code-Server, and Nginx, and provides a seamless interface for managing and querying data, visualizing results, and coding in real time. It is built specifically for LangChain, a framework dedicated to developing applications powered by language models; LangChain recognizes that the most powerful and distinctive applications go beyond simply using a language model and strive to be data-aware and agentic. A sketch of querying the stack's QuestDB service follows this entry. ...
    Downloads: 0 This Week
    Last Update:
    See Project
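
    The listing names QuestDB as the data layer of the stack. Purely as an illustration (the compose file's actual ports, credentials, and tables are not given here), the sketch below queries a QuestDB instance over its HTTP query endpoint, assuming the default port 9000.

      # Illustrative only: querying the QuestDB service from the stack described
      # above via its HTTP /exec endpoint (port and table name are assumptions).
      import requests

      QUESTDB_URL = "http://localhost:9000/exec"

      resp = requests.get(
          QUESTDB_URL,
          params={"query": "SELECT * FROM sensor_data LIMIT 5"},  # placeholder table
          timeout=10,
      )
      resp.raise_for_status()
      payload = resp.json()
      print(payload.get("columns"))
      for row in payload.get("dataset", []):
          print(row)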
  • 3
    TensorFlowOnSpark

    TensorFlowOnSpark

    TensorFlowOnSpark brings TensorFlow programs to Apache Spark clusters

    By combining salient features of the TensorFlow deep learning framework with Apache Spark and Apache Hadoop, TensorFlowOnSpark enables distributed deep learning on clusters of GPU and CPU servers. It supports both distributed TensorFlow training and inference on Spark clusters, with the goal of minimizing the code changes required to run existing TensorFlow programs on a shared grid; a sketch of the launch pattern follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
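
    TensorFlowOnSpark's TFCluster helper is the usual entry point for running a TensorFlow function across Spark executors. The outline below is a sketch of that documented launch pattern; the map function body and executor count are placeholders.

      # Sketch of the TensorFlowOnSpark launch pattern: an ordinary TensorFlow
      # function is distributed across Spark executors via TFCluster.
      from pyspark import SparkConf, SparkContext
      from tensorflowonspark import TFCluster

      def main_fun(argv, ctx):
          # Plain TensorFlow code runs here on each executor; ctx carries the
          # cluster spec plus this worker's job name and task index.
          import tensorflow as tf
          print("node:", ctx.job_name, ctx.task_index)

      if __name__ == "__main__":
          sc = SparkContext(conf=SparkConf().setAppName("tfos_sketch"))
          cluster = TFCluster.run(
              sc, main_fun, None, num_executors=4, num_ps=0,
              tensorboard=False, input_mode=TFCluster.InputMode.TENSORFLOW,
          )
          cluster.shutdown()
          sc.stop()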
  • 4
    Aquila X

    Aquila X

    Easily build your personal search engine with Aquila Network

    Easily build your personal search engine with Aquila Network. Aquila X is the gateway to Aquila Network and its applications. Aquila X is a smart bookmarking tool: you can keep your bookmarks and search through their contents. Choose to keep all your data on a local server or in the cloud. The software is open source and thus auditable.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    BudgetML

    BudgetML

    Deploy an ML inference service on a budget in 10 lines of code

    ...It is meant to be fast, easy, and developer-friendly. It is by no means intended for a full-fledged, production-ready setup; it is simply a way to get a server up and running as quickly and as cheaply as possible. A generic sketch of the kind of tiny inference service it deploys follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
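
    To make the "server up and running" idea concrete, here is a generic stand-in for the sort of tiny inference service such a tool deploys. This is not BudgetML's own API, just an illustration of the pattern: one endpoint wrapping one model.

      # Generic illustration (not BudgetML's interface): a minimal HTTP
      # prediction endpoint of the sort a budget deployment wraps.
      from flask import Flask, jsonify, request

      app = Flask(__name__)

      @app.route("/predict", methods=["POST"])
      def predict():
          payload = request.get_json(force=True)
          # Placeholder "model": report the input length; swap in a real model call.
          return jsonify({"prediction": len(payload.get("text", ""))})

      if __name__ == "__main__":
          app.run(host="0.0.0.0", port=8080)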
  • 6
    commit-autosuggestions

    commit-autosuggestions

    A tool that automatically recommends commit messages using AI

    ...CodeBERT: A Pre-Trained Model for Programming and Natural Languages introduces a model pre-trained on a combination of programming language and natural language (PL-NL). It also introduces the task of converting code into natural language (code documentation generation). CodeBERT can be used to build a model that generates a commit message when code is added. However, most code changes are not purely additions; parts of the code are also deleted. Support for languages not yet covered is planned to be added gradually. To run this project, you need a Flask-based inference server (GPU) and a client (commit module); a sketch of that client/server exchange follows this entry. ...
    Downloads: 0 This Week
    Last Update:
    See Project
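
    The client/server split described above can be pictured with a small client that posts the staged diff to the inference server and prints the suggestions it returns. The endpoint path and payload fields below are illustrative assumptions, not the project's documented API.

      # Hypothetical client for the Flask-based inference server described above:
      # posts the staged diff and prints suggested commit messages.
      import subprocess
      import requests

      SERVER_URL = "http://localhost:5000/predict"  # assumed server address and route

      diff = subprocess.run(
          ["git", "diff", "--cached"], capture_output=True, text=True, check=True
      ).stdout

      resp = requests.post(SERVER_URL, json={"diff": diff}, timeout=30)
      resp.raise_for_status()
      for message in resp.json().get("messages", []):  # assumed response field
          print("-", message)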
  • 7
    EvalAI

    EvalAI

    Evaluating state of the art in AI

    ...If a challenge needs extra computational power, challenge organizers can easily add their own cluster of worker nodes to process participant submissions while EvalAI takes care of hosting the challenge, handling user submissions, and maintaining the leaderboard. EvalAI lets participants submit their agent's code as Docker images, which are evaluated against test environments on the evaluation server. During evaluation, the worker fetches the image, the test environment, and the model snapshot, then spins up a new container to perform the evaluation; a sketch of that worker flow follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
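
    The worker flow described above (pull the participant's image, mount the test environment, run a container, collect results) can be sketched with the Docker SDK for Python. The image name, command, and mount paths are placeholders, and this is not EvalAI's actual worker code.

      # Illustration of the evaluation-worker pattern, not EvalAI's own code:
      # pull the participant's image, then run it against a mounted test environment.
      import docker

      client = docker.from_env()
      image = client.images.pull("registry.example.com/participant-agent:latest")  # placeholder

      logs = client.containers.run(
          image,
          command="python evaluate.py",                                # placeholder entry point
          volumes={"/data/test_env": {"bind": "/env", "mode": "ro"}},  # placeholder paths
          remove=True,
      )
      print(logs.decode())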