Showing 48 open source projects for "remote-server"

  • 1
    Triton Inference Server

    The Triton Inference Server provides an optimized cloud and edge inferencing solution

    Triton Inference Server is an open-source inference serving software that streamlines AI inferencing. Triton enables teams to deploy any AI model from multiple deep learning and machine learning frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton supports inference across cloud, data center, edge, and embedded devices on NVIDIA GPUs, x86 and ARM CPU, or AWS Inferentia. Triton delivers optimized performance for many query types, including...
    Downloads: 3 This Week
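
    A minimal Python client sketch for a running Triton instance; the model name ("my_model") and the tensor names/shapes are placeholders for whatever your model repository actually serves.

        import numpy as np
        import tritonclient.http as httpclient

        # Assumes Triton is already running on localhost:8000 with a model named
        # "my_model" (hypothetical) exposing input "INPUT0" and output "OUTPUT0".
        client = httpclient.InferenceServerClient(url="localhost:8000")

        data = np.random.rand(1, 16).astype(np.float32)
        inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
        inp.set_data_from_numpy(data)

        result = client.infer(model_name="my_model", inputs=[inp])
        print(result.as_numpy("OUTPUT0"))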
  • 2
    Netron

    Visualizer for neural network, deep learning, machine learning models

    ... using the browser version. It runs on macOS, Windows and Linux, as a Python server, and in the browser.
    Downloads: 71 This Week
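
    A minimal sketch of the Python server mentioned above; the file name is a placeholder for any saved model on disk.

        import netron

        # Serve a local model file and open the visualizer in the browser;
        # Netron detects the format (ONNX, TF Lite, Keras, etc.) from the file.
        netron.start("model.onnx")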
  • 3
    TensorFlow

    TensorFlow is an open source library for machine learning

    Originally developed by Google for internal use, TensorFlow is an open source platform for machine learning. Available across all common operating systems (desktop, server and mobile), TensorFlow provides stable APIs for Python and C, as well as APIs for a variety of other languages that are either third-party or not guaranteed to be backwards compatible. The platform can be easily deployed on multiple CPUs, GPUs and Google's proprietary chip, the tensor processing unit (TPU). TensorFlow...
    Downloads: 20 This Week
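
    A minimal sketch of the stable Python API; the layer sizes and random data are arbitrary placeholders.

        import tensorflow as tf

        # Tiny Keras model trained on random data, just to show the API shape.
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(32, activation="relu"),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")

        x = tf.random.normal((64, 8))
        y = tf.random.normal((64, 1))
        model.fit(x, y, epochs=1, verbose=0)
        print(model.predict(x[:2], verbose=0))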
  • 4
    Neural Network Intelligence

    AutoML toolkit to automate the machine learning lifecycle

    ...' trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyperparameters in different training environments such as a local machine, remote servers, OpenPAI, Kubeflow, FrameworkController on K8S (AKS etc.), DLWorkspace (aka DLTS), AML (Azure Machine Learning) and other cloud options. NNI provides a command-line tool as well as a user-friendly WebUI for managing training experiments.
    Downloads: 6 This Week
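
    A hedged sketch of launching an experiment from Python with NNI's Experiment API (2.x-era); the trial command, search space and port are placeholders, and field names may differ between NNI versions.

        from nni.experiment import Experiment

        # Hypothetical search space for a training script of your own.
        search_space = {
            "lr": {"_type": "loguniform", "_value": [1e-5, 1e-1]},
            "batch_size": {"_type": "choice", "_value": [16, 32, 64]},
        }

        # "local" runs trials on this machine; NNI also supports remote SSH
        # servers, OpenPAI, Kubeflow, AML and other training services.
        experiment = Experiment("local")
        experiment.config.trial_command = "python train.py"  # hypothetical script
        experiment.config.trial_code_directory = "."
        experiment.config.search_space = search_space
        experiment.config.tuner.name = "TPE"
        experiment.config.tuner.class_args = {"optimize_mode": "maximize"}
        experiment.config.max_trial_number = 10
        experiment.config.trial_concurrency = 2

        experiment.run(8080)  # WebUI at http://localhost:8080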
  • 5
    Paperless-ngx

    A community-supported supercharged version of paperless

    Paperless-ngx is a community-supported open-source document management system that transforms your physical documents into a searchable online archive so you can keep, well, less paper.
    Downloads: 5 This Week
  • 6
    CARLA Simulator

    Open-source simulator for autonomous driving research.

    CARLA has been developed from the ground up to support development, training, and validation of autonomous driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites, environmental conditions, full control of all static and dynamic actors, maps generation and much more. Multiple clients...
    Downloads: 3 This Week
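
    A minimal Python client sketch, assuming a CARLA server is already running on the default port; the blueprint filter is just an example.

        import random

        import carla

        # Connect to a locally running CARLA server.
        client = carla.Client("localhost", 2000)
        client.set_timeout(10.0)
        world = client.get_world()

        # Spawn a random vehicle at a free spawn point and hand it to the autopilot.
        blueprint = random.choice(world.get_blueprint_library().filter("vehicle.*"))
        spawn_point = random.choice(world.get_map().get_spawn_points())
        vehicle = world.spawn_actor(blueprint, spawn_point)
        vehicle.set_autopilot(True)
        print("spawned", vehicle.type_id)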
  • 7
    ClearML

    Streamline your ML workflow

    ... and other workflows with ClearML's powerful and versatile set of classes and methods. The ClearML Server stores experiment, model, and workflow data and backs the Web UI experiment manager and MLOps automation for reproducibility and tuning. It is available as a hosted service, and it is open source so you can deploy your own ClearML Server. The ClearML Agent handles MLOps orchestration, experiment and workflow reproducibility, and scalability.
    Downloads: 2 This Week
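
    A minimal sketch of instrumenting a script with the ClearML SDK; the project and task names and values are placeholders, and once Task.init runs the experiment appears in the ClearML Server's Web UI.

        from clearml import Task

        # Register this run with the ClearML Server (hosted or self-deployed).
        task = Task.init(project_name="examples", task_name="demo run")

        # Hyperparameters become visible and editable in the Web UI.
        params = task.connect({"lr": 1e-3, "epochs": 5})

        logger = task.get_logger()
        for step in range(params["epochs"]):
            logger.report_scalar(title="loss", series="train",
                                 value=1.0 / (step + 1), iteration=step)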
  • 8
    CLIP-as-service

    Embed images and sentences into fixed-length vectors

    ... on client and server. Intuitive and consistent API for image and sentence embedding. Async client support. Easily switch between gRPC, HTTP, WebSocket protocols with TLS and compression. Smooth integration with neural search ecosystem including Jina and DocArray. Build cross-modal and multi-modal solutions in no time.
    Downloads: 1 This Week
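
    A minimal client-side sketch, assuming a server was started (e.g. with python -m clip_server) on the default gRPC port; the texts and image URL are placeholders.

        from clip_client import Client

        # Connect over gRPC; HTTP and WebSocket endpoints work the same way.
        c = Client("grpc://0.0.0.0:51000")

        vectors = c.encode([
            "a photo of a golden retriever",       # sentence input
            "https://example.com/some-image.jpg",  # image input by URL (placeholder)
        ])
        print(vectors.shape)  # one fixed-length embedding per input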
  • 9
    Hamilton DAGWorks

    Helps scientists define testable, modular, self-documenting dataflow

    Hamilton is a lightweight Python library for directed acyclic graphs (DAGs) of data transformations. Your DAG is portable; it runs anywhere Python runs, whether it's a script, notebook, Airflow pipeline, FastAPI server, etc. Your DAG is expressive; Hamilton has extensive features to define and modify the execution of a DAG (e.g., data validation, experiment tracking, remote execution). To create a DAG, write regular Python functions that specify their dependencies with their parameters...
    Downloads: 0 This Week
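
    A minimal two-file sketch of the idea: each function's parameter names declare its upstream dependencies, and the Driver assembles the DAG (the module, function and column names are made up).

        # dataflow.py -- hypothetical module of transform functions
        import pandas as pd

        def spend_per_signup(spend: pd.Series, signups: pd.Series) -> pd.Series:
            """Cost per acquired signup."""
            return spend / signups

        def spend_zero_mean(spend: pd.Series) -> pd.Series:
            """Spend with its mean removed."""
            return spend - spend.mean()

        # run.py -- builds and executes the DAG
        import pandas as pd
        from hamilton import driver
        import dataflow

        dr = driver.Driver({}, dataflow)  # config dict, then the function module(s)
        df = dr.execute(
            ["spend_per_signup", "spend_zero_mean"],
            inputs={"spend": pd.Series([10.0, 25.0]), "signups": pd.Series([1, 5])},
        )
        print(df)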
  • 10
    Elyra

    Elyra extends JupyterLab with an AI-centric approach

    Elyra is a set of AI-centric extensions to JupyterLab Notebooks. The Elyra Getting Started Guide includes more details on these features. A version-specific summary of new features is located on the releases page.
    Downloads: 0 This Week
  • 11
    segment-geospatial

    A Python package for segmenting geospatial data with the SAM

    The segment-geospatial package draws its inspiration from segment-anything-eo repository authored by Aliaksandr Hancharenka. To facilitate the use of the Segment Anything Model (SAM) for geospatial data, I have developed the segment-anything-py and segment-geospatial Python packages, which are now available on PyPI and conda-forge. My primary objective is to simplify the process of leveraging SAM for geospatial data analysis by enabling users to achieve this with minimal coding effort. I...
    Downloads: 0 This Week
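
    A hedged sketch of the basic flow with the samgeo package; the file names are placeholders and the exact keyword arguments may vary between versions.

        from samgeo import SamGeo

        # Load a SAM checkpoint and run automatic mask generation on a GeoTIFF.
        sam = SamGeo(model_type="vit_h")
        sam.generate("satellite.tif", output="segments.tif")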
  • 12
    SageMaker Inference Toolkit

    Serve machine learning models within a Docker container

    ... the container is deployed. Containerizing your model and code enables fast and reliable deployment of your model. The SageMaker Inference Toolkit implements a model serving stack and can be easily added to any Docker container, making it deployable to SageMaker. This library's serving stack is built on Multi Model Server, and it can serve your own models or those you trained on SageMaker using machine learning frameworks with native SageMaker support.
    Downloads: 0 This Week
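
    As a hedged sketch of the serving convention this toolkit underpins, an inference script typically supplies the four handler functions below; the joblib artifact and JSON handling are illustrative assumptions, not the only supported option.

        import json
        import os

        import joblib

        def model_fn(model_dir):
            """Load the model artifact unpacked into model_dir."""
            return joblib.load(os.path.join(model_dir, "model.joblib"))

        def input_fn(request_body, content_type):
            """Decode the incoming request payload."""
            if content_type == "application/json":
                return json.loads(request_body)["instances"]
            raise ValueError(f"Unsupported content type: {content_type}")

        def predict_fn(data, model):
            """Run inference with the loaded model."""
            return model.predict(data)

        def output_fn(prediction, accept):
            """Encode the prediction for the response."""
            return json.dumps({"predictions": list(prediction)})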
  • 13
    tika-python

    Python binding to the Apache Tika™ REST services

    A Python port of the Apache Tika library that makes Tika available through the Tika REST Server. It exposes Apache Tika as an easy-to-install Python library, installable via Setuptools or pip. To use this library, you need Java 7+ installed on your system, as tika-python starts up the Tika REST server in the background. To get this working in a disconnected environment, download a Tika server file (both tika-server.jar and tika-server.jar.md5, which can be found here) and set...
    Downloads: 0 This Week
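
    A minimal usage sketch (the PDF path is a placeholder); on first use the library fetches and launches the Tika REST server in the background.

        from tika import parser

        # Parse any Tika-supported document; returns a dict with metadata and text.
        parsed = parser.from_file("report.pdf")
        print(parsed["metadata"])
        print(parsed["content"])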
  • 14
    DeepDetect

    Deep Learning API and server in C++14 with support for Caffe, PyTorch

    The core idea is to remove the error sources and difficulties of Deep Learning applications by providing a safe haven of commoditized practices, all available as a single core. While the Open Source Deep Learning Server is the core element, with REST API, and multi-platform support that allows training & inference everywhere, the Deep Learning Platform allows higher level management for training neural network models and using them as if they were simple code snippets. Ready for applications...
    Downloads: 0 This Week
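
    A hedged sketch of driving the REST API from Python with requests, assuming a DeepDetect server on localhost:8080; the service name, model repository path and payload fields are placeholders to check against the docs for your model type.

        import requests

        DD = "http://localhost:8080"

        # Create a prediction service (illustrative parameters).
        requests.put(f"{DD}/services/imgserv", json={
            "mllib": "caffe",
            "description": "image classifier",
            "type": "supervised",
            "parameters": {"input": {"connector": "image"}},
            "model": {"repository": "/opt/models/my_model"},
        })

        # Ask the service for a prediction on an image URL (placeholder).
        resp = requests.post(f"{DD}/predict", json={
            "service": "imgserv",
            "data": ["https://example.com/cat.jpg"],
        })
        print(resp.json())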
  • 15
    Learning Interpretability Tool

    Interactively analyze ML models to understand their behavior

    The Learning Interpretability Tool (LIT, formerly known as the Language Interpretability Tool) is a visual, interactive ML model-understanding tool that supports text, image, and tabular data. It can be run as a standalone server, or inside of notebook environments such as Colab, Jupyter, and Google Cloud Vertex AI notebooks.
    Downloads: 0 This Week
  • 16
    TurboPilot

    Open source large-language-model based code completion engine

    TurboPilot is a self-hosted Copilot clone that uses the library behind llama.cpp to run the 6-billion-parameter Salesforce CodeGen model in 4 GiB of RAM. It is heavily based on and inspired by the fauxpilot project. It is a proof of concept right now rather than a stable tool, and autocompletion is quite slow in this version of the project. Feel free to play with it, but your mileage may vary.
    Downloads: 0 This Week
  • 17
    dstack

    Open-source tool designed to enhance the efficiency of workloads

    dstack is an open-source tool designed to enhance the efficiency of running ML workloads in any cloud (AWS, GCP, Azure, Lambda, etc). It streamlines development and deployment, reduces cloud costs, and frees users from vendor lock-in.
    Downloads: 0 This Week
  • 18
    Jina

    Build cross-modal and multimodal applications on the cloud

    Jina is a framework that empowers anyone to build cross-modal and multi-modal applications on the cloud. It uplifts a PoC into a production-ready service. Jina handles the infrastructure complexity, making advanced solution engineering and cloud-native technologies accessible to every developer. Build applications that deliver fresh insights from multiple data types such as text, image, audio, video, 3D mesh, and PDF with Jina AI’s DocArray. Polyglot gateway that supports gRPC, WebSocket, HTTP,...
    Downloads: 0 This Week
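
    A minimal sketch of an Executor behind a Flow gateway, assuming Jina 3.x with the classic DocArray API; the uppercase transform is just a stand-in for real logic.

        from jina import Document, DocumentArray, Executor, Flow, requests

        class UpperCaser(Executor):
            @requests
            def upper(self, docs: DocumentArray, **kwargs):
                # Placeholder logic: uppercase every Document's text.
                for doc in docs:
                    doc.text = doc.text.upper()

        # Serve the Executor over gRPC; the gateway also speaks HTTP/WebSocket.
        flow = Flow(protocol="grpc", port=12345).add(uses=UpperCaser)

        with flow:
            result = flow.post("/", inputs=DocumentArray([Document(text="hello jina")]))
            print(result[0].text)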
  • 19
    AirSim

    A simulator for drones, cars and more, built on Unreal Engine

    AirSim is an open-source, cross-platform simulator for drones, cars and other vehicles, built on Unreal Engine, with an experimental Unity release in the works. It supports software-in-the-loop simulation with popular flight controllers such as PX4 and ArduPilot, and hardware-in-the-loop with PX4, for physically and visually realistic simulations. It is developed as an Unreal plugin that can simply be dropped into any Unreal environment. AirSim's development is oriented towards the goal of creating a...
    Downloads: 17 This Week
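
    A minimal Python API sketch, assuming the simulator is already running with a multirotor vehicle.

        import airsim

        # Connect to the simulator's RPC server.
        client = airsim.MultirotorClient()
        client.confirmConnection()
        client.enableApiControl(True)
        client.armDisarm(True)

        # Take off, fly to a point (NED coordinates, so negative z is up), land.
        client.takeoffAsync().join()
        client.moveToPositionAsync(10, 0, -10, velocity=5).join()
        client.landAsync().join()
        client.armDisarm(False)
        client.enableApiControl(False)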
  • 20
    Captcha Server

    A fast and stable captcha auto solving server with API.

    Captcha Server (https://captchas.io) lets you host your own captcha-solving server and earn money from customers who want to auto-solve captchas: build a website and an API interface, integrate the Captcha Server API, and you have a new business. It provides captcha-solving software for running a captcha-solving service. Launch Captcha Server on any Windows server and set the server IP and port to make it available to end users and developers alike. Captcha Server has...
    Downloads: 4 This Week
  • 21

    KoboldAI

    Your gateway to GPT writing

    KoboldAI is a browser-based front-end for AI-assisted writing with multiple local and remote AI models. It offers the standard array of tools, including Memory, Author's Note, World Info, Save & Load, adjustable AI settings, formatting options, and the ability to import existing AI Dungeon adventures. You can also turn on Adventure mode and play it like AI Dungeon Unleashed. Stories can be played like a novel or a text adventure game, or used as a chatbot, with easy toggles to change between...
    Downloads: 371 This Week
  • 22
    TensorFlowOnSpark

    TensorFlowOnSpark brings TensorFlow programs to Apache Spark clusters

    By combining salient features from the TensorFlow deep learning framework with Apache Spark and Apache Hadoop, TensorFlowOnSpark enables distributed deep learning on a cluster of GPU and CPU servers. It enables both distributed TensorFlow training and inferencing on Spark clusters, with a goal to minimize the amount of code changes required to run existing TensorFlow programs on a shared grid.
    Downloads: 0 This Week
  • 23
    Bandicoot

    Fast C++ library for GPU linear algebra & scientific computing

    * Fast GPU linear algebra library (matrix maths) for the C++ language, aiming towards a good balance between speed and ease of use
    * Provides high-level syntax and functionality deliberately similar to Matlab
    * Provides an API that aims to be compatible with Armadillo for easy transition between CPU and GPU linear algebra code
    * Useful for algorithm development directly in C++, or quick conversion of research code into production environments
    * Distributed under the permissive...
    Downloads: 3 This Week
  • 24
    Exadel CompreFace

    Leading free and open-source face recognition system

    Exadel CompreFace is a free and open-source face recognition GitHub project. Essentially, it is a Docker-based application that can be used as a standalone server or deployed in the cloud. You don’t need prior machine learning skills to set up and use CompreFace. The system provides a REST API for face recognition, face verification, face detection, face mask detection, landmark detection, and age and gender recognition. The solution also features a role management system that allows you...
    Downloads: 3 This Week
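
    A hedged sketch of calling the recognition REST endpoint from Python; the host/port, header name and path follow the CompreFace docs as I recall them, so treat them as assumptions to verify against your deployment.

        import requests

        COMPREFACE_URL = "http://localhost:8000"      # assumption: default host/port
        API_KEY = "your-recognition-service-api-key"  # created in the CompreFace UI

        with open("face.jpg", "rb") as f:             # placeholder image
            resp = requests.post(
                f"{COMPREFACE_URL}/api/v1/recognition/recognize",
                headers={"x-api-key": API_KEY},
                files={"file": f},
            )
        print(resp.json())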
  • 25
    Hugging Face Transformer

    CPU/GPU inference server for Hugging Face transformer models

    .... Both are great tools but not very performant in inference. If you spend some time, you can build something on top of ONNX Runtime and the Triton Inference Server and usually get 2x to 4x faster inference than vanilla PyTorch. However, if you want best-in-class performance on GPU, there is only one possible combination: NVIDIA TensorRT and Triton, which usually delivers 5x faster inference than vanilla PyTorch.
    Downloads: 0 This Week
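
    For reference, the "vanilla PyTorch" baseline those speedups are measured against is typically just a Hugging Face pipeline like this minimal sketch (the model name is one public example).

        from transformers import pipeline

        # Plain PyTorch inference; the ONNX Runtime / TensorRT + Triton setups
        # described above are benchmarked against this kind of baseline.
        classifier = pipeline(
            "text-classification",
            model="distilbert-base-uncased-finetuned-sst-2-english",
        )
        print(classifier("Optimized inference servers make this much faster."))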