Alternatives to Graphcore

Compare Graphcore alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Graphcore in 2026. Compare features, ratings, user reviews, pricing, and more from Graphcore competitors and alternatives in order to make an informed decision for your business.

  • 1
    Vertex AI
    Build, deploy, and scale machine learning (ML) models faster, with fully managed ML tools for any use case. Through Vertex AI Workbench, Vertex AI is natively integrated with BigQuery, Dataproc, and Spark. You can use BigQuery ML to create and execute machine learning models in BigQuery using standard SQL queries on existing business intelligence tools and spreadsheets, or you can export datasets from BigQuery directly into Vertex AI Workbench and run your models from there. Use Vertex Data Labeling to generate highly accurate labels for your data collection. Vertex AI Agent Builder enables developers to create and deploy enterprise-grade generative AI applications. It offers both no-code and code-first approaches, allowing users to build AI agents using natural language instructions or by leveraging frameworks like LangChain and LlamaIndex.
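    As the description notes, BigQuery ML lets you train models with standard SQL. Below is a minimal sketch of such a statement, submitted through the google-cloud-bigquery Python client; the project, dataset, table, and column names are placeholders, and actually running it requires valid Google Cloud credentials:

```python
# BigQuery ML trains the model inside BigQuery itself; the "training job"
# is just a standard-SQL statement. Names below are placeholders.
create_model_sql = """
CREATE OR REPLACE MODEL `my_project.my_dataset.churn_model`
OPTIONS (model_type = 'logistic_reg',
         input_label_cols = ['churned']) AS
SELECT * FROM `my_project.my_dataset.customer_features`
"""

# Submitting the statement requires the google-cloud-bigquery package
# and authenticated credentials, so the call is shown but not executed:
# from google.cloud import bigquery
# client = bigquery.Client()
# client.query(create_model_sql).result()  # runs the training job in BigQuery
```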
  • 2
    Vercel

    Vercel is an AI-powered cloud platform that helps developers build, deploy, and scale high-performance web experiences with speed and security. It provides a unified set of tools, templates, and infrastructure designed to streamline development workflows from idea to global deployment. With support for modern frameworks like Next.js, Svelte, Vite, and Nuxt, teams can ship fast, responsive applications without managing complex backend operations. Vercel’s AI Cloud includes an AI Gateway, SDKs, workflow automation tools, and fluid compute, enabling developers to integrate large language models and advanced AI features effortlessly. The platform emphasizes instant global distribution, enabling deployments to become available worldwide immediately after a git push. Backed by strong security and performance optimizations, Vercel helps companies deliver personalized, reliable digital experiences at massive scale.
  • 3
    SambaNova
    SambaNova Systems

    SambaNova is the leading purpose-built AI system for generative and agentic AI implementations, from chips to models, giving enterprises full control over their models and private data. We take the best models, optimize them for fast token generation, higher batch sizes, and the largest inputs, and enable customization to deliver value with simplicity. The full suite includes the SambaNova DataScale system, the SambaStudio software, and the innovative SambaNova Composition of Experts (CoE) model architecture. These components combine into a powerful platform that delivers unparalleled performance, ease of use, accuracy, data privacy, and the ability to power every use case across the world's largest organizations. We give our customers the option to deploy in the cloud or on-premises.
  • 4
    OpenCL
    The Khronos Group

    OpenCL (Open Computing Language) is an open, royalty-free standard for cross-platform parallel programming of heterogeneous computing systems that lets developers accelerate computing tasks by leveraging diverse processors such as CPUs, GPUs, DSPs, and FPGAs across supercomputers, cloud servers, personal computers, mobile devices, and embedded platforms. It defines a programming framework including a C-based language for writing compute kernels and a runtime API to control devices, manage memory, and execute parallel code, giving portable and efficient access to heterogeneous hardware. OpenCL improves speed and responsiveness for a wide range of applications including creative tools, scientific and medical software, vision processing, and neural network training and inferencing by offloading compute-intensive work to accelerator processors.
  • 5
    Google Cloud AI Infrastructure
    Options for every business to train deep learning and machine learning models cost-effectively. AI accelerators for every use case, from low-cost inference to high-performance training. Simple to get started with a range of services for development and deployment. Tensor Processing Units (TPUs) are custom-built ASICs designed to train and execute deep neural networks. Train and run more powerful and accurate models cost-effectively with faster speed and scale. A range of NVIDIA GPUs helps with cost-effective inference and scale-up or scale-out training. Leverage RAPIDS and Spark with GPUs to execute deep learning. Run GPU workloads on Google Cloud, where you have access to industry-leading storage, networking, and data analytics technologies. Access CPU platforms when you start a VM instance on Compute Engine; Compute Engine offers a range of both Intel and AMD processors for your VMs.
  • 6
    NVIDIA AI Enterprise
    The software layer of the NVIDIA AI platform, NVIDIA AI Enterprise accelerates the data science pipeline and streamlines development and deployment of production AI, including generative AI, computer vision, speech AI, and more. With over 50 frameworks, pretrained models, and development tools, NVIDIA AI Enterprise is designed to accelerate enterprises to the leading edge of AI while also simplifying AI to make it accessible to every enterprise. The adoption of artificial intelligence and machine learning has gone mainstream and is core to nearly every company’s competitive strategy. One of the toughest challenges for enterprises is siloed infrastructure across cloud and on-premises data centers; AI requires these environments to be managed as a common platform instead of islands of compute.
  • 7
    Tencent Cloud TI Platform
    Tencent Cloud TI Platform is a one-stop machine learning service platform designed for AI engineers. It supports AI development throughout the entire process, from data preprocessing to model building, model training, model evaluation, and model serving. Preconfigured with diverse algorithm components, it supports multiple algorithm frameworks to adapt to different AI use cases. With Tencent Cloud TI Platform, even AI beginners can have their models constructed automatically, making it much easier to complete the entire training process. Its auto-tuning tool can further enhance the efficiency of parameter tuning, and CPU/GPU resources respond elastically to different computing power needs with flexible billing modes.
  • 8
    AIDDISON
    Merck KGaA

    AIDDISON™ drug discovery software combines the power of artificial intelligence (AI), machine learning (ML), and 3D computer-aided drug design (CADD) methods to act as a valuable toolkit for medicinal chemistry needs. As a unified platform for efficient and effective ligand-based and structure-based drug design, it integrates all the facets for virtual screening and supports methods for in-silico lead discovery and lead optimization.
  • 9
    Atomwise

    We use our AI engine to transform drug discovery. Our discoveries help create better medicines faster. Our AI-enabled discovery portfolio includes wholly-owned and co-developed pipeline assets and is backed by prominent investors. Atomwise developed a machine-learning-based discovery engine that combines the power of convolutional neural networks with massive chemical libraries to discover new small-molecule medicines. The secret to reinventing drug discovery with AI is people. We are dedicated to developing the best AI platform and using it to transform small-molecule drug discovery. We tackle the most challenging, seemingly impossible targets and streamline the drug discovery process to give drug developers more shots on goal. Computational efficiency enables screening of trillions of compounds in silico, increasing the likelihood of success, and the engine's demonstrated model accuracy overcomes the challenge of false positives.
  • 10
    Domino Enterprise AI Platform
    Domino is an enterprise AI platform designed to help organizations build, deploy, and scale AI systems that deliver real business outcomes. It provides end-to-end support for the AI lifecycle, from data science experimentation to production deployment and governance. The platform enables teams to access data, tools, and compute resources through a self-service environment with built-in IT controls. Domino supports the development of machine learning models, generative AI applications, and AI agents using preferred tools and frameworks. It also includes governance features such as model tracking, audit trails, and policy enforcement to ensure compliance and transparency. With hybrid and multi-cloud capabilities, organizations can run AI workloads across on-premises and cloud environments. Overall, Domino helps enterprises operationalize AI at scale while maintaining control, security, and efficiency.
  • 11
    Teachable Machine

    A fast, easy way to create machine learning models for your sites, apps, and more – no expertise or coding required. Teachable Machine is flexible – use files or capture examples live. It’s respectful of the way you work. You can even choose to use it entirely on-device, without any webcam or microphone data leaving your computer. Teachable Machine is a web-based tool that makes creating machine learning models fast, easy, and accessible to everyone. Educators, artists, students, innovators, makers of all kinds – really, anyone who has an idea they want to explore. No prerequisite machine learning knowledge required. You train a computer to recognize your images, sounds, and poses without writing any machine learning code. Then, use your model in your own projects, sites, apps, and more.
  • 12
    Azure Machine Learning
    Accelerate the end-to-end machine learning lifecycle with Azure Machine Learning Studio. Empower developers and data scientists with a wide range of productive experiences for building, training, and deploying machine learning models faster. Accelerate time to market and foster team collaboration with industry-leading MLOps—DevOps for machine learning. Innovate on a secure, trusted platform, designed for responsible ML. Productivity for all skill levels, with code-first and drag-and-drop designer, and automated machine learning. Robust MLOps capabilities that integrate with existing DevOps processes and help manage the complete ML lifecycle. Responsible ML capabilities – understand models with interpretability and fairness, protect data with differential privacy and confidential computing, and control the ML lifecycle with audit trails and datasheets. Best-in-class support for open-source frameworks and languages including MLflow, Kubeflow, ONNX, PyTorch, TensorFlow, Python, and R.
  • 13
    DataChain
    iterative.ai

    DataChain connects unstructured data in cloud storage with AI models and APIs, enabling instant data insights by leveraging foundational models and API calls to quickly understand your unstructured files in storage. Its Pythonic stack accelerates development tenfold by replacing SQL data islands with Python-based data wrangling. DataChain ensures dataset versioning, guaranteeing traceability and full reproducibility for every dataset to streamline team collaboration and ensure data integrity. It allows you to analyze your data where it lives, keeping raw data in storage (S3, GCP, Azure, or local) while storing metadata in efficient data warehouses. DataChain offers tools and integrations that are cloud-agnostic for both storage and computing. With DataChain, you can query your unstructured multi-modal data, apply intelligent AI filters to curate data for training, and snapshot your unstructured data, the code for data selection, and any stored or computed metadata.
  • 14
    MapReduce
    Baidu AI Cloud

    You can deploy clusters on demand with automatic scaling and focus only on big data processing, analysis, and reporting. Backed by years of experience in massively distributed computing, our operations team handles cluster operations for you. The service automatically scales clusters up to increase computing capacity during peak periods and scales them down during off-peak periods to reduce cost. A management console facilitates cluster management, template customization, task submission, and alarm monitoring. Deployed alongside BCC, your servers can focus on your own business during busy hours and help BMR process big data during idle hours, reducing overall IT expenditure.
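    The MapReduce model itself is simple: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step aggregates each group. Below is a minimal single-process word-count sketch, illustrative only; BMR runs this pattern across a managed cluster:

```python
# A toy, in-process version of the MapReduce programming model.
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle_phase(pairs):
    """Shuffle: group intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate (here, sum) the values for each key."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data processing", "big data analysis"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
# counts == {"big": 2, "data": 2, "processing": 1, "analysis": 1}
```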
  • 15
    Azure Blob Storage
    Massively scalable and secure object storage for cloud-native workloads, archives, data lakes, high-performance computing, and machine learning. Azure Blob Storage helps you create data lakes for your analytics needs, and provides storage to build powerful cloud-native and mobile apps. Optimize costs with tiered storage for your long-term data, and flexibly scale up for high-performance computing and machine learning workloads. Blob storage is built from the ground up to support the scale, security, and availability needs of mobile, web, and cloud-native application developers. Use it as a cornerstone for serverless architectures such as Azure Functions. Blob storage supports the most popular development frameworks, including Java, .NET, Python, and Node.js, and is the only cloud storage service that offers a premium, SSD-based object storage tier for low-latency and interactive scenarios.
    Starting Price: $0.00099
  • 16
    SpliceCore
    Envisagenics

    Using RNA sequencing (RNA-seq) data and artificial intelligence is both a necessity and an opportunity for developing therapeutics that target splicing errors. Machine learning enables us to discover new splicing errors and quickly design therapeutic compounds to correct them. SpliceCore is our dedicated AI platform for RNA therapeutics discovery. We developed this technology platform specifically for the analysis of RNA sequencing data; it can identify, test, and validate hypothetical drug targets faster than traditional methods. At the heart of SpliceCore is our proprietary database of more than 5 million potential RNA splicing errors. It is the largest database of splicing errors in the world, and it is used to test every RNA sequencing dataset that is input for analysis. Scalable cloud computing enables us to process massive amounts of RNA sequencing data efficiently, at higher speed and lower cost, exponentially accelerating therapeutic innovation.
  • 17
    XRCLOUD

    GPU cloud computing is a GPU-based computing service with real-time, high-speed parallel computing and floating-point computing capacity. It is ideal for scenarios such as 3D graphics applications, video decoding, deep learning, and scientific computing. GPU instances can be managed just like a standard ECS with speed and ease, which effectively relieves computing pressure. The RTX 6000 GPU contains thousands of computing units and shows substantial advantages in parallel computing; for optimized deep learning, massive computations can be completed in a short time. GPUDirect seamlessly supports the transmission of big data across networks. With a built-in acceleration framework, quick deployment and fast instance distribution let you focus on core tasks. We offer optimal cloud performance at a transparent, cost-effective price: you can choose on-demand billing, or get further discounts by subscribing to resources.
    Starting Price: $4.13 per month
  • 18
    Autogon

    Autogon is a leading AI and machine learning company that simplifies complex technology to empower businesses with accessible, cutting-edge solutions for data-driven decisions and global competitiveness. Autogon models enable industries to leverage the power of AI, fostering innovation and fueling growth across diverse sectors. Experience the future of AI with Autogon Qore, an all-in-one solution for image classification, text generation, visual Q&A, sentiment analysis, voice cloning, and more. Make informed decisions, streamline operations, and drive growth without the need for extensive technical expertise. Autogon empowers engineers, analysts, and scientists to harness the full potential of artificial intelligence and machine learning for their projects and research, and lets you create custom software using clear APIs and integration SDKs.
  • 19
    IONOS Cloud GPU Servers
    IONOS GPU Servers provide an accelerated computing infrastructure designed to handle workloads that require significantly more processing power than traditional CPU-based systems. It integrates enterprise-grade NVIDIA GPUs such as the H100, H200, and L40s, as well as specialized AI accelerators like Intel Gaudi, enabling massive parallel processing for compute-intensive applications. GPU-accelerated instances extend cloud infrastructure with dedicated graphics processors so virtual machines can perform complex calculations and data-heavy operations much faster than conventional servers. It is particularly suitable for artificial intelligence, deep learning, and data science tasks that involve training models on large datasets or performing high-speed inference operations. It also supports big data analytics, scientific simulations, and visualization workloads such as 3D rendering or modeling that require high computational throughput.
    Starting Price: $3,990 per month
  • 20
    Amazon SageMaker HyperPod
    Amazon SageMaker HyperPod is a purpose-built, resilient compute infrastructure that simplifies and accelerates the development of large AI and machine-learning models by handling distributed training, fine-tuning, and inference across clusters with hundreds or thousands of accelerators, including GPUs and AWS Trainium chips. It removes the heavy lifting involved in building and managing ML infrastructure by providing persistent clusters that automatically detect and repair hardware failures, automatically resume workloads, and optimize checkpointing to minimize interruption risk, enabling months-long training jobs without disruption. HyperPod offers centralized resource governance; administrators can set priorities, quotas, and task-preemption rules so compute resources are allocated efficiently among tasks and teams, maximizing utilization and reducing idle time. It also supports “recipes” and pre-configured settings to quickly fine-tune or customize foundation models.
  • 21
    Voltage Park

    Voltage Park is a next-generation GPU cloud infrastructure provider, offering on-demand and reserved access to NVIDIA HGX H100 GPUs housed in Dell PowerEdge XE9680 servers, each equipped with 1TB of RAM and v52 CPUs. Their six Tier 3+ data centers across the U.S. ensure high availability and reliability, featuring redundant power, cooling, network, fire suppression, and security systems. A state-of-the-art 3200 Gbps InfiniBand network facilitates high-speed communication and low latency between GPUs and workloads. Voltage Park emphasizes uncompromising security and compliance, utilizing Palo Alto firewalls and rigorous protocols, including encryption, access controls, monitoring, disaster recovery planning, penetration testing, and regular audits. With a massive inventory of 24,000 NVIDIA H100 Tensor Core GPUs, Voltage Park enables scalable compute access ranging from 64 to 8,176 GPUs.
    Starting Price: $1.99 per hour
  • 22
    Monster API

    Effortlessly access powerful generative AI models with our auto-scaling APIs, zero management required. Generative AI models like Stable Diffusion, Pix2Pix, and DreamBooth are now an API call away. Build applications on top of these generative AI models using our scalable REST APIs, which integrate seamlessly and come at a fraction of the cost of other alternatives. Integrate with your existing systems seamlessly, without the need for extensive development: our APIs fit into your workflow with support for stacks like cURL, Python, Node.js, and PHP. We tap the unused computing power of millions of decentralized crypto-mining rigs worldwide, optimize them for machine learning, and package them with popular generative AI models like Stable Diffusion. By harnessing these decentralized resources, we can provide you with a scalable, globally accessible, and, most importantly, affordable platform for generative AI delivered through seamlessly integrable APIs.
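    A hypothetical sketch of calling such a REST API from Python with the requests library; the endpoint URL, payload fields, and header format below are illustrative placeholders, not Monster API's documented schema:

```python
import requests

# Placeholder endpoint and schema; consult the provider's API docs for the
# real URL, request fields, and authentication format.
API_URL = "https://api.example.com/v1/generate"
payload = {
    "model": "stable-diffusion",
    "prompt": "a lighthouse at dawn, oil painting",
    "samples": 1,
}
headers = {"Authorization": "Bearer YOUR_API_KEY"}

def generate(url=API_URL, data=payload):
    """POST the JSON payload and return the decoded JSON response."""
    response = requests.post(url, json=data, headers=headers, timeout=60)
    response.raise_for_status()
    return response.json()

# result = generate()  # requires a valid key and network access
```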
  • 23
    Microsoft Foundry Models
    Microsoft Foundry Models is a unified model catalog that gives enterprises access to more than 11,000 AI models from Microsoft, OpenAI, Anthropic, Mistral AI, Meta, Cohere, DeepSeek, xAI, and others. It allows teams to explore, test, and deploy models quickly using a task-centric discovery experience and integrated playground. Organizations can fine-tune models with ready-to-use pipelines and evaluate performance using their own datasets for more accurate benchmarking. Foundry Models provides secure, scalable deployment options with serverless and managed compute choices tailored to enterprise needs. With built-in governance, compliance, and Azure’s global security framework, businesses can safely operationalize AI across mission-critical workflows. The platform accelerates innovation by enabling developers to build, iterate, and scale AI solutions from one centralized environment.
  • 24
    Viso Suite

    Viso Suite is the world’s only end-to-end platform for computer vision. It enables teams to rapidly train, create, deploy, and manage computer vision applications without writing code from scratch. Use Viso Suite to deliver industry-leading computer vision and real-time deep learning systems with low-code and automated software infrastructure. Traditional development methods, fragmented software tools, and the lack of experienced engineers cost organizations significant time and lead to inefficient, low-performing, and expensive computer vision systems. Build and deploy better computer vision applications faster by abstracting and automating the entire lifecycle with Viso Suite, the all-in-one enterprise vision platform. Collect data for computer vision annotation with Viso Suite: use automated collection capabilities to gather high-quality training data, control and secure all data collection, and enable continuous data collection to further improve your AI models.
  • 25
    Gradio

    Build & Share Delightful Machine Learning Apps. Gradio is the fastest way to demo your machine learning model with a friendly web interface so that anyone can use it, anywhere. Gradio can be installed with pip, and creating a Gradio interface only requires adding a couple of lines of code to your project. You can choose from a variety of interface components to match your function's inputs and outputs. Gradio can be embedded in Python notebooks or presented as a webpage. A Gradio interface can automatically generate a public link you can share with colleagues, letting them interact with the model on your computer remotely from their own devices. Once you've created an interface, you can permanently host it on Hugging Face; Hugging Face Spaces will host the interface on its servers and provide you with a link you can share.
  • 26
    INAP DRaaS
    Powered by our partners Veeam and Zerto, INAP DRaaS is flexible enough to meet a diverse range of recovery objectives and budgets without compromise. INAP On-Demand DRaaS is a cost-efficient way to build redundancy into your infrastructure. It integrates seamlessly with our Virtual Private Cloud platform and features continuous replication; you pay only for the compute resources that you use during an actual failover event. INAP Dedicated DRaaS is a robust service perfect for mission-critical applications or those with strict compliance requirements. Built on our Dedicated Private Cloud infrastructure, it accommodates complex architecture and advanced configurations equally well. Failover events are painful; your disaster recovery solution shouldn't ever be, and we've made sure of it with our purposefully engineered DRaaS solutions.
  • 27
    Pryon

    Natural language processing is artificial intelligence that enables computers to analyze and understand human language. Pryon’s AI is trained to read, organize, and search in ways that previously required humans. This powerful capability is used in every interaction, both to understand a request and to retrieve the accurate response. The success of any NLP project is directly correlated with the sophistication of the underlying natural language technologies used. To make your content ready for use in chatbots, search, automations, and more, it must be broken into specific pieces so a user can get the exact answer, result, or snippet needed. This can be done manually, as when a specialist breaks information into intents and entities. Pryon instead creates a dynamic model of your content, automatically identifying and attaching rich metadata to each piece of information. When you need to add, change, or remove content, this model is regenerated with a click.
  • 28
    StoneFly

    StoneFly is a provider of high-performing, elastic, and always-available IT infrastructure solutions. Coupled with StoneFusion, our intelligent, patented operating system architecture, we can support your data-dependent processes and applications seamlessly anywhere, anytime. Configure backup, replication, disaster recovery, and scale-out block, file, and object storage in private and/or public clouds, with support for virtual and container hosting and more. StoneFly also offers cloud data migration services for email, archives, documents, SharePoint, and physical and virtual storage. Total backup and disaster recovery solutions come in a single appliance or cloud solution, and hyperconverged options allow physical machines to be restored as virtual machines running directly on the StoneFly disaster recovery appliance for instant recovery.
  • 29
    Tune Studio
    NimbleBox

    Tune Studio is an intuitive and versatile platform designed to streamline the fine-tuning of AI models with minimal effort. It empowers users to customize pre-trained machine learning models to suit their specific needs without requiring extensive technical expertise. With its user-friendly interface, Tune Studio simplifies the process of uploading datasets, configuring parameters, and deploying fine-tuned models efficiently. Whether you're working on NLP, computer vision, or other AI applications, Tune Studio offers robust tools to optimize performance, reduce training time, and accelerate AI development, making it ideal for both beginners and advanced users in the AI space.
    Starting Price: $10/user/month
  • 30
    Zeus Cloud

    Zeus Cloud (ZC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Zeus Cloud ZC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Zeus Cloud's proven computing environment. Zeus Cloud ZC2 offers the broadest and deepest compute platform with a choice of processor, storage, networking, operating system, and purchase model. We offer the fastest processors in the cloud, and we are the only cloud with 400 Gbps Ethernet networking. We have the most powerful GPU instances for machine learning training and graphics workloads, as well as the lowest cost-per-inference instances in the cloud. More SAP, HPC, machine learning, and Windows workloads run on ZC2 than on any other cloud.
  • 31
    Massed Compute

    Massed Compute offers high-performance GPU computing solutions tailored for AI, machine learning, scientific simulations, and data analytics. As an NVIDIA Preferred Partner, it provides access to a comprehensive catalog of enterprise-grade NVIDIA GPUs, including A100, H100, L40, and A6000, ensuring optimal performance for various workloads. Users can choose between bare metal servers for maximum control and performance or on-demand compute instances for flexibility and scalability. Massed Compute's Inventory API allows seamless integration of GPU resources into existing business platforms, enabling provisioning, rebooting, and management of instances with ease. Massed Compute's infrastructure is housed in Tier III data centers, offering consistent uptime, advanced redundancy, and efficient cooling systems. With SOC 2 Type II compliance, the platform ensures high standards of security and data protection.
    Starting Price: $21.60 per hour
  • 32
    NeuroSplit
    NeuroSplit is a patent-pending adaptive-inferencing technology that dynamically “slices” a model’s neural network connections in real time to create two synchronized sub-models, executing initial layers on the end user’s device and offloading the remainder to cloud GPUs, thereby harnessing idle local compute and reducing server costs by up to 60% without sacrificing performance or accuracy. Integrated into Skymel’s Orchestrator Agent platform, NeuroSplit routes each inference request across devices and clouds based on specified latency, cost, or resource constraints, automatically applying fallback logic and intent-driven model selection to maintain reliability under varying network conditions. Its decentralized architecture ensures end-to-end encryption, role-based access controls, and isolated execution contexts, while real-time analytics dashboards provide insights into cost, throughput, and latency metrics.
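    The split-execution idea can be illustrated with a toy sketch: run the first k layers of a model locally, then hand the intermediate activation to the cloud. Every name below is hypothetical and the layers are stand-ins; NeuroSplit's actual slicing is proprietary and chooses the split point dynamically from latency, cost, and resource constraints:

```python
# Toy model: a chain of "layers", each a simple function of its input.
def make_layer(weight):
    return lambda x: x * weight  # stand-in for a real neural-network layer

layers = [make_layer(w) for w in (2.0, 0.5, 3.0, 1.5)]
split_point = 2  # a real system would pick this adaptively at runtime

def device_infer(x):
    """Run the initial layers on the end user's device."""
    for layer in layers[:split_point]:
        x = layer(x)
    return x  # the intermediate activation shipped over the network

def cloud_infer(x):
    """Run the remaining layers on cloud GPUs."""
    for layer in layers[split_point:]:
        x = layer(x)
    return x

# End-to-end result is identical to running the whole chain in one place.
full = cloud_infer(device_infer(4.0))
```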
  • 33
    Sangfor Cloud Platform
    Sangfor Technologies

    Sangfor Cloud Platform (SCP) can manage cross-region clusters and provides heterogeneous management support for VMware data centers. It can divide the managed pool of resources into multiple logical resource pools, and supports customized approval processes and billing through classified administrator permissions. It also strengthens network management and security between tenants: each tenant can configure its own firewall, and flexible image management effectively reduces the operations and maintenance workload of platform administrators. For business reliability, remote disaster recovery services provide users with a complete virtual machine-level remote disaster recovery plan. The standardized, process-oriented, and automated Sangfor cloud computing platform reduces the complexity of building and managing a cloud data center.
  • 34
    Qontigo

    At Qontigo, we partner with our clients to create solutions that empower investment intelligence to drive targeted sustainable returns. With our award-winning STOXX and DAX indices and institutionally proven Axioma analytics, we deliver sophisticated solutions at scale, backed by modern technology, open architecture, and unparalleled client focus. We believe that financial markets must become open and explainable and create positive societal impact. Qontigo will be a catalyst for this transformation by enabling our clients to provide sophisticated, institutional-quality financial solutions at scale to the broad investing public. Working with us means you get a variety of tools for generating alpha and better decision-making, all under one roof. An open architecture and API-first philosophy help you realize efficiencies in time, cost, and process.
  • 35
    Alibaba Cloud
    As a business unit of Alibaba Group (NYSE: BABA), Alibaba Cloud provides a comprehensive suite of global cloud computing services to power both our international customers’ online businesses and Alibaba Group’s own e-commerce ecosystem. In January 2017, Alibaba Cloud became the official Cloud Services Partner of the International Olympic Committee. By harnessing, and improving on, the latest cloud technology and security systems, we tirelessly work towards our vision - to make it easier for you to do business anywhere, with anyone in the world. Alibaba Cloud provides cloud computing services for large and small businesses, individual developers, and the public sector in over 200 countries and regions.
  • 36
    PredictSense
PredictSense is an end-to-end machine learning platform powered by AutoML to create AI-powered analytical solutions. Fuel the new technological revolution of tomorrow by accelerating machine intelligence. AI is key to unlocking value from enterprise data investments. PredictSense enables businesses to monetize critical data infrastructure and technology investments by rapidly creating AI-driven advanced analytical solutions. Empower data science and business teams with advanced capabilities to quickly build and deploy robust technology solutions at scale. Easily integrate AI into the current product ecosystem and fast-track GTM for new AI solutions. Save significant cost, time, and effort by building complex ML models with AutoML. PredictSense democratizes AI for every individual in the organization and creates a simple, user-friendly collaboration platform to seamlessly manage critical ML deployments.
  • 37
Pangea

    Pangea is the first Security Platform as a Service (SPaaS) delivering comprehensive security functionality which app developers can leverage with a simple call to Pangea’s APIs. The platform offers foundational security services such as Authentication, Authorization, Audit Logging, Secrets Management, Entitlement and Licensing. Other security functions include PII Redaction, Embargo, as well as File, IP, URL and Domain intelligence. Just as you would use AWS for compute, Twilio for communications, or Stripe for payments - Pangea provides security functions directly into your apps. Pangea unifies security for developers, delivering a single platform where API-first security services are streamlined and easy for any developer to deliver secure user experiences.
  • 38
    QCT QuantaGrid
    QCT (Quanta Cloud Technology) QuantaGrid servers are a family of high-performance, scalable, and energy-efficient rackmount servers designed for use in data centers and cloud computing environments. These servers are engineered to deliver exceptional performance and flexibility for a wide range of workloads, including virtualization, high-performance computing (HPC), big data analytics, artificial intelligence, and machine learning (ML). QuantaGrid servers are known for their modular design, allowing for easy customization and configuration based on the specific needs of the deployment. Key features of the QuantaGrid series include support for the latest Intel or AMD processors, high memory capacity, various storage options including NVMe drives, and efficient thermal management for optimized performance and energy savings. With a focus on reliability, scalability, and ease of management, QCT QuantaGrid servers provide organizations with robust solutions for handling data workloads.
  • 39
Federator.ai

ProphetStor Data Services

Federator.ai®, ProphetStor’s Artificial Intelligence for IT Operations (AIOps) platform, provides intelligence to orchestrate container resources on top of VMs (virtual machines) or bare metal, allowing users to operate applications without the need to manage the underlying computing resources. Container adoption is growing, and Kubernetes is becoming the de facto standard among container management platforms. Whether container adoption occurs on-premises, in public clouds, or both, the operational overhead is enormous. Using AI/machine learning technology, Federator.ai® makes workload and resource predictions for containerized applications. It helps IT administrators foresee the computing resource demands of applications and manage computing resources while optimizing costs without sacrificing performance.
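The prediction-driven sizing idea can be illustrated with a deliberately simple sketch (a moving-average forecast with a safety margin; Federator.ai's actual models are far more sophisticated and are not shown here):

```python
# Hypothetical sketch of workload prediction for container sizing:
# forecast a container's CPU demand from recent samples and recommend
# a resource request with headroom. Not Federator.ai's algorithm.

def predict_next(samples, window=3):
    """Simple moving-average forecast over the last `window` samples."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

def recommend_request(samples, headroom=1.2):
    """Recommend a CPU request: forecast plus a 20% safety margin."""
    return round(predict_next(samples) * headroom, 2)

cpu_millicores = [210, 240, 230, 260, 250]   # recent usage samples
print(recommend_request(cpu_millicores))     # forecast scaled by headroom
```

A real AIOps system would replace the moving average with a learned time-series model and feed the recommendation back into the orchestrator's resource requests.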
  • 40
    Intel Open Edge Platform
    The Intel Open Edge Platform simplifies the development, deployment, and scaling of AI and edge computing solutions on standard hardware with cloud-like efficiency. It provides a curated set of components and workflows that accelerate AI model creation, optimization, and application development. From vision models to generative AI and large language models (LLM), the platform offers tools to streamline model training and inference. By integrating Intel’s OpenVINO toolkit, it ensures enhanced performance on Intel CPUs, GPUs, and VPUs, allowing organizations to bring AI applications to the edge with ease.
  • 41
Ollama

Ollama is an innovative platform focused on AI-powered tools and services that make it easy to run AI models locally and to build AI-driven applications. By offering a range of solutions, including natural language processing models and customizable AI features, Ollama empowers developers, businesses, and organizations to integrate advanced machine learning technologies into their workflows. With an emphasis on usability and accessibility, Ollama strives to simplify the process of working with AI, making it an appealing option for those looking to harness the potential of artificial intelligence in their projects.
  • 42
Schrödinger

    Transform drug discovery and materials research with advanced molecular modeling. Our physics-based computational platform integrates differentiated solutions for predictive modeling, data analytics, and collaboration to enable rapid exploration of chemical space. Our platform is deployed by industry leaders worldwide for drug discovery, as well as for materials science in fields as diverse as aerospace, energy, semiconductors, and electronics displays. The platform powers our own drug discovery efforts, from target identification to hit discovery to lead optimization. It also drives our research collaborations to develop novel medicines for critical public health needs. With more than 150 Ph.D. scientists on our team, we invest heavily in R&D. We’ve published over 400 peer-reviewed papers that demonstrate the strength of our physics-based approaches, and we’re continually pushing the limits of computer modeling.
  • 43
    Cisco Plus
It's all the benefits of Cisco, now as-a-service. Boost speed, agility, and scale with on-demand solutions that intelligently adapt to your business needs. Cisco Plus solutions deliver cross-portfolio technologies to help solve your biggest problems and provide faster time to value. The initial offerings deliver hybrid cloud technologies and will later expand to a broader catalog of services built and delivered with our partner ecosystem. As-a-service models accelerate the shift from infrastructure management to business outcomes. Cisco Plus makes it simpler for customers to acquire and manage Cisco solutions. Cisco Plus offers data center networking, bare metal computing, edge computing, virtualization services, and VDI.
  • 44
    Compute with Hivenet
    Compute with Hivenet is the world's first truly distributed cloud computing platform, providing reliable and affordable on-demand computing power from a certified network of contributors. Designed for AI model training, inference, and other compute-intensive tasks, it provides secure, scalable, and on-demand GPU resources at up to 70% cost savings compared to traditional cloud providers. Powered by RTX 4090 GPUs, Compute rivals top-tier platforms, offering affordable, transparent pricing with no hidden fees. Compute is part of the Hivenet ecosystem, a comprehensive suite of distributed cloud solutions that prioritizes sustainability, security, and affordability. Through Hivenet, users can leverage their underutilized hardware to contribute to a powerful, distributed cloud infrastructure.
    Starting Price: $0.10/hour
  • 45
    Cerbrec Graphbook
Construct your model directly as a live, interactive graph. Preview data flowing through your visualized model architecture. View and edit your visualized model architecture down to the atomic level. Graphbook provides X-ray transparency with no black boxes. Graphbook live-checks data types and shapes with understandable error messages, making model debugging quick and easy. By abstracting away software dependencies and environment configuration, Graphbook lets you focus on model architecture and data flow, with the computing resources you need on hand. Cerbrec Graphbook is a visual IDE for AI modeling, transforming cumbersome model development into a user-friendly experience. With a growing community of machine learning engineers and data scientists, Graphbook helps developers work with their text and tabular data to fine-tune language models such as BERT and GPT. Everything is fully managed out of the box so you can preview your model exactly as it will behave.
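The kind of live shape checking described above can be illustrated with a minimal sketch (hypothetical code, not Graphbook's implementation): validate the shapes flowing through a chain of matrix multiplications before any data is touched, and fail with a readable message.

```python
# Illustrative sketch of static shape checking for a chain of matmuls.
# The checker propagates shapes symbolically, so errors surface before
# any computation runs. Not Graphbook's actual checker.

def check_matmul(a_shape, b_shape):
    """Return the output shape of a_shape @ b_shape, or raise clearly."""
    if a_shape[-1] != b_shape[0]:
        raise ValueError(
            f"shape mismatch: inner dims {a_shape[-1]} and {b_shape[0]} "
            f"differ in {a_shape} @ {b_shape}")
    return (*a_shape[:-1], b_shape[1])

# (32, 8) @ (8, 4) @ (4, 2): the result shape is known without data.
shape = (32, 8)
for layer in [(8, 4), (4, 2)]:
    shape = check_matmul(shape, layer)
print(shape)  # (32, 2)
```

Swapping a layer to an incompatible shape, e.g. `(5, 2)`, raises immediately with the offending dimensions named, which is the debugging experience the description refers to.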
  • 46
Substrate

Substrate is the platform for agentic AI. Elegant abstractions and high-performance components, optimized models, a vector database, a code interpreter, and a model router. Substrate is the only compute engine designed to run multi-step AI workloads. Describe your task by connecting components and let Substrate run it as fast as possible. We analyze your workload as a directed acyclic graph and optimize it, for example by merging nodes that can be run in a batch. The Substrate inference engine automatically schedules your workflow graph with optimized parallelism, reducing the complexity of chaining multiple inference APIs. No more async programming, just connect nodes and let Substrate parallelize your workload. Our infrastructure guarantees your entire workload runs in the same cluster, often on the same machine. You won’t spend fractions of a second per task on unnecessary data roundtrips and cross-region HTTP transport.
    Starting Price: $30 per month
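The batch-merging optimization described above can be sketched generically (illustrative Python only; the node names and grouping rule are assumptions, not Substrate's engine): nodes that call the same model and share the same dependency frontier are candidates for a single batched inference call.

```python
# Hedged sketch of DAG batching: given workflow nodes and their
# dependencies, group independent nodes that call the same model so
# each group could run as one batched call. Not Substrate's engine.

from collections import defaultdict

def batchable_groups(nodes, deps):
    """Group node names by (model, dependency frontier); groups of
    size > 1 can be merged into a single batched inference call."""
    groups = defaultdict(list)
    for name, model in nodes.items():
        key = (model, frozenset(deps.get(name, ())))
        groups[key].append(name)
    return [sorted(g) for g in groups.values() if len(g) > 1]

nodes = {"caption_a": "llava", "caption_b": "llava", "summarize": "llama3"}
deps = {"summarize": ("caption_a", "caption_b")}
print(batchable_groups(nodes, deps))  # [['caption_a', 'caption_b']]
```

Here the two captioning nodes have no dependencies and call the same model, so they merge into one batch; the summarize node waits on both and runs afterward.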
  • 47
VPLS

    With 19 data centers worldwide and over 68,000 servers under management, VPLS has the expertise and global reach for all your cloud, colocation, hosting, backup, and disaster recovery needs. VPLS is your partner for Backup as a Service (BaaS) and Disaster Recovery as a Service (DRaaS). We incorporate various backup technologies coupled with our Cloud enterprise to deliver tailored solutions to meet your RTO/RPO. VPLS Cloud Pool resources can also be consumed on a Pay-Per-Use or subscription basis for on-demand infrastructure. VPLS’s Managed Network service comes with 24/7/365 monitoring, alerting, and incident response by our in-house NOC/SOC staffed with certified experts. Our trained professionals will have visibility into your network via real-time status monitoring and up-to-the-second logs, which will allow them to easily identify and correct any issues that occur within a guaranteed SLA.
  • 48
    Databarracks DRaaS
Disaster recovery is one of the best uses for cloud computing: demand for computing resources stays minimal for months, then spikes sharply for a short period. With DRaaS, you only pay for the resources when you need them. The concept is simple. In the past, IT disaster recovery meant buying an exact replica of your physical environment and failing over to it when the primary environment experienced downtime. Now, we replicate your critical systems and data, and only spin up the resources when you need them, usually for testing or failover. It’s cheaper, faster, more efficient, and more flexible.
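The economics can be sketched with back-of-the-envelope arithmetic (all rates here are made up for illustration, not Databarracks pricing): compare an always-on replica billed every hour with DRaaS, where cheap replication runs continuously and full compute is billed only for the hours it is actually spun up.

```python
# Toy cost comparison: always-on replica vs. DRaaS pay-per-use.
# Rates are invented for illustration only.

HOURS_PER_YEAR = 365 * 24  # 8760

def replica_cost(rate_per_hour):
    """An exact duplicate environment billed around the clock."""
    return rate_per_hour * HOURS_PER_YEAR

def draas_cost(replication_rate, compute_rate, active_hours):
    """Continuous low-cost replication, plus full compute only while
    spun up for testing or failover."""
    return replication_rate * HOURS_PER_YEAR + compute_rate * active_hours

print(replica_cost(2.00))          # always-on duplicate, whole year
print(draas_cost(0.25, 2.00, 48))  # replication + two days of failover/testing
```

With these invented rates the always-on replica costs 17,520 per year, while DRaaS with two days of active compute costs 2,286, which is the shape of the saving the paragraph describes.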
  • 49
BenevolentAI

BenevolentAI is an AI-enabled drug discovery platform and scientific technology company that unites advanced artificial intelligence, machine learning, and domain-specific science to accelerate the discovery, design, and development of new medicines for complex diseases. By making sense of vast, diverse biomedical data, it generates actionable scientific insights faster than traditional methods. Its proprietary Benevolent Platform ingests and harmonizes structured and unstructured biomedical information, including literature, genomics, clinical information, and multi-omics data, into a comprehensive knowledge graph. This enables scientists to reason across biological systems, generate hypotheses, predict novel drug targets, and design candidate molecules with higher confidence and lower failure rates.
  • 50
MCPTotal

MCPTotal is a secure, enterprise-grade platform designed to manage, host, and govern MCP (Model Context Protocol) servers and AI-tool integrations in a controlled, audit-ready environment rather than letting them run ad hoc on developers’ machines. It offers a “Hub”, a centralized, sandboxed runtime environment where MCP servers are containerized, hardened, and pre-vetted for security. A built-in “MCP Gateway” acts like an AI-native firewall: it inspects MCP traffic in real time, enforces policies, monitors all tool calls and data flows, and prevents common risks such as data exfiltration, prompt-injection attacks, or uncontrolled credential usage. All API keys, environment variables, and credentials are stored securely in an encrypted vault, avoiding credential sprawl and the risk of storing secrets in plaintext files on local machines. MCPTotal supports discovery and governance; security teams can scan desktops and cloud instances to detect where MCP servers are in use.