Alternatives to Foundry Local
Compare Foundry Local alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Foundry Local in 2026. Compare features, ratings, user reviews, pricing, and more from Foundry Local competitors and alternatives to make an informed decision for your business.
1
LM-Kit.NET
LM-Kit
LM-Kit.NET is a cutting-edge, high-level inference SDK designed specifically to bring the advanced capabilities of Large Language Models (LLMs) into the C# ecosystem. Tailored for developers working within .NET, LM-Kit.NET provides a comprehensive suite of powerful Generative AI tools, making it easier than ever to integrate AI-driven functionality into your applications. The SDK is versatile, offering specialized AI features that cater to a variety of industries. These include text completion, Natural Language Processing (NLP), content retrieval, text summarization, text enhancement, language translation, and much more. Whether you are looking to enhance user interaction, automate content creation, or build intelligent data retrieval systems, LM-Kit.NET offers the flexibility and performance needed to accelerate your project.
2
StackAI
StackAI
StackAI is an enterprise AI automation platform to build end-to-end internal tools and processes with AI agents in a fully compliant and secure way. Designed for large organizations, it enables teams to automate complex workflows across operations, compliance, finance, IT, and support without heavy engineering. With StackAI you can:
• Connect knowledge bases (SharePoint, Confluence, Notion, Google Drive, databases) with versioning, citations, and access controls.
• Deploy AI agents as chat assistants, advanced forms, or APIs integrated into Slack, Teams, Salesforce, HubSpot, or ServiceNow.
• Govern usage with enterprise security: SSO (Okta, Azure AD, Google), RBAC, audit logs, PII masking, data residency, and cost controls.
• Route across OpenAI, Anthropic, Google, or local LLMs with guardrails, evaluations, and testing.
• Start fast with templates for Contract Analyzer, Support Desk, RFP Response, Investment Memo Generator, and more.
3
TensorFlow
TensorFlow
An end-to-end open source machine learning platform. TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state of the art in ML and developers easily build and deploy ML-powered applications. Build and train ML models easily using intuitive high-level APIs like Keras with eager execution, which makes for immediate model iteration and easy debugging. Easily train and deploy models in the cloud, on-prem, in the browser, or on-device, no matter what language you use. A simple and flexible architecture takes new ideas from concept to code, to state-of-the-art models, and to publication faster. Build, deploy, and experiment easily with TensorFlow.
Starting Price: Free
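As a hedged illustration of the high-level Keras workflow mentioned above, the sketch below trains a tiny classifier end to end; the dataset choice and layer sizes are illustrative assumptions, not part of this listing:

```python
import tensorflow as tf

# Load a small built-in dataset and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define, compile, train, and evaluate a model with the high-level Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)
model.evaluate(x_test, y_test)
```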
4
Microsoft Foundry Models
Microsoft
Microsoft Foundry Models is a unified model catalog that gives enterprises access to more than 11,000 AI models from Microsoft, OpenAI, Anthropic, Mistral AI, Meta, Cohere, DeepSeek, xAI, and others. It allows teams to explore, test, and deploy models quickly using a task-centric discovery experience and integrated playground. Organizations can fine-tune models with ready-to-use pipelines and evaluate performance using their own datasets for more accurate benchmarking. Foundry Models provides secure, scalable deployment options with serverless and managed compute choices tailored to enterprise needs. With built-in governance, compliance, and Azure’s global security framework, businesses can safely operationalize AI across mission-critical workflows. The platform accelerates innovation by enabling developers to build, iterate, and scale AI solutions from one centralized environment.
5
Microsoft Foundry
Microsoft
Microsoft Foundry is an end-to-end platform for building, optimizing, and governing AI apps and agents at scale. It gives developers access to more than 11,000 models, from foundational to multimodal, all available through one unified interface. With a simple, interoperable API and SDK, teams can build faster, ship confidently, and reduce integration complexity. Foundry connects seamlessly with your business systems, enabling AI solutions that understand your data and operate securely across your organization. Built-in governance, monitoring, and fleetwide controls ensure responsible AI deployment from day one. Microsoft Foundry helps companies turn AI into real business impact with speed, security, and precision.
6
LEAP
Liquid AI
The LEAP Edge AI Platform offers a full-stack on-device AI toolchain that enables developers to build edge AI applications, from model selection through inference, entirely on device. It includes a best-model search engine to find the most appropriate model for a given task and device constraint, a curated library of pre-trained model bundles ready for download, and fine-tuning tools (such as GPU-optimized scripts) for customizing models like LFM2 to specific use cases. It supports vision-enabled capabilities across iOS, Android, and laptop devices, and includes function-calling so AI models can interact with external systems via structured outputs. For deployment, LEAP provides an Edge SDK that lets developers load and query models locally, just like a cloud API, but entirely offline, and a model bundling service to package any supported model or checkpoint into a bundle optimized for edge deployment.
Starting Price: Free
7
Google AI Edge
Google
Google AI Edge offers a comprehensive suite of tools and frameworks designed to facilitate the deployment of artificial intelligence across mobile, web, and embedded applications. By enabling on-device processing, it reduces latency, allows offline functionality, and ensures data remains local and private. It supports cross-platform compatibility, allowing the same model to run seamlessly across embedded systems. It is also multi-framework compatible, working with models from JAX, Keras, PyTorch, and TensorFlow. Key components include low-code APIs for common AI tasks through MediaPipe, enabling quick integration of generative AI, vision, text, and audio functionalities. You can visualize the transformation of your model through conversion and quantization, then explore, debug, and compare models visually, overlaying comparison results and numerical performance data to identify problematic hotspots.
Starting Price: Free
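As a hedged sketch of the low-code MediaPipe Tasks API mentioned above, the following classifies an image entirely on-device; the model file and image paths are placeholder assumptions:

```python
import mediapipe as mp
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

# "classifier.tflite" and "photo.jpg" are placeholders; download a MediaPipe
# image-classification model bundle and supply your own image.
options = vision.ImageClassifierOptions(
    base_options=mp_python.BaseOptions(model_asset_path="classifier.tflite"),
    max_results=3,
)
with vision.ImageClassifier.create_from_options(options) as classifier:
    image = mp.Image.create_from_file("photo.jpg")
    result = classifier.classify(image)  # runs locally, no cloud round-trip
    for category in result.classifications[0].categories:
        print(category.category_name, category.score)
```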
8
NeuroSplit
Skymel
NeuroSplit is a patent-pending adaptive-inferencing technology that dynamically “slices” a model’s neural network connections in real time to create two synchronized sub-models, executing initial layers on the end user’s device and offloading the remainder to cloud GPUs. This harnesses idle local compute and reduces server costs by up to 60% without sacrificing performance or accuracy. Integrated into Skymel’s Orchestrator Agent platform, NeuroSplit routes each inference request across devices and clouds based on specified latency, cost, or resource constraints, automatically applying fallback logic and intent-driven model selection to maintain reliability under varying network conditions. Its decentralized architecture ensures end-to-end encryption, role-based access controls, and isolated execution contexts, while real-time analytics dashboards provide insight into cost, throughput, and latency metrics.
9
LiteRT
Google
LiteRT (Lite Runtime), formerly known as TensorFlow Lite, is Google's high-performance runtime for on-device AI. It enables developers to deploy machine learning models across various platforms and microcontrollers. LiteRT supports models from TensorFlow, PyTorch, and JAX, converting them into the efficient FlatBuffers format (.tflite) for optimized on-device inference. Key features include low latency, enhanced privacy by processing data locally, reduced model and binary sizes, and efficient power consumption. The runtime offers SDKs in multiple languages such as Java/Kotlin, Swift, Objective-C, C++, and Python, facilitating integration into diverse applications. Hardware acceleration is achieved through delegates like GPU and iOS Core ML, improving performance on supported devices. LiteRT Next, currently in alpha, introduces a new set of APIs that streamline on-device hardware acceleration.
Starting Price: Free
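A minimal sketch of on-device inference with a .tflite file, using the TensorFlow Lite interpreter API that LiteRT inherits; the model path and dummy input are assumptions:

```python
import numpy as np
import tensorflow as tf

# "model.tflite" is a placeholder for any FlatBuffers model file.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy tensor matching the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```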
10
Ministral 3B
Mistral AI
Mistral AI introduced two state-of-the-art models for on-device computing and edge use cases, named "les Ministraux": Ministral 3B and Ministral 8B. These models set a new frontier in knowledge, commonsense reasoning, function-calling, and efficiency in the sub-10B category. They can be used or tuned for various applications, from orchestrating agentic workflows to creating specialist task workers. Both models support up to 128k context length (currently 32k on vLLM), and Ministral 8B features a special interleaved sliding-window attention pattern for faster and memory-efficient inference. These models were built to provide a compute-efficient and low-latency solution for scenarios such as on-device translation, internet-less smart assistants, local analytics, and autonomous robotics. Used in conjunction with larger language models like Mistral Large, les Ministraux also serve as efficient intermediaries for function-calling in multi-step agentic workflows.
Starting Price: Free
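A hedged sketch of serving one of les Ministraux locally with vLLM, which the entry mentions; the Hugging Face model id and tokenizer setting follow Mistral's published examples and should be treated as assumptions:

```python
from vllm import LLM, SamplingParams

# Assumed Hugging Face model id; Mistral's published vLLM examples also pass
# tokenizer_mode="mistral" for this model family.
llm = LLM(model="mistralai/Ministral-8B-Instruct-2410", tokenizer_mode="mistral")
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(
    ["Translate to French: The weather is nice today."], params
)
print(outputs[0].outputs[0].text)
```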
11
Oumi
Oumi
Oumi is a fully open source platform that streamlines the entire lifecycle of foundation models, from data preparation and training to evaluation and deployment. It supports training and fine-tuning models ranging from 10 million to 405 billion parameters using state-of-the-art techniques such as SFT, LoRA, QLoRA, and DPO. The platform accommodates both text and multimodal models, including architectures like Llama, DeepSeek, Qwen, and Phi. Oumi offers tools for data synthesis and curation, enabling users to generate and manage training datasets effectively. For deployment, it integrates with popular inference engines like vLLM and SGLang, ensuring efficient model serving. The platform also provides comprehensive evaluation capabilities across standard benchmarks to assess model performance. Designed for flexibility, Oumi can run on various environments, from local laptops to cloud infrastructures such as AWS, Azure, GCP, and Lambda.
Starting Price: Free
12
Microsoft Foundry Agent Service
Microsoft
Microsoft Foundry Agent Service is a secure, enterprise-ready platform for designing, deploying, and orchestrating AI agents at scale. It gives teams a streamlined interface and toolset to automate complex workflows using multi-agent systems. Developers can build with hosted agents, custom code, or agent frameworks while taking advantage of Azure’s reliability, scalability, and integrated observability. Built-in tools, enterprise connectors, and Model Context Protocol support make it easy for agents to interact with business systems and organizational data. Security, access governance, and compliance are embedded throughout, allowing companies to maintain full control while deploying intelligent automation across critical processes. With one-click deployment to Microsoft 365 experiences, Foundry Agent Service accelerates how organizations operationalize AI in everyday work.
13
Llama Stack
Meta
Llama Stack is a modular framework designed to streamline the development of applications powered by Meta's Llama language models. It offers a client-server architecture with flexible configurations, allowing developers to mix and match various providers for components such as inference, memory, agents, telemetry, and evaluations. The framework includes pre-configured distributions tailored for different deployment scenarios, enabling seamless transitions from local development to production environments. Developers can interact with the Llama Stack server using client SDKs available in multiple programming languages, including Python, Node.js, Swift, and Kotlin. Comprehensive documentation and example applications are provided to assist users in building and deploying Llama-based applications efficiently.
Starting Price: Free
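A hedged sketch of the Python client SDK mentioned above; the server URL and model id are placeholder assumptions for a locally running Llama Stack server:

```python
from llama_stack_client import LlamaStackClient

# Assumes a Llama Stack server on its default local port and a registered
# model; both values below are placeholders.
client = LlamaStackClient(base_url="http://localhost:8321")
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Summarize Llama Stack in one line."}],
)
print(response.completion_message.content)
```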
14
OpenVINO
Intel
The Intel® Distribution of OpenVINO™ toolkit is an open-source AI development toolkit that accelerates inference across Intel hardware platforms. Designed to streamline AI workflows, it allows developers to deploy optimized deep learning models for computer vision, generative AI, and large language models (LLMs). With built-in tools for model optimization, the platform ensures high throughput and lower latency, reducing model footprint without compromising accuracy. OpenVINO™ is perfect for developers looking to deploy AI across a range of environments, from edge devices to cloud servers, ensuring scalability and performance across Intel architectures.
Starting Price: Free
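A minimal sketch of the OpenVINO inference flow on a CPU device; the IR file name and dummy input are assumptions:

```python
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder OpenVINO IR file
compiled = core.compile_model(model, "CPU")

# Run inference with a dummy input matching the model's first input shape.
input_port = compiled.input(0)
dummy = np.zeros(input_port.shape, dtype=np.float32)
result = compiled(dummy)
print(result[compiled.output(0)])
```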
15
Phi-4-mini-flash-reasoning
Microsoft
Phi-4-mini-flash-reasoning is a 3.8 billion-parameter open model in Microsoft’s Phi family, purpose-built for edge, mobile, and other resource-constrained environments where compute, memory, and latency are tightly limited. It introduces the SambaY decoder-hybrid-decoder architecture with Gated Memory Units (GMUs) interleaved alongside Mamba state-space and sliding-window attention layers, delivering up to 10× higher throughput and a 2–3× reduction in latency compared to its predecessor without sacrificing advanced math and logic reasoning performance. Supporting a 64K-token context length and fine-tuned on high-quality synthetic data, it excels at long-context retrieval, reasoning tasks, and real-time inference, all deployable on a single GPU. Phi-4-mini-flash-reasoning is available today via Azure AI Foundry, NVIDIA API Catalog, and Hugging Face, enabling developers to build fast, scalable, logic-intensive applications.
16
Ministral 8B
Mistral AI
Mistral AI has introduced two advanced models for on-device computing and edge applications, named "les Ministraux": Ministral 3B and Ministral 8B. These models excel in knowledge, commonsense reasoning, function-calling, and efficiency within the sub-10B parameter range. They support up to 128k context length and are designed for various applications, including on-device translation, offline smart assistants, local analytics, and autonomous robotics. Ministral 8B features an interleaved sliding-window attention pattern for faster and more memory-efficient inference. Both models can function as intermediaries in multi-step agentic workflows, handling tasks like input parsing, task routing, and API calls based on user intent with low latency and cost. Benchmark evaluations indicate that les Ministraux consistently outperform comparable models across multiple tasks. As of October 16, 2024, both models are available, with Ministral 8B priced at $0.1 per million tokens.
Starting Price: Free
17
iCast ERP Foundry Software
Ellipsis Infotech
Our software solutions for the foundry industry, ‘iCast’, iCastPRO, and iCastENTERPRISE, are developed in consultation with leading foundrymen, foundry professionals, and management consultants. They have been successfully implemented and are in use in various foundries. In a very short time span, iCast has become a popular and favored foundry software for production and management among foundries. The smart analysis and BI reports generated by iCast have proven an enormous help to foundry owners and managers in overcoming challenges like data capture, data analysis, and business decision-making. The software addresses almost all the basic day-to-day needs and requirements that arise in a foundry and covers all the major functionality of the foundry industry.
Starting Price: $700 one-time payment
18
SenseFoundry
SenseTime
SenseFoundry is a one-stop software platform tailored for Smart City management, addressing the needs of customers in the public sector. SenseFoundry Enterprise is our software platform to facilitate and accelerate the digital transformation of our enterprise customers, addressing complex demands from different industry verticals. We work with city administrators to build future-oriented urban management platforms. Integrated with the IT infrastructure of cities, our SenseFoundry software platform, powered by AI models, transcribes raw and real-time city visual data into insights, alerts, and actions. SenseFoundry is used to monitor the conditions of public facilities such as fire hydrants, manhole covers, power poles, and road signs. It is also used to track incidents, such as traffic accidents, fire and smoke, emergency exit obstructions, exposed garbage, road damage, and unauthorized parking, as well as the impact of natural disasters such as flooding and typhoons.
19
Ai2 OLMoE
The Allen Institute for Artificial Intelligence
Ai2 OLMoE is a fully open source mixture-of-experts language model that is capable of running completely on-device, allowing you to try our model privately and securely. Our app is intended to help researchers explore how to make on-device intelligence better and to enable developers to quickly prototype new AI experiences, all with no cloud connectivity required. OLMoE is a highly efficient mixture-of-experts version of the Ai2 OLMo family of models. Experience which real-world tasks state-of-the-art local models are capable of, research how to improve small AI models, test your own models locally using our open source codebase, and integrate OLMoE into other iOS applications. The Ai2 OLMoE app provides privacy and security by operating completely on-device. Easily share the output of your conversations with friends or colleagues. The OLMoE model and the application code are fully open source.
Starting Price: Free
20
Phi-4
Microsoft
Phi-4 is a 14B parameter state-of-the-art small language model (SLM) that excels at complex reasoning in areas such as math, in addition to conventional language processing. Phi-4 is the latest member of our Phi family of small language models and demonstrates what’s possible as we continue to probe the boundaries of SLMs. Phi-4 is currently available on Azure AI Foundry under a Microsoft Research License Agreement (MSRLA) and will be available on Hugging Face. Phi-4 outperforms comparable and larger models on math-related reasoning due to advancements throughout the processes, including the use of high-quality synthetic datasets, curation of high-quality organic data, and post-training innovations. Phi-4 continues to push the frontier of size vs. quality.
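A hedged sketch of loading Phi-4 with Hugging Face transformers once the weights are published there, as the entry anticipates; the model id is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-4"  # assumed Hugging Face id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Solve: what is 12 * 17?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```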
21
WP Foundry
Michael Beck
WP Foundry is a desktop WordPress administration application which makes WordPress website management easy. It enables users to perform backup, update, activation, and deactivation actions on their WordPress plugins, themes, and core from their local computer.
Starting Price: $5/year
22
Simplismart
Simplismart
Fine-tune and deploy AI models with Simplismart's fastest inference engine. Integrate with AWS/Azure/GCP and many more cloud providers for simple, scalable, cost-effective deployment. Import open source models from popular online repositories or deploy your own custom model. Leverage your own cloud resources or let Simplismart host your model. With Simplismart, you can go far beyond AI model deployment. You can train, deploy, and observe any ML model and realize increased inference speeds at lower costs. Import any dataset and fine-tune open-source or custom models rapidly. Run multiple training experiments in parallel efficiently to speed up your workflow. Deploy any model on our endpoints or your own VPC/premise and see greater performance at lower costs. Streamlined and intuitive deployment is now a reality. Monitor GPU utilization and all your node clusters in one dashboard. Detect any resource constraints and model inefficiencies on the go.
23
Flowise
Flowise AI
Flowise is an open-source, low-code platform that enables developers to create customized Large Language Model (LLM) applications through a user-friendly drag-and-drop interface. It supports integration with various LLMs, including LangChain and LlamaIndex, and offers over 100 integrations to facilitate the development of AI agents and orchestration flows. Flowise provides APIs, SDKs, and embedded widgets for seamless incorporation into existing systems, and is platform-agnostic, allowing deployment in air-gapped environments with local LLMs and vector databases.
Starting Price: Free
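A hedged sketch of calling a deployed Flowise chatflow over its REST prediction API; the host, port, and chatflow id are placeholders for a self-hosted instance:

```python
import requests

# Placeholder host and chatflow id for a self-hosted Flowise instance.
url = "http://localhost:3000/api/v1/prediction/<chatflow-id>"
payload = {"question": "What can this flow do?"}
response = requests.post(url, json=payload, timeout=60)
print(response.json())
```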
24
Climb
Climb
Select a model, and we'll handle the deployment, hosting, versioning, and tuning, then give you an inference endpoint.
25
Intel Open Edge Platform
Intel
The Intel Open Edge Platform simplifies the development, deployment, and scaling of AI and edge computing solutions on standard hardware with cloud-like efficiency. It provides a curated set of components and workflows that accelerate AI model creation, optimization, and application development. From vision models to generative AI and large language models (LLM), the platform offers tools to streamline model training and inference. By integrating Intel’s OpenVINO toolkit, it ensures enhanced performance on Intel CPUs, GPUs, and VPUs, allowing organizations to bring AI applications to the edge with ease.
26
NativeMind
NativeMind
NativeMind is an open source, on-device AI assistant that runs entirely in your browser via Ollama integration, ensuring absolute privacy by never sending data to the cloud. Everything, from model inference to prompt processing, occurs locally, so there’s no syncing, logging, or data leakage. Users can load and switch between powerful open models such as DeepSeek, Qwen, Llama, Gemma, and Mistral instantly, without additional setup, and leverage native browser features for streamlined workflows. NativeMind offers clean, concise webpage summarization; persistent, context-aware chat across multiple tabs; local web search that retrieves and answers queries directly within the page; and immersive, format-preserving translation of entire pages. Built for speed and security, the extension is fully auditable and community-backed, delivering enterprise-grade performance for real-world use cases without vendor lock-in or hidden telemetry.
Starting Price: Free
27
Phi-4-reasoning-plus
Microsoft
Phi-4-reasoning-plus is a 14-billion parameter open-weight reasoning model that builds upon Phi-4-reasoning capabilities. It is further trained with reinforcement learning to utilize more inference-time compute, using 1.5x more tokens than Phi-4-reasoning, to deliver higher accuracy. Despite its significantly smaller size, Phi-4-reasoning-plus achieves better performance than OpenAI o1-mini and DeepSeek-R1 on most benchmarks, including mathematical reasoning and Ph.D. level science questions. It surpasses the full DeepSeek-R1 model (with 671 billion parameters) on the AIME 2025 test, the 2025 qualifier for the USA Math Olympiad. Phi-4-reasoning-plus is available on Azure AI Foundry and Hugging Face.
28
ModelArk
ByteDance
ModelArk is ByteDance’s one-stop large model service platform, providing access to cutting-edge AI models for video, image, and text generation. With powerful options like Seedance 1.0 for video, Seedream 3.0 for image creation, and DeepSeek-V3.1 for reasoning, it enables businesses and developers to build scalable, AI-driven applications. Each model is backed by enterprise-grade security, including end-to-end encryption, data isolation, and auditability, ensuring privacy and compliance. The platform’s token-based pricing keeps costs transparent, starting with 500,000 free inference tokens per LLM and 2 million tokens per vision model. Developers can quickly integrate APIs for inference, fine-tuning, evaluation, and plugins to extend model capabilities. Designed for scalability, ModelArk offers fast deployment, high GPU availability, and seamless enterprise integration.
29
SuperDuperDB
SuperDuperDB
Build and manage AI applications easily without needing to move your data to complex pipelines and specialized vector databases. Integrate AI and vector search directly with your database including real-time inference and model training. A single scalable deployment of all your AI models and APIs which is automatically kept up-to-date as new data is processed immediately. No need to introduce an additional database and duplicate your data to use vector search and build on top of it. SuperDuperDB enables vector search in your existing database. Integrate and combine models from Sklearn, PyTorch, and HuggingFace with AI APIs such as OpenAI to build even the most complex AI applications and workflows. Deploy all your AI models to automatically compute outputs (inference) in your datastore in a single environment with simple Python commands.
30
GradientJ
GradientJ
GradientJ provides everything you need to build large language model applications in minutes and manage them forever. Discover and maintain the best prompts by saving versions and comparing them across benchmark examples. Orchestrate and manage complex applications by chaining prompts and knowledge bases into complex APIs. Enhance the accuracy of your models by integrating them with your proprietary data.
31
Fireworks AI
Fireworks AI
Fireworks partners with the world's leading generative AI researchers to serve the best models, at the fastest speeds. Independently benchmarked to have the top speed of all inference providers. Use powerful models curated by Fireworks or our in-house trained multi-modal and function-calling models. Fireworks is the 2nd most used open-source model provider and also generates over 1M images/day. Our OpenAI-compatible API makes it easy to start building with Fireworks. Get dedicated deployments for your models to ensure uptime and speed. Fireworks is proudly compliant with HIPAA and SOC2 and offers secure VPC and VPN connectivity. Meet your needs with data privacy - own your data and your models. Serverless models are hosted by Fireworks, there's no need to configure hardware or deploy models. Fireworks.ai is a lightning-fast inference platform that helps you serve generative AI models.
Starting Price: $0.20 per 1M tokens
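A hedged sketch of the OpenAI-compatible API mentioned above, via the openai Python client; the base URL follows Fireworks' documented pattern and the model id is an assumption:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="FIREWORKS_API_KEY",  # placeholder
)
chat = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",  # assumed model id
    messages=[{"role": "user", "content": "Hello!"}],
)
print(chat.choices[0].message.content)
```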
32
IBM Cloud Foundry
IBM
Cloud Foundry ensures that the build and deploy aspects of coding remain carefully coordinated with any attached services, resulting in quick, consistent, and reliable iteration of applications. As an industry-standard platform as a service (PaaS), Cloud Foundry ensures the fastest, easiest, and most reliable deployment of cloud-native applications. IBM offers the Cloud Foundry PaaS in several hosting models, allowing you to customize your PaaS experience and balance a range of considerations, including price, deployment speed, and security. Cloud Foundry includes runtimes for Java, Node.js, PHP, Python, Ruby, ASP.NET, Tomcat, Swift, and Go. Community buildpacks are also available. Combined with DevOps services, the application runtimes enable a delivery pipeline that automates much of the iterative development process.
33
Azure OpenAI Service
Microsoft
Apply advanced coding and language models to a variety of use cases. Leverage large-scale, generative AI models with a deep understanding of language and code to enable new reasoning and comprehension capabilities for building cutting-edge applications. Apply these coding and language models to a variety of use cases, such as writing assistance, code generation, and reasoning over data. Detect and mitigate harmful use with built-in responsible AI and access enterprise-grade Azure security. Gain access to generative models that have been pretrained with trillions of words. Apply them to new scenarios including language, code, reasoning, inferencing, and comprehension. Customize generative models with labeled data for your specific scenario using a simple REST API. Fine-tune your model's hyperparameters to increase accuracy of outputs. Use the few-shot learning capability to provide the API with examples and achieve more relevant results.
Starting Price: $0.0004 per 1000 tokens
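A minimal sketch using the AzureOpenAI client from the openai Python package; the endpoint, key, deployment name, and API version are placeholders you would set for your own resource:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="AZURE_OPENAI_API_KEY",  # placeholder
    api_version="2024-02-01",  # assumed API version
)
response = client.chat.completions.create(
    model="<your-deployment-name>",  # the deployment name, not a raw model id
    messages=[{"role": "user", "content": "Draft a one-line product summary."}],
)
print(response.choices[0].message.content)
```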
34
DataChain
iterative.ai
DataChain connects unstructured data in cloud storage with AI models and APIs, enabling instant data insights by leveraging foundational models and API calls to quickly understand your unstructured files in storage. Its Pythonic stack accelerates development tenfold by switching to Python-based data wrangling without SQL data islands. DataChain ensures dataset versioning, guaranteeing traceability and full reproducibility for every dataset to streamline team collaboration and ensure data integrity. It allows you to analyze your data where it lives, keeping raw data in storage (S3, GCP, Azure, or local) while storing metadata in efficient data warehouses. DataChain offers tools and integrations that are cloud-agnostic for both storage and computing. With DataChain, you can query your unstructured multi-modal data, apply intelligent AI filters to curate data for training, and snapshot your unstructured data, the code for data selection, and any stored or computed metadata.
Starting Price: Free
35
Stochastic
Stochastic
An enterprise-ready AI system that trains locally on your data, deploys on your cloud, and scales to millions of users without an engineering team. Build, customize, and deploy your own chat-based AI. Finance chatbot: xFinance is a 13-billion-parameter model fine-tuned on an open-source model using LoRA; our goal was to show that it is possible to achieve impressive results in financial NLP tasks without breaking the bank. Personal AI assistant: your own AI to chat with your documents, whether single or multiple documents, easy or complex questions, and much more. An effortless deep learning platform for enterprises, with hardware-efficient algorithms to speed up inference at a lower cost, plus real-time logging and monitoring of resource utilization and cloud costs of deployed models. xTuring is open-source AI personalization software; it makes it easy to build and control LLMs by providing a simple interface for personalizing LLMs to your own data and application.
36
Evoke
Evoke
Focus on building; we’ll take care of hosting. Just plug and play with our REST API. No limits, no headaches. We have all the inferencing capacity you need. Stop paying for nothing; we’ll only charge based on use. Our support team is our tech team too, so you’ll be getting support directly rather than jumping through hoops. Our flexible infrastructure allows us to scale with you as you grow and handle any spikes in activity. Image and art generation from text-to-image or image-to-image, with clear documentation, through our Stable Diffusion API. Change the output's art style with additional models: MJ v4, Anything v3, Analog, Redshift, and more. Other Stable Diffusion versions, such as 2.0+, will also be included. Train your own Stable Diffusion model (fine-tuning) and deploy it on Evoke as an API. We plan to add other models such as Whisper, YOLO, GPT-J, GPT-NeoX, and many more in the future, for not only inference but also training and deployment.
Starting Price: $0.0017 per compute second
37
AWS Thinkbox XMesh
Amazon
Optimize slow or large animated 3D geometry asset files. Support popular applications, such as Autodesk 3ds Max, Autodesk Maya, and The Foundry’s Nuke. Share consistent channel data between frames and remove data duplication over time. Increase file loading speeds for animated scene geometry. AWS Thinkbox XMesh accelerates file uploads for your large animated geometry assets.
38
Together AI
Together AI
Together AI provides an AI-native cloud platform built to accelerate training, fine-tuning, and inference on high-performance GPU clusters. Engineered for massive scale, the platform supports workloads that process trillions of tokens without performance drops. Together AI delivers industry-leading cost efficiency by optimizing hardware, scheduling, and inference techniques, lowering total cost of ownership for demanding AI workloads. With deep research expertise, the company brings cutting-edge models, hardware, and runtime innovations, like ATLAS runtime-learning accelerators, directly into production environments. Its full-stack ecosystem includes a model library, inference APIs, fine-tuning capabilities, pre-training support, and instant GPU clusters. Designed for AI-native teams, Together AI helps organizations build and deploy advanced applications faster and more affordably.
Starting Price: $0.0001 per 1k tokens
39
Teachable Machine
Teachable Machine
A fast, easy way to create machine learning models for your sites, apps, and more – no expertise or coding required. Teachable Machine is flexible – use files or capture examples live. It’s respectful of the way you work. You can even choose to use it entirely on-device, without any webcam or microphone data leaving your computer. Teachable Machine is a web-based tool that makes creating machine learning models fast, easy, and accessible to everyone. Educators, artists, students, innovators, makers of all kinds – really, anyone who has an idea they want to explore. No prerequisite machine learning knowledge required. You train a computer to recognize your images, sounds, and poses without writing any machine learning code. Then, use your model in your own projects, sites, apps, and more.
40
Foundry
Foundry
Foundry is a new breed of public cloud, powered by an orchestration platform that makes accessing AI compute as easy as flipping a light switch. Explore the high-impact features of our GPU cloud services, designed for maximum performance and reliability whether you’re managing training runs, serving clients, or meeting research deadlines. Industry giants have invested for years in infra teams that build sophisticated cluster management and workload orchestration tools to abstract away the hardware. Foundry makes this accessible to everyone else, ensuring that users can reap compute leverage without a twenty-person team at scale. The current GPU ecosystem is first-come, first-served, and fixed-price; availability is a challenge at peak times, and so are the puzzling gaps in rates across vendors. Foundry is powered by a sophisticated mechanism design that delivers better price performance than anyone else on the market.
41
LMCache
LMCache
LMCache is an open source Knowledge Delivery Network (KDN) designed as a caching layer for large language model serving that accelerates inference by reusing KV (key-value) caches across repeated or overlapping computations. It enables fast prompt caching, allowing LLMs to “prefill” recurring text only once and then reuse those stored KV caches, even in non-prefix positions, across multiple serving instances. This approach reduces time to first token, saves GPU cycles, and increases throughput in scenarios such as multi-round question answering or retrieval augmented generation. LMCache supports KV cache offloading (moving cache from GPU to CPU or disk), cache sharing across instances, and disaggregated prefill, which separates the prefill and decoding phases for resource efficiency. It is compatible with inference engines like vLLM and TGI and supports compressed storage, blending techniques to merge caches, and multiple backend storage options.
Starting Price: Free
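As a toy illustration of the prompt-caching idea described above (not the real LMCache API, which stores transformer KV tensors and plugs into engines like vLLM), the sketch below caches a "prefill" result per token chunk and reuses it on repeat requests:

```python
import hashlib

kv_store: dict[str, str] = {}  # chunk hash -> stand-in for a stored KV cache

def chunk_key(tokens: list[int]) -> str:
    return hashlib.sha256(str(tokens).encode()).hexdigest()

def prefill(tokens: list[int]) -> str:
    key = chunk_key(tokens)
    if key in kv_store:
        return kv_store[key]        # cache hit: skip recomputation entirely
    kv = f"kv-for-{key[:8]}"        # stand-in for an expensive prefill pass
    kv_store[key] = kv
    return kv

shared_context = [101, 7592, 2088]  # e.g. a system prompt reused across requests
prefill(shared_context)             # first request pays the prefill cost
print(prefill(shared_context))      # later requests reuse the stored cache
```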
42
AI-FLOW
AI-Flow
AI-FLOW is an innovative open-source platform designed to simplify how creators and innovators harness the power of artificial intelligence. With its user-friendly drag-and-drop interface, AI-FLOW enables you to effortlessly connect and combine leading AI models, crafting custom AI tools tailored to your unique needs. Key Features:
1. Diverse AI Model Integration: Gain access to a suite of top-tier AI models, including GPT-4, DALL-E 3, Stable Diffusion, Mistral, LLaMA, and more, all in one convenient location.
2. Drag-and-Drop Interface: Build complex AI workflows with ease, no coding required, thanks to our intuitive design.
3. Custom AI Tool Creation: Design bespoke AI solutions quickly, from image generation to language processing.
4. Local Data Storage: Maintain full control over your data with options for local storage and the ability to export as JSON files.
Starting Price: $9/500 credits
43
Options for every business to train deep learning and machine learning models cost-effectively. AI accelerators for every use case, from low-cost inference to high-performance training. Simple to get started with a range of services for development and deployment. Tensor Processing Units (TPUs) are custom-built ASICs designed to train and execute deep neural networks. Train and run more powerful and accurate models cost-effectively with faster speed and scale. A range of NVIDIA GPUs to help with cost-effective inference or scale-up or scale-out training. Leverage RAPIDS and Spark with GPUs to execute deep learning. Run GPU workloads on Google Cloud where you have access to industry-leading storage, networking, and data analytics technologies. Access CPU platforms when you start a VM instance on Compute Engine. Compute Engine offers a range of both Intel and AMD processors for your VMs.
44
Ollama
Ollama
Ollama is an innovative platform that focuses on providing AI-powered tools and services, designed to make it easier for users to interact with and build AI-driven applications. Run AI models locally. By offering a range of solutions, including natural language processing models and customizable AI features, Ollama empowers developers, businesses, and organizations to integrate advanced machine learning technologies into their workflows. With an emphasis on usability and accessibility, Ollama strives to simplify the process of working with AI, making it an appealing option for those looking to harness the potential of artificial intelligence in their projects.
Starting Price: Free
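A minimal sketch of querying a locally running Ollama server over its REST API; the model name is an assumption and must already be pulled:

```python
import requests

# Assumes a local Ollama server and an already-pulled model
# (e.g. `ollama pull llama3.2`).
response = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    json={"model": "llama3.2", "prompt": "Why run models locally?", "stream": False},
    timeout=120,
)
print(response.json()["response"])
```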
45
Foundry USA Pool
Foundry
Finally, a US-based, truly institutional-grade mining pool brought to you by Foundry. Built from the ground up, we are geared toward providing best-in-class service with a focus on large miners. Foundry USA Pool is geared for institutional miners who seek an integrated suite of services ranging from treasury management and bitcoin custody to derivatives products, BTC-collateralized lending, and yield earnings. Bitcoin mining pools are a coordinated way in which different miners can “pool” their resources together and increase their total hashing power. If the pool manages to successfully mine a block, the coinbase reward is split among participants according to the number of shares they contribute to the pool. Foundry USA Pool, being an American mining pool with transparent and compliant methods, provides all stakeholders with full disclosure of their earnings.
46
Langbase
Langbase
The complete LLM platform with a superior developer experience and robust infrastructure. Build, deploy, and manage hyper-personalized, streamlined, and trusted generative AI apps. Langbase is an open source OpenAI alternative, a new inference engine & AI tool for any LLM. The most "developer-friendly" LLM platform to ship hyper-personalized AI apps in seconds.
Starting Price: Free
47
Anyscale
Anyscale
Anyscale is a unified AI platform built around Ray, the world’s leading AI compute engine, designed to help teams build, deploy, and scale AI and Python applications efficiently. The platform offers RayTurbo, an optimized version of Ray that delivers up to 4.5x faster data workloads, 6.1x cost savings on large language model inference, and up to 90% lower costs through elastic training and spot instances. Anyscale provides a seamless developer experience with integrated tools like VSCode and Jupyter, automated dependency management, and expert-built app templates. Deployment options are flexible, supporting public clouds, on-premises clusters, and Kubernetes environments. Anyscale Jobs and Services enable reliable production-grade batch processing and scalable web services with features like job queuing, retries, observability, and zero-downtime upgrades. Security and compliance are ensured with private data environments, auditing, access controls, and SOC 2 Type II attestation.
Starting Price: $0.00006 per minute
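A minimal sketch of the Ray core API that Anyscale builds on; the parallelized function is an illustrative assumption:

```python
import ray

ray.init()  # starts a local cluster; Anyscale connects this to managed compute

@ray.remote
def square(x: int) -> int:
    return x * x

# Fan tasks out across the cluster and gather the results.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```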
48
Wordware
Wordware
Wordware enables anyone to develop, iterate, and deploy useful AI agents. Wordware combines the best aspects of software with the power of natural language. Remove constraints of traditional no-code tools and empower every team member to iterate independently. Natural language programming is here to stay. Wordware frees prompts from your codebase by providing both technical and non-technical users with a powerful IDE for AI agent creation. Experience the simplicity and flexibility of our interface. Empower your team to easily collaborate, manage prompts, and streamline workflows with an intuitive design. Loops, branching, structured generation, version control, and type safety help you get the most out of LLMs, while custom code execution allows you to connect to virtually any API. Easily switch between various large language model providers with one click. Optimize your workflows with the best cost-to-latency-to-quality ratios for your application.
Starting Price: $69 per month
49
←INTELLI•GRAPHS→
←INTELLI•GRAPHS→
←INTELLI•GRAPHS→ is a semantic wiki designed to unify disparate data into interconnected knowledge graphs that humans, AI assistants, and autonomous agents can co-edit and act upon in real time. It functions as a personal information manager, family tree/genealogy system, project management hub, digital publishing platform, CRM, document management system, GIS, biomedical/research database, electronic health record layer, digital twin engine, and e-governance tracker, all built on a next-gen progressive web app that is offline-first, peer-to-peer, and zero-knowledge end-to-end encrypted with locally generated keys. Users get live, conflict-free collaboration, a schema library with validation, full import/export of encrypted graph files (including attachments), and AI/agent readiness via APIs and tooling like IntelliAgents, which provide identity, task orchestration, workflow planning with human-in-the-loop breakpoints, adaptive inference meshes, and continuous memory enhancement.
Starting Price: Free
50
Protopia AI
Protopia AI
Protopia AI’s Stained Glass Transform (SGT) offers a cutting-edge solution to secure sensitive data in AI workloads by preventing data exposure during processing and inference. It enables enterprises to maximize the value of their data by breaking down silos while retaining full ownership and security. SGT supports deployment across diverse environments, including on-premises, hybrid, and multi-tenant clouds, optimizing GPU use for performance. It runs up to 14,000 times faster than traditional cryptographic methods, ensuring AI inference adds only minimal latency. The platform is designed to meet the needs of industries with strict data privacy requirements, such as finance, defense, and healthcare. Protopia’s technology integrates with AWS Marketplace and partners like Lambda and vLLM to provide comprehensive, high-performance, secure AI inference solutions.