210 Integrations with Hugging Face
View a list of Hugging Face integrations and software that integrates with Hugging Face below. Compare the best Hugging Face integrations as well as features, ratings, user reviews, and pricing of software that integrates with Hugging Face. Here are the current Hugging Face integrations in 2025:
1
Luminal
Luminal
Luminal is a machine-learning framework built for speed, simplicity, and composability, focusing on static graphs and compiler-based optimization to deliver high performance even for complex neural networks. It compiles models into minimal “primops” (only 12 primitive operations) and then applies compiler passes to replace those with device-specific optimized kernels, enabling efficient execution on GPU or other backends. It supports modules (building blocks of networks with a standard forward API) and the GraphTensor interface (typed tensors and graphs at compile time) for model definition and execution. Luminal’s core remains intentionally small and hackable, with extensibility via external compilers for datatypes, devices, training, quantization, and more. Quick-start guidance shows how to clone the repo, build a “Hello World” example, or run a larger model like LLaMA 3 using GPU features.
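The primop-then-optimize design can be pictured with a small sketch. This is an illustrative Python toy, not Luminal's actual Rust API: a static graph is built from primitive ops, and a "compiler pass" rewrites a recognized pattern (an Add feeding a Mul) into a single fused kernel, the same shape of transformation Luminal's passes perform on its 12 primops.

```python
# Illustrative sketch (not Luminal's Rust API): a static op graph plus a
# compiler pass that fuses an Add consumed by a Mul into one fused kernel.

class Node:
    def __init__(self, op, inputs=()):
        self.op, self.inputs = op, list(inputs)

def fuse_add_mul(nodes):
    """Rewrite Mul(Add(a, b), c) into FusedAddMul(a, b, c)."""
    fused_away, out = set(), []
    for n in nodes:
        if n.op == "Mul" and n.inputs and n.inputs[0].op == "Add":
            add = n.inputs[0]
            fused_away.add(id(add))  # the Add is absorbed into the fused op
            out.append(Node("FusedAddMul", add.inputs + n.inputs[1:]))
        else:
            out.append(n)
    return [n for n in out if id(n) not in fused_away]

# Build (a + b) * c out of primitives, then run the pass.
a, b, c = Node("Input"), Node("Input"), Node("Input")
add = Node("Add", [a, b])
mul = Node("Mul", [add, c])
optimized = fuse_add_mul([a, b, c, add, mul])
print([n.op for n in optimized])  # ['Input', 'Input', 'Input', 'FusedAddMul']
```

In the real framework the fused replacement is a device-specific kernel (CUDA, Metal, etc.), which is why the core can stay small while backends supply the speed.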
2
HunyuanOCR
Tencent
Tencent Hunyuan is a large-scale, multimodal AI model family developed by Tencent that spans text, image, video, and 3D modalities, designed for general-purpose AI tasks like content generation, visual reasoning, and business automation. Its model lineup includes variants optimized for natural language understanding, multimodal vision-language comprehension (e.g., image and video understanding), text-to-image creation, video generation, and 3D content generation. Hunyuan models leverage a mixture-of-experts architecture and other innovations (like hybrid “mamba-transformer” designs) to deliver strong performance on reasoning, long-context understanding, cross-modal tasks, and efficient inference. For example, the vision-language model Hunyuan-Vision-1.5 supports “thinking-on-image”, enabling deep multimodal understanding and reasoning on images, video frames, diagrams, or spatial data.
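The mixture-of-experts idea mentioned above can be sketched in a few lines. This is a generic illustration of top-k expert routing, not Tencent's implementation: a router scores every expert for a given token, only the k best experts actually run, and their outputs are combined weighted by the normalized router scores, which is how MoE models keep inference cheap relative to their total parameter count.

```python
# Generic top-k mixture-of-experts routing sketch (not Hunyuan's code):
# only the k highest-scoring experts run; outputs are score-weighted.

def top_k_route(router_scores, expert_fns, x, k=2):
    # Indices of the k highest-scoring experts for this input.
    ranked = sorted(range(len(router_scores)),
                    key=lambda i: router_scores[i], reverse=True)[:k]
    total = sum(router_scores[i] for i in ranked)
    # Weighted combination of just the selected experts' outputs.
    return sum(router_scores[i] / total * expert_fns[i](x) for i in ranked)

# Toy experts that scale their input; the router prefers experts 1 and 3.
experts = [lambda x: 1.0 * x, lambda x: 2.0 * x,
           lambda x: 3.0 * x, lambda x: 4.0 * x]
scores = [0.1, 0.6, 0.1, 0.2]
y = top_k_route(scores, experts, x=10.0, k=2)
print(y)  # (0.6/0.8)*20 + (0.2/0.8)*40 = 25.0
```

Only two of the four experts execute per input, so compute grows with k rather than with the total number of experts.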
3
AWS EC2 Trn3 Instances
Amazon
Amazon EC2 Trn3 UltraServers are AWS’s newest accelerated computing instances, powered by the in-house Trainium3 AI chips and engineered specifically for high-performance deep-learning training and inference workloads. These UltraServers are offered in two configurations: a “Gen1” with 64 Trainium3 chips and a “Gen2” with up to 144 Trainium3 chips per UltraServer. The Gen2 configuration delivers up to 362 petaFLOPS of dense MXFP8 compute, 20 TB of HBM memory, and a staggering 706 TB/s of aggregate memory bandwidth, making it one of the highest-throughput AI compute platforms available. Interconnects between chips are handled by a new “NeuronSwitch-v1” fabric to support all-to-all communication patterns, which are especially important for large models, mixture-of-experts architectures, and large-scale distributed training.
4
trail
trail
Trail ML is an AI governance copilot platform that helps organizations build trustworthy, compliant, and transparent AI systems by automating manual governance and documentation tasks. It centralizes AI registry, policy creation, risk management, automated documentation, development tracking, audit trails, and compliance workflows in one system, enabling teams to classify and manage all AI use cases, trace decisions from data and model to outcomes, and reduce the overhead of manual documentation and governance processes. It integrates governance frameworks and templates, supports creation of custom AI policies, and guides teams through identifying and mitigating risks, preparing for audits against standards like ISO 42001 and regulations such as the EU AI Act. Trail uses curated knowledge, risk libraries, and AI-powered automation to orchestrate governance tasks, translate regulatory requirements into actionable to-dos, and streamline collaboration between stakeholders.
5
Texel.ai
Texel.ai
Accelerate your GPU workflows. Make AI models, video processing, and more up to 10x faster while cutting costs by up to 90%.
6
Cleanlab
Cleanlab
Cleanlab Studio handles the entire data quality and data-centric AI pipeline in a single framework for analytics and machine learning tasks. Its automated pipeline does all the ML for you: data preprocessing, foundation model fine-tuning, hyperparameter tuning, and model selection. ML models are used to diagnose data issues, and can then be re-trained on your corrected dataset with one click. Explore the entire heatmap of suggested corrections for all classes in your dataset. Cleanlab Studio provides all of this information and more for free as soon as you upload your dataset. Cleanlab Studio comes pre-loaded with several demo datasets and projects, so you can check those out in your account after signing in.
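The core data-centric idea, using a model's own predictions to diagnose label issues, can be illustrated with a small sketch. This is a simplified confident-learning-style heuristic, not Cleanlab Studio's product API: an example is flagged as a likely label error when the model's out-of-sample predicted probability for the example's given label falls below that class's average self-confidence.

```python
# Simplified sketch of model-based label-issue detection (not Cleanlab
# Studio's API): flag examples whose predicted probability for their given
# label is below the class's average self-confidence.

def find_label_issues(pred_probs, labels):
    classes = set(labels)
    # Per-class threshold: mean p(given label) over that class's examples.
    thresholds = {
        c: sum(p[c] for p, y in zip(pred_probs, labels) if y == c)
           / sum(1 for y in labels if y == c)
        for c in classes
    }
    return [i for i, (p, y) in enumerate(zip(pred_probs, labels))
            if p[y] < thresholds[y]]

# Example 3 is labeled class 0, but the model is confident it is class 1.
pred_probs = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]]
labels = [0, 0, 1, 0]
print(find_label_issues(pred_probs, labels))  # [3]
```

In practice the predicted probabilities must come from held-out (cross-validated) predictions, otherwise a model that memorized the bad labels would never flag them.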
7
Unremot
Unremot
Unremot is a go-to platform for anyone aspiring to build an AI product: with 120+ pre-built APIs, you can build and launch AI products 2X faster at 1/3rd the cost. Even some of the most complicated AI product APIs take only minutes to deploy and launch, with minimal or no code. Choose the AI API you want to integrate into your product from the 120+ APIs on Unremot, provide your API private key to authenticate Unremot's access to the API, and use Unremot's unique URL to connect the product API; the whole process takes minutes instead of days or weeks.
8
Tune AI
NimbleBox
Leverage the power of custom models to build your competitive advantage. With our enterprise Gen AI stack, go beyond your imagination and offload manual tasks to powerful assistants instantly; the sky is the limit. For enterprises where data security is paramount, fine-tune and deploy generative AI models securely on your own cloud.
9
ChainForge
ChainForge
ChainForge is an open-source visual programming environment designed for prompt engineering and large language model evaluation. It enables users to assess the robustness of prompts and text-generation models beyond anecdotal evidence. Simultaneously test prompt ideas and variations across multiple LLMs to identify the most effective combinations. Evaluate response quality across different prompts, models, and settings to select the optimal configuration for specific use cases. Set up evaluation metrics and visualize results across prompts, parameters, models, and settings, facilitating data-driven decision-making. Manage multiple conversations simultaneously, template follow-up messages, and inspect outputs at each turn to refine interactions. ChainForge supports various model providers, including OpenAI, HuggingFace, Anthropic, Google PaLM2, Azure OpenAI endpoints, and locally hosted models like Alpaca and Llama. Users can adjust model settings and utilize visualization nodes.
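The sweep that ChainForge runs visually, every prompt variation against every model, scored by a metric, can be sketched in plain Python. This is an illustrative harness with a stubbed query function and a hypothetical `{topic}` template variable, not ChainForge's internal API:

```python
# Illustrative prompt x model sweep (not ChainForge's API): fill each
# template with each variable value, query each model, score each response.
from itertools import product

def sweep(templates, topics, models, query, metric):
    results = []
    for tmpl, topic, model in product(templates, topics, models):
        prompt = tmpl.format(topic=topic)
        response = query(model, prompt)  # one LLM call per combination
        results.append({"model": model, "prompt": prompt,
                        "score": metric(response)})
    return results

# Stubbed LLM call and metric for demonstration; a real harness would call
# provider APIs (OpenAI, Anthropic, etc.) here instead.
fake_query = lambda model, prompt: f"{model}: {prompt}"
length_metric = lambda text: len(text)

results = sweep(["Explain {topic}.", "Summarize {topic} briefly."],
                ["tokenizers"], ["model-a", "model-b"],
                fake_query, length_metric)
print(len(results))  # 2 templates x 1 topic x 2 models = 4 runs
```

Plotting the scores grouped by model and template is exactly what ChainForge's visualization nodes do for you without writing this loop by hand.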
10
Chainlit
Chainlit
Chainlit is an open-source Python package designed to expedite the development of production-ready conversational AI applications. With Chainlit, developers can build and deploy chat-based interfaces in minutes, not weeks. The platform offers seamless integration with popular AI tools and frameworks, including OpenAI, LangChain, and LlamaIndex, allowing for versatile application development. Key features of Chainlit include multimodal capabilities, enabling the processing of images, PDFs, and other media types to enhance productivity. It also provides robust authentication options, supporting integration with providers like Okta, Azure AD, and Google. The Prompt Playground feature allows developers to iterate on prompts in context, adjusting templates, variables, and LLM settings for optimal results. For observability, Chainlit offers real-time visualization of prompts, completions, and usage metrics, ensuring efficient and trustworthy LLM operations.