Showing 97 open source projects for "using"

  • 1
    t5-small

    T5-Small: Lightweight text-to-text transformer for NLP tasks

    ...It was pretrained on the C4 dataset using both unsupervised denoising and supervised learning on tasks like sentiment analysis, NLI, and QA. Despite its size, it performs competitively across 24 NLP benchmarks, making it a strong candidate for prototyping and fine-tuning. T5-Small is compatible with major deep learning frameworks including PyTorch, TensorFlow, JAX, and ONNX.
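
    A minimal usage sketch with Hugging Face transformers (assuming the hub ID "t5-small"; equivalent TensorFlow and JAX classes exist):

        # Minimal T5-Small sketch; T5 casts every task as text-to-text,
        # so a task prefix selects the behavior.
        from transformers import T5ForConditionalGeneration, T5Tokenizer

        tokenizer = T5Tokenizer.from_pretrained("t5-small")
        model = T5ForConditionalGeneration.from_pretrained("t5-small")

        inputs = tokenizer("translate English to German: The house is wonderful.",
                           return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=40)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))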
  • 2
    Llama-3.2-1B

    Llama 3.2–1B: Multilingual, instruction-tuned model for mobile AI

    ...With 1.23 billion parameters, it offers strong performance in constrained environments like mobile devices, without sacrificing versatility or multilingual support. It is part of the Llama 3.2 family, trained on up to 9 trillion tokens and aligned using supervised fine-tuning, preference optimization, and safety tuning. The model supports eight officially listed languages (including Spanish, German, Hindi, and Thai) but can be adapted to more. Llama 3.2-1B outperforms other open models in several benchmarks relative to its size and offers quantized versions for efficiency. It uses a refined transformer architecture with Grouped-Query Attention (GQA) and supports long context windows of up to 128k tokens.
  • 3
    mms-300m-1130-forced-aligner

    CTC-based forced aligner for audio-text in 158 languages

    ...It supports forced alignment between audio and corresponding text across 158 languages, offering broad multilingual coverage. The model enables accurate word- or phoneme-level timestamping using Connectionist Temporal Classification (CTC) emissions. It is significantly more memory-efficient than the TorchAudio forced alignment API. Users can integrate it easily through the Python package ctc-forced-aligner, and it supports GPU acceleration via PyTorch. The alignment pipeline includes audio processing, emission generation, tokenization, and span detection, making it suitable for speech analysis, transcription syncing, and dataset creation. ...
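
    The package's documented flow looks roughly like the sketch below; the function names follow the ctc-forced-aligner README as recalled and should be verified against the installed version:

        # Sketch of the alignment pipeline: audio -> CTC emissions -> word spans.
        # "speech.wav" and the transcript are placeholders.
        import torch
        from ctc_forced_aligner import (
            load_audio, load_alignment_model, generate_emissions,
            preprocess_text, get_alignments, get_spans, postprocess_results,
        )

        device = "cuda" if torch.cuda.is_available() else "cpu"
        model, tokenizer = load_alignment_model(
            device, dtype=torch.float16 if device == "cuda" else torch.float32
        )

        waveform = load_audio("speech.wav", model.dtype, model.device)
        emissions, stride = generate_emissions(model, waveform, batch_size=8)

        tokens_starred, text_starred = preprocess_text(
            "transcript of the recording", romanize=True, language="eng"
        )
        segments, scores, blank_token = get_alignments(emissions, tokens_starred, tokenizer)
        spans = get_spans(tokens_starred, segments, blank_token)
        word_timestamps = postprocess_results(text_starred, spans, stride, scores)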
  • 4
    Qwen2.5-VL-7B-Instruct

    Multimodal 7B model for image, video, and text understanding tasks

    ...The model is also capable of video understanding with dynamic frame sampling and temporal reasoning, enabling it to analyze and respond to long-form videos. Built with an enhanced ViT architecture using window attention, SwiGLU, and RMSNorm, it aligns closely with Qwen2.5 LLM standards. The model demonstrates high performance across benchmarks like DocVQA, ChartQA, and MMStar, and even functions as a tool-using visual agent.
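
    An image-question sketch following the pattern on the model card (the Qwen2_5_VLForConditionalGeneration class requires a recent transformers release, and qwen_vl_utils is the card's companion helper package):

        from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
        from qwen_vl_utils import process_vision_info

        model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
        model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
            model_id, torch_dtype="auto", device_map="auto"
        )
        processor = AutoProcessor.from_pretrained(model_id)

        messages = [{"role": "user", "content": [
            {"type": "image", "image": "chart.png"},  # placeholder local image
            {"type": "text", "text": "What trend does this chart show?"},
        ]}]
        text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
        images, videos = process_vision_info(messages)
        inputs = processor(text=[text], images=images, videos=videos,
                           padding=True, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=128)
        print(processor.batch_decode(out[:, inputs.input_ids.shape[1]:],
                                     skip_special_tokens=True)[0])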
  • 5
    fashion-clip

    CLIP model fine-tuned for zero-shot fashion product classification

    ...Developed by Patrick John Chia and collaborators, it builds on the CLIP ViT-B/32 architecture and was trained on over 800K image-text pairs from the Farfetch dataset. The model learns to align product images and descriptive text using contrastive learning, enabling it to perform well across various fashion-related tasks without additional supervision. FashionCLIP 2.0, the latest version, uses the laion/CLIP-ViT-B-32-laion2B-s34B-b79K checkpoint for improved accuracy, achieving better F1 scores across multiple benchmarks compared to earlier versions. It supports multilingual fashion queries and works best with clean, product-style images against white backgrounds. ...
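
    Zero-shot classification works with the standard transformers CLIP classes; a short sketch (hub ID "patrickjohncyh/fashion-clip", with a placeholder product image):

        from PIL import Image
        from transformers import CLIPModel, CLIPProcessor

        model = CLIPModel.from_pretrained("patrickjohncyh/fashion-clip")
        processor = CLIPProcessor.from_pretrained("patrickjohncyh/fashion-clip")

        image = Image.open("product.jpg")  # placeholder: clean, white-background shot
        labels = ["a red dress", "a leather handbag", "white sneakers"]
        inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
        probs = model(**inputs).logits_per_image.softmax(dim=-1)
        print(dict(zip(labels, probs[0].tolist())))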
  • 6
    Mistral Large 3 675B Base 2512

    Frontier-scale 675B multimodal base model for custom AI training

    ...Its architecture includes a powerful language MoE and a 2.5B-parameter vision encoder, enabling multimodal understanding out of the box. Mistral Large 3 Base supports deployment on-premises using FP8 or NVFP4 formats, enabling high-performance workflows on B200, H200, H100, or A100 hardware.
  • 7
    VaultGemma

    VaultGemma: 1B DP-trained Gemma variant for private NLP tasks

    ...Training ran on TPU v6e using JAX and Pathways with privacy-preserving algorithms (DP-SGD, truncated Poisson subsampling) and DP scaling laws to balance compute and privacy budgets. Benchmarks on the 1B pre-trained checkpoint show expected utility trade-offs (e.g., HellaSwag 10-shot 39.09, BoolQ 0-shot 62.04, PIQA 0-shot 68.00), reflecting its privacy-first design.
  • 8
    unidepth-v2-vitl14

    Metric monocular depth estimation (vision model)

    Estimates absolute (metric) depth from single RGB images, along with camera intrinsics and uncertainty. Designed to generalize across domains (zero-shot) using a self-prompting camera module and pseudo-spherical prediction space.
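
    A sketch following the UniDepth repository's README (class and method names are assumptions recalled from that README; check the repo before relying on them):

        import numpy as np
        import torch
        from PIL import Image
        from unidepth.models import UniDepthV2

        model = UniDepthV2.from_pretrained("lpiccinelli/unidepth-v2-vitl14")

        rgb = torch.from_numpy(np.array(Image.open("scene.jpg"))).permute(2, 0, 1)  # C, H, W
        preds = model.infer(rgb)            # intrinsics are self-prompted if not supplied
        depth = preds["depth"]              # metric depth map
        intrinsics = preds["intrinsics"]    # estimated camera intrinsics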
  • 9
    BLEURT-20-D12

    Custom BLEURT model for evaluating text similarity using PyTorch

    BLEURT-20-D12 is a PyTorch implementation of BLEURT, a model designed to assess the semantic similarity between two text sequences. It serves as an automatic evaluation metric for natural language generation tasks like summarization and translation. The model predicts a score indicating how similar a candidate sentence is to a reference sentence, with higher scores indicating greater semantic overlap. Unlike standard BLEURT models from TensorFlow, this version is built from a custom PyTorch...
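
    A scoring sketch based on the bleurt-pytorch package's README (package and class names are recalled from that README and should be treated as assumptions):

        import torch
        from bleurt_pytorch import BleurtForSequenceClassification, BleurtTokenizer

        model = BleurtForSequenceClassification.from_pretrained("lucadiliello/BLEURT-20-D12")
        tokenizer = BleurtTokenizer.from_pretrained("lucadiliello/BLEURT-20-D12")
        model.eval()

        references = ["a bird chirps by the window"]
        candidates = ["a bird chirps at the window"]
        with torch.no_grad():
            inputs = tokenizer(references, candidates, padding="longest", return_tensors="pt")
            scores = model(**inputs).logits.flatten().tolist()  # higher = more similar
        print(scores)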
  • 10
    FLUX.1-Krea-dev

    Text-to-image model optimized for artistic quality and safe generation

    FLUX.1-Krea-dev is a 12 billion parameter rectified flow transformer for text-to-image generation, developed by Black Forest Labs in collaboration with Krea. It delivers aesthetic, high-quality outputs focused on photography and visual coherence, making it a strong competitor to closed-source models. Trained using guidance distillation, it offers efficient inference while preserving creative fidelity. The model is distributed under a non-commercial license, with conditions to prevent misuse and support ethical AI development. FLUX.1-Krea-dev is available via Diffusers and ComfyUI, and integrates with the FluxPipeline for streamlined usage. ...
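
    A FluxPipeline sketch via Diffusers (the non-commercial license must be accepted on the hub first; guidance and step values are reasonable defaults, not tuned settings):

        import torch
        from diffusers import FluxPipeline

        pipe = FluxPipeline.from_pretrained(
            "black-forest-labs/FLUX.1-Krea-dev", torch_dtype=torch.bfloat16
        )
        pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use

        image = pipe(
            "a film photograph of a lighthouse at dusk",
            guidance_scale=4.5,
            num_inference_steps=28,
        ).images[0]
        image.save("lighthouse.png")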
  • 11
    Qwen2-7B-Instruct

    Instruction-tuned 7B language model for chat and complex tasks

    Qwen2-7B-Instruct is a 7.62-billion-parameter instruction-tuned language model from the Qwen2 series developed by Alibaba's Qwen team. Built on a transformer architecture with SwiGLU activation and group query attention, it is optimized for chat, reasoning, coding, multilingual tasks, and extended context understanding up to 131,072 tokens. The model was pretrained on a large-scale dataset and aligned via supervised fine-tuning and direct preference optimization. It shows strong performance...
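
    A chat-style generation sketch with transformers, using the model's chat template:

        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "Qwen/Qwen2-7B-Instruct"
        model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
        tokenizer = AutoTokenizer.from_pretrained(model_id)

        messages = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain group query attention in one sentence."},
        ]
        text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
        inputs = tokenizer([text], return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=128)
        print(tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))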
  • 12
    Devstral Small 2

    Lightweight 24B agentic coding model with vision and long context

    Devstral Small 2 is a compact agentic language model designed for software engineering workflows, excelling at tool usage, codebase exploration, and multi-file editing. With 24B parameters and FP8 instruct tuning, it delivers strong instruction following while remaining lightweight enough for local and on-device deployment. The model achieves competitive performance on SWE-bench, validating its effectiveness for real-world coding and automation tasks. It introduces vision capabilities,...
  • 13
    DeepSeek-V3.2-Speciale

    High-compute ultra-reasoning model surpassing GPT-5

    DeepSeek-V3.2-Speciale is the high-compute, ultra-reasoning variant of DeepSeek-V3.2, designed specifically to push the boundaries of mathematical, logical, and algorithmic intelligence. It builds on the DeepSeek Sparse Attention (DSA) framework, delivering dramatically improved long-context efficiency while preserving full model quality. Unlike the standard version, Speciale is tuned exclusively for deep reasoning and therefore does not support tool-calling, focusing its full capacity on...
  • 14
    Qwen3-Next

    Qwen3-Next: 80B instruct LLM with ultra-long context up to 1M tokens

    ...With 80B total parameters and 3B activated at a time, it leverages hybrid attention (Gated DeltaNet + Gated Attention) and a high-sparsity Mixture-of-Experts architecture to achieve exceptional efficiency. The model natively supports a context length of 262K tokens and can be extended up to 1 million tokens using RoPE scaling (YaRN), making it highly capable for processing large documents and extended conversations. Multi-Token Prediction (MTP) boosts both training and inference, while stability optimizations such as weight-decayed and zero-centered layernorm ensure robustness. Benchmarks show it performs comparably to larger models like Qwen3-235B on reasoning, coding, multilingual, and alignment tasks while requiring only a fraction of the training cost.
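
    Extending the native 262K window toward 1 million tokens follows the YaRN pattern Qwen model cards describe; a hedged sketch (the hub ID and exact rope_scaling keys may differ by release):

        from transformers import AutoModelForCausalLM

        # Overrides rope_scaling at load time; 262144 * 4 ~= 1M tokens.
        model = AutoModelForCausalLM.from_pretrained(
            "Qwen/Qwen3-Next-80B-A3B-Instruct",  # assumed hub ID
            torch_dtype="auto",
            device_map="auto",
            rope_scaling={
                "rope_type": "yarn",
                "factor": 4.0,
                "original_max_position_embeddings": 262144,
            },
        )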
  • 15
    Mellum-4b-base

    JetBrains’ 4B parameter code model for completions

    ...With a context window of 8,192 tokens, it excels at code completion, fill-in-the-middle tasks, and intelligent code suggestions for professional developer tools and IDEs. The model is efficient for both cloud inference with vLLM and local deployment using llama.cpp or Ollama, thanks to its bf16 precision and AMP training. While the base model is not fine-tuned for downstream tasks, it is designed to be easily adapted through supervised fine-tuning (SFT) or reinforcement learning (RL). Benchmarks on RepoBench, SAFIM, and HumanEval demonstrate its competitive performance, with specialized fine-tuned versions for Python already showing strong improvements.
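
    A cloud-inference sketch with vLLM (hub ID "JetBrains/Mellum-4b-base" assumed; fill-in-the-middle prompting depends on the model's special tokens, so plain completion is shown):

        from vllm import LLM, SamplingParams

        llm = LLM(model="JetBrains/Mellum-4b-base")
        params = SamplingParams(temperature=0.2, max_tokens=64)

        out = llm.generate(["def fibonacci(n):\n    "], params)
        print(out[0].outputs[0].text)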
  • 16
    Llama-3.2-1B-Instruct

    Instruction-tuned 1.2B LLM for multilingual text generation by Meta

    ...The model supports eight primary languages (including English, Spanish, Hindi, and Thai) and was trained on a curated mix of publicly available online data, with a December 2023 knowledge cutoff. Llama-3.2-1B is lightweight enough for deployment on constrained devices like smartphones, using quantization methods like SpinQuant and QLoRA to reduce model size and latency. Despite its small size, it performs competitively across benchmarks such as MMLU, ARC, and TLDR summarization. The model is distributed under the Llama 3.2 Community License, requiring attribution and adherence to Meta’s Acceptable Use Policy.
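
    A high-level pipeline sketch (the checkpoint is gated, so the license must be accepted on the hub and the session authenticated first):

        from transformers import pipeline

        chat = pipeline(
            "text-generation",
            model="meta-llama/Llama-3.2-1B-Instruct",
            torch_dtype="auto",
            device_map="auto",
        )
        messages = [{"role": "user", "content": "Summarize what a knowledge cutoff means."}]
        result = chat(messages, max_new_tokens=96)
        print(result[0]["generated_text"][-1]["content"])  # the assistant turn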
  • 17
    CLIP-ViT-bigG-14-laion2B-39B-b160k

    CLIP ViT-bigG/14: Zero-shot image-text model trained on LAION-2B

    CLIP-ViT-bigG-14-laion2B-39B-b160k is a powerful vision-language model trained on the English subset of the LAION-5B dataset using the OpenCLIP framework. Developed by LAION and trained by Mitchell Wortsman on Stability AI’s compute infrastructure, it pairs a ViT-bigG/14 vision transformer with a text encoder to perform contrastive learning on image-text pairs. This model excels at zero-shot image classification, image-to-text and text-to-image retrieval, and can be adapted for tasks such as image captioning or generation guidance. ...
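
    A zero-shot classification sketch with the OpenCLIP library, whose model and pretrained tags match this checkpoint:

        import torch
        import open_clip
        from PIL import Image

        model, _, preprocess = open_clip.create_model_and_transforms(
            "ViT-bigG-14", pretrained="laion2b_s39b_b160k"
        )
        tokenizer = open_clip.get_tokenizer("ViT-bigG-14")

        image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # placeholder image
        text = tokenizer(["a diagram", "a dog", "a cat"])
        with torch.no_grad():
            img = model.encode_image(image)
            txt = model.encode_text(text)
            img /= img.norm(dim=-1, keepdim=True)
            txt /= txt.norm(dim=-1, keepdim=True)
            probs = (100.0 * img @ txt.T).softmax(dim=-1)
        print(probs)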
  • 18
    Hunyuan-A13B-Instruct

    Efficient 13B MoE language model with long context and reasoning modes

    Hunyuan-A13B-Instruct is a powerful instruction-tuned large language model developed by Tencent using a fine-grained Mixture-of-Experts (MoE) architecture. While the total model includes 80 billion parameters, only 13 billion are active per forward pass, making it highly efficient while maintaining strong performance across benchmarks. It supports up to 256K context tokens, advanced reasoning (CoT) abilities, and agent-based workflows with tool parsing.
  • 19
    wav2vec2-large-xlsr-53-russian

    Russian ASR model fine-tuned on Common Voice and CSS10 datasets

    wav2vec2-large-xlsr-53-russian is a fine-tuned automatic speech recognition (ASR) model based on Facebook’s wav2vec2-large-xlsr-53 and optimized for Russian. It was trained using Mozilla’s Common Voice 6.1 and CSS10 datasets to recognize Russian speech with high accuracy. The model operates best with audio sampled at 16kHz and can transcribe Russian speech directly without a language model. It achieves a Word Error Rate (WER) of 13.3% and Character Error Rate (CER) of 2.88% on the Common Voice test set, with even better results when used with a language model. ...
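
    A transcription sketch with transformers (hub ID assumed to be "jonatasgrosman/wav2vec2-large-xlsr-53-russian"; audio is resampled to the required 16 kHz):

        import torch
        import librosa
        from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

        model_id = "jonatasgrosman/wav2vec2-large-xlsr-53-russian"  # assumed hub ID
        processor = Wav2Vec2Processor.from_pretrained(model_id)
        model = Wav2Vec2ForCTC.from_pretrained(model_id)

        speech, _ = librosa.load("russian_sample.wav", sr=16_000)  # placeholder file
        inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
        with torch.no_grad():
            logits = model(inputs.input_values).logits
        pred_ids = torch.argmax(logits, dim=-1)
        print(processor.batch_decode(pred_ids))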
  • 20
    Mistral Large 3 675B Instruct 2512 Eagle

    Speculative-decoding accelerator for the 675B Mistral Large 3

    Mistral Large 3 675B Instruct 2512 Eagle is the dedicated speculative-decoding draft model for the full Mistral Large 3 Instruct system, designed to significantly speed up generation while preserving high output quality. It works alongside the primary 675B instruct model, enabling faster response times by predicting several tokens ahead using Mistral’s Eagle speculative method. Built on the same frontier-scale multimodal Mixture-of-Experts architecture, it complements a system featuring 41B active parameters and a 2.5B-parameter vision encoder. The Eagle variant is specialized rather than standalone, serving as a performance accelerator for production-grade assistants, agentic workflows, long-context applications, and retrieval-augmented reasoning pipelines. ...
  • 21
    Mistral Large 3 675B Instruct 2512 NVFP4

    Quantized 675B multimodal instruct model optimized for NVFP4

    ...This NVFP4 checkpoint is a post-training quantized version of the original instruct model, created through a collaboration between Mistral AI, vLLM, and Red Hat using llm-compressor. It retains the same instruction-tuned behavior as the FP8 model, making it ideal for production assistants, agentic workflows, scientific tasks, and long-context enterprise systems. The model integrates a 673B-parameter MoE language backbone with a 2.5B-parameter vision encoder, enabling rich multimodal analysis across text and images. ...
  • 22
    NoobAI XL 1.1

    Open, non-commercial SDXL model for quality image generation

    NoobAI XL 1.1 is a diffusion-based text-to-image generative model developed by Laxhar Dream Lab, fine-tuned from NoobAI XL 1.0 and built upon Illustrious-xl. It leverages the latest Danbooru and e621 datasets, using native tag captions to enhance visual fidelity, style accuracy, and prompt responsiveness. The model introduces refined quality tagging, ranking images by percentile to ensure results reflect modern aesthetic preferences. It supports a range of recommended resolutions around 1024×1024, maintaining high consistency across varied aspect ratios. ...
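
    As an SDXL-family checkpoint it loads through the standard Diffusers SDXL pipeline; a sketch with a hypothetical hub ID (substitute the checkpoint you actually downloaded):

        import torch
        from diffusers import StableDiffusionXLPipeline

        pipe = StableDiffusionXLPipeline.from_pretrained(
            "Laxhar/noobai-XL-1.1", torch_dtype=torch.float16  # hypothetical hub ID
        ).to("cuda")

        image = pipe(
            "1girl, solo, masterpiece, best quality",  # tag-style prompt per the training captions
            width=1024, height=1024,
            num_inference_steps=28, guidance_scale=5.0,
        ).images[0]
        image.save("sample.png")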