Showing 1978 open source projects for "python 2.4 linux"

  • 1
    Serena

    Agent toolkit providing semantic retrieval and editing capabilities

    Serena is a coding-focused agent toolkit that turns an LLM into a practical software-engineering agent with semantic retrieval and editing over real repositories. It operates as an MCP server (and other integrations), exposing IDE-like tools so agents can locate symbols, reason about code structure, make targeted edits, and validate changes. The toolkit is LLM-agnostic and framework-agnostic, positioning itself as a drop-in capability for different chat UIs, orchestrators, or custom agent...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 2
    MaxKB

    Open-source platform for building enterprise-grade agents

    MaxKB (Max Knowledge Brain) is an open-source platform for building enterprise-grade AI agents with strong knowledge retrieval, RAG pipelines, and workflow orchestration. It focuses on practical deployments such as customer support, internal knowledge bases, research assistants, and education, bundling tools for data ingestion, chunking, embedding, retrieval, and answer synthesis. The system exposes flexible tool-use (including MCP), supports multi-model backends, and provides dashboards for...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    fairseq2

    FAIR Sequence Modeling Toolkit 2

    fairseq2 is a modern, modular sequence modeling framework developed by Meta AI Research as a complete redesign of the original fairseq library. Built from the ground up for scalability, composability, and research flexibility, fairseq2 supports a broad range of language, speech, and multimodal content generation tasks, including instruction fine-tuning, reinforcement learning from human feedback (RLHF), and large-scale multilingual modeling. Unlike the original fairseq—which evolved into a...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    MobileLLM

    MobileLLM: Optimizing Sub-billion Parameter Language Models

    MobileLLM is a lightweight large language model (LLM) framework developed by Facebook Research, optimized for on-device deployment where computational and memory efficiency are critical. Introduced in the ICML 2024 paper “MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases”, it focuses on delivering strong reasoning and generalization capabilities in models under one billion parameters. The framework integrates several architectural innovations—SwiGLU...
    Downloads: 0 This Week
    Last Update:
    See Project
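
    One of the architectural pieces named above, the SwiGLU feed-forward block, is easy to illustrate. The sketch below is a generic PyTorch version with made-up layer sizes, not code from the MobileLLM repository:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SwiGLU(nn.Module):
            """Gated feed-forward block: silu(W1 x) * (W3 x), projected back by W2."""
            def __init__(self, dim: int, hidden: int):
                super().__init__()
                self.w1 = nn.Linear(dim, hidden, bias=False)  # gate projection
                self.w3 = nn.Linear(dim, hidden, bias=False)  # value projection
                self.w2 = nn.Linear(hidden, dim, bias=False)  # output projection

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                return self.w2(F.silu(self.w1(x)) * self.w3(x))

        # Tiny smoke test with arbitrary dimensions.
        block = SwiGLU(dim=64, hidden=172)
        print(block(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
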
  • 5
    MetaCLIP

    ICLR2024 Spotlight: curation/training code, metadata, distribution

    MetaCLIP is the codebase behind the ICLR 2024 spotlight paper "Demystifying CLIP Data," which revisits CLIP by focusing on how its training data is curated rather than on the model architecture. Starting from raw web image-text pairs, the pipeline matches captions against a large set of metadata entries (concepts drawn from sources such as WordNet and Wikipedia) and balances the resulting distribution so that a few head concepts do not dominate training. The repository releases the curation algorithm, training code, the metadata itself, and the resulting data distribution, allowing CLIP-style models to be reproduced and studied from the data side.
    Downloads: 0 This Week
    Last Update:
    See Project
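
    A minimal sketch of the balancing idea described above, with hypothetical names and a tiny threshold; the real curation pipeline, metadata, and thresholds live in the repository:

        import random
        from collections import defaultdict

        def balance_pool(pairs, metadata, t=20, seed=0):
            """Illustrative only: keep all pairs matching rare metadata entries,
            subsample pairs matching frequent (head) entries down to t each."""
            rng = random.Random(seed)
            matches = defaultdict(list)
            for image_id, caption in pairs:
                for entry in metadata:
                    if entry in caption.lower():
                        matches[entry].append((image_id, caption))
            kept = set()
            for entry, bucket in matches.items():
                if len(bucket) <= t:
                    kept.update(bucket)                 # tail entry: keep everything
                else:
                    kept.update(rng.sample(bucket, t))  # head entry: cap at t samples
            return list(kept)
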
  • 6
    Pearl

    A Production-ready Reinforcement Learning AI Agent Library

    Pearl is a production-ready reinforcement learning and contextual bandit agent library built for real-world sequential decision making. It is organized around modular components—policy learners, replay buffers, exploration strategies, safety modules, and history summarizers—that snap together to form reliable agents with clear boundaries and strong defaults. The library implements classic and modern algorithms across two regimes: contextual bandits (e.g., LinUCB, LinTS, SquareCB, neural...
    Downloads: 0 This Week
    Last Update:
    See Project
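
    LinUCB, one of the contextual-bandit algorithms listed above, is a good example of what such a policy learner computes internally. This is a textbook NumPy version for illustration, not Pearl's own implementation or API:

        import numpy as np

        class LinUCB:
            """Disjoint LinUCB: a linear reward model per arm plus an optimism bonus."""
            def __init__(self, n_arms: int, dim: int, alpha: float = 1.0):
                self.alpha = alpha
                self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm Gram matrix
                self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward vector

            def select(self, x: np.ndarray) -> int:
                scores = []
                for A, b in zip(self.A, self.b):
                    A_inv = np.linalg.inv(A)
                    theta = A_inv @ b
                    scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
                return int(np.argmax(scores))

            def update(self, arm: int, x: np.ndarray, reward: float) -> None:
                self.A[arm] += np.outer(x, x)
                self.b[arm] += reward * x
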
  • 7
    JEPA

    PyTorch code and models for V-JEPA self-supervised learning from video

    JEPA (Joint-Embedding Predictive Architecture) captures the idea of predicting missing high-level representations rather than reconstructing pixels, aiming for robust, scalable self-supervised learning. A context encoder ingests visible regions and predicts target embeddings for masked regions produced by a separate target encoder, avoiding low-level reconstruction losses that can overfit to texture. This makes learning focus on semantics and structure, yielding features that transfer well...
    Downloads: 0 This Week
    Last Update:
    See Project
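
    The loss described above (predict target embeddings of masked regions, never pixels) can be sketched in a few lines. Everything here is a stand-in: linear "encoders", a fixed mask, and no EMA schedule for the target network:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        dim = 64
        context_enc = nn.Linear(dim, dim)   # stand-in for the context encoder
        target_enc = nn.Linear(dim, dim)    # stand-in for the (EMA) target encoder
        predictor = nn.Linear(dim, dim)     # predicts target embeddings from context

        def jepa_loss(patches, mask):
            """patches: (B, N, dim); mask: (B, N) bool, True = hidden from context."""
            with torch.no_grad():                       # no gradients into the target branch
                targets = target_enc(patches)
            context = context_enc(patches * (~mask).unsqueeze(-1))   # zero out hidden patches
            preds = predictor(context)
            return F.smooth_l1_loss(preds[mask], targets[mask])      # regress embeddings only

        x = torch.randn(2, 16, dim)
        m = torch.zeros(2, 16, dtype=torch.bool)
        m[:, 8:] = True                      # hide the second half of the patches
        jepa_loss(x, m).backward()           # updates context encoder and predictor only
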
  • 8
    Flow Matching

    A PyTorch library for implementing flow matching algorithms

    flow_matching is a PyTorch library implementing flow matching algorithms in both continuous and discrete settings, enabling generative modeling via matching vector fields rather than diffusion. The underlying idea is to parameterize a flow (a time-dependent vector field) that transports samples from a simple base distribution to a target distribution, and train via matching of flows without requiring score estimation or noisy corruption—this can lead to more efficient or stable generative...
    Downloads: 0 This Week
    Last Update:
    See Project
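
    Concretely, the standard conditional flow-matching objective samples a time t, interpolates between a base sample x0 and a data sample x1, and regresses a network onto the constant velocity x1 - x0. The sketch below shows that recipe in generic PyTorch; it does not use the flow_matching library's own API:

        import torch
        import torch.nn as nn

        class VelocityNet(nn.Module):
            """Tiny stand-in for the learned time-dependent vector field v(x, t)."""
            def __init__(self, dim):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(), nn.Linear(128, dim))

            def forward(self, x, t):
                return self.net(torch.cat([x, t], dim=-1))

        def cfm_loss(model, x1):
            x0 = torch.randn_like(x1)            # sample from the base distribution
            t = torch.rand(x1.size(0), 1)        # random time in [0, 1]
            xt = (1 - t) * x0 + t * x1           # point on the interpolation path
            return ((model(xt, t) - (x1 - x0)) ** 2).mean()  # match the path's velocity

        model = VelocityNet(dim=2)
        cfm_loss(model, torch.randn(256, 2)).backward()
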
  • 9
    DLRM

    An implementation of a deep learning recommendation model (DLRM)

    DLRM (Deep Learning Recommendation Model) is Meta’s open-source reference implementation for large-scale recommendation systems built to handle extremely high-dimensional sparse features and embedding tables. The architecture combines dense (MLP) and sparse (embedding) branches, then interacts features via dot product or feature interactions before passing through further dense layers to predict click-through, ranking scores, or conversion probabilities. The implementation is optimized for...
    Downloads: 0 This Week
    Last Update:
    See Project
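
    The architecture described above (a bottom MLP for dense features, embedding tables for sparse IDs, pairwise dot-product interactions, then a top MLP) fits in a short toy model. Sizes below are invented and this is not Meta's optimized implementation:

        import torch
        import torch.nn as nn

        class TinyDLRM(nn.Module):
            def __init__(self, n_dense, cardinalities, emb_dim=16):
                super().__init__()
                self.bottom = nn.Sequential(nn.Linear(n_dense, emb_dim), nn.ReLU())
                self.tables = nn.ModuleList([nn.Embedding(c, emb_dim) for c in cardinalities])
                n_feats = 1 + len(cardinalities)              # dense vector + one per table
                n_pairs = n_feats * (n_feats - 1) // 2
                self.top = nn.Sequential(nn.Linear(emb_dim + n_pairs, 32), nn.ReLU(), nn.Linear(32, 1))

            def forward(self, dense, sparse):
                d = self.bottom(dense)                                         # (B, emb_dim)
                feats = [d] + [tab(sparse[:, i]) for i, tab in enumerate(self.tables)]
                z = torch.stack(feats, dim=1)                                  # (B, F, emb_dim)
                i, j = torch.triu_indices(z.size(1), z.size(1), offset=1)
                pairs = (z @ z.transpose(1, 2))[:, i, j]                       # pairwise dot products
                return torch.sigmoid(self.top(torch.cat([d, pairs], dim=-1)))  # e.g. click probability

        model = TinyDLRM(n_dense=4, cardinalities=[100, 50, 20])
        p_click = model(torch.randn(8, 4), torch.randint(0, 20, (8, 3)))
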
  • 10
    DeiT (Data-efficient Image Transformers)
    DeiT (Data-efficient Image Transformers) shows that Vision Transformers can be trained competitively on ImageNet-1k without external data by using strong training recipes and knowledge distillation. Its key idea is a specialized distillation strategy—including a learnable “distillation token”—that lets a transformer learn effectively from a CNN or transformer teacher on modest-scale datasets. The project provides compact ViT variants (Tiny/Small/Base) that achieve excellent...
    Downloads: 0 This Week
    Last Update:
    See Project
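
    The distillation-token idea mentioned above amounts to giving the student a second output head whose target is the teacher's prediction. A schematic hard-distillation loss in generic PyTorch (random tensors, not the DeiT repository's code) looks like this:

        import torch
        import torch.nn.functional as F

        def deit_hard_distill_loss(cls_logits, dist_logits, teacher_logits, labels):
            """Average of cross-entropy on the class token and cross-entropy of the
            distillation token against the teacher's hard (argmax) predictions."""
            ce_cls = F.cross_entropy(cls_logits, labels)
            ce_dist = F.cross_entropy(dist_logits, teacher_logits.argmax(dim=-1))
            return 0.5 * ce_cls + 0.5 * ce_dist

        B, C = 8, 1000
        loss = deit_hard_distill_loss(torch.randn(B, C), torch.randn(B, C),
                                      torch.randn(B, C), torch.randint(0, C, (B,)))
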
  • 11
    VGGT

    [CVPR 2025 Best Paper Award] VGGT

    VGGT is a transformer-based framework aimed at unifying classic visual geometry tasks—such as depth estimation, camera pose recovery, point tracking, and correspondence—under a single model. Rather than training separate networks per task, it shares an encoder and leverages geometric heads/decoders to infer structure and motion from images or short clips. The design emphasizes consistent geometric reasoning: outputs from one head (e.g., correspondences or tracks) reinforce others (e.g., pose...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
    Animated Drawings

    Code to accompany "A Method for Animating Children's Drawings"

    AnimatedDrawings is the codebase accompanying the paper "A Method for Animating Children's Drawings of the Human Figure." Given a single drawing of a character, the pipeline detects and segments the figure, estimates its joint locations, builds a simple 2D rig and textured mesh from the drawing, and then retargets human motion clips onto that rig so the character walks, jumps, or dances while keeping its hand-drawn appearance. The repository includes the annotation and rigging tools, example characters and motions, and configuration files for rendering the resulting animations.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    Prompt Engineering Interactive Tutorial

    Anthropic's Interactive Prompt Engineering Tutorial

    Prompt-eng-interactive-tutorial is a comprehensive, hands-on tutorial that teaches the craft of prompt engineering with Claude through guided, executable lessons. It starts with the anatomy of a good prompt and moves into techniques that deliver the “80/20” gains—separating instructions from data, specifying schemas, and setting evaluation criteria. The course leans heavily on realistic failure modes (ambiguity, hallucination, brittle instructions) and shows how to iteratively debug prompts...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    Courses (Anthropic)

    Anthropic's educational courses

    Anthropic’s courses repository is a growing collection of self-paced learning materials that teach practical AI skills using Claude and the Anthropic API. It’s organized as a sequence of hands-on courses—starting with API fundamentals and prompt engineering—so learners build capability step by step rather than in isolation. Each course mixes short readings with runnable notebooks and exercises, guiding you through concepts like model parameters, streaming, multimodal prompts, structured...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    Mistral Finetune

    Memory-efficient and performant finetuning of Mistral's models

    mistral-finetune is an official lightweight codebase designed for memory-efficient and performant finetuning of Mistral’s open models (e.g. 7B, instruct variants). It builds on techniques like LoRA (Low-Rank Adaptation) to allow customizing models without full parameter updates, which reduces GPU memory footprint and training cost. The repo includes utilities for data preprocessing (e.g. reformat_data.py), validation scripts, and example YAML configs for training variants like 7B base or...
    Downloads: 0 This Week
    Last Update:
    See Project
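
    The LoRA technique the repo builds on is easy to show in isolation: freeze the pretrained weight and learn a small low-rank update scaled by alpha/r. The sketch below illustrates the idea in plain PyTorch; it is not mistral-finetune's code or its YAML configuration format:

        import torch
        import torch.nn as nn

        class LoRALinear(nn.Module):
            """Frozen base linear layer plus a trainable low-rank update B @ A."""
            def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
                super().__init__()
                self.base = base
                for p in self.base.parameters():
                    p.requires_grad_(False)               # pretrained weights stay frozen
                self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
                self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no-op at start
                self.scale = alpha / r

            def forward(self, x):
                return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

        layer = LoRALinear(nn.Linear(512, 512))
        print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # only A and B train
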
  • 16
    HunyuanDiT

    Diffusion Transformer with Fine-Grained Chinese Understanding

    HunyuanDiT is a high-capability text-to-image diffusion transformer with bilingual (Chinese/English) understanding and multi-turn dialogue capability. It trains a diffusion model in latent space using a transformer backbone and integrates a Multimodal Large Language Model (MLLM) to refine captions and support conversational image generation. It supports adapters such as ControlNet, IP-Adapter, and LoRA, and can run under constrained VRAM via distilled versions. LoRA, ControlNet (pose, depth,...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    NVIDIA Isaac GR00T

    NVIDIA Isaac GR00T N1.5 is the world's first open foundation model

    NVIDIA Isaac GR00T N1.5 is an open-source foundation model engineered for generalized humanoid robot reasoning and manipulation skills. It accepts multimodal inputs—such as language and images—and uses a diffusion transformer architecture built upon vision-language encoders, enabling adaptive robot behaviors across diverse environments. It is designed to be customizable via post-training with real or synthetic data. The vision-language model remains frozen during both pretraining and...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    MiniCPM-o

    A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming

    MiniCPM-o 2.6 is a cutting-edge multimodal large language model (MLLM) designed for high-performance tasks across vision, speech, and video. Capable of running on end-side devices such as smartphones and tablets, it provides powerful features like real-time speech conversation, video understanding, and multimodal live streaming. With 8 billion parameters, MiniCPM-o 2.6 surpasses its predecessors in versatility and efficiency, making it one of the most robust models available. It supports...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    Mem0

    The Memory layer for AI Agents

    Mem0 is a self-improving memory layer designed for Large Language Model (LLM) applications, enabling personalized AI experiences that save costs and delight users. It remembers user preferences, adapts to individual needs, and continuously improves over time. Key features include enhancing future conversations by building smarter AI that learns from every interaction, reducing LLM costs by up to 80% through intelligent data filtering, delivering more accurate and personalized AI outputs by...
    Downloads: 0 This Week
    Last Update:
    See Project
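
    A minimal usage sketch, assuming Mem0's Python package and a Memory interface along the lines of its README quickstart; exact method names, return formats, and required LLM provider credentials may differ between versions:

        # Assumed quickstart shape; check the Mem0 documentation for the current interface.
        from mem0 import Memory

        memory = Memory()

        # Store a preference tied to a user so later conversations can recall it.
        memory.add("Prefers vegetarian restaurants and morning meetings.", user_id="alice")

        # Retrieve memories relevant to a new query for the same user.
        results = memory.search("Where should we book dinner?", user_id="alice")
        print(results)   # result format depends on the installed version
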
  • 20
    Infinity

    Low-latency REST API for serving text-embeddings

    Infinity is a high-throughput, low-latency REST API for serving vector embeddings, supporting all sentence-transformer models and frameworks. Infinity is developed under MIT License. Infinity powers inference behind Gradient.ai and other Embedding API providers.
    Downloads: 0 This Week
    Last Update:
    See Project
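
    A hedged example of querying a locally running Infinity server; the port, model name, and OpenAI-style /embeddings route used here are assumptions to check against the project's documentation:

        import requests

        # Assumes an Infinity server was started locally and serves an
        # OpenAI-compatible embeddings endpoint; adjust URL and model to your setup.
        resp = requests.post(
            "http://localhost:7997/embeddings",
            json={
                "model": "BAAI/bge-small-en-v1.5",
                "input": ["reduce latency for embedding workloads", "serve many models"],
            },
            timeout=30,
        )
        resp.raise_for_status()
        vectors = [item["embedding"] for item in resp.json()["data"]]
        print(len(vectors), len(vectors[0]))
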
  • 21
    SalesGPT

    Context-aware AI Sales Agent to automate sales outreach

    SalesGPT is a context-aware AI sales agent built on LLMs that can work across voice, email, and texting channels (SMS, WhatsApp, WeChat, Weibo, Telegram, etc.). Being context-aware means it understands what stage of a sales conversation it is in and acts accordingly. Moreover, SalesGPT has access to tools, such as your own pre-defined product knowledge base, which significantly reduces hallucinations.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    NeuralForecast

    Scalable and user friendly neural forecasting algorithms.

    NeuralForecast offers a large collection of neural forecasting models focusing on their performance, usability, and robustness. The models range from classic networks like RNNs to the latest transformers: MLP, LSTM, GRU, RNN, TCN, TimesNet, BiTCN, DeepAR, NBEATS, NBEATSx, NHITS, TiDE, DeepNPTS, TSMixer, TSMixerx, MLPMultivariate, DLinear, NLinear, TFT, Informer, AutoFormer, FedFormer, PatchTST, iTransformer, StemGNN, and TimeLLM. There is a shared belief in Neural forecasting methods'...
    Downloads: 0 This Week
    Last Update:
    See Project
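
    A short usage sketch, assuming NeuralForecast's documented fit/predict workflow on a long-format dataframe with unique_id, ds, and y columns; the model choice, horizon, and training length here are arbitrary:

        import pandas as pd
        from neuralforecast import NeuralForecast
        from neuralforecast.models import NHITS

        # Long-format toy data: a single monthly series.
        df = pd.DataFrame({
            "unique_id": "series_1",
            "ds": pd.date_range("2020-01-01", periods=48, freq="MS"),
            "y": range(48),
        })

        # Train one of the library's models and forecast 12 steps ahead.
        nf = NeuralForecast(models=[NHITS(h=12, input_size=24, max_steps=50)], freq="MS")
        nf.fit(df=df)
        print(nf.predict().head())
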
  • 23
    UpTrain

    Your open-source LLM evaluation toolkit

    Get scores for factual accuracy, context retrieval quality, guideline adherence, tonality, and many more. You can’t improve what you can’t measure. UpTrain continuously monitors your application's performance on multiple evaluation criteria and alerts you to any regressions with automatic root cause analysis. UpTrain enables fast and robust experimentation across multiple prompts, model providers, and custom configurations by calculating quantitative scores for direct comparison...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    OpenCompass

    OpenCompass is an LLM evaluation platform

    Just like a compass guides us on our journey, OpenCompass will guide you through the complex landscape of evaluating large language models. With its powerful algorithms and intuitive interface, OpenCompass makes it easy to assess the quality and effectiveness of your NLP models. OpenCompass is a one-stop platform for large model evaluation, aiming to provide a fair, open, and reproducible benchmark for large model evaluation. Pre-support for 20+ HuggingFace and API models, a model evaluation...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    refinery

    Open-source choice to scale, assess and maintain natural language data

    The data scientist's open-source choice to scale, assess and maintain natural language data. Treat training data like a software artifact. You are one of the people we've built refinery for. refinery helps you to build better NLP models in a data-centric approach. Semi-automate your labeling, find low-quality subsets in your training data, and monitor your data in one place. refinery doesn't get rid of manual labeling, but it makes sure that your valuable time is spent well. Also, the makers...
    Downloads: 0 This Week
    Last Update:
    See Project