  • 1
    ArXiv MCP Server

    A Model Context Protocol server for searching and analyzing arXiv

    arxiv-mcp-server bridges AI assistants and the arXiv repository through a clean MCP interface, enabling search, metadata retrieval, and content access without bespoke scraping. With simple tools like “search” and “fetch,” an agent can find papers, pull abstracts, and download PDFs for downstream summarization or analysis. The project includes packaging and CI to publish to PyPI, plus tests and linting for reliability. Issue threads show feature requests such as extracting embedded LaTeX and...
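
    For orientation, the sketch below calls arXiv's public Atom API directly to approximate the kind of "search" plus metadata/PDF retrieval the MCP server wraps for an agent. It is an illustration only, not the server's own code or tool schema.

      # Illustrative only: a direct call to arXiv's public Atom API, approximating the
      # "search" and "fetch" operations an MCP-style tool would expose to an agent.
      import urllib.parse
      import urllib.request
      import xml.etree.ElementTree as ET

      ATOM = "{http://www.w3.org/2005/Atom}"

      def search_arxiv(query: str, max_results: int = 5):
          """Return (title, abstract, pdf_url) tuples for the top arXiv matches."""
          params = urllib.parse.urlencode(
              {"search_query": f"all:{query}", "start": 0, "max_results": max_results}
          )
          with urllib.request.urlopen(f"http://export.arxiv.org/api/query?{params}") as resp:
              feed = ET.fromstring(resp.read())
          results = []
          for entry in feed.findall(f"{ATOM}entry"):
              title = entry.findtext(f"{ATOM}title", "").strip()
              abstract = entry.findtext(f"{ATOM}summary", "").strip()
              pdf_url = next(
                  (l.get("href") for l in entry.findall(f"{ATOM}link") if l.get("title") == "pdf"),
                  None,
              )
              results.append((title, abstract, pdf_url))
          return results

      # Issues a live query against export.arxiv.org when run.
      for title, abstract, pdf in search_arxiv("model context protocol"):
          print(title, "->", pdf)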
  • 2
    4M

    4M: Massively Multimodal Masked Modeling

    4M is a training framework for “any-to-any” vision foundation models that uses tokenization and masking to scale across many modalities and tasks. The same model family can classify, segment, detect, caption, and even generate images, with a single interface for both discriminative and generative use. The repository releases code and models for multiple variants (e.g., 4M-7 and 4M-21), emphasizing transfer to unseen tasks and modalities. Training/inference configs and issues discuss things...
  • 3
    MGIE

    Guiding Instruction-based Image Editing via Multimodal Large Language Models

    MGIE—Guiding Instruction-based Image Editing—demonstrates how a multimodal LLM can parse natural-language editing instructions and then drive image transformations accordingly. The project focuses on making edits explainable and controllable: the model interprets text guidance, reasons over image content, and outputs edits aligned with user intent. It’s positioned as an ICLR 2024 Spotlight work, with code and references that show how to connect language planning to concrete image operations....
  • 4
    ML Ferret

    Refer and Ground Anything Anywhere at Any Granularity

    Ferret is Apple’s end-to-end multimodal large language model designed specifically for flexible referring and grounding: it can understand references of any granularity (boxes, points, free-form regions) and then ground open-vocabulary descriptions back onto the image. The core idea is a hybrid region representation that mixes discrete coordinates with continuous visual features, so the model can fluidly handle “any-form” referring while maintaining precise spatial localization. The repo...
  • 5
    LMCache

    Supercharge Your LLM with the Fastest KV Cache Layer

    LMCache is an extension layer for LLM serving engines that accelerates inference, especially with long contexts, by storing and reusing key-value (KV) attention caches across requests. Instead of rebuilding KV states for repeated or shared text segments, LMCache persists and retrieves them from multiple tiers—GPU memory, CPU DRAM, and local disk—then injects them into subsequent requests to reduce TTFT and increase throughput. Its design supports reuse beyond strict prefix matching and...
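
    As a toy illustration of the reuse idea (not LMCache's API), the sketch below caches a stand-in "KV state" per text segment across a memory tier and a disk tier so that repeated segments skip the expensive prefill step. The names compute_kv and KVStore are hypothetical.

      # Toy sketch of KV reuse across requests; not LMCache's implementation or API.
      import hashlib
      import pickle
      from pathlib import Path

      def compute_kv(segment: str):
          # Stand-in for the expensive prefill that builds key/value attention states.
          return {"tokens": segment.split(), "kv": [hash(t) % 997 for t in segment.split()]}

      class KVStore:
          """Two tiers: in-process memory first, then local disk."""
          def __init__(self, disk_dir: str = "kv_cache"):
              self.memory = {}
              self.disk = Path(disk_dir)
              self.disk.mkdir(exist_ok=True)

          def get_or_compute(self, segment: str):
              key = hashlib.sha256(segment.encode()).hexdigest()
              if key in self.memory:            # hot tier
                  return self.memory[key]
              path = self.disk / key
              if path.exists():                 # warm tier
                  kv = pickle.loads(path.read_bytes())
              else:                             # miss: pay the prefill cost once
                  kv = compute_kv(segment)
                  path.write_bytes(pickle.dumps(kv))
              self.memory[key] = kv
              return kv

      store = KVStore()
      shared_context = "You are a helpful assistant. Here is the product manual: ..."
      kv1 = store.get_or_compute(shared_context)   # computed on the first request
      kv2 = store.get_or_compute(shared_context)   # reused on every later request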
  • 6
    LLaMA 3

    The official Meta Llama 3 GitHub site

    This repository is the former home for Llama 3 model artifacts and getting-started code, covering pre-trained and instruction-tuned variants across multiple parameter sizes. It introduced the public packaging of weights, licenses, and quickstart examples that helped developers fine-tune or run the models locally and on common serving stacks. As the Llama stack evolved, Meta consolidated repositories and marked this one deprecated, pointing users to newer, centralized hubs for models,...
  • 7
    CodeLlama

    Inference code for CodeLlama models

    Code Llama is a family of Llama-based code models optimized for programming tasks such as code generation, completion, and repair, with variants specialized for base coding, Python, and instruction following. The repo documents the sizes and capabilities (e.g., 7B, 13B, 34B) and highlights features like infilling and large input context to support real IDE workflows. It targets both general software synthesis and language-specific productivity, offering strong performance among open models...
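
    A minimal way to try a Code Llama variant is through Hugging Face transformers rather than this repo's own inference code. The sketch below assumes the codellama/CodeLlama-7b-hf checkpoint ID and enough GPU memory (or a quantized variant) to load it.

      # Hedged sketch: generation via Hugging Face transformers, not this repo's scripts.
      # The model ID and generation settings are assumptions.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "codellama/CodeLlama-7b-hf"  # assumed Hub ID; pick the variant you need
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

      prompt = "def fibonacci(n):"
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
      print(tokenizer.decode(output[0], skip_special_tokens=True))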
  • 8
    LLaMA Models

    Utilities intended for use with Llama models

    This repository serves as the central hub for the Llama foundation model family, consolidating model cards, licenses and use policies, and utilities that support inference and fine-tuning across releases. It ties together other stack components (like safety tooling and developer SDKs) and provides canonical references for model variants and their intended usage. The project’s issues and releases reflect an actively used coordination point for the ecosystem, where guidance, utilities, and...
  • 9
    Purple Llama

    Set of tools to assess and improve LLM security

    Purple Llama is an umbrella safety initiative that aggregates tools, benchmarks, and mitigations to help developers build responsibly with open generative AI. Its scope spans input and output safeguards, cybersecurity-focused evaluations, and reference shields that can be inserted at inference time. The project evolves as a hub for safety research artifacts like Llama Guard and Code Shield, along with dataset specs and how-to guides for integrating checks into applications. CyberSecEval, one...
  • 10
    Serena

    Agent toolkit providing semantic retrieval and editing capabilities

    Serena is a coding-focused agent toolkit that turns an LLM into a practical software-engineering agent with semantic retrieval and editing over real repositories. It operates as an MCP server (and other integrations), exposing IDE-like tools so agents can locate symbols, reason about code structure, make targeted edits, and validate changes. The toolkit is LLM-agnostic and framework-agnostic, positioning itself as a drop-in capability for different chat UIs, orchestrators, or custom agent...
  • 11
    MaxKB

    Open-source platform for building enterprise-grade agents

    MaxKB (Max Knowledge Brain) is an open-source platform for building enterprise-grade AI agents with strong knowledge retrieval, RAG pipelines, and workflow orchestration. It focuses on practical deployments such as customer support, internal knowledge bases, research assistants, and education, bundling tools for data ingestion, chunking, embedding, retrieval, and answer synthesis. The system exposes flexible tool-use (including MCP), supports multi-model backends, and provides dashboards for...
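
    To make the chunk/embed/retrieve loop concrete, here is a deliberately tiny, dependency-free sketch. Real deployments (MaxKB included) use learned embeddings and a vector store; this stand-in uses bag-of-words vectors so it runs anywhere, and is not MaxKB's code or API.

      # Minimal chunk -> embed -> retrieve sketch; bag-of-words vectors stand in for
      # learned embeddings so the example is self-contained.
      import math
      from collections import Counter

      def chunk(text: str, size: int = 40):
          words = text.split()
          return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

      def embed(text: str) -> Counter:
          return Counter(text.lower().split())

      def cosine(a: Counter, b: Counter) -> float:
          dot = sum(a[t] * b[t] for t in a)
          norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      documents = [
          "Refunds are processed within five business days of receiving the returned item.",
          "Our support team is available Monday through Friday, nine to five, via chat or email.",
      ]
      index = [(c, embed(c)) for doc in documents for c in chunk(doc)]

      def retrieve(question: str, k: int = 1):
          q = embed(question)
          return sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)[:k]

      for text, _ in retrieve("how long do refunds take?"):
          print(text)  # the retrieved chunk would be passed to the LLM for answer synthesis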
  • 12
    fairseq2

    FAIR Sequence Modeling Toolkit 2

    fairseq2 is a modern, modular sequence modeling framework developed by Meta AI Research as a complete redesign of the original fairseq library. Built from the ground up for scalability, composability, and research flexibility, fairseq2 supports a broad range of language, speech, and multimodal content generation tasks, including instruction fine-tuning, reinforcement learning from human feedback (RLHF), and large-scale multilingual modeling. Unlike the original fairseq—which evolved into a...
  • 13
    Fast3R

    Fast3R: Towards 3D Reconstruction of 1000+ Images in One Forward Pass

    Fast3R is Meta AI’s official CVPR 2025 release for “Towards 3D Reconstruction of 1000+ Images in One Forward Pass.” It represents a next-generation feedforward 3D reconstruction model capable of producing dense point clouds and camera poses for hundreds to thousands of images or video frames in a single inference pass—eliminating the need for slow, iterative structure-from-motion pipelines. Built on PyTorch Lightning and extending concepts from DUSt3R and Spann3r, Fast3R unifies multi-view...
  • 14
    MetaCLIP

    ICLR2024 Spotlight: curation/training code, metadata, distribution

    MetaCLIP is the research codebase behind "Demystifying CLIP Data" (an ICLR 2024 Spotlight), which argues that much of CLIP's strength comes from how its training data is curated. Rather than treating the training set as a black box, MetaCLIP curates image-text pairs from raw web data using metadata-derived concepts and balances the resulting distribution, then trains CLIP-style models on the curated corpus with the goal of matching or exceeding the original's zero-shot transfer. The repository provides the curation pipeline, training code, and pretrained models...
  • 15
    Pearl

    A Production-ready Reinforcement Learning AI Agent Library

    Pearl is a production-ready reinforcement learning and contextual bandit agent library built for real-world sequential decision making. It is organized around modular components—policy learners, replay buffers, exploration strategies, safety modules, and history summarizers—that snap together to form reliable agents with clear boundaries and strong defaults. The library implements classic and modern algorithms across two regimes: contextual bandits (e.g., LinUCB, LinTS, SquareCB, neural...
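
    To make the contextual-bandit regime concrete, below is a compact, generic implementation of LinUCB, the textbook algorithm named above. It is an illustration of the algorithm only, not Pearl's API or code.

      # Generic LinUCB: per-arm ridge-regression estimate plus an exploration bonus.
      import numpy as np

      class LinUCB:
          def __init__(self, n_arms: int, dim: int, alpha: float = 1.0):
              self.alpha = alpha
              self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm design matrices
              self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward vectors

          def select(self, x: np.ndarray) -> int:
              scores = []
              for A, b in zip(self.A, self.b):
                  A_inv = np.linalg.inv(A)
                  theta = A_inv @ b
                  ucb = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)  # mean + bonus
                  scores.append(ucb)
              return int(np.argmax(scores))

          def update(self, arm: int, x: np.ndarray, reward: float) -> None:
              self.A[arm] += np.outer(x, x)
              self.b[arm] += reward * x

      # Synthetic environment: linear rewards with hidden per-arm parameters.
      rng = np.random.default_rng(0)
      bandit = LinUCB(n_arms=3, dim=4)
      true_theta = rng.normal(size=(3, 4))
      for _ in range(500):
          x = rng.normal(size=4)
          arm = bandit.select(x)
          bandit.update(arm, x, reward=float(true_theta[arm] @ x + rng.normal(scale=0.1)))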
  • 16
    DLRM

    An implementation of a deep learning recommendation model (DLRM)

    DLRM (Deep Learning Recommendation Model) is Meta’s open-source reference implementation for large-scale recommendation systems built to handle extremely high-dimensional sparse features and embedding tables. The architecture combines dense (MLP) and sparse (embedding) branches, then interacts features via dot product or feature interactions before passing through further dense layers to predict click-through, ranking scores, or conversion probabilities. The implementation is optimized for...
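
    The sketch below is a simplified PyTorch rendition of the design described above (bottom MLP for dense features, embedding tables for sparse features, pairwise dot-product interaction, top MLP). Layer sizes and feature counts are illustrative, not the repo's configuration.

      # Simplified DLRM-style model: dense branch + embedding tables + dot interaction.
      import torch
      import torch.nn as nn

      class TinyDLRM(nn.Module):
          def __init__(self, num_dense=4, cardinalities=(100, 50, 30), emb_dim=8):
              super().__init__()
              self.bottom_mlp = nn.Sequential(nn.Linear(num_dense, 16), nn.ReLU(), nn.Linear(16, emb_dim))
              self.embeddings = nn.ModuleList(nn.Embedding(c, emb_dim) for c in cardinalities)
              n_features = 1 + len(cardinalities)              # dense vector + one per table
              n_pairs = n_features * (n_features - 1) // 2
              self.top_mlp = nn.Sequential(nn.Linear(emb_dim + n_pairs, 16), nn.ReLU(), nn.Linear(16, 1))

          def forward(self, dense, sparse):
              d = self.bottom_mlp(dense)                           # (B, emb_dim)
              feats = [d] + [emb(sparse[:, i]) for i, emb in enumerate(self.embeddings)]
              stacked = torch.stack(feats, dim=1)                  # (B, n_features, emb_dim)
              inter = torch.bmm(stacked, stacked.transpose(1, 2))  # pairwise dot products
              i, j = torch.triu_indices(stacked.size(1), stacked.size(1), offset=1)
              out = torch.cat([d, inter[:, i, j]], dim=1)
              return torch.sigmoid(self.top_mlp(out)).squeeze(-1)

      model = TinyDLRM()
      dense = torch.randn(2, 4)
      sparse = torch.stack([torch.randint(0, c, (2,)) for c in (100, 50, 30)], dim=1)
      print(model(dense, sparse))                                  # predicted click probabilities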
  • 17
    DeiT (Data-efficient Image Transformers)
    DeiT (Data-efficient Image Transformers) shows that Vision Transformers can be trained competitively on ImageNet-1k without external data by using strong training recipes and knowledge distillation. Its key idea is a specialized distillation strategy—including a learnable “distillation token”—that lets a transformer learn effectively from a CNN or transformer teacher on modest-scale datasets. The project provides compact ViT variants (Tiny/Small/Base) that achieve excellent...
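
    The snippet below is a schematic of the hard-distillation objective: one head on the class token is trained on the labels, while a second head on the distillation token is trained to match the teacher's hard predictions. The logits here are random stand-ins for real model outputs.

      # Schematic of DeiT-style hard distillation; tensors are placeholders, not model outputs.
      import torch
      import torch.nn.functional as F

      batch, num_classes = 8, 10
      cls_logits = torch.randn(batch, num_classes)    # student head on the class token
      dist_logits = torch.randn(batch, num_classes)   # student head on the distillation token
      teacher_logits = torch.randn(batch, num_classes)
      labels = torch.randint(0, num_classes, (batch,))

      teacher_hard = teacher_logits.argmax(dim=1)     # teacher's hard prediction per image
      loss = 0.5 * F.cross_entropy(cls_logits, labels) + \
             0.5 * F.cross_entropy(dist_logits, teacher_hard)
      print(float(loss))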
  • 18
    VGGT

    [CVPR 2025 Best Paper Award] VGGT

    VGGT is a transformer-based framework aimed at unifying classic visual geometry tasks—such as depth estimation, camera pose recovery, point tracking, and correspondence—under a single model. Rather than training separate networks per task, it shares an encoder and leverages geometric heads/decoders to infer structure and motion from images or short clips. The design emphasizes consistent geometric reasoning: outputs from one head (e.g., correspondences or tracks) reinforce others (e.g., pose...
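
    As a generic illustration of the shared-encoder-plus-task-heads pattern (not VGGT's actual architecture; module names and sizes are placeholders), a minimal PyTorch skeleton might look like this:

      # Generic shared-backbone, multi-head skeleton; placeholder heads and sizes.
      import torch
      import torch.nn as nn

      class SharedBackboneMultiHead(nn.Module):
          def __init__(self, feat_dim=64):
              super().__init__()
              self.encoder = nn.Sequential(                      # shared image encoder
                  nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
              )
              self.depth_head = nn.Linear(feat_dim, 1)           # e.g. a per-image depth scale
              self.pose_head = nn.Linear(feat_dim, 7)            # e.g. quaternion + translation

          def forward(self, images):
              feats = self.encoder(images)
              return {"depth": self.depth_head(feats), "pose": self.pose_head(feats)}

      model = SharedBackboneMultiHead()
      out = model(torch.randn(2, 3, 64, 64))
      print({k: v.shape for k, v in out.items()})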
  • 19
    Prompt Engineering Interactive Tutorial

    Anthropic's Interactive Prompt Engineering Tutorial

    Prompt-eng-interactive-tutorial is a comprehensive, hands-on tutorial that teaches the craft of prompt engineering with Claude through guided, executable lessons. It starts with the anatomy of a good prompt and moves into techniques that deliver the “80/20” gains—separating instructions from data, specifying schemas, and setting evaluation criteria. The course leans heavily on realistic failure modes (ambiguity, hallucination, brittle instructions) and shows how to iteratively debug prompts...
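
    One of those "80/20" techniques, separating instructions from data, can be sketched with the Anthropic Python SDK as follows. The model name is an assumption (substitute a current one), and ANTHROPIC_API_KEY must be set in the environment.

      # Instructions live in the system prompt; untrusted data is passed in the user turn.
      from anthropic import Anthropic

      client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

      document = "...customer email pasted here..."   # data, kept out of the instructions
      response = client.messages.create(
          model="claude-3-5-sonnet-latest",           # assumed model name
          max_tokens=300,
          system=(
              "You are a support triage assistant. Classify the email as BUG, BILLING, "
              "or OTHER and reply with exactly one of those words."
          ),
          messages=[{"role": "user", "content": f"<email>{document}</email>"}],
      )
      print(response.content[0].text)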
  • 20
    Courses (Anthropic)

    Anthropic's educational courses

    Anthropic’s courses repository is a growing collection of self-paced learning materials that teach practical AI skills using Claude and the Anthropic API. It’s organized as a sequence of hands-on courses—starting with API fundamentals and prompt engineering—so learners build capability step by step rather than in isolation. Each course mixes short readings with runnable notebooks and exercises, guiding you through concepts like model parameters, streaming, multimodal prompts, structured...
  • 21
    Tiktoken

    tiktoken is a fast BPE tokeniser for use with OpenAI's models

    tiktoken is a high-performance tokenizer library based on byte-pair encoding (BPE), designed for use with OpenAI’s models. It encodes and decodes between text and token IDs efficiently, with minimal overhead. Because tokenization is a fundamental step in preparing text for models, tiktoken is optimized for speed, memory, and correctness in model contexts (e.g. matching OpenAI’s internal tokenization). The repo supports multiple encodings (e.g. “cl100k_base”) and lets users switch encoding...
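
    A minimal round trip with the cl100k_base encoding mentioned above looks like this:

      # Encode text to token IDs and decode back; token counts help estimate context usage.
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")
      tokens = enc.encode("tiktoken turns text into integer token IDs")
      print(tokens)                 # a short list of ints
      print(enc.decode(tokens))     # recovers the original string
      print(len(tokens), "tokens")  # handy for estimating prompt cost / context usage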
  • 22
    Ring

    Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI

    Ring is a reasoning Mixture-of-Experts (MoE) large language model developed and open-sourced by inclusionAI. Built on (or derived from) the Ling model family, its design emphasizes reasoning quality, efficiency, and modular expert activation: the “flash” variant (Ring-flash-2.0) keeps inference cheap by activating only a subset of experts, and the training recipe applies reinforcement-learning-based reasoning optimization. Its architecture and training approach are tuned for efficient yet capable reasoning performance...
  • 23
    Code World Model (CWM)

    Research code artifacts for Code World Model (CWM)

    CWM (Code World Model) is a 32-billion-parameter open-weights language model developed by Meta to improve code generation and reasoning about programs. It is explicitly trained on execution traces, action-observation trajectories, and agentic interactions in controlled environments, so that it better captures how code, actions, and state interact over time. The repository provides inference code, reproducibility scripts, prompt guides, and more, along with model cards,...
  • 24
    HunyuanVideo-I2V

    A Customizable Image-to-Video Model based on HunyuanVideo

    HunyuanVideo-I2V is a customizable image-to-video generation framework from Tencent Hunyuan, built on their HunyuanVideo foundation. It extends video generation so that given a static reference image plus an optional prompt, it generates a video sequence that preserves the reference image’s identity (especially in the first frame) and allows stylized effects via LoRA adapters. The repository includes pretrained weights, inference and sampling scripts, training code for LoRA effects, and...
  • 25
    Tencent-Hunyuan-Large

    Open-source large language model family from Tencent Hunyuan

    Tencent-Hunyuan-Large is the flagship open-source large language model family from Tencent Hunyuan, offering both pre-trained and instruct (fine-tuned) variants. It is designed for long contexts, quantization-friendly deployment, and strong benchmark performance across general reasoning, mathematics, language understanding, and Chinese/multilingual tasks. It aims to deliver competitive capability with efficient deployment and inference, including FP8 quantization support to reduce memory usage...