Showing 573 open source projects for "code::block"

  • 1
    Generative AI for Beginners (Version 3)

    21 Lessons, Get Started Building with Generative AI

    Generative AI for Beginners is a 21-lesson course by Microsoft Cloud Advocates that teaches the fundamentals of building generative AI applications in a practical, project-oriented way. Lessons are split into “Learn” modules for core concepts and “Build” modules with hands-on code in Python and TypeScript, so you can jump in at any point that matches your goals. The course covers everything from model selection, prompt engineering, and chat/text/image app patterns to secure development practices and UX for AI. It also walks through modern application techniques such as function calling, RAG with vector databases, working with open source models, agents, fine-tuning, and using SLMs. ...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 2
    Agently 4

    Build GenAI applications quickly and easily

    Agently is a Python framework for building generative-AI (“GenAI”) applications; it focuses on enabling developers to orchestrate AI agents, workflows, and event-driven logic in a robust, reusable way. With Agently, one can define agents that call different models, chain tasks, trigger workflows based on events, and switch models with minimal code changes. It abstracts away boilerplate around model API calls, tool usage, prompt management, and workflow state. The project aims at production-grade GenAI application development rather than just one-off scripts — you’ll find examples of news gathering, agentic workflows, control systems, etc. It is licensed under Apache-2.0, allowing commercial use and modification. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    4M

    4M: Massively Multimodal Masked Modeling

    ...The same model family can classify, segment, detect, caption, and even generate images, with a single interface for both discriminative and generative use. The repository releases code and models for multiple variants (e.g., 4M-7 and 4M-21), emphasizing transfer to unseen tasks and modalities. Training/inference configs and issues discuss things like depth tokenizers, input masks for generation, and CUDA build questions, signaling active research iteration. The design leans into flexibility and steerability, so prompts and masks can shape behavior without bespoke heads per task. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    MGIE

    Guiding Instruction-based Image Editing via Multimodal Large Language Models

    ...The project focuses on making edits explainable and controllable: the model interprets text guidance, reasons over image content, and outputs edits aligned with user intent. It’s positioned as an ICLR 2024 Spotlight work, with code and references that show how to connect language planning to concrete image operations. This bridges a gap between free-form prompts and precise edits by letting users describe “what” and “where” in everyday language. The repo includes instructions, examples, and links that situate MGIE within Apple’s broader line of multimodal research. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    Avalanche

    End-to-End Library for Continual Learning based on PyTorch

    Avalanche is an end-to-end continual learning library based on PyTorch, born within ContinualAI with the goal of providing a shared, collaborative, open-source (MIT-licensed) codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms. Avalanche can help continual learning researchers in several ways. Its benchmarks module maintains a uniform API for data handling, mostly generating a stream of data from one or more datasets, and it contains all the major... A minimal training-loop sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
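
    A minimal continual-learning loop with Avalanche, sketched under the assumption of a recent release (module paths such as avalanche.training.supervised have moved between versions); SplitMNIST and the Naive strategy are standard library components used here only as an example.

    ```python
    from torch.nn import CrossEntropyLoss
    from torch.optim import SGD
    from avalanche.benchmarks.classic import SplitMNIST
    from avalanche.models import SimpleMLP
    from avalanche.training.supervised import Naive  # older releases: avalanche.training.strategies

    # Benchmark: MNIST split into 5 experiences, each introducing new classes.
    benchmark = SplitMNIST(n_experiences=5)

    model = SimpleMLP(num_classes=benchmark.n_classes)
    strategy = Naive(
        model,
        SGD(model.parameters(), lr=0.001, momentum=0.9),
        CrossEntropyLoss(),
        train_mb_size=32,
        train_epochs=1,
        eval_mb_size=32,
    )

    # Train on the stream one experience at a time, evaluating on the full test stream.
    for experience in benchmark.train_stream:
        strategy.train(experience)
        strategy.eval(benchmark.test_stream)
    ```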
  • 6
    IVY

    The Unified Machine Learning Framework

    Take any code that you'd like to include: for example, an existing TensorFlow model, plus some useful functions from the PyTorch and NumPy libraries. Choose any framework for writing your higher-level pipeline, including data loading, distributed training, analytics, logging, visualization, etc. Then choose any backend framework to run this entire pipeline under the hood. A short backend-switching sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
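
    A small sketch of the backend-switching idea, assuming Ivy's documented ivy.set_backend entry point and functional API (exact function names have shifted across Ivy releases); the normalize function is illustrative, not taken from the project.

    ```python
    import ivy

    def normalize(x):
        # Framework-agnostic code: ivy dispatches to whichever backend is currently set.
        return (x - ivy.mean(x)) / ivy.std(x)

    # Run the same function on top of different backend frameworks.
    for backend in ("numpy", "torch"):
        ivy.set_backend(backend)              # pick the framework used under the hood
        x = ivy.array([1.0, 2.0, 3.0, 4.0])
        print(backend, normalize(x))
    ```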
  • 7
    NoneBot

    Asynchronous multi-platform robot framework written in Python

    ...It supports multiple platforms and multiple event response methods, with an asynchronous-first design for better operational efficiency. A simple, clear dependency injection system and built-in dependency functions reduce user code. NoneBot2 is a modern, cross-platform, and extensible Python chatbot framework: built on Python's type annotations and asynchronous features, it provides convenient and flexible support for your needs. NoneBot2 is written on Python asyncio and offers a degree of compatibility with synchronous functions on top of the asynchronous mechanism. ... A minimal command-handler sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
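
    A minimal event-responder sketch in the style of the NoneBot2 documentation, assuming the on_command matcher and CommandArg dependency injection shown in the official guide; driver and adapter registration are omitted.

    ```python
    from nonebot import on_command
    from nonebot.adapters import Message
    from nonebot.params import CommandArg

    # Matcher: responds to messages that start with the "echo" command.
    echo = on_command("echo")

    @echo.handle()
    async def handle_echo(args: Message = CommandArg()):
        # CommandArg() is injected by NoneBot's dependency system.
        text = args.extract_plain_text().strip()
        await echo.finish(text or "Nothing to echo.")
    ```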
  • 8
    Z80-μLM

    Z80-μLM is a 2-bit quantized language model

    ...A key deliverable is producing CP/M-compatible .COM binaries, enabling a genuinely vintage “chat with your computer” experience on real hardware or accurate emulators. The project sits at the intersection of machine learning and systems constraints, showing how model architecture, quantization, and inference code generation can be adapted to extreme memory and compute limits. It also functions as an educational reference for how to reduce inference to operations that fit an old-school instruction set and runtime environment.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 9
    ComfyUI-LTXVideo

    LTX-Video Support for ComfyUI

    ComfyUI-LTXVideo is a bridge between ComfyUI’s node-based generative workflow environment and the LTX-Video multimedia processing framework, enabling creators to orchestrate complex video tasks within a visual graph paradigm. Instead of writing code to apply effects, transitions, edits, and data flows, users can assemble nodes that represent video inputs, transformations, and outputs, letting them prototype and automate video production pipelines visually. This integration empowers non-programmers and rapid-iteration teams to harness the performance of LTX-Video while maintaining the clarity and flexibility of a dataflow graph model. ...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 10
    ChatGLM-6B

    ChatGLM-6B: An Open Bilingual Dialogue Language Model

    ChatGLM-6B is an open bilingual (Chinese + English) conversational language model based on the GLM architecture, with approximately 6.2 billion parameters. The project provides inference code, demos (command line, web, API), quantization support for lower-memory deployment, and tools for finetuning (e.g., via P-Tuning v2). It is optimized for dialogue and question answering, balancing performance against deployability on consumer hardware. Quantized inference (INT4, INT8) reduces GPU memory requirements. ... A minimal inference sketch follows this entry.
    Downloads: 3 This Week
    Last Update:
    See Project
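
    A minimal inference sketch following the usage documented for ChatGLM-6B on Hugging Face (trust_remote_code is required because the model ships custom code); the half-precision CUDA placement assumes a GPU with enough memory, and the quantized variants can be loaded instead.

    ```python
    from transformers import AutoModel, AutoTokenizer

    model_name = "THUDM/chatglm-6b"
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    model = AutoModel.from_pretrained(model_name, trust_remote_code=True).half().cuda()
    model = model.eval()

    # model.chat() keeps a running history so follow-up questions stay in context.
    response, history = model.chat(tokenizer, "Hello, what can you do?", history=[])
    print(response)
    response, history = model.chat(tokenizer, "Summarize that in one sentence.", history=history)
    print(response)
    ```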
  • 11
    SAM 2

    The repository provides code for running inference with SAM 2

    SAM 2 is a next-generation version of the Segment Anything Model (SAM), designed to improve performance, generalization, and efficiency in promptable image segmentation tasks. It retains the core promptable interface—accepting points, boxes, or masks—but incorporates architectural and training enhancements to produce higher-fidelity masks, better boundary adherence, and robustness to complex scenes. The updated model is optimized for faster inference and lower memory use, enabling real-time... A point-prompted inference sketch follows this entry.
    Downloads: 4 This Week
    Last Update:
    See Project
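
    A point-prompted inference sketch based on the repository's documented image predictor; the config and checkpoint paths below are placeholders, and exact file names depend on which SAM 2 variant you download.

    ```python
    import numpy as np
    import torch
    from PIL import Image
    from sam2.build_sam import build_sam2
    from sam2.sam2_image_predictor import SAM2ImagePredictor

    # Placeholder paths: use the config/checkpoint pair for the variant you downloaded.
    sam2_model = build_sam2("configs/sam2.1/sam2.1_hiera_l.yaml", "checkpoints/sam2.1_hiera_large.pt")
    predictor = SAM2ImagePredictor(sam2_model)

    image = np.array(Image.open("example.jpg").convert("RGB"))

    with torch.inference_mode():
        predictor.set_image(image)
        # One positive point prompt (label 1) at pixel (x=500, y=375).
        masks, scores, _ = predictor.predict(
            point_coords=np.array([[500, 375]]),
            point_labels=np.array([1]),
        )
    print(masks.shape, scores)
    ```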
  • 12
    OpenLIT

    OpenLIT is an open-source LLM Observability tool

    ...It automatically collects LLM input and output metadata and monitors GPU performance for self-hosted LLMs. OpenLIT makes integrating observability into GenAI projects effortless with just a single line of code. Whether you're working with popular LLM providers such as OpenAI and Hugging Face, or leveraging vector databases like ChromaDB, OpenLIT ensures your applications are monitored seamlessly, providing critical insights including GPU performance stats for self-hosted LLMs to improve performance and reliability. This project proudly follows the Semantic Conventions of the OpenTelemetry community, consistently updating to align with the latest standards in observability. A one-line instrumentation sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
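
    A sketch of the single-line instrumentation described above, assuming the openlit Python SDK's documented openlit.init() entry point and a standard OpenAI client call; the OTLP endpoint value is a placeholder for wherever your OpenLIT/OpenTelemetry collector is running.

    ```python
    import openlit
    from openai import OpenAI

    # One line of instrumentation: auto-captures LLM inputs, outputs, and metadata.
    openlit.init(otlp_endpoint="http://127.0.0.1:4318")  # placeholder collector endpoint

    client = OpenAI()
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What is observability?"}],
    )
    print(completion.choices[0].message.content)
    ```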
  • 13
    openbench

    Provider-agnostic, open-source evaluation infrastructure

    openbench is an open-source, provider-agnostic evaluation infrastructure designed to run standardized, reproducible benchmarks on large language models (LLMs), enabling fair comparison across different model providers. It bundles dozens of evaluation suites — covering knowledge, reasoning, math, code, science, reading comprehension, long-context recall, graph reasoning, and more — so users don’t need to assemble disparate datasets themselves. With a simple CLI interface (e.g. bench eval <benchmark> --model <model-id>), you can quickly evaluate any model supported by Groq or other providers (OpenAI, Anthropic, HuggingFace, local models, etc.). openbench also supports private/local evaluations: you can integrate your own custom benchmarks or data (e.g. internal test suites, domain-specific tasks) to evaluate models in a privacy-preserving way.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    OpenAGI

    When LLM Meets Domain Experts

    ...It provides a structured Python framework, pyopenagi, for defining agents as modular units that encapsulate execution logic, configuration, and dependency metadata. Agents are organized in a well-defined folder structure that includes code (agent.py), configuration (config.json), and extra requirements (meta_requirements.txt), which makes them easy to package, share, and reuse. The project includes tooling for registering agents with AIOS by uploading them via a command-line interface, enforcing a consistent naming scheme that matches the local folder layout. A companion tooling layer lets agents call external tools described in the tools.md documentation, enabling them to orchestrate APIs, retrieval pipelines, and other utilities in response to LLM decisions.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    OSS-Fuzz Gen

    LLM powered fuzzing via OSS-Fuzz

    OSS-Fuzz-Gen is a companion project that helps automatically create or improve fuzz targets for open-source codebases, aiming to increase coverage in OSS-Fuzz with minimal maintainer effort. It analyses a library’s APIs, examples, and tests to propose harnesses that exercise parsers, decoders, or protocol handlers—precisely the code where fuzzing pays off. The system integrates with modern LLM-assisted workflows to draft harness code and then iterates based on build errors or low coverage signals. Importantly, it aligns with OSS-Fuzz conventions, generating corpus seeds, build rules, and sanitizer settings so projects can plug in quickly. Reports highlight what functions were targeted, how coverage evolved, and where manual hints could unlock more paths. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    Qwen3 Embedding

    Designed for text embedding and ranking tasks

    ...It achieves state-of-the-art performance on benchmarks like MTEB (Multilingual Text Embedding Benchmark) and supports instruction-aware embedding (i.e., embedding task instructions along with queries) and flexible embedding/vector dimension definitions. It is meant for tasks such as text retrieval, classification, clustering, bitext mining, and code retrieval. A short retrieval sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
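
    A retrieval-style sketch using sentence-transformers, following the pattern shown on the Qwen3-Embedding model cards; the model ID and the prompt_name="query" instruction hook are assumptions based on that documentation.

    ```python
    from sentence_transformers import SentenceTransformer

    # Smallest Qwen3 embedding model; larger variants follow the same interface.
    model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

    queries = ["How do I reset my password?"]
    documents = [
        "To reset your password, open Settings and choose Security.",
        "Our office is closed on public holidays.",
    ]

    # Queries are embedded with an instruction-aware prompt; documents use the default.
    query_emb = model.encode(queries, prompt_name="query")
    doc_emb = model.encode(documents)

    # Similarity matrix: rows are queries, columns are documents.
    print(model.similarity(query_emb, doc_emb))
    ```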
  • 17
    Diffusers

    State-of-the-art diffusion models for image and audio generation

    ...Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions. State-of-the-art diffusion pipelines that can be run in inference with just a few lines of code. Interchangeable noise schedulers for different diffusion speeds and output quality. Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems. We recommend installing Diffusers in a virtual environment from PyPI or Conda. For more details about installing PyTorch and Flax, please refer to their official documentation. A few-lines-of-code inference sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
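
    A few-lines-of-code inference sketch using a standard text-to-image pipeline; the checkpoint ID is one commonly used example and any diffusers-compatible model can be substituted.

    ```python
    import torch
    from diffusers import DiffusionPipeline

    # Load a pretrained text-to-image pipeline and move it to the GPU.
    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
    )
    pipe.to("cuda")

    image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
    image.save("lighthouse.png")
    ```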
  • 18
    Petastorm

    Petastorm library enables single machine or distributed training

    ...It can also be used from pure Python code. A dataset created using Petastorm is stored in Apache Parquet format. On top of a Parquet schema, Petastorm also stores higher-level schema information that makes multidimensional arrays a native part of a Petastorm dataset. Petastorm supports extensible data codecs, which let a user apply one of the standard data compressions (JPEG, PNG) or implement their own. A pure-Python read sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
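
    A pure-Python read sketch using Petastorm's documented make_reader; the dataset URL is a placeholder and must point at a dataset that was written with Petastorm (i.e., Parquet plus the extra schema metadata described above).

    ```python
    from petastorm import make_reader

    # Placeholder URL: file://, hdfs://, or s3:// paths to a Petastorm dataset all work.
    dataset_url = "file:///tmp/hello_world_dataset"

    with make_reader(dataset_url) as reader:
        for row in reader:
            # Each row is a named tuple; array fields come back as NumPy arrays.
            print(row)
            break
    ```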
  • 19
    Vision Transformer Pytorch

    Implementation of Vision Transformer, a simple way to achieve SOTA

    ...It breaks down the model into patch embedding, positional encoding, multi-head self-attention, feed-forward blocks, and a classification head so you can understand each component in isolation. The code is intentionally compact and modular, which makes it easy to tinker with hyperparameters, depth, width, and attention dimensions. Because it stays close to vanilla PyTorch, you can integrate custom datasets and training loops without framework lock-in. It's widely used as an educational reference for people learning transformers in vision and as a lightweight baseline for research prototypes. ... A compact PyTorch sketch of these components follows this entry.
    Downloads: 3 This Week
    Last Update:
    See Project
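
    To make the component breakdown concrete, here is a compact patch-embedding plus encoder sketch in plain PyTorch; it mirrors the pieces described above (patch embedding, class token, positional encoding, transformer blocks, classification head) but is illustrative rather than the repository's own code.

    ```python
    import torch
    import torch.nn as nn

    class TinyViT(nn.Module):
        def __init__(self, image_size=32, patch_size=4, dim=64, depth=4, heads=4, num_classes=10):
            super().__init__()
            num_patches = (image_size // patch_size) ** 2
            # Patch embedding: a strided convolution turns each patch into one token.
            self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
            self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
            self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
            encoder_layer = nn.TransformerEncoderLayer(
                d_model=dim, nhead=heads, dim_feedforward=dim * 4, batch_first=True
            )
            self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
            self.head = nn.Linear(dim, num_classes)

        def forward(self, x):
            x = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, num_patches, dim)
            cls = self.cls_token.expand(x.size(0), -1, -1)
            x = torch.cat([cls, x], dim=1) + self.pos_embed      # prepend class token
            x = self.encoder(x)
            return self.head(x[:, 0])                            # classify from the class token

    logits = TinyViT()(torch.randn(2, 3, 32, 32))
    print(logits.shape)  # torch.Size([2, 10])
    ```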
  • 20
    OpenFold

    Trainable, memory-efficient, and GPU-friendly PyTorch reproduction

    OpenFold carefully reproduces (almost) all of the features of the original open source inference code (v2.0.1). The sole exception is model ensembling, which fared poorly in DeepMind's own ablation testing and is being phased out in future DeepMind experiments. It is omitted here for the sake of reducing clutter. In cases where the Nature paper differs from the source, we always defer to the latter. OpenFold is trainable in full precision, half precision, or bfloat16 with or without DeepSpeed, and we've trained it from scratch, matching the performance of the original. ...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 21
    AI Runner

    Offline inference engine for art, real-time voice conversations

    ...It is implemented as a desktop-oriented Python application and emphasizes privacy and self-hosting, allowing users to work with text-to-speech, speech-to-text, text-to-image and multimodal models without sending data to external services. At the core of its LLM stack is a mode-based architecture with specialized “modes” such as Author, Code, Research, QA and General, and a workflow manager that automatically routes user requests to the right agent based on the task. The project has a strong focus on developer ergonomics, with thorough development guidelines, environment configuration using .env variables, and a clear structure for tests, tools and agents.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 22
    GLM-4.6V

    GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning

    GLM-4.6V represents the latest generation of the GLM-V family and marks a major step forward in multimodal AI by combining advanced vision-language understanding with native “tool-call” capabilities, long-context reasoning, and strong generalization across domains. Unlike many vision-language models that treat images and text separately or require intermediate conversions, GLM-4.6V allows inputs such as images, screenshots or document pages directly as part of its reasoning pipeline — and...
    Downloads: 5 This Week
    Last Update:
    See Project
  • 23
    Archon

    The knowledge and task management backbone for AI coding assistants

    ...It acts as a backend (including an MCP server) that allows different AI coding tools and assistants to share the same structured context, knowledge base, and task lists, improving consistency, productivity, and collaboration across multi-agent interactions. Users can import documentation, project files, and external knowledge so that assistants like Claude Code, Cursor, or other LLM-powered tools work with up-to-date, project-specific context rather than relying on limited prompt memory. Archon’s UI and APIs are intended to streamline how developers interact with their agents, whether for exploratory coding, automated task execution, or integrated RAG workflows, helping reduce friction between manual coding tasks and AI-generated suggestions.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 24
    PPTAgent

    PPTAgent: Generating and Evaluating Presentations

    PPTAgent is a research system for generating and evaluating slide decks that goes beyond simple text-to-slides. It follows a two-stage, edit-based workflow: first it analyzes reference presentations to infer slide roles and structure, then it drafts an outline and iteratively performs editing actions to produce new slides. The project includes both the generation agent and an evaluation framework, PPTEval, to score content quality, design, and coherence. The repository highlights the EMNLP...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 25
    DeepSpeed

    Deep learning optimization library making distributed training easy

    DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. DeepSpeed delivers extreme-scale model training for everyone, from data scientists training on massive supercomputers to those training on low-end clusters or even on a single GPU. Using the current generation of GPU clusters with hundreds of devices, DeepSpeed's 3D parallelism can efficiently train deep learning models with trillions of parameters. With just a single GPU,... A minimal training-loop sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
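
    A minimal training-loop sketch following DeepSpeed's documented deepspeed.initialize pattern; the model and config dict are toy placeholders (older releases pass the dict as config_params), and real runs are normally launched with the deepspeed launcher so distributed settings are picked up automatically.

    ```python
    import torch
    import deepspeed

    # Toy model and config; a real config would enable ZeRO stages, fp16/bf16, etc.
    model = torch.nn.Linear(512, 10)
    ds_config = {
        "train_batch_size": 8,
        "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    }

    model_engine, optimizer, _, _ = deepspeed.initialize(
        model=model,
        model_parameters=model.parameters(),
        config=ds_config,
    )

    for _ in range(10):
        x = torch.randn(8, 512, device=model_engine.device)
        y = torch.randint(0, 10, (8,), device=model_engine.device)
        loss = torch.nn.functional.cross_entropy(model_engine(x), y)
        model_engine.backward(loss)   # DeepSpeed handles gradient scaling/accumulation
        model_engine.step()
    ```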