Showing 133 open source projects for "layer"

  • 1
    Laravel Boost

    Laravel-focused MCP server for augmenting AI powered local development

    ...It’s designed to fit naturally into existing projects, supporting current Laravel releases and modern PHP runtimes with minimal setup. Rather than trying to replace your editor or framework, Boost acts like an intelligent layer that understands Laravel’s conventions and reduces the “explain my app to the AI” friction.
    Downloads: 9 This Week
    Last Update:
    See Project
  • 2
    DeepCode

    DeepCode: Open Agentic Coding

    ...It positions itself as an “open agentic coding” system that can handle tasks like paper-to-code reproduction, frontend generation, and backend implementation by decomposing problems into structured steps and coordinating specialized agents. The system description highlights an orchestration layer that plans, assigns subtasks, and adapts strategies as complexity changes, rather than relying on a single monolithic prompt. It also describes document parsing capabilities aimed at extracting algorithmic and mathematical details from technical materials, translating them into implementable specifications and code.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 3
    x-transformers

    A simple but complete full-attention transformer

    A simple but complete full-attention transformer with a set of promising experimental features from various papers. One paper proposes adding learned memory key/values prior to attending; its authors were able to remove feedforwards altogether and attain performance similar to the original transformer. The maintainer has found that keeping the feedforwards and adding the memory key/values leads to even better performance. Another paper proposes adding learned tokens, akin to CLS tokens, named memory tokens, that are passed through...
    Downloads: 2 This Week
    Last Update:
    See Project
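    A minimal sketch of enabling the memory tokens and memory key/values mentioned above (hyperparameter values are illustrative, not recommendations):

        import torch
        from x_transformers import TransformerWrapper, Decoder

        model = TransformerWrapper(
            num_tokens=20000,
            max_seq_len=1024,
            num_memory_tokens=20,        # learned memory tokens, akin to CLS tokens
            attn_layers=Decoder(
                dim=512,
                depth=6,
                heads=8,
                attn_num_mem_kv=16,      # learned memory key/values prepended before attending
            ),
        )

        tokens = torch.randint(0, 20000, (1, 256))
        logits = model(tokens)           # shape: (1, 256, 20000)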
  • 4
    Wren Engine

    The Semantic Engine for Model Context Protocol (MCP)

    Wren Engine is a semantic engine designed to empower Model Context Protocol (MCP) clients and AI agents by providing accurate, contextual, and governed access to business data. It serves as a bridge between large language models (LLMs) and enterprise systems, facilitating seamless integration and interaction.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    Arch

    Arch is an intelligent prompt gateway. Engineered with (fast) LLMs

    Arch is an intelligent Layer 7 gateway designed to protect, observe, and personalize LLM applications (agents, assistants, co-pilots) with your APIs. Engineered with purpose-built LLMs, Arch handles the critical but undifferentiated work of handling and processing prompts: detecting and rejecting jailbreak attempts, intelligently calling "backend" APIs to fulfill the request expressed in a prompt, routing between upstream LLMs and providing disaster recovery across them, and centrally managing the observability of prompts and LLM interactions.
    Downloads: 1 This Week
    Last Update:
    See Project
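    A sketch of how application code might talk to an Arch gateway, assuming it exposes an OpenAI-compatible endpoint on localhost; the port, model alias, and key handling below are illustrative assumptions rather than documented values:

        from openai import OpenAI

        # point the standard OpenAI client at the (assumed) local Arch gateway instead of the provider
        client = OpenAI(base_url="http://localhost:12000/v1", api_key="handled-by-gateway")

        resp = client.chat.completions.create(
            model="gpt-4o",  # Arch applies guardrails and routing before the prompt reaches an upstream LLM
            messages=[{"role": "user", "content": "Summarize open incidents from the ticketing API."}],
        )
        print(resp.choices[0].message.content)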
  • 6
    DiffEqFlux.jl

    Pre-built implicit layer architectures with O(1) backprop, GPUs

    DiffEqFlux.jl is a Julia library that combines differential equations with neural networks, enabling the creation of neural differential equations (neural ODEs), universal differential equations, and physics-informed learning models. It serves as a bridge between the DifferentialEquations.jl and Flux.jl libraries, allowing for end-to-end differentiable simulations and model training in scientific machine learning. DiffEqFlux.jl is widely used for modeling dynamical systems with learnable...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    Keras Hub

    Pretrained model hub for Keras 3

    Keras Hub is a repository of pre-trained models for Keras 3, offering a collection of ready-to-use models for various machine-learning tasks. KerasHub is an extension of the core Keras API; KerasHub components are provided as Layer and Model implementations. If you are familiar with Keras, congratulations. You already understand most of KerasHub.
    Downloads: 0 This Week
    Last Update:
    See Project
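    A short sketch of the Layer/Model-style usage described above, assuming the CausalLM.from_preset entry point and the gpt2_base_en preset name (check the KerasHub catalog for exact presets):

        import keras_hub

        # load a pretrained causal language model from a preset (preset name assumed)
        lm = keras_hub.models.CausalLM.from_preset("gpt2_base_en")
        print(lm.generate("KerasHub models are built from reusable layers", max_length=64))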
  • 8
    CTranslate2

    Fast inference engine for Transformer models

    ...The execution is significantly faster and requires fewer resources than general-purpose deep learning frameworks on supported models and tasks, thanks to many advanced optimizations: layer fusion, padding removal, batch reordering, in-place operations, caching mechanisms, etc. The model serialization and computation support weights with reduced precision: 16-bit floating points (FP16), 16-bit integers (INT16), and 8-bit integers (INT8). The project supports x86-64 and AArch64/ARM64 processors and integrates multiple backends that are optimized for these platforms: Intel MKL, oneDNN, OpenBLAS, Ruy, and Apple Accelerate.
    Downloads: 3 This Week
    Last Update:
    See Project
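    A minimal inference sketch, assuming a model directory already produced by one of CTranslate2's converters (the path and tokens below are placeholders):

        import ctranslate2

        # load a converted translation model with 8-bit weights on CPU
        translator = ctranslate2.Translator("ende_ctranslate2/", device="cpu", compute_type="int8")

        # input must already be tokenized (e.g. with SentencePiece); these tokens are placeholders
        results = translator.translate_batch([["▁Hello", "▁world", "!"]])
        print(results[0].hypotheses[0])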
  • 9
    WrenAI

    Open-source SQL AI Agent for Text-to-SQL. Make Text2SQL Easy

    Wren AI is an open-source SQL AI agent that lets data teams get results and insights faster by asking business questions without writing SQL. Wren AI implements a semantic engine architecture to give the LLM context about your business; you can easily establish a logical presentation layer on top of your data schema that helps the LLM learn more about your business context. With Wren AI, you can capture metadata, schema, terminology, data relationships, and the logic behind calculations and aggregations in a "Modeling Definition Language" to generate accurate SQL queries with semantic context. When starting a new conversation in Wren AI, your question is used to find the most relevant tables. ...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 10
    imodelsX

    Interpretable prompting and models for NLP

    ...Find a natural-language prompt using input-gradients. Fit a better linear model using an LLM to extract embeddings. Fit better decision trees using an LLM to expand features. Finetune a single linear layer on top of LLM embeddings. Use these just like a scikit-learn model. During training, they fit better features via LLMs, but at test time they are extremely fast and completely transparent.
    Downloads: 0 This Week
    Last Update:
    See Project
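    A hypothetical sketch of the scikit-learn-style workflow the description promises; the estimator name LinearFinetuneClassifier is assumed, so check the imodelsX documentation for the exact class:

        from imodelsx import LinearFinetuneClassifier  # class name assumed

        texts = ["great documentation", "confusing error messages"]
        labels = [1, 0]

        clf = LinearFinetuneClassifier()   # finetunes a single linear layer over LLM embeddings
        clf.fit(texts, labels)
        print(clf.predict(["really solid developer experience"]))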
  • 11
    Docker MCP Gateway

    Docker mcp CLI plugin / MCP Gateway

    Docker’s MCP Gateway project is a Docker CLI plugin and supporting gateway system designed to run, manage, and securely expose MCP servers using container isolation. It underpins the MCP Toolkit experience in Docker Desktop, but it can also be used independently as a general-purpose MCP operational layer. The core idea is to treat MCP servers like containerized services, giving each server controlled privileges and a lifecycle you can inspect, enable/disable, and reset as needed. Instead of having each AI client manage its own MCP server configuration, the gateway provides a unified interface so multiple clients can connect consistently to the same configured tool surface. ...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 12
    Amazon Q Developer CLI

    Chat experience in your terminal

    ...It also integrates with common developer flows, offering autocompletion and step-by-step plans before running potentially destructive actions. The CLI targets macOS and Linux and is designed to coexist with standard tools rather than replace them, acting as a smart layer on top. Team-focused features center on repeatability and transparency so generated changes can be reviewed, amended, and committed like any other code.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 13
    Botonic

    Build chatbots and conversational experiences using React

    Botonic is a full-stack JavaScript framework to create chatbots and modern conversational apps that work on multiple platforms: web, mobile, and messaging apps (Messenger, WhatsApp, Telegram, etc.). Building modern applications on top of messaging apps like WhatsApp or Messenger is much more than creating simple text-based chatbots. Botonic is a full-stack serverless framework that combines the power of React and Tensorflow.js to create amazing experiences at the intersection of text and...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 14
    Sled

    Teleport Claude Code, Codex or Gemini CLI to your phone

    ...Although specific details in the repository are limited, the available descriptions indicate it functions as a local interface layer that abstracts development-agent workflows, bringing parts of modern assistant capabilities to phone or web UIs. The project resembles modern agent front ends where developers can test, iterate, and prompt their local models or backends without complex setup. The interface is lightweight and integrates into broader development stacks, and the repository's activity suggests ongoing maintenance under an MIT license with community engagement.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 15
    Context7 MCP

    Up-to-date code documentation for LLMs and AI code editors

    Context7 is a system that aims to inject fresh, version-specific documentation and code snippets into language model prompts, thereby avoiding reliance on outdated training data or hallucinated APIs. It’s designed to integrate with tools that support the Model Context Protocol (MCP), such as Cursor, Windsurf, and other LLM clients. When a user writes a prompt and appends something like “use context7,” the system detects the libraries or frameworks being asked about, fetches the latest...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 16
    Mastra

    The TypeScript AI agent framework

    Mastra is a TypeScript-first framework for building AI-powered applications and agents, designed to take projects from prototype to production on a modern JavaScript/TypeScript stack. It integrates cleanly with React, Next.js, and Node-based backends, but can also run as a standalone server, giving teams flexibility in how they deploy their AI logic. At its core, Mastra provides abstractions for agents, workflows, tools, memory, retrieval, and model routing, so developers can focus on...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 17
    RAGapp

    The easiest way to use Agentic RAG in any enterprise

    ...As simple to configure as OpenAI's custom GPTs, but deployable in your own cloud infrastructure using Docker. Built using LlamaIndex. By design, the RAGapp container itself doesn't come with any authentication layer; that is the task of an API gateway routing traffic to RAGapp, and this step depends heavily on your cloud provider and the services you use. For a pure Docker Compose environment, you can look at the project's RAGapp with management UI deployment.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    ChatGPT Clone

    ChatGPT interface with better UI

    ...The goal is to replicate the core chat UX—message history, streaming tokens, code blocks, and system prompts—while letting you plug in different provider APIs or local models. It showcases a clean separation between the web client and the message orchestration layer so you can experiment with prompts, roles, and memory strategies. The project is useful for prototyping assistants, documentation bots, and internal developer tools without committing to a specific vendor or UI framework. Configuration is kept simple so newcomers can get a working chat in minutes and then dial in features like authentication or multi-model routing. ...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 19
    CUTLASS

    CUDA Templates for Linear Algebra Subroutines

    CUTLASS is a collection of CUDA C++ template abstractions for implementing high-performance matrix-multiplication (GEMM) and related computations at all levels and scales within CUDA. It incorporates strategies for hierarchical decomposition and data movement similar to those used to implement cuBLAS and cuDNN. CUTLASS decomposes these "moving parts" into reusable, modular software components abstracted by C++ template classes. These thread-wide, warp-wide, block-wide, and device-wide...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 20
    Octelium

    A next-gen FOSS self-hosted unified zero trust secure access platform

    ...It positions itself as more than a typical VPN; it supports zero-trust network access (ZTNA), "BeyondCorp"-style access, API/AI gateway functionality, and even serves as a PaaS-like deployment surface. One of its key strengths is identity-based, application-layer (L7) aware control, meaning access decisions are made per request, based on context and policy rather than simple network-level allow/block rules. It supports both client-based (e.g., WireGuard/QUIC tunnels) and client-less access models, which makes it flexible for both human users and automated workloads. The project also emphasizes that it is fully self-hosted, with no hidden, locked "server-side" components, giving organizations greater ownership and control over access rather than relying on proprietary SaaS.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 21
    Axon

    Nx-powered Neural Networks

    Nx-powered Neural Networks for Elixir. Axon consists of the following components. Functional API – A low-level API of numerical definitions (defn) on which all other APIs build. Model Creation API – A high-level model creation API which manages model initialization and application. Optimization API – An API for creating and using first-order optimization techniques based on the Optax library. Training API – An API for quickly training models, inspired by PyTorch Ignite. Axon provides...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 22
    ktrain

    ktrain is a Python library that makes deep learning AI more accessible

    ktrain is a Python library that makes deep learning and AI more accessible and easier to apply. ktrain is a lightweight wrapper for the deep learning library TensorFlow Keras (and other libraries) to help build, train, and deploy neural networks and other machine learning models. Inspired by ML framework extensions like fastai and ludwig, ktrain is designed to make deep learning and AI more accessible and easier to apply for both newcomers and experienced practitioners. With only a few lines...
    Downloads: 1 This Week
    Last Update:
    See Project
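    A condensed sketch of ktrain's few-lines text-classification workflow, following the style of its tutorials (the data folder and class names are placeholders):

        import ktrain
        from ktrain import text

        # load and preprocess labeled documents from a folder (path is a placeholder)
        (x_train, y_train), (x_test, y_test), preproc = text.texts_from_folder(
            "data/reviews", classes=["pos", "neg"], preprocess_mode="bert")

        model = text.text_classifier("bert", train_data=(x_train, y_train), preproc=preproc)
        learner = ktrain.get_learner(model, train_data=(x_train, y_train),
                                     val_data=(x_test, y_test), batch_size=6)
        learner.fit_onecycle(2e-5, 1)      # one epoch with the 1cycle learning-rate policy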
  • 23
    Thinc

    A refreshing functional take on deep learning

    Thinc is a lightweight deep learning library that offers an elegant, type-checked, functional-programming API for composing models, with support for layers defined in other frameworks such as PyTorch, TensorFlow and MXNet. You can use Thinc as an interface layer, a standalone toolkit or a flexible way to develop new models. Previous versions of Thinc have been running quietly in production in thousands of companies, via both spaCy and Prodigy. We wrote the new version to let users compose, configure and deploy custom models built with their favorite framework. Switch between PyTorch, TensorFlow and MXNet models without changing your application, or even create mutant hybrids using zero-copy array interchange. ...
    Downloads: 1 This Week
    Last Update:
    See Project
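    A small sketch of the functional, composable API described above (layer sizes are arbitrary):

        import numpy
        from thinc.api import chain, Relu, Softmax

        # compose a small feed-forward classifier functionally
        model = chain(Relu(nO=64), Relu(nO=64), Softmax(nO=10))

        X = numpy.zeros((8, 32), dtype="float32")
        Y = numpy.zeros((8, 10), dtype="float32")
        model.initialize(X=X, Y=Y)             # infer missing dimensions from sample data

        Yh, backprop = model.begin_update(X)   # forward pass plus a callback for the backward pass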
  • 24
    GetProfile

    User profile and long-term memory for your AI agent

    GetProfile is a drop-in proxy layer that sits in front of your LLM provider to turn otherwise stateless chat requests into a system with persistent user profiles and long-term memory. Instead of forcing you to redesign your application, you route your model calls through GetProfile and it captures conversation context automatically as traffic flows. It then extracts structured traits and “memories” from those conversations, stores them, and injects the most relevant profile context back into future prompts so responses stay consistent and personalised over time. ...
    Downloads: 0 This Week
    Last Update:
    See Project
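    A sketch of the drop-in proxy pattern described above, assuming GetProfile exposes an OpenAI-compatible endpoint; the base URL and key handling are illustrative assumptions:

        from openai import OpenAI

        # route requests through the (assumed) GetProfile proxy instead of calling the provider directly
        client = OpenAI(base_url="http://localhost:8080/v1", api_key="YOUR_PROVIDER_KEY")

        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "I prefer answers in metric units."}],
        )
        print(resp.choices[0].message.content)  # later calls can be enriched with the stored profile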
  • 25
    Semantic Router

    Superfast AI decision making and processing of multi-modal data

    Semantic Router is a superfast decision-making layer for your LLMs and agents. Rather than waiting for slow, unreliable LLM generations to make tool-use or safety decisions, we use the magic of semantic vector space — routing our requests using semantic meaning. Combining LLMs with deterministic rules means we can be confident that our AI systems behave as intended. Cramming agent tools into the limited context window is expensive, slow, and fundamentally limited.
    Downloads: 0 This Week
    Last Update:
    See Project
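    A minimal routing sketch in the style of the project's examples; the encoder choice and route names are assumptions, and class names may differ across versions (newer releases rename RouteLayer):

        from semantic_router import Route
        from semantic_router.encoders import HuggingFaceEncoder
        from semantic_router.layer import RouteLayer

        smalltalk = Route(name="smalltalk", utterances=["how are you?", "nice weather today"])
        billing = Route(name="billing", utterances=["update my card", "why was I charged twice?"])

        router = RouteLayer(encoder=HuggingFaceEncoder(), routes=[smalltalk, billing])
        print(router("I think I was double billed").name)   # expected to resolve to "billing"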