Open Source TypeScript Large Language Models (LLM) - Page 2

Browse free open source TypeScript Large Language Models (LLM) and projects below. Use the toggles on the left to filter open source TypeScript Large Language Models (LLM) by OS, license, language, programming language, and project status.

  • 1
    Code Review GPT

    Your personal code reviewer powered by LLMs

    Code Review GPT uses Large Language Models to review code in your CI/CD pipeline. It helps streamline the code review process by providing feedback on code that may have issues or areas for improvement. Code Review GPT is in alpha and should be used for fun only. It may provide useful feedback but please check any suggestions thoroughly.
    Downloads: 1 This Week
  • 2
    DocStrange

Extract and convert data from any document: images, PDFs, Word docs

    DocStrange is an open-source document understanding and extraction library designed to convert complex files into structured, LLM-ready outputs such as Markdown, JSON, CSV, and HTML. Developed by Nanonets, the project combines OCR, layout detection, table understanding, and structured extraction into one end-to-end pipeline, which reduces the need to stitch together multiple separate services. It is built for developers who need high-quality parsing from scans, photos, PDFs, office files, and other document sources while preserving privacy and control over the processing flow. One of its key differentiators is deployment flexibility: it offers a cloud API for managed usage as well as a fully private offline mode that runs locally on a GPU. The platform also supports synchronous extraction, streaming responses, and asynchronous processing for larger documents, which makes it adaptable to both interactive workflows and heavier back-end pipelines.
    Downloads: 1 This Week
  • 3
    Gemini Fullstack LangGraph Quickstart

Get started building fullstack agents using Gemini 2.5 and LangGraph

gemini-fullstack-langgraph-quickstart is a fullstack reference application from Google DeepMind’s Gemini team that demonstrates how to build a research-augmented conversational AI system using LangGraph and Google Gemini models. The project features a React (Vite) frontend and a LangGraph/FastAPI backend designed to work together seamlessly for real-time research and reasoning tasks. The backend agent dynamically generates search queries based on user input, retrieves information via the Google Search API, and performs reflective reasoning to identify knowledge gaps. It then iteratively refines its search until it produces a comprehensive, well-cited answer synthesized by the Gemini model. The repository provides both a browser-based chat interface and a command-line script (cli_research.py) for executing research queries directly. For production deployment, the backend integrates with Redis and PostgreSQL to manage persistent memory, streaming outputs, and background task coordination.
    Downloads: 1 This Week
  • 4
    Hollama

    A minimal LLM chat app that runs entirely in your browser

    Hollama is a lightweight open-source chat application designed to run entirely within the browser while interacting with large language model servers. The project provides a minimal but powerful user interface for communicating with local or remote LLMs, including servers powered by Ollama or OpenAI-compatible APIs. Because the application runs as a static web interface, it does not require complex backend infrastructure and can be easily deployed or self-hosted. Hollama supports both text-based and multimodal interactions, allowing users to work with models that process images as well as text. The interface includes features for editing prompts, retrying responses, copying generated code snippets, and storing conversation history locally within the browser. Mathematical expressions can be rendered using KaTeX, and Markdown formatting allows code blocks and structured outputs to appear clearly within conversations.
    Downloads: 1 This Week
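Clients like Hollama talk to a local Ollama server over plain HTTP. The sketch below shows that exchange using Ollama's REST API (POST /api/chat); the default port 11434 and the model name "llama3.2" are placeholders, and the helper names are illustrative, not Hollama's own code.

```typescript
// Minimal sketch of a browser-style chat client talking to Ollama.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the JSON body for a non-streaming chat request.
function buildChatRequest(model: string, messages: ChatMessage[]) {
  return { model, messages, stream: false };
}

// Send the request to a running Ollama server (not invoked here).
async function chat(baseUrl: string, model: string, messages: ChatMessage[]) {
  const res = await fetch(`${baseUrl}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(model, messages)),
  });
  const data = await res.json();
  // Ollama returns the assistant reply under `message.content`.
  return data.message?.content as string | undefined;
}

// Usage (requires a running server):
// chat("http://localhost:11434", "llama3.2", [{ role: "user", content: "Hi" }]);
```

Because the request body is built separately from the network call, the same shape works whether the client streams token-by-token (`stream: true`) or waits for a single response.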
  • 5
    Lobe Icons

    Brings AI/LLM brand logos to your React & React Native apps

    Lobe Icons is an open-source icon library designed to provide developers with a comprehensive collection of logos and visual assets representing popular artificial intelligence platforms, language models, and related technologies. The project focuses on making it easy for developers to include recognizable AI brand icons in applications such as dashboards, AI tools, documentation sites, or developer portals. The library includes icons for a wide range of AI providers and models, allowing developers to visually represent integrations with tools such as large language models, AI APIs, and machine learning platforms. These icons are distributed in multiple formats including SVG, PNG, and WebP so they can be used in both web and mobile applications.
    Downloads: 1 This Week
  • 6
    lms

    LM Studio CLI

lms is a command-line interface tool designed to interact with and manage local large language models through the LM Studio ecosystem. The tool allows developers to control model execution directly from the terminal, providing programmatic access to features that are otherwise available through graphical interfaces. Through the CLI, users can load and unload models, start or stop local inference servers, and inspect the inputs and outputs generated by language models. lms is built using the LM Studio JavaScript SDK and integrates tightly with the LM Studio runtime environment. The interface is designed to simplify automation workflows and scripting tasks related to local AI deployment. By exposing model management capabilities through command-line commands, the tool enables developers to integrate local LLM operations into development pipelines and backend services. As a result, lms acts as a bridge between interactive local AI tools and automated software development workflows.
    Downloads: 1 This Week
  • 7
    promptfoo

    Evaluate and compare LLM outputs, catch regressions, improve prompts

    Ensure high-quality LLM outputs with automatic evals. Use a representative sample of user inputs to reduce subjectivity when tuning prompts. Use built-in metrics, LLM-graded evals, or define your own custom metrics. Compare prompts and model outputs side-by-side, or integrate the library into your existing test/CI workflow. Use OpenAI, Anthropic, and open-source models like Llama and Vicuna, or integrate custom API providers for any LLM API.
    Downloads: 1 This Week
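promptfoo's custom metrics can be plain JavaScript/TypeScript functions that grade a model's output. The sketch below follows promptfoo's documented convention of returning a grading result with pass, score, and reason fields, but the exact signature should be checked against the project's docs; the metric itself ("no apology boilerplate") is a hypothetical example.

```typescript
// Hedged sketch of a custom promptfoo assertion module.
interface GradingResult {
  pass: boolean;
  score: number;
  reason: string;
}

// Fail any completion that leaks a stock apology phrase.
export default function noApologies(output: string): GradingResult {
  const leaked = /as an ai language model/i.test(output);
  return {
    pass: !leaked,
    score: leaked ? 0 : 1,
    reason: leaked ? "Output contains boilerplate apology" : "OK",
  };
}
```

A function like this would be referenced from the eval config so it runs against every prompt/model combination alongside the built-in metrics.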
  • 8
    wllama

    WebAssembly binding for llama.cpp - Enabling on-browser LLM inference

wllama is a WebAssembly-based library that enables large language model inference directly inside a web browser. Built as a binding for the llama.cpp inference engine, the project allows developers to run models locally without requiring a server backend or dedicated GPU hardware. The library leverages WebAssembly SIMD capabilities to achieve efficient execution within modern browsers while maintaining compatibility across platforms. By running models locally on the user’s device, wllama enables privacy-preserving AI applications that do not require sending data to remote servers. The framework provides both high-level APIs for common tasks such as text generation and embeddings, as well as low-level APIs that expose tokenization, sampling controls, and model state management.
    Downloads: 1 This Week
  • 9
    AI as Workspace

    An elegant AI chat client. Full-featured, lightweight

AI as Workspace (AIaW) is an open-source AI client application that provides a unified interface for interacting with multiple large language models and AI tools within a single workspace environment. The platform is designed as a lightweight yet powerful desktop or web application that organizes AI interactions through structured workspaces. Instead of managing individual chat sessions separately, users can group conversations, artifacts, and tasks within customizable workspaces that support different projects or contexts. AIaW supports multiple AI providers and models through a flexible interface compatible with common API formats used by services such as OpenAI-style endpoints. The application also includes a plugin system that allows developers to extend the platform with additional capabilities such as automation tools, integrations, or custom AI utilities.
    Downloads: 0 This Week
  • 10
    AWS GenAI LLM Chatbot

    A modular and comprehensive solution to deploy a Multi-LLM

    AWS GenAI LLM Chatbot is an enterprise-ready reference solution for deploying a secure, feature-rich generative AI chatbot on AWS with retrieval-augmented generation capabilities. The project is built as a modular blueprint that helps organizations stand up a production-oriented chat experience rather than a simple demo, combining model access, knowledge retrieval, storage, security, and user interface components into one deployable system. It supports multiple model providers and endpoints, giving teams flexibility to work with Amazon Bedrock, SageMaker-hosted models, and additional model access patterns through related integrations. A major part of the design is its RAG layer, which enables the chatbot to pull contextual knowledge from connected data sources so responses can be grounded in enterprise content rather than relying only on model memory.
    Downloads: 0 This Week
  • 11
    AgentDock

    Build Anything with AI Agents

    AgentDock is an open-source framework designed to simplify the development, orchestration, and deployment of AI agents capable of executing complex automated workflows. The platform provides a backend-first architecture that allows developers to create sophisticated agent systems while maintaining flexibility in model providers and infrastructure choices. It consists of two main components: a core framework that handles agent logic and orchestration, and a reference client application that demonstrates how agents can be deployed and interacted with through a web interface. Built primarily with TypeScript and modern web technologies, the framework emphasizes extensibility and predictable behavior through configurable determinism. AgentDock also supports integration with multiple large language model providers, enabling developers to combine reasoning models, APIs, and external tools within a unified automation pipeline.
    Downloads: 0 This Week
  • 12
    AutoGPT.js

    Auto-GPT on the browser

    AutoGPT.js is an open-source project that brings autonomous AI agent capabilities similar to AutoGPT directly into the browser environment. The system allows users to run an AI agent capable of performing tasks such as generating code, searching the web, and interacting with files on the local computer. Unlike traditional AutoGPT implementations that require server infrastructure, AutoGPT.js is designed to run primarily in the browser, making it easier to deploy and experiment with autonomous agents. The platform uses web APIs and language model integrations to give the agent the ability to plan tasks, execute commands, and store short-term memory during operations. Developers can also configure the system to connect to different language model APIs and adjust parameters such as temperature or prompt configuration. The project demonstrates how autonomous AI agents can operate within modern web environments while maintaining user privacy and accessibility.
    Downloads: 0 This Week
  • 13
    Bedrock Chat

    AWS-native chatbot using Bedrock

    Bedrock Chat is a mirrored version of an open-source project that provides a conversational interface for interacting with large language models and AI services through a chat-style application. The project typically focuses on delivering a user interface that allows individuals or teams to communicate with AI models, manage conversations, and experiment with prompts and responses. Implementations like Bedrock Chat often integrate with model hosting platforms or APIs that provide access to generative AI systems. The mirror hosted on SourceForge exists primarily to ensure long-term accessibility of the source code and provide alternative download options for developers. Chat platforms of this type frequently include tools for maintaining conversation history, managing prompts, and connecting to multiple model providers. By combining a chat interface with backend AI services, these systems allow users to explore generative AI capabilities in a structured environment.
    Downloads: 0 This Week
  • 14
    BrowserNode

    Make websites accessible for AI agents. Automate tasks online

    Browsernode is an open-source TypeScript framework that allows AI agents to interact directly with web browsers in order to automate tasks and gather information from websites. The project acts as a bridge between AI models and browser automation tools, enabling language models to control web pages programmatically. Built as an implementation compatible with the Browser-use ecosystem, Browsernode allows agents to perform actions such as navigating pages, extracting information, filling forms, or interacting with dynamic web interfaces. The system integrates with Playwright to control Chromium-based browsers and execute automation scripts in a reliable environment. Developers can configure the framework to connect to different language model providers so that AI agents can interpret instructions and decide which browser actions to perform.
    Downloads: 0 This Week
  • 15
    BuildingAI

    Build your own AI application system for free

    BuildingAI is an open-source project focused on applying artificial intelligence techniques to architectural design and building information modeling workflows. The platform aims to bridge the gap between natural language interfaces and building design tools by allowing AI systems to interpret user instructions and convert them into structured architectural operations. By combining generative AI capabilities with building data models, the system can assist with tasks such as design generation, spatial reasoning, and building component creation. The project is intended for architects, engineers, and developers exploring how AI can automate or augment design workflows in the architecture, engineering, and construction industries. It supports interactions where users describe building features, layouts, or modifications in natural language and the AI translates those instructions into actionable design operations.
    Downloads: 0 This Week
  • 16
    Byterover Cipher

Byterover Cipher is an open-source memory layer

    Cipher is an open-source infrastructure component designed to provide a persistent memory layer for AI coding agents and developer tools. The system captures contextual information about codebases, past interactions, and reasoning steps generated by AI assistants so that agents can maintain long-term context while generating code. By storing structured knowledge about programming concepts, project logic, and previous development sessions, Cipher allows AI agents to operate with improved awareness of the software environment they are working within. The framework integrates with multiple AI coding tools and development environments through the Model Context Protocol, enabling seamless interoperability between different agents and IDEs. Cipher also supports collaborative workflows by allowing teams to share AI-generated memories and insights across development environments.
    Downloads: 0 This Week
  • 17
    Chat UI

    The open source codebase powering HuggingChat

    Hugging Face Chat UI is an open-source web interface designed for interacting with large language models through a modern conversational interface. The project serves as the codebase behind HuggingChat and can be deployed locally or on cloud infrastructure to create customizable AI chat applications. Built with modern web technologies such as SvelteKit and backed by MongoDB for persistence, the interface provides a responsive environment for multi-turn conversations, file handling, and configuration management. Chat UI connects to any service that exposes an OpenAI-compatible API endpoint, allowing it to work with a wide range of models and inference providers. The platform supports advanced capabilities such as multimodal input, tool integration through Model Context Protocol servers, and intelligent routing that selects the most appropriate model for each request.
    Downloads: 0 This Week
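Because Chat UI targets OpenAI-compatible endpoints, any backend that accepts the standard chat-completions request shape can sit behind it. A minimal sketch of that request follows; the base URL, API key, and model name are placeholders, and the helper names are illustrative rather than part of Chat UI's codebase.

```typescript
// Build the standard OpenAI-style chat-completions body.
function buildCompletionRequest(model: string, userPrompt: string) {
  return {
    model,
    messages: [{ role: "user", content: userPrompt }],
  };
}

// POST to an OpenAI-compatible endpoint (not invoked here).
async function complete(
  baseUrl: string,
  apiKey: string,
  model: string,
  prompt: string,
) {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildCompletionRequest(model, prompt)),
  });
  const data = await res.json();
  // Compatible servers return completions under choices[0].message.content.
  return data.choices?.[0]?.message?.content as string | undefined;
}
```

Swapping providers then amounts to changing the base URL and model name, which is what makes a single front end like Chat UI portable across inference backends.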
  • 18
    Chat with GPT

    An open-source ChatGPT app with a voice

    Chat with GPT is an open-source conversational interface designed to provide an enhanced user experience for interacting with ChatGPT-style language models. The application serves as a customizable alternative client that allows users to connect to language model APIs using their own credentials. Built with modern web technologies such as React and TypeScript, the platform provides a responsive chat interface with advanced features for conversation management. Users can review past chat sessions, modify system prompts, and adjust model parameters such as temperature to control response creativity. The platform also integrates speech capabilities by connecting to text-to-speech systems and speech recognition engines, enabling voice-based conversations with the AI assistant. Additional features include message editing, response regeneration, and the ability to share conversations through public links.
    Downloads: 0 This Week
  • 19
    ChatGPT Admin Web

    ChatGPT WebUI

A ChatGPT web UI with user management and an admin backend. Deploy your own commercial ChatGPT web application for free.
    Downloads: 0 This Week
  • 20
    Deta Surf

    Personal AI Notebooks. Organize files & webpages and generate notes

    Surf is an open-source AI-driven development tool designed to simplify the process of building and experimenting with artificial intelligence applications. The platform provides a streamlined development environment where developers can test models, run experiments, and deploy small AI services with minimal infrastructure overhead. It focuses on simplicity and speed, allowing developers to prototype ideas quickly without managing complex cloud configurations. Surf integrates modern AI workflows such as prompt-based applications, lightweight APIs, and automated deployment pipelines. The platform is particularly useful for developers who want to experiment with AI models locally while maintaining the option to deploy them in production environments later. Its architecture is designed to minimize setup complexity while still supporting scalable application structures.
    Downloads: 0 This Week
  • 21
    DevDocs by CyberAGI

    Completely free, private, UI based Tech Documentation MCP server

    DevDocs is an open-source documentation server designed to provide developers with a private, structured interface for browsing and interacting with technical documentation using AI tools. The system functions as a Model Context Protocol (MCP) server that allows large language models and developer assistants to access technical documentation in a structured and efficient way. Instead of sending entire documents to a language model, DevDocs organizes documentation into sections so that only the most relevant portions are retrieved during a query. This approach reduces token usage and improves the accuracy of responses generated by AI coding assistants. The platform is designed to integrate easily with modern developer tools and AI environments such as Cursor, Cline, and Claude-based workflows. It includes a user interface that allows developers to browse documentation repositories and connect them to AI systems while keeping the data private.
    Downloads: 0 This Week
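Under the Model Context Protocol, clients and servers exchange JSON-RPC 2.0 messages, and a tool invocation uses the protocol's "tools/call" method. The sketch below shows that wire format; the tool name "search_docs" and its arguments are hypothetical examples, not DevDocs' actual API.

```typescript
// Sketch of an MCP tool-call message (JSON-RPC 2.0 on the wire).
let nextId = 1;

function buildToolCall(tool: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id: nextId++,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}

// A client would serialize this and send it to the MCP server
// over stdio or HTTP; "search_docs" is a hypothetical tool name.
const msg = buildToolCall("search_docs", { query: "retry semantics", limit: 3 });
// msg.method === "tools/call"
```

Structuring retrieval as discrete tool calls like this is what lets a server such as DevDocs return only the relevant documentation sections instead of whole documents, keeping token usage down.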
  • 22
    Empirical

    Test and evaluate LLMs and model configurations

    Empirical is the fastest way to test different LLMs and model configurations, across all the scenarios that matter for your application.
    Downloads: 0 This Week
  • 23
    Farfalle

    AI search engine - self-host with local or cloud LLMs

    Farfalle is an open-source AI-powered search engine designed to provide an answer-centric search experience similar to modern conversational search systems. The project integrates large language models with multiple search APIs so that the system can gather information from external sources and synthesize responses into concise answers. It can run either with local language models or with cloud-based providers, allowing developers to deploy it privately or integrate with hosted AI services. The architecture separates the frontend and backend, using modern web technologies such as Next.js and FastAPI to deliver an interactive interface and scalable server logic. Farfalle also includes an agent-based search workflow that plans queries and executes multiple search steps to produce more accurate results than traditional keyword searches. The system supports multiple external search providers and integrates caching and rate-limiting mechanisms to maintain reliability during heavy usage.
    Downloads: 0 This Week
  • 24
    Flock

    Flock is a workflow-based low-code platform for building chatbots

    Flock is a workflow-based low-code platform designed for building AI applications such as chatbots, retrieval-augmented generation systems, and multi-agent workflows. The platform uses a visual workflow architecture where different nodes represent processing steps such as input processing, model inference, retrieval operations, and tool execution. Developers can connect these nodes to create complex pipelines that orchestrate multiple language models and external services. Built on technologies such as LangChain, LangGraph, FastAPI, and Next.js, Flock combines a modern web interface with a flexible backend capable of supporting advanced AI workflows. The platform supports multi-agent collaboration, allowing developers to design workflows where different agents handle specialized tasks within the same system. Flock also includes features such as intent recognition, code execution nodes, and human-in-the-loop approval processes that make it suitable for production AI applications.
    Downloads: 0 This Week
  • 25
    Fulling

    Full-stack Engineer Agent. Built with Next.js, Claude, shadcn/ui

    Fulling is an open-source AI-powered development environment designed to function as an autonomous full-stack engineering assistant. The platform provides a sandboxed workspace where developers can build complete applications with the help of an integrated AI coding agent. Instead of manually configuring development environments, the system automatically provisions the required infrastructure including a Linux environment, database services, and development tools. It integrates an AI pair programmer that can generate code, implement features, and assist with debugging tasks through natural language instructions. The environment also includes web-based terminals, file management tools, and version control capabilities to support collaborative software development workflows. Developers can connect external services by simply providing API credentials, allowing the AI system to automatically integrate features such as authentication or payment processing.
    Downloads: 0 This Week