Open Source TypeScript Large Language Models (LLM) - Page 4

TypeScript Large Language Models (LLM)


Browse free open source TypeScript Large Language Model (LLM) projects below. Use the toggles on the left to filter them by OS, license, language, programming language, and project status.

  • 1
    Stripe AI

    One-stop shop for building AI-powered products and businesses

    Stripe AI is an open-source collection of tools and software development kits designed to help developers build AI-powered products and services that integrate directly with Stripe’s payment infrastructure. The project acts as a centralized repository containing resources, libraries, and examples that simplify the process of incorporating payments, billing, and financial workflows into AI applications. It enables developers to connect large language models and AI agents with Stripe APIs so that automated systems can perform actions such as handling transactions, managing subscriptions, or processing financial events. The platform is particularly relevant for companies building AI-driven products that require monetization, usage-based billing, or programmable financial interactions. By offering ready-to-use integrations and development tools, Stripe AI reduces the complexity of connecting AI systems with payment services.
  • 2
    TONL

    TONL (Token-Optimized Notation Language)

    TONL is a data platform built around a production-ready serialization format that is both compact and human-readable, with performance features that make it suitable for large-scale applications and AI workflows. The format significantly reduces token usage compared with traditional JSON, which can lower costs and make more efficient use of prompt space in LLM-driven systems. Beyond the format itself, TONL includes a rich API for querying, indexing, modifying, and streaming data, along with tools for schema validation and TypeScript code generation. The platform ships with a complete command-line interface that supports interactive dashboards and cross-platform use in browsers and server environments, and its high test coverage gives developers confidence in its stability.
  • 3
    Token-Oriented Object Notation

    Token-Oriented Object Notation (TOON)

    Token-Oriented Object Notation is an open specification and toolkit for a data serialization format called Token-Oriented Object Notation (TOON), designed specifically to optimize how structured data is passed to large language models. The format aims to reduce token overhead compared with traditional formats like JSON while remaining human-readable and structurally expressive. TOON represents the same data model as JSON but removes unnecessary syntax such as braces and quotes, relying instead on indentation and structured tokens to represent objects and arrays. This design allows prompts containing structured data to use significantly fewer tokens, which can reduce inference costs and improve efficiency in LLM applications. The project includes a formal specification, encoding rules, and reference implementations that developers can use to serialize and parse TOON data in their applications.
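    The token savings described above come largely from TOON's tabular form for uniform arrays of objects: field names are declared once in a header rather than repeated for every element, as they are in JSON. The TypeScript sketch below is a toy encoder illustrating the idea; the function name is invented for this example and the header syntax is an approximation based on the description, not a reference implementation of the spec.

```typescript
// Illustrative sketch (not the official TOON implementation): encode a
// uniform array of flat objects into a TOON-style tabular block.
type Row = Record<string, string | number | boolean>;

function encodeToonTable(key: string, rows: Row[]): string {
  if (rows.length === 0) return `${key}[0]:`;
  const fields = Object.keys(rows[0]); // field names declared once, in the header
  const header = `${key}[${rows.length}]{${fields.join(",")}}:`;
  // One indented, comma-separated line per element; no braces or quotes.
  const lines = rows.map((r) => "  " + fields.map((f) => String(r[f])).join(","));
  return [header, ...lines].join("\n");
}

// The JSON array [{"id":1,"name":"Alice"},{"id":2,"name":"Bob"}] becomes:
console.log(encodeToonTable("users", [
  { id: 1, name: "Alice" },
  { id: 2, name: "Bob" },
]));
// users[2]{id,name}:
//   1,Alice
//   2,Bob
```

    Where JSON repeats "id" and "name" for every element, the tabular block states them once, so prompt size grows with the data rather than with the syntax.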
  • 4
    TypedAI

    TypeScript AI platform with AI chat and autonomous agents

    TypedAI is an open-source TypeScript platform designed for building and running AI agents, chatbots, and large language model workflows. The framework provides developers with a full-featured environment for designing autonomous agents capable of performing complex tasks such as code analysis, workflow automation, or conversational assistance. Written in TypeScript, the platform emphasizes strong typing and structured development patterns to improve reliability when building AI-driven systems. TypedAI includes tools for building chat interfaces, managing LLM interactions, and orchestrating multi-step workflows that combine AI reasoning with external tools. The platform also includes specialized software engineering agents that can assist with tasks such as code reviews or repository analysis. Developers can integrate multiple model providers and tools into the platform to create flexible agent pipelines.
  • 5
    axflow

    The TypeScript framework for AI development

    Axflow is a modular TypeScript framework designed to support the development of natural language powered AI applications. The framework provides a collection of independent modules that can be adopted individually or combined to create a full AI application stack. Its core SDK enables developers to integrate language model capabilities into web applications while maintaining strong modular design principles. Additional components support data ingestion, evaluation, and model interaction workflows that are commonly required when building production AI systems. For example, the framework includes modules for connecting application data to language models, evaluating the quality of model outputs, and building streaming user interfaces. Because each component can be used independently, developers can adopt Axflow incrementally rather than committing to a monolithic framework. This flexibility makes the system suitable for both experimentation and production environments.
  • 6
    dLLM

    dLLM: Simple Diffusion Language Modeling

    dLLM is an open-source framework designed to simplify the development, training, and evaluation of diffusion-based large language models. Unlike traditional autoregressive models that generate text sequentially token by token, diffusion language models generate text through an iterative denoising process that refines masked tokens over multiple steps. This approach allows models to reason over the entire sequence simultaneously and potentially produce more coherent outputs with bidirectional context. The project provides an integrated pipeline that standardizes how diffusion language models are trained, evaluated, and deployed, helping researchers reproduce experiments and compare results more easily. The framework includes scalable training infrastructure inspired by modern deep learning toolkits and supports integrations with widely used libraries for distributed training.
  • 7
    dataline

    AI data analysis and visualization on CSV, Postgres, MySQL, Snowflake

    dataline is an open-source AI data analysis and visualization platform that allows users to interact with datasets using natural language. The system enables both technical and non-technical users to explore data by asking questions conversationally, which the platform translates into database queries and analytical operations. It supports connections to multiple structured data sources such as PostgreSQL, MySQL, Snowflake, SQLite, Excel files, CSV datasets, and other database systems. Once connected, users can generate tables, charts, and reports automatically based on queries produced by the AI engine. The platform is designed with a privacy-first architecture that stores data locally on the user’s device rather than sending it to external cloud services by default. It can also hide sensitive data from language models during processing, ensuring that only necessary metadata is used for query generation.
  • 8
    llms.txt hub

    The largest directory for AI-ready documentation and tools

    llms-txt-hub serves as a central directory and knowledge base for the emerging llms.txt convention, a simple, text-based way for project owners to communicate preferences to AI tools. It catalogs implementations across projects and platforms, helping maintain a shared understanding of how LLM-powered services should interact with code and documentation. The repository aims to standardize patterns for allowlists, denylists, attribution, rate expectations, and contact information, mirroring the spirit of robots.txt for the AI era. It provides examples and templates to make adoption straightforward for maintainers of websites, docs portals, and repos. The hub encourages community debate and iteration so conventions remain practical as tooling evolves. By consolidating examples and tools, it accelerates consistent, respectful AI consumption of public content.
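    As a rough illustration of the convention the hub catalogs, an llms.txt file is a small Markdown document served from a site's root. The project name and links below are invented placeholders; the overall shape (an H1 title, a blockquote summary, and sections of annotated links) follows the commonly circulated llms.txt proposal.

```markdown
# Example Project

> A short summary of what this project does, written for AI tools.

## Docs

- [Getting started](https://example.com/docs/start.md): installation and setup
- [API reference](https://example.com/docs/api.md): endpoints and parameters

## Optional

- [Changelog](https://example.com/changelog.md): release history
```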
  • 9
    lms

    LM Studio CLI

    lms is a command-line interface tool designed to interact with and manage local large language models through the LM Studio ecosystem. The tool allows developers to control model execution directly from the terminal, providing programmatic access to features that are otherwise available through graphical interfaces. Through the CLI, users can load and unload models, start or stop local inference servers, and inspect the inputs and outputs generated by language models. lms is built using the LM Studio JavaScript SDK and integrates tightly with the LM Studio runtime environment. The interface is designed to simplify automation workflows and scripting tasks related to local AI deployment. By exposing model management capabilities through command-line commands, the tool enables developers to integrate local LLM operations into development pipelines and backend services. As a result, lms acts as a bridge between interactive local AI tools and automated software development workflows.
  • 10
    opensrc

    Fetch source code for npm packages

    OpenSrc is an open-source utility developed by Vercel Labs that retrieves and exposes the source code of npm packages so that AI coding agents can better understand how external libraries work. When large language models generate code, they often rely only on type definitions or documentation, which can limit their understanding of how a library actually behaves. OpenSrc addresses this limitation by allowing agents to fetch the underlying source code of dependencies and analyze their implementation directly. This gives AI coding assistants richer context about functions, internal logic, and architectural patterns used within external packages. The tool is designed to integrate into AI-driven developer workflows where coding agents explore repositories, inspect dependencies, and reason about how to use libraries correctly.
  • 11
    react-llm

    Easy-to-use headless React Hooks to run LLMs in the browser with WebGPU

    Easy-to-use headless React Hooks to run LLMs in the browser with WebGPU. As simple as useLLM().
  • 12
    repo2txt

    Web-based tool that converts GitHub repository contents into a single text file

    repo2txt is an open-source developer tool that converts the contents of a code repository into a single structured text file that can be easily consumed by large language models. The tool is designed to address the challenge of analyzing entire codebases with AI assistants, where code is normally distributed across many files and directories. By collecting repository contents and formatting them into a single text document, repo2txt allows developers to feed complete projects into AI systems for analysis, documentation, or code explanation tasks. The application can load repositories from platforms such as GitHub or from local directories and provides an interface for selecting which files should be included in the generated output. It also supports filtering by file types and automatically respecting ignore patterns defined in project configuration files.
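    The flattening step repo2txt performs can be sketched in a few lines of TypeScript. This is a toy illustration under assumed inputs (an in-memory map of paths to file contents and a caller-supplied include predicate standing in for the tool's file-type filters and ignore patterns), not the tool's actual code; the function name and delimiter style are invented for this example.

```typescript
// Illustrative sketch of the repo-to-single-text idea (not repo2txt's code):
// concatenate selected files into one LLM-friendly document, with a path
// header marking where each file begins.
function repoToText(
  files: Record<string, string>,
  include: (path: string) => boolean
): string {
  return Object.entries(files)
    .filter(([path]) => include(path))
    .map(([path, content]) => `===== ${path} =====\n${content}`)
    .join("\n\n");
}

const files = {
  "src/index.ts": "export const hello = () => 'hi';",
  "node_modules/x/index.js": "module.exports = {};",
};

// Skip dependency folders, mimicking an ignore pattern.
console.log(repoToText(files, (p) => !p.startsWith("node_modules/")));
// ===== src/index.ts =====
// export const hello = () => 'hi';
```

    The path headers let a model attribute each chunk of code to its file, which is what makes the single concatenated document usable for whole-repository analysis.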
  • 13
    swark.io

    Create architecture diagrams from code automatically using LLMs

    Swark is an open-source developer tool and Visual Studio Code extension that automatically generates software architecture diagrams directly from source code using large language models. The project aims to help developers quickly understand complex codebases by analyzing repositories and producing visual diagrams that represent system architecture, dependencies, and component relationships. Instead of relying on manually maintained diagrams that often become outdated, Swark uses AI to infer architecture patterns dynamically from the code itself. The tool integrates with GitHub Copilot and the VS Code environment, allowing developers to generate diagrams with minimal setup and without requiring additional authentication or API configuration. Because the logic of understanding code structure is handled by an LLM, Swark can support many programming languages and frameworks without requiring custom rules for each language.
  • 14
    trench

    Open-Source Analytics Infrastructure

    Trench is an open-source analytics infrastructure designed for tracking events and performing real-time analysis of application data at scale. The system is built on top of high-performance data technologies including Apache Kafka and ClickHouse, which allows it to ingest and process very large volumes of events while maintaining fast query performance. It was originally developed to solve scaling challenges in product analytics systems where traditional relational databases become inefficient as event tables grow. The platform enables developers to collect events such as page views, user actions, and behavioral metrics while storing them in a column-oriented analytics database optimized for time-series workloads. By combining streaming ingestion with fast analytical queries, the system supports use cases such as product analytics dashboards, observability pipelines, and machine learning data preparation.
  • 15
    wllama

    WebAssembly binding for llama.cpp - Enabling on-browser LLM inference

    wllama is a WebAssembly-based library that enables large language model inference directly inside a web browser. Built as a binding for the llama.cpp inference engine, the project allows developers to run LLM models locally without requiring a server backend or dedicated GPU hardware. The library leverages WebAssembly SIMD capabilities to achieve efficient execution within modern browsers while maintaining compatibility across platforms. By running models locally on the user’s device, wllama enables privacy-preserving AI applications that do not require sending data to remote servers. The framework provides both high-level APIs for common tasks such as text generation and embeddings, as well as low-level APIs that expose tokenization, sampling controls, and model state management.