Showing 8 open source projects for "c-sharp"

  • 1
    llama2.c

    Inference Llama 2 in one file of pure C

    llama2.c is a minimalist implementation of the Llama 2 language model architecture designed to run entirely in pure C. Created by Andrej Karpathy, this project offers an educational and lightweight framework for performing inference on small Llama 2 models without external dependencies. It provides a full training and inference pipeline: models can be trained in PyTorch and later executed using a concise 700-line C program (run.c).
    Downloads: 2 This Week
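The train-in-PyTorch, infer-in-C pipeline described above comes down to compiling run.c and pointing the resulting binary at a checkpoint file. A minimal sketch of wrapping that binary from Python; the binary name, checkpoint filename, and the `-t`/`-i` flags follow the project's README but should be treated as assumptions:

```python
import subprocess

def build_run_command(binary: str, checkpoint: str, prompt: str,
                      temperature: float = 0.8) -> list[str]:
    """Assemble the argv for the compiled llama2.c runner (assumed flags)."""
    return [binary, checkpoint, "-t", str(temperature), "-i", prompt]

cmd = build_run_command("./run", "stories15M.bin", "Once upon a time")
# Actually invoking it requires the compiled binary and a downloaded checkpoint:
# subprocess.run(cmd, check=True)
print(cmd)
```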
  • 2
    llama.cpp

    Port of Facebook's LLaMA model in C/C++

    The llama.cpp project enables the inference of Meta's LLaMA model (and other models) in pure C/C++ without requiring a Python runtime. It is designed for efficient and fast model execution, offering easy integration for applications needing LLM-based capabilities. The repository focuses on providing a highly optimized and portable implementation for running large language models directly within C/C++ environments.
    Downloads: 164 This Week
  • 3
    Ollama

Run models like Kimi-K2.5, GLM-5, DeepSeek, gpt-oss, Gemma, Qwen, and more

    Ollama is an open-source platform that enables developers to run large language models locally on their own machines. It simplifies working with modern AI models by providing a unified interface to download, manage, and interact with them. Users can run models like Llama, Gemma, Qwen, and others directly from the command line or through APIs. Ollama also integrates with popular developer tools and AI agents, allowing seamless workflows across coding environments and applications. It supports...
    Downloads: 513 This Week
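Since Ollama serves models over a local REST API (by default on port 11434), interacting with it programmatically needs nothing beyond the standard library. A sketch of building a request for the documented /api/generate endpoint; the model name is only an example, and sending the request assumes a running Ollama server:

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str,
                           host: str = "http://localhost:11434") -> urllib.request.Request:
    """Build (but do not send) a request to Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{host}/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("llama3", "Why is the sky blue?")
# Sending it requires `ollama serve` running with the model already pulled:
# print(urllib.request.urlopen(req).read().decode())
```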
  • 4
    TuyaOpen

    Next-gen AI+IoT framework for T2/T3/T5AI/ESP32/and more

    TuyaOpen is an open-source AI-enabled Internet of Things development framework designed to simplify the creation and deployment of smart connected devices. The platform provides a cross-platform C and C++ software development kit that supports a wide range of hardware platforms including Tuya microcontrollers, ESP32 boards, Raspberry Pi devices, and other embedded systems. It offers a unified development environment where developers can build devices capable of communicating with IoT cloud services while integrating AI capabilities and intelligent automation features. ...
    Downloads: 5 This Week
  • 5
    PicoLM

    Run a 1-billion parameter LLM on a $10 board with 256MB RAM

    ...The project focuses on enabling efficient local inference by optimizing memory usage, computation, and system dependencies so that relatively large models can operate on devices with minimal RAM. It is written primarily in C and designed with a minimalist architecture that removes unnecessary dependencies and external libraries. The runtime is capable of running language models with billions of parameters on devices with only a few hundred megabytes of memory, which is significantly lower than typical LLM infrastructure requirements. This makes PicoLM particularly suitable for edge computing, offline AI applications, and embedded AI devices that cannot rely on cloud resources.
    Downloads: 0 This Week
  • 6
    Llama 2 Everywhere (L2E)

    Llama 2 Everywhere (L2E) is an open-source implementation of the LLaMA-2 large language model architecture designed to demonstrate how transformer-based language models can be executed with extremely minimal code. The project focuses on simplicity and educational clarity by implementing inference for LLaMA-style models in a compact C program rather than relying on large machine learning frameworks. Developers can train models using a Python training pipeline and then run inference using a lightweight C implementation that requires very few dependencies. The architecture mirrors the structure of the LLaMA-2 model family, allowing compatible model checkpoints to be converted and executed within the simplified runtime environment. ...
    Downloads: 0 This Week
  • 7
    CTransformers

    Python bindings for the Transformer models implemented in C/C++

    Python bindings for the Transformer models implemented in C/C++ using GGML library.
    Downloads: 0 This Week
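The bindings are exercised through a Hugging Face-style loader. A minimal sketch of the documented CTransformers entry point, deferred behind a function so the optional dependency and the model file (both assumptions here, including the placeholder filename) are only needed when the function is actually called:

```python
def generate_with_ctransformers(model_path: str, prompt: str,
                                model_type: str = "llama") -> str:
    """Load a GGML/GGUF checkpoint via CTransformers and complete a prompt."""
    from ctransformers import AutoModelForCausalLM  # pip install ctransformers
    llm = AutoModelForCausalLM.from_pretrained(model_path, model_type=model_type)
    return llm(prompt)

# Example (requires a real model file on disk):
# print(generate_with_ctransformers("ggml-model-q4_0.bin", "AI is going to"))
```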
  • 8
    Alpaca.cpp

    Locally run an Instruction-Tuned Chat-Style LLM

Run a fast ChatGPT-like model locally on your device. This combines the LLaMA foundation model with an open reproduction of Stanford Alpaca, a fine-tuning of the base model to obey instructions (akin to the RLHF used to train ChatGPT), and a set of modifications to llama.cpp to add a chat interface. Download the zip file corresponding to your operating system from the latest release. The weights are based on the published fine-tunes from alpaca-lora, converted back into a PyTorch checkpoint...
    Downloads: 9 This Week