Showing 14 open source projects for "basic language"

  • 1
    Kaleidoscope-SDK

    User toolkit for analyzing and interfacing with Large Language Models

    kaleidoscope-sdk is a Python module used to interact with large language models hosted via the Kaleidoscope service, available at https://github.com/VectorInstitute/kaleidoscope. It provides a simple interface to launch LLMs on an HPC cluster, ask them to perform basic tasks such as text generation, and also retrieve intermediate information from inside the model, such as log probabilities and activations.
    Downloads: 0 This Week
    Last Update:
    See Project
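    A minimal usage sketch of the workflow described above follows; the client class, method names, model name, and gateway address are illustrative assumptions rather than the SDK's confirmed API, so the linked repository's documentation takes precedence.

        import kscope  # package import name assumed

        # Connect to a Kaleidoscope gateway running on the cluster (host/port assumed).
        client = kscope.Client(gateway_host="llm.cluster.local", gateway_port=3001)

        # Launch a hosted model and ask for a basic text generation.
        model = client.load_model("llama2-7b")  # example model name
        print(model.generate("The capital of France is", {"max_tokens": 16}))

        # The SDK's distinguishing feature is access to model internals as well,
        # e.g. activations or log probabilities for chosen layers; see the repository
        # for the exact calls.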
  • 2
    Reader 3

    Quick illustration of how one can easily read books together with LLMs

    This project is a minimalist, self-hosted EPUB reader designed to let users browse and read EPUB books one chapter at a time through a lightweight local server, which makes it especially easy to extract chapters and hand them to external tools such as large language models. It was created primarily as a simple demonstration of how to combine local book reading with LLM workflows without heavy dependencies or a complicated setup, and it runs with just a small Python script and a basic HTTP server. The interface focuses on clarity and ease of use, offering straightforward chapter navigation rather than full-featured e-reading capabilities. ...
    Downloads: 1 This Week
    Last Update:
    See Project
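    Reader 3's own code is not reproduced here, but the underlying idea is easy to see: an EPUB is just a zip archive of XHTML chapters, and a tiny local server can expose them one at a time for copying into an LLM prompt. The sketch below uses only the Python standard library; the file name and port are assumptions, not the project's defaults.

        import zipfile
        from http.server import BaseHTTPRequestHandler, HTTPServer

        EPUB_PATH = "book.epub"  # assumption: any EPUB file on disk

        with zipfile.ZipFile(EPUB_PATH) as z:
            # EPUB chapters are (X)HTML members inside the zip container.
            CHAPTERS = [n for n in z.namelist() if n.endswith((".xhtml", ".html"))]

        class ChapterHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # "/3" returns the markup of the fourth chapter; "/" lists all chapters.
                part = self.path.strip("/")
                if part.isdigit() and int(part) < len(CHAPTERS):
                    with zipfile.ZipFile(EPUB_PATH) as z:
                        body, ctype = z.read(CHAPTERS[int(part)]), "application/xhtml+xml"
                else:
                    listing = "\n".join(f"{i}: {c}" for i, c in enumerate(CHAPTERS))
                    body, ctype = listing.encode(), "text/plain"
                self.send_response(200)
                self.send_header("Content-Type", ctype)
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8000), ChapterHandler).serve_forever()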
  • 3
    GLM-V

    GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning

    GLM-V is an open-source vision-language model (VLM) series from ZhipuAI that extends the GLM foundation models into multimodal reasoning and perception. The repository provides both GLM-4.5V and GLM-4.1V models, designed to advance beyond basic perception toward higher-level reasoning, long-context understanding, and agent-based applications. GLM-4.5V builds on the flagship GLM-4.5-Air foundation (106B parameters, 12B active), achieving state-of-the-art results on 42 benchmarks across image, video, document, GUI, and grounding tasks. ...
    Downloads: 3 This Week
    Last Update:
    See Project
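    A hedged loading sketch through Hugging Face transformers follows. The hub id, and whether the generic image-text-to-text pipeline supports these architectures in your installed transformers version, are assumptions; the loading code in the GLM-V repository itself is authoritative.

        from transformers import pipeline

        # Hub id assumed; the larger GLM-4.5V has substantial hardware requirements.
        pipe = pipeline(
            "image-text-to-text",
            model="zai-org/GLM-4.1V-9B-Thinking",
            device_map="auto",
        )

        messages = [{
            "role": "user",
            "content": [
                {"type": "image", "url": "https://example.com/chart.png"},
                {"type": "text", "text": "Describe what this chart shows."},
            ],
        }]

        print(pipe(text=messages, max_new_tokens=128))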
  • 4
    Advanced RAG Techniques

    Advanced techniques for RAG systems

    Advanced RAG Techniques is a comprehensive collection of tutorials and implementations focused on advanced Retrieval-Augmented Generation (RAG) systems. It is designed to help practitioners move beyond basic RAG setups and explore techniques that improve retrieval quality, context construction, and answer robustness. The repository organizes techniques into categories such as foundational RAG, query enhancement, context enrichment, and advanced retrieval, making it easier to navigate...
    Downloads: 0 This Week
    Last Update:
    See Project
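    For orientation, this is the baseline loop that the repository's advanced techniques refine: embed the corpus, embed the query, rank by similarity, and stuff the top-k chunks into a prompt. The toy bag-of-words "embedding" below stands in for a real embedding model so the sketch runs on its own.

        import math
        from collections import Counter

        DOCS = [
            "GPTCache caches LLM responses by semantic similarity.",
            "fastai layers a high-level training API over PyTorch.",
            "TextBlob offers simple sentiment analysis and noun-phrase extraction.",
        ]

        def embed(text: str) -> Counter:
            # Stand-in for a dense embedding model: plain term counts.
            return Counter(text.lower().split())

        def cosine(a: Counter, b: Counter) -> float:
            dot = sum(a[t] * b[t] for t in a)
            norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        def retrieve(query: str, k: int = 2) -> list:
            q = embed(query)
            return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

        query = "Which project does sentiment analysis?"
        context = "\n".join(retrieve(query))
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
        print(prompt)  # this prompt would then be sent to an LLM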
  • 5
    ChatGPT Clone

    ChatGPT interface with better UI

    ChatGPT Clone demonstrates a ChatGPT-style conversational interface wired to large-language-model backends, packaged so developers can self-host and extend. The goal is to replicate the core chat UX—message history, streaming tokens, code blocks, and system prompts—while letting you plug in different provider APIs or local models. It showcases a clean separation between the web client and the message orchestration layer so you can experiment with prompts, roles, and memory strategies. ...
    Downloads: 7 This Week
    Last Update:
    See Project
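    The orchestration layer it describes boils down to a pattern like the one below: keep a role-tagged message history, stream tokens to the client as they arrive, then append the finished reply. stream_completion() is a placeholder for whichever provider API or local model gets plugged in, not the project's actual backend.

        from typing import Iterator

        def stream_completion(messages: list) -> Iterator[str]:
            # Placeholder backend: "streams" the last user message back token by token.
            for token in messages[-1]["content"].split():
                yield token + " "

        history = [{"role": "system", "content": "You are a helpful assistant."}]

        def chat(user_text: str) -> str:
            history.append({"role": "user", "content": user_text})
            reply = ""
            for token in stream_completion(history):  # forward tokens to the UI
                print(token, end="", flush=True)
                reply += token
            history.append({"role": "assistant", "content": reply.strip()})
            return reply

        chat("Explain streaming responses in one sentence.")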
  • 6
    GPTCache

    Semantic cache for LLMs. Fully integrated with LangChain

    ChatGPT and various large language models (LLMs) boast incredible versatility, enabling the development of a wide range of applications. However, as your application grows in popularity and encounters higher traffic levels, the expenses related to LLM API calls can become substantial. Additionally, LLM services might exhibit slow response times, especially when dealing with a significant number of requests. To tackle this challenge, we have created GPTCache, a project dedicated to building a...
    Downloads: 0 This Week
    Last Update:
    See Project
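    Usage mirrors the project's quickstart for its OpenAI adapter (which follows the pre-1.0 openai client style), so treat the exact calls as version-dependent; the default configuration caches exact matches, while semantic (similarity-based) caching needs the embedding and vector-store setup described in the docs.

        from gptcache import cache
        from gptcache.adapter import openai  # drop-in replacement for the openai module

        cache.init()            # default: exact-match caching
        cache.set_openai_key()  # reads OPENAI_API_KEY from the environment

        # Repeated questions are now answered from the cache instead of a paid API call.
        answer = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "What is GitHub?"}],
        )
        print(answer)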
  • 7
    fastai

    Deep learning library

    fastai is a deep learning library which provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches. It aims to do both things without substantial compromises in ease of use, flexibility, or performance. This is possible thanks to a carefully layered architecture, which expresses common underlying...
    Downloads: 4 This Week
    Last Update:
    See Project
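    The layered design is easiest to see in the high-level API. The snippet below closely follows the library's published vision quickstart (it downloads the Oxford-IIIT Pets dataset, so internet access and ideally a GPU are assumed).

        from fastai.vision.all import *

        path = untar_data(URLs.PETS) / "images"

        def is_cat(fname):
            # In this dataset, cat image filenames start with an uppercase letter.
            return fname[0].isupper()

        dls = ImageDataLoaders.from_name_func(
            path, get_image_files(path), valid_pct=0.2, seed=42,
            label_func=is_cat, item_tfms=Resize(224),
        )

        learn = vision_learner(dls, resnet34, metrics=error_rate)
        learn.fine_tune(1)  # transfer learning: one fine-tuning epoch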
  • 8
    Chinese-LLaMA-Alpaca 2

    Chinese LLaMA-2 & Alpaca-2 Large Model Phase II Project

    This project is developed based on Llama-2, the commercially usable large model released by Meta, and is the second phase of the Chinese LLaMA & Alpaca large model project. It open-sources the Chinese LLaMA-2 base model and the Alpaca-2 instruction-fine-tuned large model. These models expand and optimize the Chinese vocabulary on the basis of the original Llama-2, use large-scale Chinese data for incremental pre-training, and further improve basic Chinese semantic and instruction understanding of...
    Downloads: 0 This Week
    Last Update:
    See Project
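    A hedged loading sketch through Hugging Face transformers is shown below; the hub id is the one commonly associated with the project's 7B Alpaca-2 release but should be treated as an assumption, and the chat models expect the project's own prompt template for best results.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "hfl/chinese-alpaca-2-7b"  # assumed hub id; check the project's README
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

        prompt = "请用一句话介绍大语言模型。"  # "Introduce large language models in one sentence."
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=64)
        print(tokenizer.decode(output[0], skip_special_tokens=True))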
  • 9
    Doctor Dignity

    Doctor Dignity is an LLM that can pass the US Medical Licensing Exam

    Doctor Dignity is a prototype project exploring how AI-assisted tooling might support compassionate, accessible health guidance for people who struggle to get timely care. The repository centers on a simple end-to-end pipeline—intake of user-reported symptoms, basic triage logic, and clear, supportive messaging—intended to demonstrate how such systems could be built. It emphasizes a humane UX: plain-language prompts, de-jargonized outputs, and guardrails that nudge users toward professional care when needed. The code is designed to be hackable rather than production-grade, giving learners a chance to experiment with NLP flows and lightweight back-end components. ...
    Downloads: 0 This Week
    Last Update:
    See Project
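    The intake, triage, and messaging pipeline it describes can be pictured with a deliberately tiny, invented example; the keyword rules and wording below are illustrative only and are neither the project's logic nor medical advice.

        RED_FLAGS = {"chest pain", "shortness of breath", "severe bleeding"}

        def triage(symptoms: str) -> str:
            # Basic keyword triage: escalate anything that mentions a red-flag symptom.
            text = symptoms.lower()
            return "urgent" if any(flag in text for flag in RED_FLAGS) else "routine"

        def respond(symptoms: str) -> str:
            # Plain-language, supportive messaging with a nudge toward professional care.
            if triage(symptoms) == "urgent":
                return ("Some of what you describe can be serious. Please seek emergency "
                        "care or call your local emergency number now.")
            return ("Thanks for sharing how you're feeling. This tool can't diagnose you; "
                    "consider booking a visit with a clinician to talk it through.")

        print(respond("I've had chest pain since this morning"))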
  • 10
    Chinese-LLaMA-Alpaca-2 v2.0

    Chinese LLaMA & Alpaca large language model + local CPU/GPU training

    This project has open-sourced the Chinese LLaMA model and the Alpaca large model with instruction fine-tuning to further promote the open research of large models in the Chinese NLP community. Based on the original LLaMA, these models expand the Chinese vocabulary and use Chinese data for secondary pre-training, which further improves the basic semantic understanding of Chinese. At the same time, the Chinese Alpaca model further uses Chinese instruction data for fine-tuning, which...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    Machine Learning PyTorch Scikit-Learn

    Code Repository for Machine Learning with PyTorch and Scikit-Learn

    Initially, this project started as the 4th edition of Python Machine Learning. However, after putting so much passion and hard work into the changes and new topics, we thought it deserved a new title. So, what’s new? There is a great deal of new content, including the switch from TensorFlow to PyTorch, new chapters on graph neural networks and transformers, a new section on gradient boosting, and more that I will detail in a separate blog post. For those who are interested in knowing...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 12
    PyTorch Natural Language Processing

    Basic Utilities for PyTorch Natural Language Processing (NLP)

    PyTorch-NLP is a library for Natural Language Processing (NLP) in Python. It’s built with the very latest research in mind, and was designed from day one to support rapid prototyping. PyTorch-NLP comes with pre-trained embeddings, samplers, dataset loaders, metrics, neural network modules and text encoders. It’s open-source software, released under the BSD3 license. With your batch in hand, you can use PyTorch to develop and train your model using gradient descent. For example, check out...
    Downloads: 0 This Week
    Last Update:
    See Project
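    The encoder workflow reads roughly as follows, adapted from the README's pattern; exact names and signatures may differ between releases.

        from torchnlp.encoders.text import WhitespaceEncoder

        corpus = ["now this ain't funny", "so don't you dare laugh"]

        # Build a vocabulary from the corpus and turn each line into a tensor of token ids.
        encoder = WhitespaceEncoder(corpus)
        encoded = [encoder.encode(line) for line in corpus]

        print(encoder.vocab_size)
        print(encoded[0])  # tensor of token indices for the first line

        # With batches of encoded tensors in hand, a plain PyTorch training loop
        # (model, loss, optimizer, gradient descent) takes over from here.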
  • 13
    TextBlob

    TextBlob is a Python library for processing textual data

    ...Also, it comes with a WordNet integration. If you only intend to use TextBlob’s default models (no model overrides), you can pass the lite argument. This downloads only those corpora needed for basic functionality. TextBlob is also available as a conda package.
    Downloads: 1 This Week
    Last Update:
    See Project
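    Typical usage looks like the snippet below; the corpora download (or its lite variant mentioned above) must be run once beforehand.

        # One-time setup, in a shell:
        #   pip install textblob
        #   python -m textblob.download_corpora        # append "lite" for only the basics

        from textblob import TextBlob

        blob = TextBlob("TextBlob makes basic NLP painless. The API feels very Pythonic.")

        print(blob.words)                    # tokenization
        print(blob.noun_phrases)             # noun phrase extraction
        print(blob.sentiment)                # Sentiment(polarity=..., subjectivity=...)
        print(blob.sentences[0].sentiment)   # per-sentence sentiment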
  • 14
    Pythia

    Pythia is a natural language question answering system that uses speech recognition and text-to-speech technologies to communicate with the user.
    Downloads: 0 This Week
    Last Update:
    See Project
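    Pythia's own sources aren't summarized here, but its described loop of listen, answer, speak can be sketched with the widely used SpeechRecognition and pyttsx3 packages; answer_question() is a placeholder for the actual question-answering backend.

        import speech_recognition as sr  # pip install SpeechRecognition pyaudio
        import pyttsx3                   # offline text-to-speech

        def answer_question(question: str) -> str:
            # Placeholder for the real QA component.
            return f"You asked: {question}. I don't have an answer for that yet."

        recognizer = sr.Recognizer()
        tts = pyttsx3.init()

        with sr.Microphone() as source:
            print("Ask a question...")
            audio = recognizer.listen(source)

        question = recognizer.recognize_google(audio)  # online speech recognition
        tts.say(answer_question(question))
        tts.runAndWait()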