Search Results for "artificial intelligence algorithm" - Page 13

Showing 1858 open source projects for "artificial intelligence algorithm"

  • 1
    Pfl Research

    Simulation framework for accelerating research

    A fast, modular Python framework released by Apple for privacy-preserving federated learning (PFL) simulation. Integrates with TensorFlow, PyTorch, and classical ML, and offers high-speed distributed simulation (7–72× faster than alternatives).
    Downloads: 3 This Week
    See Project
  • 2
    FastMCP

    The fast, Pythonic way to build Model Context Protocol servers

    FastMCP is a Pythonic framework designed to simplify the creation of MCP servers. It allows developers to build servers that provide context and tools to Large Language Models (LLMs) using clean and intuitive Python code, streamlining the integration process between AI models and external resources.
    Downloads: 3 This Week
    See Project
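    For illustration, a minimal FastMCP server might look like the sketch below; the server name "demo" and the add tool are hypothetical, and decorator details may vary between FastMCP releases.

    # Minimal FastMCP server sketch (illustrative names).
    from fastmcp import FastMCP

    mcp = FastMCP("demo")

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Add two integers and return the sum."""
        return a + b

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio by default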
  • 3
    MCP Neo4j

    Model Context Protocol with Neo4j

    An implementation of the Model Context Protocol with Neo4j, enabling natural language interactions with Neo4j databases and facilitating operations such as schema retrieval and Cypher query execution.
    Downloads: 3 This Week
    See Project
  • 4
    MCP Server DuckDB

    A Model Context Protocol (MCP) server implementation for DuckDB

    An MCP server implementation for DuckDB, providing database interaction capabilities through MCP tools, allowing operations like querying, table creation, and schema inspection.
    Downloads: 3 This Week
    See Project
  • 5
    MCP Teams Server

    An MCP (Model Context Protocol) server implementation

    An MCP server implementation for Microsoft Teams integration, providing capabilities to read messages, create messages, reply to messages, and mention members, facilitating AI-driven interactions within Teams.
    Downloads: 3 This Week
    See Project
  • 6
    FlashInfer

    FlashInfer: Kernel Library for LLM Serving

    FlashInfer is a kernel library designed to enhance the serving of Large Language Models (LLMs) by optimizing inference performance. It provides a high-performance framework that integrates seamlessly with existing systems, aiming to reduce latency and improve efficiency in LLM deployments. FlashInfer supports various hardware architectures and is built to scale with the demands of production environments.
    Downloads: 3 This Week
    See Project
  • 7
    LoRAX

    Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs

    LoRAX is a multi-LoRA (Low-Rank Adaptation) inference server that scales to thousands of fine-tuned Large Language Models (LLMs). It enables efficient deployment and management of numerous fine-tuned models, facilitating scalable AI applications. LoRAX is designed to handle high concurrency and provides a robust infrastructure for serving multiple LLMs simultaneously.
    Downloads: 3 This Week
    See Project
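    As a hedged sketch of how such a server is typically queried, the LoRAX Python client lets a request name the fine-tuned adapter to apply; the endpoint URL and adapter ID below are placeholders, not real deployments.

    # Query a running LoRAX server, routing the request to a specific LoRA adapter.
    from lorax import Client  # pip install lorax-client

    client = Client("http://127.0.0.1:8080")       # placeholder endpoint
    response = client.generate(
        "[INST] Summarize LoRA in one sentence. [/INST]",
        adapter_id="org/example-lora-adapter",     # placeholder adapter name
        max_new_tokens=64,
    )
    print(response.generated_text)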
  • 8
    Superlinked

    Superlinked is a Python framework for AI Engineers

    Superlinked is a Python framework designed for AI engineers to build high-performance search and recommendation applications that combine structured and unstructured data.
    Downloads: 3 This Week
    See Project
  • 9
    DOLMA

    Data and tools for generating and inspecting OLMo pre-training data

    DOLMA is an open pre-training corpus and accompanying data toolkit from the Allen Institute for AI, providing the data and tools used to generate, curate, and inspect the pre-training data for the OLMo language models.
    Downloads: 3 This Week
    See Project
  • 10
    NLG-Eval

    Evaluation code for various unsupervised automated metrics

    NLG-Eval is a toolkit for evaluating the quality of natural language generation (NLG) outputs using multiple automated metrics such as BLEU, METEOR, and ROUGE.
    Downloads: 3 This Week
    See Project
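    A minimal usage sketch, assuming the metric resources have been set up; the embedding-based metrics are disabled here to keep the example light, and the sentences are arbitrary.

    from nlgeval import NLGEval

    # Compute n-gram metrics for one hypothesis against one reference.
    nlgeval = NLGEval(no_skipthoughts=True, no_glove=True)
    scores = nlgeval.compute_individual_metrics(
        ref=["the cat sat on the mat"],
        hyp="a cat is sitting on the mat",
    )
    print(scores)  # e.g. Bleu_1..Bleu_4, METEOR, ROUGE_L, CIDEr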
  • 11
    FastRAG

    Efficient Retrieval Augmentation and Generation Framework

    fastRAG is a research framework for efficient and optimized retrieval augmented generative pipelines, incorporating state-of-the-art LLMs and Information Retrieval. fastRAG is designed to empower researchers and developers with a comprehensive tool set for advancing retrieval augmented generation.
    Downloads: 3 This Week
    See Project
  • 12
    Underthesea

    Underthesea - Vietnamese NLP Toolkit

    Underthesea is a Vietnamese NLP toolkit providing various text processing capabilities, including word segmentation, part-of-speech tagging, and named entity recognition.
    Downloads: 3 This Week
    See Project
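    A short sketch of the toolkit's core calls; the sample sentence is arbitrary.

    from underthesea import word_tokenize, pos_tag, ner

    sentence = "Chàng trai 9X Quảng Trị khởi nghiệp từ nấm sò"
    print(word_tokenize(sentence))   # word segmentation
    print(pos_tag(sentence))         # part-of-speech tagging
    print(ner(sentence))             # named entity recognition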
  • 13
    AdalFlow

    The library to build & auto-optimize LLM applications

    AdalFlow is a library for building and auto-optimizing LLM applications, letting developers design task pipelines and tune them automatically with minimal code.
    Downloads: 3 This Week
    See Project
  • 14
    DeepBI

    LLM based data scientist, AI native data application

    DeepBI is an AI-native data analysis platform. DeepBI leverages the power of large language models to explore, query, visualize, and share data from any data source. Users can use DeepBI to gain data insight and make data-driven decisions.
    Downloads: 3 This Week
    See Project
  • 15
    Pyreft

    ReFT: Representation Finetuning for Language Models

    PyreFT is a tool by Stanford NLP for fine-tuning transformer models with an emphasis on efficient, resource-conserving training and customizability for NLP tasks.
    Downloads: 3 This Week
    See Project
  • 16
    GraphRAG

    A modular graph-based Retrieval-Augmented Generation (RAG) system

    The GraphRAG project is a data pipeline and transformation suite that is designed to extract meaningful, structured data from unstructured text using the power of LLMs.
    Downloads: 3 This Week
    See Project
  • 17
    TensorFlow Datasets

    TFDS is a collection of datasets ready to use with TensorFlow, Jax, and other ML frameworks

    TensorFlow Datasets is a collection of datasets ready to use with TensorFlow or other Python ML frameworks, such as Jax. All datasets are exposed as tf.data.Datasets, enabling easy-to-use and high-performance input pipelines. To get started, see the guide and the list of datasets.
    Downloads: 3 This Week
    See Project
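    A minimal input-pipeline sketch; the dataset name "mnist" is just one example from the catalog.

    import tensorflow_datasets as tfds

    # Load MNIST as a tf.data.Dataset of (image, label) pairs and build a simple pipeline.
    ds = tfds.load("mnist", split="train", as_supervised=True, shuffle_files=True)
    ds = ds.batch(32).prefetch(1)
    for images, labels in ds.take(1):
        print(images.shape, labels.shape)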
  • 18
    LLaMA Efficient Tuning

    Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, ...)

    An easy-to-use LLM fine-tuning framework supporting LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, and ChatGLM2.
    Downloads: 3 This Week
    See Project
  • 19
    AReal

    Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible

    AReal is a lightning-fast reinforcement learning framework for LLM reasoning and agents that scales from single nodes to large GPU clusters. It streamlines the development of AI agents and reasoning systems and supports algorithm and system co-design optimizations to improve efficiency and stability.
    Downloads: 1 This Week
    See Project
  • 20
    Ring

    Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI

    Ring is a reasoning Mixture-of-Experts (MoE) large language model (LLM) developed by inclusionAI and derived from the Ling model family. Its design emphasizes reasoning, efficiency, and modular expert activation: the "flash" variant (Ring-flash-2.0) optimizes inference by activating only a subset of experts, and reinforcement learning and reasoning-optimization techniques are applied so that its architecture and training approach deliver efficient, capable reasoning performance.
    Downloads: 4 This Week
    See Project
  • 21
    Tencent-Hunyuan-Large

    Open-source large language model family from Tencent Hunyuan

    Tencent-Hunyuan-Large is the flagship open-source large language model family from Tencent Hunyuan, offering both pre-trained and instruct (fine-tuned) variants. It is designed with long-context capabilities, quantization support, and high performance on benchmarks across general reasoning, mathematics, language understanding, and Chinese and multilingual tasks. It aims to provide competitive capability with efficient deployment and inference, including FP8 quantization support to reduce memory usage.
    Downloads: 4 This Week
    See Project
  • 22
    NVIDIA AgentIQ

    The NVIDIA AgentIQ toolkit is an open-source library

    NVIDIA AgentIQ is an open-source toolkit designed to efficiently connect, evaluate, and accelerate teams of AI agents. It provides a framework-agnostic platform that integrates seamlessly with various data sources and tools, enabling developers to build composable and reusable agentic workflows. By treating agents, tools, and workflows as simple function calls, AgentIQ facilitates rapid development and optimization of AI-driven applications, enhancing collaboration and efficiency in complex agentic systems.
    Downloads: 4 This Week
    See Project
  • 23
    Transformer Engine

    A library for accelerating Transformer models on NVIDIA GPUs

    Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper GPUs, to provide better performance with lower memory utilization in both training and inference. TE provides a collection of highly optimized building blocks for popular Transformer architectures and an automatic mixed precision-like API that can be used seamlessly with your framework-specific code. TE also includes a framework-agnostic C++ API that can be integrated with other deep learning libraries to enable FP8 support for Transformers.
    Downloads: 4 This Week
    See Project
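    A rough PyTorch sketch of the FP8 workflow described above; the layer sizes and recipe settings are placeholders, and running it requires an FP8-capable GPU (e.g. Hopper).

    import torch
    import transformer_engine.pytorch as te
    from transformer_engine.common import recipe

    # Swap in a TE layer and run its forward pass under FP8 autocast.
    layer = te.Linear(768, 768, bias=True).cuda()
    fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

    x = torch.randn(16, 768, device="cuda")
    with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
        out = layer(x)
    out.sum().backward()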
  • 24
    EconML

    Python Package for ML-Based Heterogeneous Treatment Effects Estimation

    EconML is a Python package for estimating heterogeneous treatment effects from observational data via machine learning. This package was designed and built as part of the ALICE project at Microsoft Research with the goal of combining state-of-the-art machine learning techniques with econometrics to bring automation to complex causal inference problems. One of the biggest promises of machine learning is to automate decision-making in a multitude of domains; at the core of many such data-driven decision scenarios is the estimation of heterogeneous treatment effects: what is the effect of an intervention on an outcome of interest for a sample with a particular set of features?
    Downloads: 4 This Week
    See Project
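    A toy sketch of heterogeneous-effect estimation with one of the package's double machine learning estimators; the synthetic data and nuisance-model choices are illustrative only.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from econml.dml import LinearDML

    # Synthetic observational data: the effect of T on Y varies with the first feature.
    rng = np.random.default_rng(0)
    n = 1000
    X = rng.normal(size=(n, 5))                  # effect modifiers
    W = rng.normal(size=(n, 3))                  # confounders
    T = X[:, 0] + rng.normal(size=n)             # continuous treatment
    Y = 2.0 * T * (X[:, 0] > 0) + rng.normal(size=n)

    est = LinearDML(model_y=RandomForestRegressor(), model_t=RandomForestRegressor())
    est.fit(Y, T, X=X, W=W)
    print(est.effect(X[:5]))                     # estimated treatment effects per sample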
  • 25
    LLM Foundry

    LLM training code for MosaicML foundation models

    Introducing MPT-7B, the first entry in our MosaicML Foundation Series. MPT-7B is a transformer trained from scratch on 1T tokens of text and code. It is open source, available for commercial use, and matches the quality of LLaMA-7B. MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k. Large language models (LLMs) are changing the world, but for those outside well-resourced industry labs, it can be extremely difficult to train and deploy these models.
    Downloads: 4 This Week
    See Project