Search Results for "llm for java developers" - Page 3

Showing 18,483 open source projects for "llm for java developers"

  • 1
    MagicAPI AI Gateway

    Built for demanding AI workflows

    The world's fastest AI Gateway proxy, written in Rust and optimized for maximum performance. This high-performance API gateway routes requests to various AI providers (OpenAI, Groq) with streaming support, making it perfect for developers who need reliable and blazing-fast AI API access. A brief client-side sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
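
    A minimal sketch of calling an LLM through such a gateway, assuming a locally running MagicAPI instance that exposes an OpenAI-compatible route. The port, route prefix, and model name below are illustrative assumptions, not documented values.

        # Hedged sketch: reuse the OpenAI Python client against a local gateway.
        # The base_url path and port are hypothetical; check the gateway docs.
        from openai import OpenAI

        client = OpenAI(
            base_url="http://localhost:3000/v1/openai",  # hypothetical gateway route
            api_key="YOUR_PROVIDER_KEY",                 # forwarded to the upstream provider
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model id
            messages=[{"role": "user", "content": "Say hello from behind the gateway."}],
        )
        print(response.choices[0].message.content)
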
  • 2
    ExecuTorch

    On-device AI across mobile, embedded and edge for PyTorch

    ExecuTorch is an end-to-end solution for enabling on-device inference capabilities across mobile and edge devices, including wearables, embedded devices, and microcontrollers. It is part of the PyTorch Edge ecosystem and enables efficient deployment of PyTorch models to edge devices. A minimal export sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
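
    A minimal export sketch under stated assumptions: the executorch package is installed, and the entry points shown (to_edge, the program's .buffer property) match the release you are using; these names have shifted between ExecuTorch versions.

        # Hedged sketch: export a tiny PyTorch module to an ExecuTorch .pte file.
        import torch
        from executorch.exir import to_edge

        class TinyModel(torch.nn.Module):
            def forward(self, x):
                return torch.nn.functional.relu(x)

        model = TinyModel().eval()
        example_inputs = (torch.randn(1, 8),)

        exported = torch.export.export(model, example_inputs)  # standard torch.export step
        edge_program = to_edge(exported)                        # lower to the Edge dialect
        et_program = edge_program.to_executorch()               # build the ExecuTorch program

        with open("tiny_model.pte", "wb") as f:                 # flatbuffer loaded by the on-device runtime
            f.write(et_program.buffer)
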
  • 3
    CodeGeeX

    CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)

    CodeGeeX is a large-scale multilingual code generation model with 13 billion parameters, trained on 850B tokens across more than 20 programming languages. Developed with MindSpore and later made PyTorch-compatible, it is capable of multilingual code generation, cross-lingual code translation, code completion, summarization, and explanation. It has been benchmarked on HumanEval-X, a multilingual program synthesis benchmark introduced alongside the model, and achieves state-of-the-art...
    Downloads: 7 This Week
    Last Update:
    See Project
  • 4
    ncnn

    High-performance neural network inference framework for mobile

    ncnn is a high-performance neural network inference computing framework designed specifically for mobile platforms. It brings artificial intelligence right to your fingertips with no third-party dependencies, and it runs faster than all other known open-source frameworks on mobile phone CPUs. ncnn allows developers to easily deploy deep learning models to mobile platforms and create intelligent apps. It is cross-platform and supports most commonly used CNN networks, including...
    Downloads: 24 This Week
    Last Update:
    See Project
  • 5
    Groq Python

    The official Python Library for the Groq API

    Groq Python is the official Python SDK for the Groq REST API, giving Python developers straightforward access to Groq’s LLM, chat, audio, and other AI services. Through this library, you can call Groq’s models from Python code — for example to request chat completions, code generation, transcription, or any supported endpoint — using idiomatic Python syntax (a minimal usage sketch follows this entry). The SDK handles authentication (via environment variable or parameter), provides type-safe request/response types, and supports both synchronous and asynchronous usage patterns depending on your application needs. ...
    Downloads: 5 This Week
    Last Update:
    See Project
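
    A minimal usage sketch, assuming the groq package is installed and GROQ_API_KEY is set in the environment; the model id is only an example.

        # Hedged sketch of a synchronous chat completion with the Groq SDK.
        from groq import Groq

        client = Groq()  # reads GROQ_API_KEY from the environment

        chat = client.chat.completions.create(
            model="llama-3.3-70b-versatile",  # example model id
            messages=[
                {"role": "system", "content": "You are a concise assistant."},
                {"role": "user", "content": "Summarize what an LLM gateway does in one sentence."},
            ],
        )
        print(chat.choices[0].message.content)

    The SDK also ships an AsyncGroq client with the same method names for the asynchronous path mentioned above.
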
  • 6
    TensorRT

    C++ library for high performance inference on NVIDIA GPUs

    NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications (a hedged build sketch using the Python bindings follows this entry). TensorRT-based applications perform up to 40X faster than CPU-only platforms during inference. With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers,...
    Downloads: 23 This Week
    Last Update:
    See Project
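
    A hedged build sketch using TensorRT's Python bindings rather than the C++ API the entry describes: parse an ONNX model and emit a serialized FP16 engine. Flag and enum names vary across TensorRT versions, so treat this as illustrative.

        import tensorrt as trt

        logger = trt.Logger(trt.Logger.WARNING)
        builder = trt.Builder(logger)
        network = builder.create_network(
            1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
        )
        parser = trt.OnnxParser(network, logger)

        with open("model.onnx", "rb") as f:            # any exported ONNX model
            if not parser.parse(f.read()):
                raise RuntimeError("failed to parse ONNX model")

        config = builder.create_builder_config()
        config.set_flag(trt.BuilderFlag.FP16)          # lower precision, as described above

        engine_bytes = builder.build_serialized_network(network, config)
        with open("model.engine", "wb") as f:          # deployable serialized engine
            f.write(engine_bytes)
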
  • 7
    Adversarial Robustness Toolbox

    Adversarial Robustness Toolbox (ART) - Python Library for ML security

    Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART provides tools that enable developers and researchers to evaluate, defend, certify and verify Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference (a short evaluation sketch follows this entry). ART supports all popular machine learning frameworks (TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types (images, tables, audio,...
    Downloads: 0 This Week
    Last Update:
    See Project
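
    A short evaluation sketch of the evasion workflow ART is built around, assuming the adversarial-robustness-toolbox and scikit-learn packages are installed; the dataset, model, and epsilon are arbitrary illustration choices, and gradient-based attacks require an ART wrapper that exposes loss gradients.

        from sklearn.datasets import load_iris
        from sklearn.linear_model import LogisticRegression
        from art.estimators.classification import SklearnClassifier
        from art.attacks.evasion import FastGradientMethod

        x, y = load_iris(return_X_y=True)
        model = LogisticRegression(max_iter=1000).fit(x, y)

        classifier = SklearnClassifier(model=model)            # ART wrapper around the trained model
        attack = FastGradientMethod(estimator=classifier, eps=0.3)
        x_adv = attack.generate(x=x)                           # crafted adversarial inputs

        clean_acc = (model.predict(x) == y).mean()
        adv_acc = (model.predict(x_adv) == y).mean()
        print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
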
  • 8
    Secret Llama

    Fully private LLM chatbot that runs entirely with a browser

    Secret Llama is a privacy-first large-language-model chatbot that runs entirely inside your web browser, meaning no server is required and your conversation data never leaves your device. It focuses on open-source model support, letting you load families like Llama and Mistral directly in the client for fully local inference. Because everything happens in-browser, it can work offline once models are cached, which is helpful for air-gapped environments or travel. The interface mirrors the...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    bolt.diy

    Prompt, run, edit, & deploy full-stack web applications using any LLM

    bolt.diy is an open-source platform that allows you to easily create, run, edit, and deploy full-stack web applications using a variety of large language models (LLMs). It supports popular providers such as OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, and Groq, and provides the flexibility to integrate additional models through the Vercel AI SDK. Whether you’re experimenting with pre-built models or developing custom AI-driven applications, bolt.diy...
    Downloads: 38 This Week
    Last Update:
    See Project
  • 10
    Pruna AI

    Pruna is a model optimization framework built for developers

    Pruna is an open-source, self-hostable AI inference engine designed to help teams deploy and manage large language models (LLMs) efficiently across private or hybrid infrastructures. Built with performance and developer ergonomics in mind, Pruna simplifies inference workflows by enabling multi-model orchestration, autoscaling, GPU resource allocation, and compatibility with popular open-source models. It is ideal for companies or teams looking to reduce reliance on external APIs while...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    IntelliJ Community Edition

    IntelliJ IDEA & IntelliJ Platform

    ...It provides foundational features like a robust editor with code completion, syntax highlighting, refactoring tools, version control integrations, terminal, debugger, and plugin architecture. Since it’s open, community developers can contribute to language support, UI tweaks, and platform enhancements. From this base, JetBrains builds its full editions (Ultimate) by layering proprietary features for enterprise frameworks and integrations. IntelliJ Community supports multiple JVM-based languages (Java, Kotlin, Scala, Groovy) and serves as a host for plugin ecosystems that add support for web, database, and cloud tooling. ...
    Downloads: 534 This Week
    Last Update:
    See Project
  • 12
    Eko

    Build Production-ready Agentic Workflow with Natural Language

    Eko (Eko Keeps Operating) is a JavaScript framework designed for building production-ready agent-based workflows using natural language commands. It allows developers to create automated agents that can handle complex workflows in both computer and browser environments. With a focus on high development efficiency, Eko simplifies the creation of multi-step workflows, enabling users to integrate and automate tasks across platforms. It provides a unified interface for managing agents, offering...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 13
    FastMCP

    The fast, Pythonic way to build Model Context Protocol servers

    FastMCP is a Pythonic framework designed to simplify the creation of MCP servers. It allows developers to build servers that provide context and tools to Large Language Models (LLMs) using clean and intuitive Python code, streamlining the integration process between AI models and external resources. A minimal server sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
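
    A minimal server sketch, assuming the fastmcp package is installed; it serves over the default transport, which is what most MCP-aware clients expect for local servers.

        # Hedged sketch: a FastMCP server exposing one tool to LLM clients.
        from fastmcp import FastMCP

        mcp = FastMCP("Demo Server")

        @mcp.tool()
        def add(a: int, b: int) -> int:
            """Add two numbers and return the sum."""
            return a + b

        if __name__ == "__main__":
            mcp.run()  # serve the tool over the default transport
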
  • 14
    AWS IoT Device SDK for Java

    Java SDK for connecting to AWS IoT from a device

    The AWS IoT Device SDK for Java enables Java developers to access the AWS IoT Platform through MQTT or MQTT over the WebSocket protocol. The SDK is built with AWS IoT device shadow support, providing access to thing shadows (sometimes referred to as device shadows) using shadow methods, including GET, UPDATE, and DELETE. It also supports a simplified shadow access model, which allows developers to exchange data with their shadows by just using getter and setter methods without having to serialize or deserialize any JSON documents. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    Mastra

    The TypeScript AI agent framework

    ...It integrates cleanly with React, Next.js, and Node-based backends, but can also run as a standalone server, giving teams flexibility in how they deploy their AI logic. At its core, Mastra provides abstractions for agents, workflows, tools, memory, retrieval, and model routing, so developers can focus on specifying behavior rather than wiring infrastructure from scratch. Model routing lets you connect to dozens of providers (OpenAI, Anthropic, Gemini, and others) through a single standardized interface, while agents orchestrate LLM calls and tools to solve open-ended tasks with internal reasoning loops. When explicit control is needed, Mastra’s workflow engine uses a graph-style API (.then(), .branch(), .parallel()) to orchestrate multi-step processes.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 16
    DALI

    A GPU-accelerated library containing highly optimized building blocks

    The NVIDIA Data Loading Library (DALI) is a library for data loading and pre-processing to accelerate deep learning applications. It provides a collection of highly optimized building blocks for loading and processing image, video and audio data. It can be used as a portable drop-in replacement for built-in data loaders and data iterators in popular deep learning frameworks (a short pipeline sketch follows this entry). Deep learning applications require complex, multi-stage data processing pipelines that include loading, decoding,...
    Downloads: 1 This Week
    Last Update:
    See Project
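
    A short pipeline sketch, assuming the nvidia-dali package for your CUDA version is installed and that "data/" holds images in the class-per-subdirectory layout the file reader expects; batch and resize settings are illustrative.

        from nvidia.dali import pipeline_def, fn

        @pipeline_def(batch_size=32, num_threads=4, device_id=0)
        def image_pipeline():
            jpegs, labels = fn.readers.file(file_root="data/", random_shuffle=True)
            images = fn.decoders.image(jpegs, device="mixed")   # read on CPU, decode on GPU
            images = fn.resize(images, resize_x=224, resize_y=224)
            return images, labels

        pipe = image_pipeline()
        pipe.build()
        images, labels = pipe.run()   # one preprocessed batch, ready to feed a framework iterator
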
  • 17
    CodeGeeX2

    CodeGeeX2: A More Powerful Multilingual Code Generation Model

    CodeGeeX2 is the second-generation multilingual code generation model from ZhipuAI, built upon the ChatGLM2-6B architecture and trained on 600B code tokens. Compared to the first generation, it delivers a significant boost in programming ability across multiple languages, outperforming even larger models like StarCoder-15B in some benchmarks despite having only 6B parameters. The model excels at code generation, translation, summarization, debugging, and comment generation, and it supports...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 18
    aisuite

    Simple, unified interface to multiple Generative AI providers

    Simple, unified interface to multiple Generative AI providers. aisuite makes it easy for developers to use multiple LLMs through a standardized interface. Using an interface similar to OpenAI’s, aisuite lets you interact with the most popular LLMs and compare their results. It is a thin wrapper around Python client libraries and allows creators to seamlessly swap out and test responses from different LLM providers without changing their code. A short provider-comparison sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
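
    A short provider-comparison sketch, assuming the aisuite package plus the relevant provider extras are installed and the corresponding API keys are set in the environment; the model ids are examples.

        import aisuite as ai

        client = ai.Client()
        messages = [
            {"role": "system", "content": "Answer in one sentence."},
            {"role": "user", "content": "What is retrieval-augmented generation?"},
        ]

        # Same call shape for every provider; only the "provider:model" string changes.
        for model in ["openai:gpt-4o-mini", "anthropic:claude-3-5-sonnet-20240620"]:
            response = client.chat.completions.create(model=model, messages=messages)
            print(model, "->", response.choices[0].message.content)
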
  • 19
    kotaemon

    An open-source RAG-based tool for chatting with your documents

    A clean and customizable open-source RAG UI for chatting with your documents, built with both end users and developers in mind. This project serves as a functional RAG UI for end users who want to do QA on their documents and for developers who want to build their own RAG pipeline.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 20
    jMonkeyEngine

    A complete 3-D game development suite written in Java

    jMonkeyEngine is a 3-D game engine for adventurous Java developers. It’s open-source, cross-platform, and cutting-edge. v3.6.1 is the latest stable version of the engine. The engine is used by several commercial game studios and computer-science courses.
    Downloads: 12 This Week
    Last Update:
    See Project
  • 21
    AWS Neuron

    Powering Amazon custom machine learning chips

    AWS Neuron is a software development kit (SDK) for running machine learning inference using AWS Inferentia chips. It consists of a compiler, runtime, and profiling tools that enable developers to run high-performance, low-latency inference on AWS Inferentia-based Amazon EC2 Inf1 instances. Using Neuron, developers can easily train their machine learning models in any popular framework such as TensorFlow, PyTorch, or MXNet, and run them optimally on Amazon EC2 Inf1 instances (a hedged compile-and-run sketch follows this entry). You can...
    Downloads: 0 This Week
    Last Update:
    See Project
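
    A hedged compile-and-run sketch for the Inf1 path the entry describes, assuming a PyTorch model and the torch-neuron package available on an Inf1 instance; module and function names differ for the newer torch-neuronx stack (Trn1/Inf2), so treat this as illustrative only.

        import torch
        import torch_neuron  # assumption: Inf1 / torch-neuron, which registers torch.neuron

        model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU()).eval()
        example = torch.randn(1, 128)

        # Compile the model for Inferentia; unsupported operators fall back to CPU.
        neuron_model = torch.neuron.trace(model, example_inputs=[example])
        neuron_model.save("model_neuron.pt")

        loaded = torch.jit.load("model_neuron.pt")  # served by the Neuron runtime on Inf1
        print(loaded(example).shape)
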
  • 22
    iText

    iText for Java represents the next level of SDKs for developers

    iText for Java represents the next level of SDKs for developers who want to take advantage of the benefits PDF can bring. Equipped with a better document engine, high and low-level programming capabilities and the ability to create, edit, and enhance PDF documents, iText can be a boon to nearly every workflow. iText Suite refers to the complete line of products comprising the open-source iText Core PDF library and its add-ons.
    Downloads: 46 This Week
    Last Update:
    See Project
  • 23
    LitterBox

    A secure sandbox environment for malware developers and red teamers

    LitterBox is a controlled malware-analysis and payload-testing sandbox aimed at red teams who need to validate evasions and behaviors before deployment. It provides an isolated environment to exercise payloads against modern detection stacks, verify signatures and heuristics, and observe runtime characteristics without leaking binaries to third-party vendors. The README frames typical use cases: testing evasion, validating detections, analyzing behavior, and keeping sensitive tooling...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    LLaMA 3

    The official Meta Llama 3 GitHub site

    This repository is the former home for Llama 3 model artifacts and getting-started code, covering pre-trained and instruction-tuned variants across multiple parameter sizes. It introduced the public packaging of weights, licenses, and quickstart examples that helped developers fine-tune or run the models locally and on common serving stacks. As the Llama stack evolved, Meta consolidated repositories and marked this one deprecated, pointing users to newer, centralized hubs for models,...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    CodeLlama

    Inference code for CodeLlama models

    Code Llama is a family of Llama-based code models optimized for programming tasks such as code generation, completion, and repair, with variants specialized for base coding, Python, and instruction following. The repo documents the sizes and capabilities (e.g., 7B, 13B, 34B) and highlights features like infilling and large input context to support real IDE workflows (a transformers-based loading sketch follows this entry). It targets both general software synthesis and language-specific productivity, offering strong performance among open models...
    Downloads: 0 This Week
    Last Update:
    See Project
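
    A transformers-based loading sketch: the repository ships its own inference code, but a common alternative is pulling a converted checkpoint through the Hugging Face transformers library, as below. The model id, prompt, and generation settings are illustrative, and the 7B weights still require substantial memory or a GPU.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "codellama/CodeLlama-7b-Python-hf"   # converted checkpoint on the Hugging Face Hub
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id)

        prompt = "def fibonacci(n):"
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=64)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))
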