Search Results for "artificial intelligence java source code" - Page 2

1127 projects for "artificial intelligence java source code" with 1 filter applied:

  • 1
    LLaMA 3

    The official Meta Llama 3 GitHub site

    This repository is the former home for Llama 3 model artifacts and getting-started code, covering pre-trained and instruction-tuned variants across multiple parameter sizes. It introduced the public packaging of weights, licenses, and quickstart examples that helped developers fine-tune or run the models locally and on common serving stacks. As the Llama stack evolved, Meta consolidated repositories and marked this one deprecated, pointing users to newer, centralized hubs for models,...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 2
    HunyuanImage-3.0

    A Powerful Native Multimodal Model for Image Generation

    HunyuanImage-3.0 is a powerful, native multimodal text-to-image generation model released by Tencent’s Hunyuan team. It unifies multimodal understanding and generation in a single autoregressive framework, combining text and image modalities seamlessly rather than relying on separate image-only diffusion components. It uses a Mixture-of-Experts (MoE) architecture with many expert subnetworks to scale efficiently, deploying only a subset of experts per token, which allows large parameter...
    Downloads: 11 This Week
    Last Update:
    See Project
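The Mixture-of-Experts routing mentioned in the HunyuanImage-3.0 entry above can be pictured with a short, generic sketch: a router scores each token and only the top-k experts actually run for it. This illustrates the general technique, not HunyuanImage-3.0's implementation; all module sizes below are made up.

```python
# Generic illustration of top-k Mixture-of-Experts routing (not HunyuanImage-3.0 code).
import torch
import torch.nn as nn

d_model, n_experts, top_k = 64, 8, 2
experts = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(n_experts)])
router = nn.Linear(d_model, n_experts)

tokens = torch.randn(10, d_model)                      # 10 tokens in, one vector each
weights, idx = router(tokens).softmax(-1).topk(top_k)  # route each token to its top-2 experts

out = torch.zeros_like(tokens)
for t in range(tokens.size(0)):
    for w, e in zip(weights[t], idx[t]):
        out[t] += w * experts[int(e)](tokens[t])       # only the chosen experts run for this token
```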
  • 3
    Laravel Boost

    Laravel-focused MCP server for augmenting AI-powered local development

    Boost is a Laravel-first toolkit that supercharges AI-assisted development by giving assistants structured, Laravel-aware context. At its core it runs as an MCP server that exposes a battery of Laravel-specific tools, so an AI agent can explore your app, inspect code and config, and take targeted actions instead of guessing. It ships opinionated, composable guidelines tuned for popular Laravel packages, which helps keep generated code idiomatic and consistent with framework norms. The...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    VibeSDK

    Open source full-stack AI webapp generator

    VibeSDK is an open source “vibe coding” platform built by Cloudflare. It provides a full-stack reference implementation of an AI-driven system: users describe the application they want in natural language, and the system generates, previews, and deploys the resulting web app. Built on Cloudflare’s infrastructure (Workers, Containers, sandboxes), it can run untrusted code safely, provide live previews, and deploy apps at scale. VibeSDK gives you the exact methodology,...
    Downloads: 6 This Week
    Last Update:
    See Project
  • 5
    handson-ml3

    Fundamentals of Machine Learning and Deep Learning

    handson-ml3 contains the Jupyter notebooks and code for the third edition of the book Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow. It guides readers through modern machine learning and deep learning workflows using Python, with examples spanning data preparation, supervised and unsupervised learning, deep neural networks, RL, and production-ready model deployment. The third edition updates the content for TensorFlow 2 and Keras, introduces new chapters (for example on...
    Downloads: 2 This Week
    Last Update:
    See Project
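As an illustration of the kind of workflow the handson-ml3 notebooks walk through, here is a generic scikit-learn example (not a notebook from the repo): a preprocessing-plus-model pipeline evaluated with cross-validation on a built-in dataset.

```python
# Illustrative scikit-learn workflow in the spirit of the book's early chapters.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())  # mean accuracy across the 5 folds
```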
  • 6
    DeepSeek-V3.2-Exp

    An experimental version of DeepSeek model

    DeepSeek-V3.2-Exp is an experimental release of the DeepSeek model family, intended as a stepping stone toward the next generation architecture. The key innovation in this version is DeepSeek Sparse Attention (DSA), a sparse attention mechanism that aims to optimize training and inference efficiency in long-context settings without degrading output quality. According to the authors, they aligned the training setup of V3.2-Exp with V3.1-Terminus so that benchmark results remain largely...
    Downloads: 28 This Week
    Last Update:
    See Project
  • 7
    Hiera

    A fast, powerful, and simple hierarchical vision transformer

    Hiera is a hierarchical vision transformer designed to be fast, simple, and strong across image and video recognition tasks. The core idea is to use straightforward hierarchical attention with a minimal set of architectural “bells and whistles,” achieving competitive or superior accuracy while being markedly faster at inference and often faster to train. The repository provides installation options (from source or Torch Hub), a model zoo with pre-trained checkpoints, and code for evaluation...
    Downloads: 0 This Week
    Last Update:
    See Project
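A hedged sketch of the Torch Hub path mentioned in the Hiera entry above; the entry-point name and checkpoint tag are assumptions about the repo's naming scheme, and the image file is a placeholder.

```python
# Hedged sketch: loading a Hiera checkpoint via Torch Hub and classifying one image.
import torch
from PIL import Image
from torchvision import transforms

model = torch.hub.load(
    "facebookresearch/hiera",
    model="hiera_base_224",            # assumed entry-point name
    pretrained=True,
    checkpoint="mae_in1k_ft_in1k",     # assumed checkpoint tag
)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # hypothetical local image
with torch.no_grad():
    logits = model(image)
print(logits.argmax(dim=-1))  # predicted class index
```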
  • 8
    Generative AI Swift

    This SDK is now deprecated; use the unified Firebase SDK instead

    deprecated-generative-ai-swift is a Swift client and example scaffold for building generative AI apps using the Gemini models. Although marked “deprecated”, the repo demonstrates how to integrate Gemini inference into iOS and macOS apps via Swift APIs, providing boilerplate for prompt dispatching, streaming responses, UI integration, and error handling. It includes a sample app that showcases a chat interface, where users send messages and receive responses streamed in real time, with UI...
    Downloads: 5 This Week
    Last Update:
    See Project
  • 9
    Bifrost

    The Fastest LLM Gateway with built-in OTel observability

    Bifrost is an LLM gateway designed to provide a unified OpenAI-compatible API front for many different model providers. It abstracts away the complexity of working directly with multiple backend providers (OpenAI, Anthropic, AWS Bedrock, Google Vertex, etc.), enabling you to plug in providers and switch between them without touching your client code. It is built to be high performance: in benchmark tests at 5,000 requests per second, it reportedly adds only microseconds of overhead and...
    Downloads: 1 This Week
    Last Update:
    See Project
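Because Bifrost presents an OpenAI-compatible API, an existing OpenAI SDK client can, in principle, be pointed at the gateway by changing only the base URL. A minimal sketch follows; the localhost address, the "/v1" path, and the model route are assumptions for illustration, not defaults confirmed by this listing.

```python
# Hedged sketch: reusing an OpenAI SDK client against an OpenAI-compatible gateway.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed gateway address
    api_key="placeholder",                # auth depends on your deployment
)

response = client.chat.completions.create(
    model="anthropic/claude-3-5-sonnet",  # hypothetical provider/model route
    messages=[{"role": "user", "content": "Summarize what an LLM gateway does."}],
)
print(response.choices[0].message.content)
```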
  • 10
    LLM Datasets

    Curated list of datasets and tools for post-training

    LLM Datasets curates and standardizes datasets commonly used to train and fine-tune large language models, reducing the overhead of hunting down sources and normalizing formats. The repository aims to make datasets easy to inspect and transform, with scripts for downloading, deduping, cleaning, and converting to formats like JSONL that slot into training pipelines. It highlights instruction-tuning and conversation-style corpora while also pointing to code, math, or domain-specific sets for...
    Downloads: 4 This Week
    Last Update:
    See Project
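To make the JSONL target format mentioned above concrete, here is an illustrative conversion snippet (not one of the repo's own scripts); the "instruction"/"output" field names stand in for whatever a source dataset actually uses.

```python
# Illustrative sketch: normalizing instruction/response pairs into JSONL for fine-tuning.
import json

records = [
    {"instruction": "Translate 'bonjour' to English.", "output": "Hello."},
    {"instruction": "What is 2 + 2?", "output": "4"},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        # One JSON object per line is the JSONL convention.
        f.write(json.dumps({
            "messages": [
                {"role": "user", "content": rec["instruction"]},
                {"role": "assistant", "content": rec["output"]},
            ]
        }, ensure_ascii=False) + "\n")
```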
  • 11
    verl

    Volcano Engine Reinforcement Learning for LLMs

    VERL is a reinforcement-learning–oriented toolkit designed to train and align modern AI systems, from language models to decision-making agents. It brings together supervised fine-tuning, preference modeling, and online RL into one coherent training stack so teams can move from raw data to aligned policies with minimal glue code. The library focuses on scalability and efficiency, offering distributed training loops, mixed precision, and replay/buffering utilities that keep accelerators busy....
    Downloads: 2 This Week
    Last Update:
    See Project
  • 12
    mcpo

    A simple, secure MCP-to-OpenAPI proxy server

    mcpo is a minimal bridge that exposes any MCP tool as an OpenAPI-compatible HTTP server. Instead of writing glue code, you point mcpo at an MCP server command and it generates REST endpoints and an OpenAPI spec that other systems (or LLM agent frameworks) can call immediately. This design lets you reuse a growing library of MCP servers with platforms that only understand HTTP+OpenAPI, unifying tool access across ecosystems. The project emphasizes “dead-simple” setup and pairs with Open WebUI...
    Downloads: 0 This Week
    Last Update:
    See Project
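Once mcpo wraps an MCP server, its tools should be callable as plain HTTP endpoints described by an OpenAPI spec. A hedged client sketch follows; the port and the get_current_time route are assumptions for illustration, since actual routes come from the wrapped server's tool names.

```python
# Hedged sketch: calling tools that mcpo has exposed as REST endpoints.
import requests

BASE = "http://localhost:8000"  # assumed mcpo address

# Inspect the auto-generated OpenAPI spec.
spec = requests.get(f"{BASE}/openapi.json").json()
print(sorted(spec["paths"].keys()))

# Call one exposed tool as an ordinary HTTP endpoint (route name assumed).
resp = requests.post(f"{BASE}/get_current_time", json={"timezone": "UTC"})
print(resp.json())
```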
  • 13
    CutLER

    Code release for Cut and Learn for Unsupervised Object Detection

    CutLER is an approach for unsupervised object detection and instance segmentation that trains detectors without human-annotated labels, and the repo also includes VideoCutLER for unsupervised video instance segmentation. The method follows a “Cut-and-LEaRn” recipe: bootstrap object proposals, refine them iteratively, and train detection/segmentation heads to discover objects across diverse datasets. The codebase provides training and inference scripts, model configs, and references to...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    CLIP

    CLIP: predict the most relevant text snippet given an image

    CLIP (Contrastive Language-Image Pretraining) is a neural model that links images and text in a shared embedding space, allowing zero-shot image classification, similarity search, and multimodal alignment. It was trained on large sets of (image, caption) pairs using a contrastive objective: images and their matching text are pulled together in embedding space, while mismatches are pushed apart. Once trained, you can give it any text labels and ask it to pick which label best matches a given...
    Downloads: 0 This Week
    Last Update:
    See Project
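A minimal zero-shot classification sketch along the lines of the repo's documented usage; the image file and candidate labels are placeholders.

```python
# Zero-shot classification with CLIP: score an image against free-form text labels.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("cat.jpg")).unsqueeze(0).to(device)
texts = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

with torch.no_grad():
    # Joint forward pass returns image-text similarity logits.
    logits_per_image, _ = model(image, texts)
    probs = logits_per_image.softmax(dim=-1)

print(probs)  # higher probability for the label that best matches the image
```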
  • 15
    Llama Coder

    Open source Claude Artifacts – built with Llama 3.1 405B

    Llama Coder is an open-source tool that lets you generate small applications (often React or web apps) from a single natural-language prompt using the Llama 3 family of models. It’s framed as an open-source “Claude Artifacts”-style experience: you describe the app you want, the tool calls an LLM hosted on Together.ai, and you get back a runnable code artifact. The project includes a web interface where you can enter prompts, see generated code, and run or tweak the result directly in the...
    Downloads: 16 This Week
    Last Update:
    See Project
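The underlying call pattern, asking a Together-hosted Llama model to emit an app from a single prompt, can be sketched as below. This is not Llama Coder's own code; the model ID and prompts are assumptions for illustration.

```python
# Hedged sketch of prompting a Together-hosted Llama model for a code artifact.
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo",  # assumed model ID
    messages=[
        {"role": "system", "content": "Return a single self-contained React component."},
        {"role": "user", "content": "Build a pomodoro timer with start/pause/reset."},
    ],
)
print(response.choices[0].message.content)  # generated code artifact
```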
  • 16
    Generative AI

    Sample code and notebooks for Generative AI on Google Cloud

    Generative AI is a comprehensive collection of code samples, notebooks, and demo applications designed to help developers build generative-AI workflows on the Vertex AI platform. It spans multiple modalities—text, image, audio, search (RAG/grounding) and more—showing how to integrate foundation models like the Gemini family into cloud projects. The README emphasises getting started with prompts, datasets, environments and sample apps, making it ideal for both experimentation and...
    Downloads: 0 This Week
    Last Update:
    See Project
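A hedged sketch of the kind of Vertex AI call the repo's samples revolve around: sending a text prompt to a Gemini model. The project ID, region, and model name are placeholders rather than values from this listing.

```python
# Hedged sketch: basic text generation with a Gemini model on Vertex AI.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholders

model = GenerativeModel("gemini-1.5-flash")  # assumed model name
response = model.generate_content("Write a haiku about retrieval-augmented generation.")
print(response.text)
```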
  • 17
    Unla

    Gateway service that instantly transforms existing MCP Servers

    Unla is a lightweight, highly available MCP gateway written in Go that turns existing MCP servers or ordinary HTTP APIs into MCP-compliant services through configuration, not code changes. Its goal is to let teams “wire up” tools they already run—internal REST endpoints, third-party APIs, or local MCP servers—and present a single, reliable MCP interface to clients like Claude Desktop, Cursor, and IDEs. The gateway focuses on operational concerns you’d expect in production: multi-instance...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 18
    Granite 3.0 Language Models

    A new set of lightweight, state-of-the-art open foundation models

    This repository introduces Granite 3.0 language models as lightweight, state-of-the-art open foundation models built to natively support multilinguality, coding, reasoning, and tool usage. A central goal is efficient deployment, including the potential to run on constrained compute resources while remaining useful for a broad span of enterprise tasks. The repo positions the models for both research and commercial use under an Apache-2.0 license, signaling permissive adoption paths....
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    PPTAgent

    PPTAgent: Generating and Evaluating Presentations

    PPTAgent is a research system for generating and evaluating slide decks that goes beyond simple text-to-slides. It follows a two-stage, edit-based workflow: first it analyzes reference presentations to infer slide roles and structure, then it drafts an outline and iteratively performs editing actions to produce new slides. The project includes both the generation agent and an evaluation framework, PPTEval, to score content quality, design, and coherence. The repository highlights the EMNLP...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 20
    GPT-2

    Code for the paper Language Models are Unsupervised Multitask Learners

    This repository contains the code and model weights for GPT-2, a large-scale unsupervised language model described in the OpenAI paper “Language Models are Unsupervised Multitask Learners.” The intent is to provide a starting point for researchers and engineers to experiment with GPT-2: generate text, fine-tune on custom datasets, explore model behavior, or study its internal phenomena. The repository includes scripts for sampling, training, downloading pre-trained models, and utilities for...
    Downloads: 3 This Week
    Last Update:
    See Project
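The repository's own sampling scripts are TensorFlow-based; as a quick illustration of the text-generation workflow it enables, here is a sketch that uses the Hugging Face Transformers port of the released GPT-2 weights instead.

```python
# Sampling from GPT-2 via the Hugging Face Transformers port (not the repo's TF scripts).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Language models are unsupervised multitask learners because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_k=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```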
  • 21
    Gemma in PyTorch

    The official PyTorch implementation of Google's Gemma models

    gemma_pytorch provides the official PyTorch reference for running and fine-tuning Google’s Gemma family of open models. It includes model definitions, configuration files, and loading utilities for multiple parameter scales, enabling quick evaluation and downstream adaptation. The repository demonstrates text generation pipelines, tokenizer setup, quantization paths, and adapters for low-rank or parameter-efficient fine-tuning. Example notebooks walk through instruction tuning and evaluation...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    4M

    4M: Massively Multimodal Masked Modeling

    4M is a training framework for “any-to-any” vision foundation models that uses tokenization and masking to scale across many modalities and tasks. The same model family can classify, segment, detect, caption, and even generate images, with a single interface for both discriminative and generative use. The repository releases code and models for multiple variants (e.g., 4M-7 and 4M-21), emphasizing transfer to unseen tasks and modalities. Training/inference configs and issues discuss things...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    MGIE

    Guiding Instruction-based Image Editing via Multimodal Large Language Models

    MGIE—Guiding Instruction-based Image Editing—demonstrates how a multimodal LLM can parse natural-language editing instructions and then drive image transformations accordingly. The project focuses on making edits explainable and controllable: the model interprets text guidance, reasons over image content, and outputs edits aligned with user intent. It’s positioned as an ICLR 2024 Spotlight work, with code and references that show how to connect language planning to concrete image operations....
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    JEPA

    PyTorch code and models for V-JEPA self-supervised learning from video

    JEPA (Joint-Embedding Predictive Architecture) captures the idea of predicting missing high-level representations rather than reconstructing pixels, aiming for robust, scalable self-supervised learning. A context encoder ingests visible regions and predicts target embeddings for masked regions produced by a separate target encoder, avoiding low-level reconstruction losses that can overfit to texture. This makes learning focus on semantics and structure, yielding features that transfer well...
    Downloads: 0 This Week
    Last Update:
    See Project
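The predict-embeddings-not-pixels idea described in the JEPA entry above can be sketched conceptually: a context encoder sees only visible tokens, and a predictor regresses the embeddings a gradient-free target encoder produces for the masked ones. This is illustrative PyTorch, not code from the repo; shapes and module choices are assumptions.

```python
# Conceptual sketch of the JEPA objective: predict target-encoder embeddings for masked tokens.
import torch
import torch.nn as nn

dim, num_tokens, batch = 256, 196, 8
context_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2)
target_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2)
predictor = nn.Linear(dim, dim)

tokens = torch.randn(batch, num_tokens, dim)            # patch embeddings of a clip
mask = torch.rand(batch, num_tokens) < 0.5               # True = hidden from the context encoder

with torch.no_grad():                                     # target encoder gets no gradients
    targets = target_encoder(tokens)

visible = tokens.masked_fill(mask.unsqueeze(-1), 0.0)     # crude stand-in for dropping masked patches
predicted = predictor(context_encoder(visible))

# Loss is computed only over masked positions, in embedding space (no pixel reconstruction).
loss = ((predicted - targets) ** 2)[mask].mean()
loss.backward()
```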
  • 25
    DINOv2

    PyTorch code and models for the DINOv2 self-supervised learning method

    DINOv2 is a self-supervised vision learning framework that produces strong, general-purpose image representations without using human labels. It builds on the DINO idea of student–teacher distillation and adapts it to modern Vision Transformer backbones with a carefully tuned recipe for data augmentation, optimization, and multi-crop training. The core promise is that a single pretrained backbone can transfer well to many downstream tasks—from linear probing on classification to retrieval,...
    Downloads: 0 This Week
    Last Update:
    See Project
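A short sketch of pulling the pretrained backbone via Torch Hub, as the DINOv2 entry above suggests, and extracting a global image embedding for retrieval or linear probing; the image file name is a placeholder, and dinov2_vits14 is the small ViT-S/14 variant.

```python
# Loading a pretrained DINOv2 backbone via Torch Hub and extracting a global embedding.
import torch
from PIL import Image
from torchvision import transforms

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),            # 224 is divisible by the 14-pixel patch size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("query.jpg")).unsqueeze(0)
with torch.no_grad():
    embedding = model(image)               # global (CLS-token) feature vector
print(embedding.shape)                     # e.g., torch.Size([1, 384]) for ViT-S/14
```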