Showing 624 open source projects for "you-get"

  • 1
    Norfair

    Lightweight Python library for adding real-time multi-object tracking

    Norfair is a customizable lightweight Python library for real-time multi-object tracking. Using Norfair, you can add tracking capabilities to any detector with just a few lines of code. Any detector expressing its detections as a series of (x, y) coordinates can be used with Norfair. This includes detectors performing tasks such as object or keypoint detection. It can easily be inserted into complex video processing pipelines to add tracking to existing projects.
    Downloads: 1 This Week
    Last Update:
    See Project
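As a rough illustration of the (x, y)-coordinate interface described above, here is a minimal Norfair sketch; the hard-coded detections stand in for a real detector's output, and referencing the distance by name assumes a recent Norfair release:

```python
import numpy as np
from norfair import Detection, Tracker

# Built-in distance referenced by name; threshold in pixels (illustrative).
tracker = Tracker(distance_function="euclidean", distance_threshold=30)

# Simulated detector output: one object drifting right over ten frames.
for t in range(10):
    detections = [Detection(points=np.array([[100 + 5 * t, 200]]))]
    tracked_objects = tracker.update(detections=detections)
    for obj in tracked_objects:
        print(t, obj.id, obj.estimate)  # stable id and estimated position
```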
  • 2
    Raster Vision

    Open source framework for deep learning satellite and aerial imagery

    ...The input to a Raster Vision pipeline is a set of images and training data, optionally with Areas of Interest (AOIs) that describe where the images are labeled. The output of a Raster Vision pipeline is a model bundle that allows you to easily utilize models in various deployment scenarios.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 3
    NVIDIA NeMo

    Toolkit for conversational AI

    NVIDIA NeMo, part of the NVIDIA AI platform, is a toolkit for building new state-of-the-art conversational AI models. NeMo has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models. Each collection consists of prebuilt modules that include everything needed to train on your data. Every module can easily be customized, extended, and composed to create new conversational AI model architectures. Conversational AI...
    Downloads: 1 This Week
    Last Update:
    See Project
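A minimal sketch of loading a pretrained model from NeMo's ASR collection; the model name comes from NVIDIA's pretrained catalog and the audio path is a placeholder:

```python
import nemo.collections.asr as nemo_asr

# Download a pretrained English CTC model and transcribe a local WAV file.
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(
    model_name="QuartzNet15x5Base-En"
)
transcripts = asr_model.transcribe(["path/to/audio.wav"])  # placeholder path
print(transcripts[0])
```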
  • 4
    Unstructured.IO

    Open source libraries and APIs to build custom preprocessing pipelines

    The unstructured library provides open-source components for ingesting and pre-processing images and text documents, such as PDFs, HTML, Word docs, and many more. The use cases of unstructured revolve around streamlining and optimizing the data processing workflow for LLMs. unstructured's modular bricks and connectors form a cohesive system that simplifies data ingestion and pre-processing, making it adaptable to different platforms and efficient at transforming unstructured data into...
    Downloads: 0 This Week
    Last Update:
    See Project
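A minimal sketch of the document-partitioning workflow described above, using unstructured's auto partitioner; the file name is a placeholder and format-specific extras (e.g. PDF support) may need to be installed:

```python
from unstructured.partition.auto import partition

# Detect the file type and split the document into typed elements.
elements = partition(filename="report.pdf")  # placeholder file
for element in elements[:5]:
    print(type(element).__name__, "-", str(element)[:80])
```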
  • 5
    GPTCache

    Semantic cache for LLMs. Fully integrated with LangChain

    ChatGPT and various large language models (LLMs) boast incredible versatility, enabling the development of a wide range of applications. However, as your application grows in popularity and encounters higher traffic levels, the expenses related to LLM API calls can become substantial. Additionally, LLM services might exhibit slow response times, especially when dealing with a significant number of requests. To tackle this challenge, we have created GPTCache, a project dedicated to building a...
    Downloads: 0 This Week
    Last Update:
    See Project
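A rough sketch of dropping GPTCache in front of OpenAI chat calls; it assumes the pre-1.0 openai client interface that GPTCache's adapter mirrors, and the API key is read from the environment:

```python
from gptcache import cache
from gptcache.adapter import openai  # drop-in replacement for the openai module

cache.init()            # exact-match, in-memory cache by default
cache.set_openai_key()  # reads OPENAI_API_KEY from the environment

# Repeated identical prompts are served from the cache instead of the API.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is semantic caching?"}],
)
```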
  • 6
    Rhino

    On-device Speech-to-Intent engine powered by deep learning

    ...Measure adoption, learn, and iterate. Continuously re-design and re-train to optimize engagement. Building accurate, responsive, and private voice technology is difficult. We learned the hard way, so you don’t have to. Picovoice heavily invests in R&D to offer superior voice AI that surpasses even Big Tech in accuracy and efficiency. Picovoice researchers do not merely follow recent frameworks and techniques; they build them.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 7
    Agently

    AI Agent Application Development Framework

    ...Enhance an AI agent using plugins instead of rebuilding a whole new agent. Agently is a development framework that helps developers build AI-agent-native applications really fast. You can use and build AI agents in your code in an extremely simple way.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    DB-GPT

    Revolutionizing Database Interactions with Private LLM Technology

    DB-GPT is an experimental open-source project that uses localized GPT large models to interact with your data and environment. With this solution, you can be assured that there is no risk of data leakage, and your data is 100% private and secure.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    talos

    Hyperparameter Optimization for TensorFlow, Keras and PyTorch

    ...Talos is made for data scientists and data engineers who want to remain in complete control of their TensorFlow (tf.keras) and PyTorch models, but are tired of mindless parameter hopping and confusing optimization solutions that add complexity instead of reducing it. Within minutes, without learning any new syntax, Talos allows you to configure, perform, and evaluate hyperparameter optimization experiments that yield state-of-the-art results across a wide range of prediction tasks. Talos provides the simplest yet most powerful method available for hyperparameter optimization with TensorFlow (tf.keras) and PyTorch.
    Downloads: 0 This Week
    Last Update:
    See Project
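A small sketch of a Talos scan over a tf.keras model; the synthetic data, parameter grid, and model function are illustrative:

```python
import numpy as np
import talos
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# Synthetic binary-classification data as a stand-in for a real dataset.
x = np.random.rand(200, 8)
y = np.random.randint(0, 2, (200, 1))

p = {"first_neuron": [16, 32], "batch_size": [16, 32], "epochs": [10]}

def build_model(x_train, y_train, x_val, y_val, params):
    model = Sequential([
        Dense(params["first_neuron"], activation="relu", input_dim=x_train.shape[1]),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    history = model.fit(x_train, y_train,
                        validation_data=(x_val, y_val),
                        batch_size=params["batch_size"],
                        epochs=params["epochs"], verbose=0)
    return history, model

scan = talos.Scan(x=x, y=y, params=p, model=build_model, experiment_name="demo")
```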
  • 10
    x-transformers

    A simple but complete full-attention transformer

    ...I have found that keeping the feedforwards and adding the memory key/values leads to even better performance. Proposes adding learned tokens, akin to CLS tokens, named memory tokens, that are passed through the attention layers alongside the input tokens. You can also use the l2-normalized embeddings proposed as part of fixnorm. I have found it leads to improved convergence when paired with small initialization (proposed by BlinkDL). The small initialization will be taken care of as long as l2norm_embed is set to True.
    Downloads: 0 This Week
    Last Update:
    See Project
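A short sketch showing the l2norm_embed and memory-token options mentioned above on a decoder-only model; the hyperparameters are illustrative:

```python
import torch
from x_transformers import Decoder, TransformerWrapper

model = TransformerWrapper(
    num_tokens=20000,
    max_seq_len=1024,
    l2norm_embed=True,      # fixnorm-style l2-normalized embeddings
    num_memory_tokens=8,    # learned memory tokens passed alongside the input
    attn_layers=Decoder(dim=512, depth=6, heads=8),
)

tokens = torch.randint(0, 20000, (1, 1024))
logits = model(tokens)      # shape (1, 1024, 20000)
```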
  • 11
    Z80-μLM

    Z80-μLM is a 2-bit quantized language model

    Z80-μLM is a retro-computing AI project that demonstrates a tiny language model (Z80-μLM) engineered to run on an 8-bit Z80 CPU by aggressively quantizing weights down to 2-bit precision. The repository provides a complete workflow where you train or fine-tune conversational models in Python, then export them into a format that can be executed on classic Z80 systems. A key deliverable is producing CP/M-compatible .COM binaries, enabling a genuinely vintage “chat with your computer” experience on real hardware or accurate emulators. The project sits at the intersection of machine learning and systems constraints, showing how model architecture, quantization, and inference code generation can be adapted to extreme memory and compute limits. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
    Anthropic's Original Performance

    Anthropic's original performance take-home, now open for you to try

    Anthropic's Original Performance repository contains the publicly released version of a performance challenge originally used by Anthropic as part of their technical interview process, offering developers the opportunity to optimize and benchmark low-level code against simulated models. The project sets up a baseline performance problem where participants work to reduce simulated “clock cycles” required to run a given workload, effectively challenging them to engineer faster code under...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    CLIP

    CLIP, Predict the most relevant text snippet given an image

    ...It was trained on large sets of (image, caption) pairs using a contrastive objective: images and their matching text are pulled together in embedding space, while mismatches are pushed apart. Once trained, you can give it any text labels and ask it to pick which label best matches a given image—even without explicit training for that classification task. The repository provides code for model architecture, preprocessing transforms, evaluation pipelines, and example inference scripts. Because it generalizes to arbitrary labels via text prompts, CLIP is a powerful tool for tasks that involve interpreting images in terms of descriptive language.
    Downloads: 0 This Week
    Last Update:
    See Project
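A minimal zero-shot classification sketch with the CLIP repository; the image path and the label set are placeholders:

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)  # placeholder image
labels = ["a photo of a dog", "a photo of a cat", "a diagram"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))  # probability of each label matching
```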
  • 14
    InstantCharacter

    Personalize Any Characters with a Scalable Diffusion Transformer

    ...Uses adapters, so full fine-tuning of the base model is not required. Demo scripts and a pipeline API (via infer_demo.py, pipeline.py) are included. It works by adapting a base image generation model with a lightweight adapter so that you can produce character-preserving generations in various downstream tasks (e.g. changing pose, clothing, scene) without needing full model fine-tuning. Works with the Hugging Face transformers/diffusers ecosystems.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    SWE-agent

    SWE-agent takes a GitHub issue and tries to automatically fix it

    SWE-agent turns LMs (e.g. GPT-4) into software engineering agents that can resolve issues in real GitHub repositories. On SWE-bench, SWE-agent resolves 12.47% of issues, achieving state-of-the-art performance on the full test set. We accomplish our results by designing simple LM-centric commands and feedback formats to make it easier for the LM to browse the repository, and view, edit, and execute code files. We call this an Agent-Computer Interface (ACI).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    snorkel

    A system for quickly generating training data with weak supervision

    ...The Snorkel project started at Stanford in 2016 with a simple technical bet: that it would increasingly be the training data, not the models, algorithms, or infrastructure, that decided whether a machine learning project succeeded or failed. Given this premise, we set out to explore the radical idea that you could bring mathematical and systems structure to the messy and often entirely manual process of training data creation and management, starting by empowering users to programmatically label, build, and manage training data. Snorkel Flow is an end-to-end machine learning platform for developing and deploying AI applications. Snorkel Flow incorporates many of the concepts of the Snorkel project with a range of newer techniques around weak supervision modeling, data augmentation, multi-task learning, data slicing and structuring.
    Downloads: 0 This Week
    Last Update:
    See Project
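A small sketch of the programmatic labeling idea: heuristic labeling functions are applied to unlabeled data and combined by a label model into probabilistic training labels. The tiny DataFrame and heuristics are illustrative:

```python
import pandas as pd
from snorkel.labeling import PandasLFApplier, labeling_function
from snorkel.labeling.model import LabelModel

SPAM, HAM, ABSTAIN = 1, 0, -1

@labeling_function()
def lf_contains_link(x):
    return SPAM if "http" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    return HAM if len(x.text.split()) < 5 else ABSTAIN

df = pd.DataFrame({"text": [
    "check out http://spam.example now",
    "thanks, see you tomorrow",
    "win a prize today at http://x.example",
]})

applier = PandasLFApplier([lf_contains_link, lf_short_message])
L_train = applier.apply(df)                      # label matrix, one column per LF

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=100)
probs = label_model.predict_proba(L_train)       # probabilistic training labels
```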
  • 17
    Stable Baselines3

    PyTorch version of Stable Baselines

    Stable Baselines3 (SB3) is a set of reliable implementations of reinforcement learning algorithms in PyTorch. It is the next major version of Stable Baselines. You can read a detailed presentation of Stable Baselines3 in the v1.0 blog post or our JMLR paper. These algorithms will make it easier for the research community and industry to replicate, refine, and identify new ideas, and will create good baselines to build projects on top of. We expect these tools will be used as a base around which new ideas can be added, and as a tool for comparing a new approach against existing ones. ...
    Downloads: 0 This Week
    Last Update:
    See Project
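A minimal sketch of training and running a PPO agent with Stable Baselines3; it assumes a recent SB3 release that uses Gymnasium environments, and the environment and step budget are illustrative:

```python
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)

# Roll out the trained policy for a few steps.
obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```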
  • 18
    PyCaret

    An open-source, low-code machine learning library in Python

    PyCaret is an open-source, low-code machine learning library in Python that automates machine learning workflows. It is an end-to-end machine learning and model management tool that speeds up the experiment cycle exponentially and makes you more productive. In comparison with other open-source machine learning libraries, PyCaret is an alternative low-code library that can be used to replace hundreds of lines of code with only a few lines. This makes experiments exponentially faster and more efficient. PyCaret is essentially a Python wrapper around several machine learning libraries and frameworks such as scikit-learn, XGBoost, LightGBM, CatBoost, Optuna, Hyperopt, Ray, and a few more. ...
    Downloads: 0 This Week
    Last Update:
    See Project
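A low-code sketch with PyCaret's classification module; the bundled "juice" dataset and the session id are illustrative:

```python
from pycaret.classification import compare_models, predict_model, setup
from pycaret.datasets import get_data

data = get_data("juice")                      # sample dataset shipped with PyCaret
exp = setup(data, target="Purchase", session_id=123)
best = compare_models()                       # trains and ranks many candidate models
holdout_scores = predict_model(best)          # scores the held-out split
```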
  • 19
    Mosec

    A high-performance ML model serving framework, offers dynamic batching

    Mosec is a high-performance and flexible model-serving framework for building ML-model-enabled backends and microservices. It bridges the gap between the machine learning models you have just trained and an efficient online service API.
    Downloads: 0 This Week
    Last Update:
    See Project
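A rough sketch of a Mosec worker exposing a model behind an HTTP endpoint; the scoring logic is a placeholder for a real model call:

```python
from mosec import Server, Worker

class Inference(Worker):
    # With the default JSON (de)serialization, `data` arrives as a dict.
    def forward(self, data: dict) -> dict:
        text = str(data.get("text", ""))
        return {"score": len(text)}  # placeholder for a real model prediction

if __name__ == "__main__":
    server = Server()
    server.append_worker(Inference, num=2)  # two worker processes
    server.run()
```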
  • 20
    Ludwig AI

    Low-code framework for building custom LLMs, neural networks

    Declarative deep learning framework built for scale and efficiency. Ludwig is a low-code framework for building custom AI models like LLMs and other deep neural networks. A declarative YAML configuration file is all you need to train a state-of-the-art LLM on your data. Support for multi-task and multi-modality learning. Comprehensive config validation detects invalid parameter combinations and prevents runtime failures. Automatic batch size selection, distributed training (DDP, DeepSpeed), parameter-efficient fine-tuning (PEFT), 4-bit quantization (QLoRA), and larger-than-memory datasets. ...
    Downloads: 0 This Week
    Last Update:
    See Project
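A small sketch of the declarative workflow: the config names the input and output columns, and Ludwig handles the rest. It assumes a recent Ludwig release; the column names and CSV path are placeholders:

```python
from ludwig.api import LudwigModel

config = {
    "input_features": [{"name": "review_text", "type": "text"}],
    "output_features": [{"name": "sentiment", "type": "category"}],
    "trainer": {"epochs": 3},
}

model = LudwigModel(config)
train_stats, _, output_dir = model.train(dataset="reviews.csv")   # placeholder CSV
predictions, _ = model.predict(dataset="reviews.csv")
```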
  • 21
    Python Client For NLP Cloud

    NLP Cloud serves high performance pre-trained or custom models for NER

    NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, dialogue summarization, paraphrasing, intent classification, product description and ad generation, chatbot, grammar and spelling correction, keywords and keyphrases extraction, text generation, image generation, blog post generation, source code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API. You can either use the NLP Cloud pre-trained models, fine-tune your own models, or deploy your own models.
    Downloads: 0 This Week
    Last Update:
    See Project
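A minimal sketch of the Python client for one of the listed tasks (NER); the API token is a placeholder and the model name is one of the documented pre-trained options:

```python
import nlpcloud

client = nlpcloud.Client("en_core_web_lg", "<your_api_token>")  # placeholder token
result = client.entities("John Doe has worked for Microsoft in Seattle since 1999.")
print(result["entities"])
```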
  • 22
    CTGAN

    Conditional GAN for generating synthetic tabular data

    ...If you're just getting started with synthetic data, we recommend installing the SDV library which provides user-friendly APIs for accessing CTGAN. The SDV library provides wrappers for preprocessing your data as well as additional usability features like constraints. When using the CTGAN library directly, you may need to manually preprocess your data into the correct format, for example, continuous data must be represented as floats. Discrete data must be represented as ints or strings. The data should not contain any missing values.
    Downloads: 0 This Week
    Last Update:
    See Project
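A short sketch of using the CTGAN library directly, following the formatting rules above (continuous columns as floats, discrete columns as strings, no missing values); the synthetic DataFrame and the small epoch count are illustrative, and it assumes a release that exports the CTGAN class:

```python
import numpy as np
import pandas as pd
from ctgan import CTGAN

# Continuous column as floats, discrete column as strings, no missing values.
real_data = pd.DataFrame({
    "age": np.random.normal(40, 10, size=500).round(1),
    "job": np.random.choice(["nurse", "engineer", "teacher"], size=500),
})

ctgan = CTGAN(epochs=5)
ctgan.fit(real_data, discrete_columns=["job"])
synthetic = ctgan.sample(100)   # 100 synthetic rows with the same schema
```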
  • 23
    Determined

    Determined, deep learning training platform

    ...Easily share on-premise or cloud GPUs with your team. Determined’s cluster scheduling offers first-class support for deep learning and seamless spot instance support. Check out examples of how you can use Determined to train popular deep learning models at scale.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    PyTorch Ignite

    Library to help with training and evaluating neural networks

    High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently. Less code than pure PyTorch while ensuring maximum control and simplicity. A library approach with no inversion of your program's control flow. Use Ignite where and when you need it. Extensible API for metrics, experiment managers, and other components. The cool thing about handlers is that they offer unparalleled flexibility (compared to, for example, callbacks). Handlers can be any function: e.g. a lambda, a simple function, a class method, etc. Thus, you are not required to inherit from an interface and override its abstract methods, which could unnecessarily bulk up your code and add complexity. ...
    Downloads: 0 This Week
    Last Update:
    See Project
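A compact sketch of the Engine/handler pattern described above; the tiny model, data, and logging handler are placeholders:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from ignite.engine import Engine, Events

# Tiny placeholder model and data so the sketch is self-contained.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
loader = DataLoader(TensorDataset(torch.randn(64, 4), torch.randint(0, 2, (64,))),
                    batch_size=8)

def train_step(engine, batch):
    model.train()
    optimizer.zero_grad()
    x, y = batch
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

trainer = Engine(train_step)

# Handlers are plain callables attached to events; no interface to inherit.
@trainer.on(Events.EPOCH_COMPLETED)
def log_epoch(engine):
    print(f"epoch {engine.state.epoch}: last batch loss {engine.state.output:.4f}")

trainer.run(loader, max_epochs=5)
```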
  • 25
    SSD in PyTorch 1.0

    High quality, fast, modular reference implementation of SSD in PyTorch

    ...The implementation is heavily influenced by the projects ssd.pytorch, pytorch-ssd and maskrcnn-benchmark. This repository aims to be the code base for research based on SSD. Multi-GPU training and inference: we use DistributedDataParallel, so you can train or test with an arbitrary number of GPUs, and the training schema will change accordingly. Add your own modules without pain. We abstract the backbone, Detector, BoxHead, BoxPredictor, etc. You can replace every component with your own code without changing the code base. For example, to add EfficientNet as the backbone, just add efficient_net.py (already added), register it, specify it in the config file, and it's done! ...
    Downloads: 0 This Week
    Last Update:
    See Project