Showing 131 open source projects for "linux memory"

  • 1
    whisper.cpp

    Port of OpenAI's Whisper model in C/C++

    High-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model. Supported platforms: macOS (Intel and Arm), iOS, Android, Linux / FreeBSD, WebAssembly, Windows (MSVC and MinGW), Raspberry Pi.
    Downloads: 17 This Week
    Last Update:
    See Project
  • 2
    PyGPT

    Open source personal AI Assistant for Linux, Windows and Mac

    PyGPT is a desktop application that lets you talk to OpenAI's LLMs such as GPT-4 and GPT-3 using your own computer and the OpenAI API. It supports both chat mode and completion mode, and can generate images using DALL-E 2. PyGPT also gives GPT access to the Internet via the Google Custom Search API and the Wikipedia API, and includes voice synthesis using the Microsoft Azure Text-to-Speech API. Moreover, the application has implemented context memory support, context storage, history...
    Downloads: 12 This Week
    Last Update:
    See Project
  • 3
    TensorRT

    C++ library for high performance inference on NVIDIA GPUs

    NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. TensorRT-based applications perform up to 40X faster than CPU-only platforms during inference. With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers,...
    Downloads: 14 This Week
    Last Update:
    See Project
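    A minimal, hedged sketch of the workflow the description outlines, assuming the TensorRT 8.x Python API: parse an ONNX model, enable FP16 precision, and serialize an engine. The file names are placeholders, not taken from this listing.

    ```python
    # Sketch only: build an FP16 TensorRT engine from an ONNX model.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open("model.onnx", "rb") as f:          # placeholder path
        if not parser.parse(f.read()):
            raise RuntimeError("Failed to parse ONNX model")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)        # lower-precision inference

    serialized_engine = builder.build_serialized_network(network, config)
    with open("model.engine", "wb") as f:
        f.write(serialized_engine)
    ```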
  • 4
    VLLM

    A high-throughput and memory-efficient inference and serving engine

    vLLM is a fast and easy-to-use library for LLM inference and serving. High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more.
    Downloads: 5 This Week
    Last Update:
    See Project
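    A minimal sketch of offline inference with vLLM's Python API; the model id and sampling settings are illustrative assumptions, not taken from this listing.

    ```python
    # Sketch only: generate two parallel samples per prompt with vLLM.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")                     # any Hugging Face causal LM id
    params = SamplingParams(n=2, temperature=0.8, max_tokens=64)

    outputs = llm.generate(["The capital of France is"], params)
    for out in outputs:
        for candidate in out.outputs:
            print(candidate.text)
    ```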
  • 5
    ncnn

    High-performance neural network inference framework for mobile

    ncnn is a high-performance neural network inference computing framework designed specifically for mobile platforms. It brings artificial intelligence right to your fingertips with no third-party dependencies, and runs faster than all other known open source frameworks on mobile phone CPUs. ncnn lets developers easily deploy deep learning models to mobile platforms and create intelligent apps. It is cross-platform and supports most commonly used CNN networks, including...
    Downloads: 6 This Week
    Last Update:
    See Project
  • 6
    Flowise

    Drag & drop UI to build your customized LLM flow

    Open source visual UI tool to build your customized LLM flow using LangchainJS, written in Node.js TypeScript/JavaScript. Conversational agent for a chat model which utilizes chat-specific prompts and buffer memory. Open source is the core of Flowise, and it will always be free for commercial and personal usage. Flowise supports different environment variables to configure your instance. You can specify the following variables in the .env file inside the packages/server folder.
    Downloads: 6 This Week
    Last Update:
    See Project
  • 7

    LightGBM

    Gradient boosting framework based on decision tree algorithms

    LightGBM, or Light Gradient Boosting Machine, is a high-performance, open source gradient boosting framework based on decision tree algorithms. Compared to other boosting frameworks, LightGBM offers several advantages in terms of speed, efficiency, and accuracy. Parallel experiments have shown that LightGBM can attain linear speed-up across multiple machines for training in specific settings, all while consuming less memory. LightGBM supports parallel and GPU learning, and can handle large...
    Downloads: 4 This Week
    Last Update:
    See Project
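    A minimal sketch of training a binary classifier with the LightGBM Python API; the synthetic data and parameter values are illustrative assumptions.

    ```python
    # Sketch only: train a small binary classifier with LightGBM.
    import numpy as np
    import lightgbm as lgb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))
    y = (X[:, 0] + rng.normal(scale=0.1, size=1000) > 0).astype(int)

    train_set = lgb.Dataset(X, label=y)
    params = {"objective": "binary", "num_leaves": 31, "learning_rate": 0.1}

    booster = lgb.train(params, train_set, num_boost_round=50)
    print(booster.predict(X[:5]))   # probabilities for the positive class
    ```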
  • 8
    SuperAGI

    A dev-first open source autonomous AI agent framework

    ... Vector DBs to enhance your agent’s performance. Each agent is unique; use different models of your choice. Get insights into your agent’s performance and optimize accordingly. Control token usage to manage costs effectively. Enable your agents to learn and adapt by storing their memory. Get notified when agents get stuck in a loop, and provide proactive resolution. Read and store files generated by agents.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 9
    CTranslate2

    Fast inference engine for Transformer models

    CTranslate2 is a C++ and Python library for efficient inference with Transformer models. The project implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. The execution is significantly faster and requires less resources than general-purpose deep learning frameworks on supported models and tasks thanks to many advanced...
    Downloads: 2 This Week
    Last Update:
    See Project
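    A minimal sketch of translation with CTranslate2's Python API, assuming a model directory already produced by one of the project's converters (e.g. ct2-transformers-converter); the path and token pieces below are placeholders.

    ```python
    # Sketch only: batch translation with a converted CTranslate2 model.
    import ctranslate2

    translator = ctranslate2.Translator("ende_ctranslate2/", device="cpu")  # placeholder path

    # CTranslate2 operates on pre-tokenized input (e.g. SentencePiece pieces).
    tokens = ["▁Hello", "▁world", "!"]
    results = translator.translate_batch([tokens], beam_size=4)
    print(results[0].hypotheses[0])
    ```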
  • 10
    Deep Lake

    Data Lake for Deep Learning. Build, manage, and query datasets

    ... Cross, Omdena, Yale, & Oxford. Use one API to upload, download, and stream datasets to/from AWS S3/S3-compatible storage, GCP, Activeloop cloud, or local storage. Store images, audio, and video in their native compression. Deep Lake automatically decompresses them to raw data only when needed, e.g., when training a model. Treat your cloud datasets as if they were a collection of NumPy arrays in your system's memory. Slice them, index them, or iterate through them.
    Downloads: 2 This Week
    Last Update:
    See Project
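    A minimal sketch of the "NumPy-like" access the description mentions, assuming the deeplake v3-style Python API; the public dataset path is used here only as an illustration.

    ```python
    # Sketch only: stream a public Deep Lake dataset and read samples lazily.
    import deeplake

    ds = deeplake.load("hub://activeloop/mnist-train")   # streams from Activeloop cloud

    image = ds.images[0].numpy()    # decompressed to a NumPy array only when accessed
    label = ds.labels[0].numpy()
    print(image.shape, label)
    ```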
  • 11
    Datasets

    Hub of ready-to-use datasets for ML models

    Datasets is a library for easily accessing and sharing datasets, and evaluation metrics for Natural Language Processing (NLP), computer vision, and audio tasks. Load a dataset in a single line of code, and use our powerful data processing methods to quickly get your dataset ready for training in a deep learning model. Backed by the Apache Arrow format, process large datasets with zero-copy reads without any memory constraints for optimal speed and efficiency. We also feature a deep integration...
    Downloads: 1 This Week
    Last Update:
    See Project
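    A minimal sketch of the one-line loading and map-style processing described above; the dataset id "imdb" is an illustrative choice, not taken from this listing.

    ```python
    # Sketch only: load an Arrow-backed dataset and apply a processing function.
    from datasets import load_dataset

    ds = load_dataset("imdb", split="train")                         # memory-mapped, zero-copy reads
    ds = ds.map(lambda ex: {"n_words": len(ex["text"].split())})     # add a derived column

    print(ds[0]["n_words"])
    print(ds.features)
    ```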
  • 12
    Pedalboard

    A Python library for audio

    pedalboard is a Python library for working with audio: reading, writing, rendering, adding effects, and more. It supports the most popular audio file formats and a number of common audio effects out of the box and also allows the use of VST3® and Audio Unit formats for loading third-party software instruments and effects. pedalboard was built by Spotify’s Audio Intelligence Lab to enable using studio-quality audio effects from within Python and TensorFlow. Internally at Spotify, pedalboard...
    Downloads: 1 This Week
    Last Update:
    See Project
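    A minimal sketch of reading, effecting, and writing audio with pedalboard; the file names and effect settings are placeholders.

    ```python
    # Sketch only: apply a compressor and reverb to a WAV file with pedalboard.
    from pedalboard import Pedalboard, Compressor, Reverb
    from pedalboard.io import AudioFile

    board = Pedalboard([Compressor(threshold_db=-20, ratio=4), Reverb(room_size=0.3)])

    with AudioFile("input.wav") as f:            # placeholder path
        audio = f.read(f.frames)
        samplerate = f.samplerate

    effected = board(audio, samplerate)

    with AudioFile("output.wav", "w", samplerate, effected.shape[0]) as f:
        f.write(effected)
    ```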
  • 13
    ChatLLM Web

    Chat with an LLM like Vicuna entirely in your browser with WebGPU

    Chat with an LLM like Vicuna entirely in your browser with WebGPU, safely, privately, and with no server. Powered by web-llm. To use this app, you need a browser that supports WebGPU, such as Chrome 113 or Chrome Canary. Chrome versions ≤ 112 are not supported. You will need a GPU with about 6.4 GB of memory. If your GPU has less memory, the app will still run, but response times will be slower. The first time you use the app, you will need to download the model. For the Vicuna-7b model that we...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 14
    Colossal-AI

    Making large AI models cheaper, faster and more accessible

    The Transformer architecture has improved the performance of deep learning models in domains such as Computer Vision and Natural Language Processing. Better performance, however, comes with larger model sizes, which run up against the memory wall of current accelerator hardware such as GPUs. It is never ideal to train large models such as Vision Transformer, BERT, and GPT on a single GPU or a single machine, so there is an urgent demand to train models in a distributed environment. However...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 15
    libvips

    A fast image processing library with low memory needs

    libvips is a demand-driven, horizontally threaded image processing library. Compared to similar libraries, libvips runs quickly and uses little memory. libvips is licensed under the LGPL 2.1+. It has around 300 operations covering arithmetic, histograms, convolution, morphological operations, frequency filtering, colour, resampling, statistics and others. It supports a large range of numeric types, from 8-bit int to 128-bit complex. Images can have any number of bands. It supports a good range...
    Downloads: 1 This Week
    Last Update:
    See Project
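    libvips itself is a C library; a minimal sketch using pyvips, its Python binding (a separate package), illustrates the demand-driven, low-memory processing described above. File names are placeholders.

    ```python
    # Sketch only: stream an image through libvips via the pyvips binding.
    import pyvips

    # Sequential access lets libvips process the file in strips instead of
    # loading the whole image into memory.
    image = pyvips.Image.new_from_file("input.jpg", access="sequential")
    thumb = image.resize(0.25)            # demand-driven: pixels computed as needed
    thumb.write_to_file("thumbnail.jpg")
    ```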
  • 16
    Voqal

    Natural speech programming assistant for software developers

    Voqal is a programming assistant built for software developers looking to enhance their productivity with natural speech programming. Using Voqal, you can navigate, write, run, and debug software in JetBrains IDEs using your voice. Write code faster, reduce repetitive strain injuries, and improve focus and productivity. Voqal is promptable and privacy-focused, allowing you to customize your experience and control your data.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 17
    LocalAI

    Self-hosted, community-driven, local OpenAI compatible API

    Self-hosted, community-driven, local OpenAI compatible API. Drop-in replacement for OpenAI running LLMs on consumer-grade hardware. Free Open Source OpenAI alternative. No GPU is required. Runs ggml, GPTQ, onnx, TF compatible models: llama, gpt4all, rwkv, whisper, vicuna, koala, gpt4all-j, cerebras, falcon, dolly, starcoder, and many others. LocalAI is a drop-in replacement REST API that’s compatible with OpenAI API specifications for local inferencing. It allows you to run LLMs (and not...
    Downloads: 1 This Week
    Last Update:
    See Project
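    A minimal sketch of calling a running LocalAI server through the standard OpenAI Python client, relying on the OpenAI-compatible REST API described above. The base URL, port, and model name are assumptions about a local setup, not taken from this listing.

    ```python
    # Sketch only: point the OpenAI client at a local LocalAI endpoint.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1",  # assumed local LocalAI address
                    api_key="not-needed")                  # LocalAI does not require a key

    resp = client.chat.completions.create(
        model="gpt-4",  # assumed alias mapped by LocalAI to a locally configured model
        messages=[{"role": "user", "content": "Say hello from a local model."}],
    )
    print(resp.choices[0].message.content)
    ```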
  • 18
    OneFlow

    OneFlow is a deep learning framework designed to be user-friendly

    ... distributed expansion. It adheres to the core concepts and architecture of static compilation and streaming parallelism, and solves the memory-wall challenge at the cluster level. It provides a variety of services, from basic AI talent training to enterprise-level machine learning lifecycle management (MLOps), including AI training and AI development, and supports three deployment modes: public cloud, private cloud, and hybrid cloud.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 19
    spaCy models

    Models for the spaCy Natural Language Processing (NLP) library

    spaCy is designed to help you do real work, to build real products, or gather real insights. The library respects your time, and tries to avoid wasting it. It's easy to install, and its API is simple and productive. spaCy excels at large-scale information extraction tasks. It's written from the ground up in carefully memory-managed Cython. If your application needs to process entire web dumps, spaCy is the library you want to be using. Since its release in 2015, spaCy has become an industry...
    Downloads: 1 This Week
    Last Update:
    See Project
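    A minimal sketch of loading one of the spaCy model packages and running information extraction; the model name en_core_web_sm (the standard small English pipeline) and the sample sentence are illustrative.

    ```python
    # Sketch only: named-entity extraction with a downloaded spaCy model.
    # Install the model first: python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

    print([(ent.text, ent.label_) for ent in doc.ents])
    print([(tok.text, tok.pos_) for tok in doc[:5]])
    ```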
  • 20
    Node ChatGPT API

    A client implementation for ChatGPT and Bing AI

    ... can still set userLabel, chatGptLabel and promptPrefix (system instructions) as usual. Support for the official ChatGPT underlying model, gpt-3.5-turbo, via OpenAI's API. Replicates chat threads from the official ChatGPT website (with conversation IDs and message IDs), with persistent conversations using Keyv. Conversations are stored in memory by default, but you can optionally install a storage adapter to persist conversations to a database.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 21
    Amiga Memories

    A walk along memory lane

    Amiga Memories is a project (started & released in 2013) that aims to make video programmes that can be published on the internet. The images and sound produced by Amiga Memories are 100% automatically generated. The generator itself is implemented in Squirrel, the 3D rendering is done on GameStart 3D. An Amiga Memories video is mostly based on a narrative. The purpose of the script is to define the spoken and written content. The spoken text will be read by a voice synthesizer (Text To...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 22
    AIMET

    AIMET is a library that provides advanced quantization and compression

    ... accelerators. Quantized inference is significantly faster than floating point inference. For example, models that we’ve run on the Qualcomm® Hexagon™ DSP rather than on the Qualcomm® Kryo™ CPU have resulted in a 5x to 15x speedup. Plus, an 8-bit model also has a 4x smaller memory footprint relative to a 32-bit model. However, often when quantizing a machine learning model (e.g., from 32-bit floating point to an 8-bit fixed point value), the model accuracy is sacrificed.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 23
    Pandas Profiling

    Create HTML profiling reports from pandas DataFrame objects

    ..., separator), scripts (Latin, Cyrillic) and blocks (ASCII, Cyrillic). File sizes, creation dates, dimensions, indication of truncated images, and existence of EXIF metadata. Mostly global details about the dataset (number of records, number of variables, overall missingness and duplicates, memory footprint). Comprehensive and automatic list of potential data quality issues (high correlation, skewness, uniformity, zeros, missing values, constant values, among others).
    Downloads: 1 This Week
    Last Update:
    See Project
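    A minimal sketch of generating an HTML report from a DataFrame, shown with the classic pandas_profiling import (the project has since also shipped as ydata-profiling); the sample data is illustrative.

    ```python
    # Sketch only: profile a small DataFrame and write an HTML report.
    import pandas as pd
    from pandas_profiling import ProfileReport

    df = pd.DataFrame({"age": [23, 45, 31, None],
                       "city": ["Oslo", "Lima", "Oslo", "Pune"]})

    profile = ProfileReport(df, title="Example report", minimal=True)
    profile.to_file("report.html")
    ```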
  • 24
    Smile

    Statistical machine intelligence and learning engine

    Smile is a fast and comprehensive machine learning engine. With advanced data structures and algorithms, Smile delivers state-of-the-art performance. Compared to this third-party benchmark, Smile significantly outperforms R, Python, Spark, H2O, and xgboost. Smile is a couple of times faster than the closest competitor. Memory usage is also very efficient. If we can train advanced machine learning models on a PC, why buy a cluster? Write applications quickly in Java, Scala, or any JVM languages...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 25
    x-transformers

    A simple but complete full-attention transformer

    A simple but complete full-attention transformer with a set of promising experimental features from various papers. Proposes adding learned memory key/values prior to attending. They were able to remove feedforwards altogether and attain a similar performance to the original transformers. I have found that keeping the feedforwards and adding the memory key/values leads to even better performance. Proposes adding learned tokens, akin to CLS tokens, named memory tokens, that is passed through...
    Downloads: 0 This Week
    Last Update:
    See Project
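    A minimal sketch of enabling the memory key/values and memory tokens mentioned above via x-transformers' configuration keywords; the model sizes and counts are illustrative assumptions.

    ```python
    # Sketch only: a decoder with learned memory key/values and memory tokens.
    import torch
    from x_transformers import TransformerWrapper, Decoder

    model = TransformerWrapper(
        num_tokens=20000,
        max_seq_len=1024,
        num_memory_tokens=20,            # learned CLS-like memory tokens
        attn_layers=Decoder(
            dim=512,
            depth=6,
            heads=8,
            attn_num_mem_kv=16,          # learned memory key/values prepended to attention
        ),
    )

    x = torch.randint(0, 20000, (1, 256))
    logits = model(x)                    # shape: (1, 256, 20000)
    ```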