Showing 55 open source projects for "cpu"

  • 1
    DGL

    Python package built to ease deep learning on graph

    ...We also want to make the combination of graph-based modules and tensor-based modules (PyTorch or MXNet) as smooth as possible. DGL provides a powerful graph object that can reside on either the CPU or the GPU. It bundles structural data as well as features for better control. We provide a variety of functions for computing with graph objects, including efficient and customizable message passing primitives for Graph Neural Networks.
    Downloads: 0 This Week
    See Project
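A minimal sketch of the pattern described above, assuming a recent DGL release with a PyTorch backend: build a graph, attach node features, run one message-passing step, and optionally move the whole object to the GPU.

```python
import torch
import dgl
import dgl.function as fn

# a tiny directed graph with 3 nodes and 3 edges
src, dst = torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])
g = dgl.graph((src, dst))          # graph object lives on the CPU by default
g.ndata['h'] = torch.randn(3, 4)   # structural data bundled with node features

# message passing: copy each source node's features, sum them at the destination
g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_sum'))
print(g.ndata['h_sum'].shape)      # -> torch.Size([3, 4])

# the same graph (structure + features) can reside on the GPU instead
if torch.cuda.is_available():
    g = g.to('cuda')
```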
  • 2
    Jittor

    Jittor is a high-performance deep learning framework

    Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators. The whole framework and the meta-operators are compiled just in time. A powerful op compiler and tuner are integrated into Jittor, allowing it to generate high-performance code specialized for your model. Jittor also contains a wealth of high-performance model libraries, including image recognition, detection, segmentation, generation, differentiable rendering, geometric learning, reinforcement...
    Downloads: 0 This Week
    See Project
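A small sketch of the workflow, assuming the jittor package is installed; the layer sizes here are illustrative. Kernels are JIT-compiled for the CPU by default, and setting jt.flags.use_cuda switches compilation to the GPU.

```python
import jittor as jt
from jittor import Module, nn

# jt.flags.use_cuda = 1  # uncomment to JIT-compile operators for the GPU

class Net(Module):
    def __init__(self):
        self.fc1 = nn.Linear(32, 16)
        self.fc2 = nn.Linear(16, 10)

    def execute(self, x):           # Jittor modules define execute() instead of forward()
        return self.fc2(nn.relu(self.fc1(x)))

net = Net()
x = jt.random((4, 32))              # operators are compiled just in time on first use
print(net(x).shape)
```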
  • 3
    AIMET

    AIMET is a library that provides advanced quantization and compression

    ...Quantized inference is significantly faster than floating-point inference. For example, models that we’ve run on the Qualcomm® Hexagon™ DSP rather than on the Qualcomm® Kryo™ CPU have resulted in a 5x to 15x speedup. Plus, an 8-bit model also has a 4x smaller memory footprint relative to a 32-bit model. However, when quantizing a machine learning model (e.g., from 32-bit floating point to an 8-bit fixed-point value), model accuracy is often sacrificed.
    Downloads: 0 This Week
    See Project
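The accuracy/footprint trade-off described above is easiest to see with a toy example. The sketch below uses PyTorch's built-in post-training dynamic quantization rather than AIMET itself, purely to illustrate the 32-bit float to 8-bit integer conversion; AIMET layers more advanced quantization-simulation and compression techniques on top of this basic idea.

```python
import torch
import torch.nn as nn

# a small float32 model
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# convert the Linear layers' weights from 32-bit floats to 8-bit integers
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
print(model(x).shape, quantized(x).shape)   # same interface, ~4x smaller weights
```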
  • 4
    Deep Java Library (DJL)

    An engine-agnostic deep learning framework in Java

    ...Because DJL is deep learning engine agnostic, you don't have to make a choice between engines when creating your projects. You can switch engines at any point. To ensure the best performance, DJL also provides automatic CPU/GPU choice based on hardware configuration.
    Downloads: 0 This Week
    See Project
  • 5
    TensorFlow Addons

    Useful extra functionality for TensorFlow 2.x maintained by SIG-addons

    TensorFlow Addons is a repository of contributions that conform to well-established API patterns but implement new functionality not available in core TensorFlow. TensorFlow natively supports a large number of operators, layers, metrics, losses, and optimizers. However, in a fast-moving field like ML, there are many interesting new developments that cannot be integrated into core TensorFlow (because their broad applicability is not yet clear, or they are mostly used by a smaller subset of the...
    Downloads: 2 This Week
    See Project
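A short sketch of how Addons components plug into a standard Keras model; it assumes TensorFlow 2.x with the tensorflow-addons package installed, and the particular layer/optimizer choices are only examples.

```python
import tensorflow as tf
import tensorflow_addons as tfa

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tfa.layers.GroupNormalization(groups=8),        # layer from Addons, not core TF
    tf.keras.layers.Dense(10),
])

model.compile(
    optimizer=tfa.optimizers.LAMB(learning_rate=1e-3),  # optimizer from Addons
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.summary()
```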
  • 6
    Bandicoot

    fast C++ library for GPU linear algebra & scientific computing

    * Fast GPU linear algebra library (matrix maths) for the C++ language, aiming towards a good balance between speed and ease of use
    * Provides high-level syntax and functionality deliberately similar to Matlab
    * Provides an API that aims to be compatible with Armadillo for easy transition between CPU and GPU linear algebra code
    * Useful for algorithm development directly in C++, or quick conversion of research code into production environments
    * Distributed under the permissive Apache 2.0 license, useful for both open-source and proprietary (closed-source) software
    * Can be used for machine learning, pattern recognition, computer vision, signal processing, bioinformatics, statistics, finance, etc.
    * Downloads: http://coot.sourceforge.io/download.html
    * Documentation: http://coot.sourceforge.io/docs.html
    * Bug reports: http://coot.sourceforge.io/faq.html
    * Git repo: https://gitlab.com/conradsnicta/bandicoot-code
    Downloads: 6 This Week
    See Project
  • 7
    Implicit

    Fast Python collaborative filtering for implicit feedback datasets

    This project provides fast Python implementations of several popular recommendation algorithms for implicit feedback datasets. All models have multi-threaded training routines, using Cython and OpenMP to fit the models in parallel across all available CPU cores. In addition, the ALS and BPR models both have custom CUDA kernels, enabling fitting on compatible GPUs. This library also supports approximate nearest neighbour libraries such as Annoy, NMSLIB and Faiss to speed up making recommendations.
    Downloads: 0 This Week
    See Project
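A minimal sketch of fitting and querying an ALS model, assuming implicit >= 0.5, where fit() and recommend() take the matrix in user-item orientation; setting use_gpu=True switches to the CUDA kernels instead of the multi-threaded CPU path.

```python
import numpy as np
import scipy.sparse as sparse
from implicit.als import AlternatingLeastSquares

# toy implicit-feedback matrix: rows are users, columns are items
user_items = sparse.csr_matrix(
    np.random.randint(0, 2, size=(100, 50)).astype(np.float32))

model = AlternatingLeastSquares(factors=32, iterations=10, use_gpu=False)
model.fit(user_items)                                  # trained in parallel on CPU cores

item_ids, scores = model.recommend(0, user_items[0], N=5)
print(item_ids, scores)
```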
  • 8
    Exadel CompreFace

    Leading free and open-source face recognition system

    ...The solution also features a role management system that allows you to easily control who has access to your Face Recognition Services. CompreFace is delivered as a docker-compose config and supports different models that work on CPU and GPU. Our solution is based on state-of-the-art methods and libraries like FaceNet and InsightFace. Official website: https://exadel.com/solutions/compreface/ Github link: https://github.com/exadel-inc/CompreFace
    Downloads: 18 This Week
    See Project
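Once the docker-compose stack is up, recognition is exposed over a REST API. The sketch below is assumption-heavy: the host/port, endpoint path and API key are placeholders that depend on your deployment and on the key generated for your Recognition service.

```python
import requests

API_KEY = "your-recognition-service-api-key"                 # placeholder key
URL = "http://localhost:8000/api/v1/recognition/recognize"   # assumed default host/port

with open("face.jpg", "rb") as f:
    resp = requests.post(URL, headers={"x-api-key": API_KEY}, files={"file": f})

print(resp.json())   # detected faces with matched subjects and similarity scores
```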
  • 9
    GFPGAN

    GFPGAN aims at developing Practical Algorithms

    ...Online demo: Baseten.co (backed by GPU, returns the whole image). We provide a clean version of GFPGAN, which can run without CUDA extensions, so it can run on Windows or in CPU mode. GFPGAN aims at developing a Practical Algorithm for Real-world Face Restoration. It leverages rich and diverse priors encapsulated in a pretrained face GAN (e.g., StyleGAN2) for blind face restoration. The V1.3 model produces more natural restoration results and better results on very low-quality / high-quality inputs.
    Downloads: 110 This Week
    See Project
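A rough sketch of running restoration from Python, assuming the gfpgan package is installed and a V1.3 checkpoint has been downloaded; on a machine without CUDA the clean model variant runs in CPU mode, just more slowly.

```python
import cv2
from gfpgan import GFPGANer

# path to a downloaded checkpoint, e.g. GFPGANv1.3.pth
restorer = GFPGANer(model_path="GFPGANv1.3.pth", upscale=2,
                    arch="clean", channel_multiplier=2)

img = cv2.imread("low_quality_face.jpg", cv2.IMREAD_COLOR)
cropped_faces, restored_faces, restored_img = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True)

cv2.imwrite("restored.jpg", restored_img)
```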
  • 10
    KoboldAI

    Your gateway to GPT writing

    ...Whether you want to use the free, fast power of Google Colab, your own high-end graphics card, an online service you have an API key for (like OpenAI or InferKit), or would rather just run it more slowly on your CPU, you will be able to find a way to use KoboldAI that works for you.
    Downloads: 183 This Week
    See Project
  • 11
    Flashlight library

    A C++ standalone library for machine learning

    ...Flashlight can be broken down into several components as described above. Each component can be built incrementally by specifying the correct build options. Flashlight is most easily built and installed with vcpkg. Both the CUDA and CPU backends are supported with vcpkg. For either backend, first install Intel MKL. Flashlight app binaries are also built for the selected features and are installed into the vcpkg install tree's tools directory.
    Downloads: 0 This Week
    See Project
  • 12
    TensorFlowOnSpark

    TensorFlowOnSpark brings TensorFlow programs to Apache Spark clusters

    By combining salient features from the TensorFlow deep learning framework with Apache Spark and Apache Hadoop, TensorFlowOnSpark enables distributed deep learning on a cluster of GPU and CPU servers. It enables both distributed TensorFlow training and inferencing on Spark clusters, with a goal to minimize the amount of code changes required to run existing TensorFlow programs on a shared grid.
    Downloads: 0 This Week
    See Project
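A skeletal sketch of the TFCluster API described above; the map-function body is left as a placeholder, and the executor counts and input mode are assumptions you would match to your Spark cluster.

```python
from pyspark import SparkConf, SparkContext
from tensorflowonspark import TFCluster

def main_fun(args, ctx):
    # runs on every Spark executor; ctx describes this node's role (chief/worker/ps)
    import tensorflow as tf
    # ... build and train your TensorFlow model here ...

sc = SparkContext(conf=SparkConf().setAppName("tfos_example"))
cluster = TFCluster.run(sc, main_fun, None, num_executors=4, num_ps=0,
                        tensorboard=False, input_mode=TFCluster.InputMode.TENSORFLOW)
cluster.shutdown()
```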
  • 13
    Hugging Face Transformer

    CPU/GPU inference server for Hugging Face transformer models

    Optimize and deploy Hugging Face Transformer models in production with a single command line. At Lefebvre Dalloz we run in-production semantic search engines in the legal domain; in non-marketing language, it's a re-ranker, and we based ours on Transformers. In that setup, latency is key to providing a good user experience, and relevancy inference is done online for hundreds of snippets per user query. Most tutorials on Transformer deployment in production are built over PyTorch and FastAPI....
    Downloads: 0 This Week
    See Project
  • 14
    MACE

    Deep learning inference framework optimized for mobile platforms

    Mobile AI Compute Engine (or MACE for short) is a deep learning inference framework optimized for mobile heterogeneous computing on Android, iOS, Linux and Windows devices. The runtime is optimized with NEON, OpenCL and Hexagon, and the Winograd algorithm is used to speed up convolution operations. Initialization is also optimized to be faster. Chip-dependent power options like big.LITTLE scheduling and Adreno GPU hints are included as advanced APIs. UI responsiveness guarantee is sometimes...
    Downloads: 0 This Week
    See Project
  • 15
    Tez

    Tez is a super-simple and lightweight Trainer for PyTorch

    ...This is a simple, to-the-point library to make your PyTorch training easy. This library is currently in an early stage, so there might be breaking changes. Currently, tez supports CPU, single-GPU, multi-GPU and TPU training. More coming soon! Using tez is super easy. We don't want you to be far away from PyTorch, so you do everything on your own and just use tez to make a few things simpler.
    Downloads: 0 This Week
    See Project
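The exact trainer interface varies between tez releases, so treat the following as a loose sketch under assumptions rather than the documented API: a model subclass whose forward() returns an (output, loss, metrics) tuple, and a fit() call that selects the training device.

```python
import tez
import torch
import torch.nn as nn

class BinaryClassifier(tez.Model):            # assumed base class name
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1))

    def forward(self, features, targets=None):
        out = self.net(features)
        loss = None
        if targets is not None:
            loss = nn.BCEWithLogitsLoss()(out, targets.float().view(-1, 1))
        return out, loss, {}                   # assumed (output, loss, metrics) convention

model = BinaryClassifier()
# model.fit(train_dataset, valid_dataset=valid_dataset, device="cuda", epochs=5)
```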
  • 16
    FARM

    Fast & easy transfer learning for NLP

    ...With FARM you can build fast proofs-of-concept for tasks like text classification, NER or question answering and transfer them easily into production. Easy fine-tuning of language models to your task and domain language. AMP optimizers (~35% faster) and parallel preprocessing (16 CPU cores => ~16x faster). Modular design of language models and prediction heads. Switch between heads or combine them for multitask learning. Full Compatibility with HuggingFace Transformers' models and model hub. Smooth upgrading to newer language models. Integration of custom datasets via Processor class. Powerful experiment tracking & execution.
    Downloads: 0 This Week
    See Project
  • 17
    YOLO ROS

    YOLO ROS: Real-Time Object Detection for ROS

    ...Darknet on the CPU is fast (approximately 1.5 seconds on an Intel Core i7-6700HQ CPU @ 2.60GHz × 8) but it's like 500 times faster on GPU! You'll have to have an Nvidia GPU and you'll have to install CUDA. The CMakeLists.txt file automatically detects if you have CUDA installed or not. CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia.
    Downloads: 0 This Week
    See Project
  • 18
    TFLearn

    Deep learning library featuring a higher-level API for TensorFlow

    ...Powerful helper functions to train any TensorFlow graph, with support for multiple inputs, outputs, and optimizers. Easy and beautiful graph visualization, with details about weights, gradients, activations, and more. Effortless device placement for using multiple CPUs/GPUs. The high-level API currently supports most recent deep learning models, such as Convolutions, LSTM, BiRNN, BatchNorm, etc.
    Downloads: 0 This Week
    See Project
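A compact sketch of the higher-level API on a toy fully-connected network (assuming TFLearn on a compatible TensorFlow 1.x install); the fit call is commented out because it needs real data.

```python
import tflearn

# minimal fully-connected classifier over 784-dim inputs (e.g. flattened MNIST)
net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 128, activation="relu")
net = tflearn.fully_connected(net, 10, activation="softmax")
net = tflearn.regression(net, optimizer="adam", loss="categorical_crossentropy")

model = tflearn.DNN(net, tensorboard_verbose=0)
# model.fit(X, Y, n_epoch=10, validation_set=0.1, show_metric=True)
```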
  • 19
    imgaug

    Image augmentation for machine learning experiments

    imgaug is a library for image augmentation in machine learning experiments. It supports a wide range of augmentation techniques, makes it easy to combine these and execute them in random order or on multiple CPU cores, has a simple yet powerful stochastic interface, and can augment not only images but also keypoints/landmarks, bounding boxes, heatmaps and segmentation maps. Affine transformations, perspective transformations, contrast changes, gaussian noise, dropout of regions, hue/saturation changes, cropping/padding, blurring, etc. Rotate an image and the segmentation map on it by the same sampled value. ...
    Downloads: 0 This Week
    See Project
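A short sketch of combining augmenters and applying them to a batch; the specific augmenters chosen here are only examples.

```python
import numpy as np
import imgaug.augmenters as iaa

# a fake batch of 16 RGB images
images = np.random.randint(0, 255, size=(16, 128, 128, 3), dtype=np.uint8)

seq = iaa.Sequential([
    iaa.Fliplr(0.5),                                   # horizontally flip 50% of images
    iaa.Affine(rotate=(-20, 20)),                      # random rotation
    iaa.AdditiveGaussianNoise(scale=(0, 0.05 * 255)),  # gaussian noise
], random_order=True)                                  # execute in random order

augmented = seq(images=images)
print(augmented.shape)
```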
  • 20
    textgenrnn

    Easily train your own text-generating neural network

    ...Configure RNN size, the number of RNN layers, and whether to use bidirectional RNNs. Train on any generic input text file, including large files. Train models on a GPU and then use them to generate text with a CPU. Utilize a powerful CuDNN implementation of RNNs when trained on the GPU, which massively speeds up training time as opposed to typical LSTM implementations. Train the model using contextual labels, allowing it to learn faster and produce better results in some cases.
    Downloads: 0 This Week
    See Project
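A minimal sketch of the train-then-generate workflow, assuming the textgenrnn package with its bundled pretrained weights; the corpus filename is a placeholder.

```python
from textgenrnn import textgenrnn

textgen = textgenrnn()                                    # loads the default pretrained weights
textgen.train_from_file("my_corpus.txt", num_epochs=1)    # placeholder text file
textgen.generate(3, temperature=0.5)                      # generation also works on a CPU
```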
  • 21
    PyTorch Book

    PyTorch tutorials and fun projects including neural talk

    ...The current version of the code is based on PyTorch 1.0.1; if you want to use an older version, please git checkout v0.4 or git checkout v0.3. The legacy code has better Python 2/Python 3 compatibility and CPU/GPU compatibility tests. The new version of the code has not been fully tested; it has been tested under GPU and Python 3, but in theory there shouldn't be too many problems on Python 2 and CPU. The basic part (the first five chapters) explains the content of PyTorch. This part introduces the main modules in PyTorch and some tools commonly used in deep learning. ...
    Downloads: 0 This Week
    See Project
  • 22
    Edward

    A probabilistic programming language in TensorFlow

    ...Edward fuses three fields: Bayesian statistics and machine learning, deep learning, and probabilistic programming. Edward is built on TensorFlow. It enables features such as computational graphs, distributed training, CPU/GPU integration, automatic differentiation, and visualization with TensorBoard. Supported inference methods include Expectation-Maximization, pseudo-marginal and ABC methods, and message passing algorithms.
    Downloads: 1 This Week
    See Project
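A tiny sketch of Edward's probabilistic-programming style on a Beta-Bernoulli coin-flip model; it assumes Edward 1.x, which requires TensorFlow 1.x.

```python
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Bernoulli, Beta

x_data = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 1], dtype=np.int32)

# model: coin bias p ~ Beta(1, 1); observations x ~ Bernoulli(p)
p = Beta(1.0, 1.0)
x = Bernoulli(probs=p, sample_shape=10)

# variational approximation and KL(q || p) minimization
qp = Beta(tf.nn.softplus(tf.Variable(1.0)), tf.nn.softplus(tf.Variable(1.0)))
inference = ed.KLqp({p: qp}, data={x: x_data})
inference.run(n_iter=500)
```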
  • 23
    Deepo

    Set up deep learning environment in a single command line

    Deepo is a series of Docker images that allows you to quickly set up your deep learning research environment. It supports almost all commonly used deep learning frameworks, supports GPU acceleration (CUDA and cuDNN included), also works in CPU-only mode, and runs on Linux (CPU version/GPU version), Windows (CPU version) and OS X (CPU version). Its Dockerfile generator allows you to customize your own environment with Lego-like modules and automatically resolves the dependencies for you. For users in China who may suffer from slow speeds when pulling the image from the public Docker registry, you can pull deepo images from the China registry mirror by specifying the full path, including the registry, in your docker pull command. ...
    Downloads: 0 This Week
    See Project
  • 24
    NNVM

    Open deep learning compiler stack for cpu, gpu

    The vision of the Apache NNVM Project is to host a diverse community of experts and practitioners in machine learning, compilers, and systems architecture to build an accessible, extensible, and automated open-source framework that optimizes current and emerging machine learning models for any hardware platform. Compilation of deep learning models into minimum deployable modules. Infrastructure to automatically generate and optimize models on more backends with better performance....
    Downloads: 0 This Week
    See Project
  • 25
    Deep Learning with Keras and Tensorflow

    Introduction to Deep Neural Networks with Keras and Tensorflow

    Introduction to Deep Neural Networks with Keras and Tensorflow. To date, tensorflow comes in two different packages, namely tensorflow and tensorflow-gpu, depending on whether you want to install the framework with CPU-only or GPU support, respectively. NVIDIA drivers and cuDNN must be installed and configured beforehand. Please refer to the official Tensorflow documentation for further details. Since version 0.9, Theano has included libgpuarray in the stable release (it was previously only available in the development version). The goal of libgpuarray is (from the documentation) to make a common GPU ndarray (n-dimensional array) that can be reused by all projects and that is as future proof as possible, while keeping it easy to use for simple needs/quick tests. ...
    Downloads: 0 This Week
    See Project
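As a quick check of which build you ended up with, the sketch below lists visible devices and forces CPU placement; it assumes a TensorFlow 2.x installation (in current releases the CPU and GPU builds ship in the single tensorflow package).

```python
import tensorflow as tf

# which devices does this TensorFlow build actually see?
print(tf.config.list_physical_devices("GPU"))
print(tf.config.list_physical_devices("CPU"))

# explicit device placement, e.g. to stay on the CPU even when a GPU is present
with tf.device("/CPU:0"):
    x = tf.random.normal((2, 3))
    print(tf.reduce_sum(x))
```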