Showing 11 open source projects for "parallel"

  • 1
    ncnn

    High-performance neural network inference framework for mobile

    ncnn is a high-performance neural network inference computing framework designed specifically for mobile platforms. It puts artificial intelligence right at your fingertips with no third-party dependencies, and it runs faster than all other known open source frameworks on mobile phone CPUs. ncnn lets developers easily deploy deep learning models to mobile platforms and build intelligent apps. It is cross-platform and supports most commonly used CNN networks, including...
    Downloads: 23 This Week
    See Project
  • 2
    TensorRT

    C++ library for high performance inference on NVIDIA GPUs

    ...With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers, embedded, or automotive product platforms. TensorRT is built on CUDA®, NVIDIA’s parallel programming model, and enables you to optimize inference leveraging libraries, development tools, and technologies in CUDA-X™ for artificial intelligence, autonomous machines, high-performance computing, and graphics. With new NVIDIA Ampere Architecture GPUs, TensorRT also leverages sparse Tensor Cores, providing an additional performance boost (a minimal Python build sketch follows this entry).
    Downloads: 16 This Week
    See Project
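    A minimal sketch of the optimize-and-deploy flow described above, using the TensorRT Python API. This assumes a TensorRT 8.x install and an existing ONNX model; the file names and the FP16 flag are placeholders, and exact API details vary between TensorRT versions.

```python
import tensorrt as trt

# Build a TensorRT engine from an ONNX model and enable FP16 precision.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

if not parser.parse_from_file("model.onnx"):   # placeholder path
    raise RuntimeError("failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)          # run at lower precision

engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:          # placeholder output path
    f.write(engine_bytes)
```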
  • 3
    Stanza

    Stanford NLP Python library for many human languages

    ...It contains tools, which can be used in a pipeline, to convert a string containing human language text into lists of sentences and words, to generate the base forms of those words along with their parts of speech and morphological features, to produce a syntactic dependency parse, and to recognize named entities. The toolkit is designed to be parallel among more than 70 languages, using the Universal Dependencies formalism. Stanza is built with highly accurate neural network components that also enable efficient training and evaluation with your own annotated data (a minimal pipeline sketch follows this entry).
    Downloads: 5 This Week
    See Project
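    A minimal sketch of the Stanza pipeline described above; the language code, processor list, and example sentence are placeholders, and the English models are downloaded on first use.

```python
import stanza

stanza.download("en")  # fetch the English models once
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse,ner")

doc = nlp("Stanza was built by the Stanford NLP Group.")
for sentence in doc.sentences:
    for word in sentence.words:
        # surface form, lemma, universal POS tag, and dependency relation
        print(word.text, word.lemma, word.upos, word.deprel)
    for ent in sentence.ents:
        print(ent.text, ent.type)  # entities found by the NER processor
```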
  • 4
    Fairseq

    Facebook AI Research Sequence-to-Sequence Toolkit written in Python

    Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. We provide reference implementations of various sequence modeling papers. Recent work by Microsoft and Google has shown that data parallel training can be made significantly more efficient by sharding the model parameters and optimizer state across data parallel workers. These ideas are encapsulated in the new FullyShardedDataParallel (FSDP) wrapper provided by fairscale (a minimal sketch of the wrapper follows this entry). Fairseq can be extended through user-supplied plug-ins. Models define the neural network architecture and encapsulate all of the learnable parameters. ...
    Downloads: 0 This Week
    See Project
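    A minimal sketch of the FSDP idea mentioned above, using fairscale's FullyShardedDataParallel wrapper directly rather than fairseq's own training loop; the toy model, process-group setup, and hyperparameters are placeholders.

```python
import torch
import torch.distributed as dist
from fairscale.nn import FullyShardedDataParallel as FSDP

# Assumes launch via torchrun so rank/world-size env vars are set; each
# worker then holds only a shard of the parameters and optimizer state
# instead of a full replica.
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = torch.nn.Sequential(        # placeholder model
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
).cuda()
model = FSDP(model)                 # shard parameters across data-parallel workers

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(8, 1024, device="cuda")
loss = model(x).pow(2).mean()       # dummy objective
loss.backward()
optimizer.step()
```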
  • 5
    YOLO ROS

    YOLO ROS: Real-Time Object Detection for ROS

    ...Darknet on the CPU is fast (approximately 1.5 seconds on an Intel Core i7-6700HQ CPU @ 2.60GHz × 8), but it is roughly 500 times faster on a GPU. You will need an Nvidia GPU and a CUDA installation; the CMakeLists.txt file automatically detects whether CUDA is installed. CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia.
    Downloads: 0 This Week
    See Project
  • 6
    ResNeXt

    Implementation of a classification framework

    ResNeXt is a deep neural network architecture for image classification built on the idea of aggregated residual transformations. Instead of simply increasing depth or width, ResNeXt introduces a new dimension called cardinality, which refers to the number of parallel transformation paths (i.e. the number of “branches”) that are aggregated together. Each branch is a small transformation (e.g. a bottleneck block), and their outputs are summed; this enables richer representations without excessive parameter blowup. The design is modular and homogeneous, making it relatively easy to scale (by tuning cardinality, width, or depth) and to adopt in existing residual frameworks (a PyTorch sketch of such a block follows this entry). ...
    Downloads: 0 This Week
    See Project
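    A sketch of the aggregated-transformation idea described above, written as a PyTorch block in which cardinality is expressed as a grouped 3x3 convolution. This is an illustrative re-implementation, not the project's own code; the channel counts and widths are placeholders.

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """Bottleneck block with aggregated transformations: `cardinality`
    parallel branches realized as one grouped 3x3 convolution."""

    def __init__(self, channels: int, cardinality: int = 32, bottleneck_width: int = 4):
        super().__init__()
        inner = cardinality * bottleneck_width
        self.transform = nn.Sequential(
            nn.Conv2d(channels, inner, kernel_size=1, bias=False),
            nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
            nn.Conv2d(inner, inner, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),  # the parallel paths
            nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
            nn.Conv2d(inner, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # residual connection: add the aggregated branches to the input
        return self.relu(x + self.transform(x))

x = torch.randn(1, 256, 56, 56)
print(ResNeXtBlock(channels=256)(x).shape)  # torch.Size([1, 256, 56, 56])
```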
  • 7
    Grenade

    Deep Learning in Haskell

    Grenade is a composable, dependently typed, practical, and fast recurrent neural network library for concise and precise specifications of complex networks in Haskell. Because the types are so rich, there is no specific term-level code required to construct a network, although it is of course possible and easy to construct and deconstruct networks and layers explicitly oneself. Networks in Grenade can be thought of as a heterogeneous list of layers, where their type includes not only...
    Downloads: 0 This Week
    See Project
  • 8
    Swift AI

    The Swift machine learning library

    ...Swift AI includes a collection of common tools used for artificial intelligence and scientific applications, including a flexible, fully connected neural network with support for deep learning, optimized specifically for Apple hardware using advanced parallel processing techniques. We've created some example projects to demonstrate the usage of Swift AI; each resides in its own repository and can be built with little or no configuration. Each module now contains its own documentation, and we recommend that you read the docs carefully for detailed instructions on using the various components of Swift AI. ...
    Downloads: 0 This Week
    See Project
  • 9

    Fast Matrix for Java

    General-purpose matrix utilities for parallel computing in Java

    Fast Matrix for Java (fm4j) is a general-purpose matrix utility library for computing with dense matrices. fm4j encapsulates different underlying implementations and selects the optimal one at run time depending on the size of the input matrix. Moreover, fm4j employs Java™ concurrency to take advantage of the computational power of multi-core processors.
    Downloads: 0 This Week
    See Project
  • 10

    FeedForwardNeuralNetworkC++

    Feedforward neural network written in C++

    A feedforward neural network written in C++, with a serial implementation and a version parallelized with the TBB library. The Autotune library is also used to obtain the best parallel performance.
    Downloads: 0 This Week
    See Project
  • 11
    This project uses massively parallel graphics processing units (GPUs) for neural network (backpropagation) computations.
    Downloads: 0 This Week
    See Project