Unix Shell Neural Network Libraries

Browse free open source Unix Shell Neural Network Libraries and projects below. Use the filters to narrow the results by OS, license, programming language, and project status.

  • 1
    PrettyTensor

    Pretty Tensor: Fluent Networks in TensorFlow

    Pretty Tensor is a high-level API built on top of TensorFlow that simplifies the process of creating and managing deep learning models. It wraps TensorFlow tensors in a chainable object syntax, allowing developers to build multi-layer neural networks with concise and readable code. Pretty Tensor preserves full compatibility with TensorFlow’s core functionality while providing syntactic sugar for defining complex architectures such as convolutional and recurrent networks. The library’s design emphasizes flexibility and modularity, supporting advanced features like default scopes, parameter templates, and variable reuse. It also allows easy integration with custom operations and third-party libraries, making it ideal for both research experimentation and production-grade modeling. By combining TensorFlow’s power with an intuitive builder-style API, Pretty Tensor accelerates model development without sacrificing transparency or control.
    Downloads: 4 This Week
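    For illustration, here is a minimal sketch of the chainable builder style described above. It assumes TensorFlow 1.x graph mode and the prettytensor package; the layer sizes and optimizer are illustrative, and method names follow the Pretty Tensor tutorial but may vary between versions.

```python
import tensorflow as tf
import prettytensor as pt

# Illustrative MNIST-style placeholders (TensorFlow 1.x graph mode)
images = tf.placeholder(tf.float32, [None, 784])
labels = tf.placeholder(tf.float32, [None, 10])

# Each layer call returns another wrapped tensor, so the whole network reads as one chain
result = (pt.wrap(images)
          .fully_connected(200, activation_fn=tf.nn.relu)
          .fully_connected(100, activation_fn=tf.nn.relu)
          .softmax_classifier(10, labels))

# Pretty Tensor wires the losses it registered into a training op
train_op = pt.apply_optimizer(tf.train.GradientDescentOptimizer(0.01),
                              losses=[result.loss])
```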
  • 2
    I3D models trained on Kinetics

    Convolutional neural network model for video classification

    Kinetics-I3D, developed by Google DeepMind, provides trained models and implementation code for the Inflated 3D ConvNet (I3D) architecture introduced in the paper “Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset” (CVPR 2017). The I3D model extends the 2D convolutional structure of Inception-v1 into 3D, allowing it to capture spatial and temporal information from videos for action recognition. This repository includes pretrained I3D models on the Kinetics dataset, with both RGB and optical flow input streams. The models have achieved state-of-the-art results on benchmark datasets such as UCF101 and HMDB51, and also won first place in the CVPR 2017 Charades Challenge. The project provides TensorFlow and Sonnet-based implementations, pretrained checkpoints, and example scripts for evaluating or fine-tuning models. It also offers sample data, including preprocessed video frames and optical flow arrays, to demonstrate how to run inference and visualize outputs.
    Downloads: 3 This Week
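    As a rough illustration of running inference with a pretrained RGB model, the sketch below assumes the repository's i3d.py module and a bundled checkpoint and sample clip; the paths, clip length, and variable scoping only approximate the project's own evaluation script, so treat the names as placeholders.

```python
import numpy as np
import tensorflow as tf
import i3d  # i3d.py from the kinetics-i3d repository

NUM_CLASSES = 400           # Kinetics-400 label set
FRAMES, SIZE = 79, 224      # clip length and spatial resolution (illustrative)

# RGB stream input: [batch, frames, height, width, channels]
rgb_input = tf.placeholder(tf.float32, [1, FRAMES, SIZE, SIZE, 3])

with tf.variable_scope('RGB'):
    rgb_model = i3d.InceptionI3d(NUM_CLASSES, spatial_squeeze=True,
                                 final_endpoint='Logits')
    logits, _ = rgb_model(rgb_input, is_training=False, dropout_keep_prob=1.0)
probs = tf.nn.softmax(logits)

with tf.Session() as sess:
    # Checkpoint and sample-clip paths are placeholders for files shipped with the repo
    tf.train.Saver().restore(sess, 'data/checkpoints/rgb_imagenet/model.ckpt')
    clip = np.load('data/sample_rgb.npy')   # preprocessed RGB frames
    top = sess.run(probs, feed_dict={rgb_input: clip}).argmax(axis=-1)
    print('Predicted Kinetics class index:', top)
```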
  • 3
    CNN for Image Retrieval

    cnn-for-image-retrieval is a research-oriented project that demonstrates the use of convolutional neural networks (CNNs) for image retrieval tasks. The repository provides implementations of CNN-based methods to extract feature representations from images and use them for similarity-based retrieval. It focuses on applying deep learning techniques to improve upon traditional handcrafted descriptors by learning features directly from data. The code includes training and evaluation scripts that can be adapted for custom datasets, making it useful for experimenting with retrieval systems in computer vision. By leveraging CNN architectures, the project showcases how learned embeddings can capture semantic similarity across varied images. This resource serves as both an educational reference and a foundation for further exploration in image retrieval research.
    Downloads: 1 This Week
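    The retrieval pattern the description refers to — embed images with a CNN, then rank by similarity in feature space — can be sketched independently of this repository's own code (which may use a different toolkit). The example below is a hypothetical PyTorch/torchvision illustration of that pattern, not code from the project.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms as T

# Pretrained backbone used as a fixed feature extractor (hypothetical choice)
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classifier head, keep 2048-d embeddings
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path):
    x = preprocess(Image.open(path).convert('RGB')).unsqueeze(0)
    return F.normalize(backbone(x), dim=-1)   # L2-normalize: dot product == cosine similarity

# Rank database images against a query by cosine similarity (file names are placeholders)
db_paths = ['db_001.jpg', 'db_002.jpg', 'db_003.jpg']
db = torch.cat([embed(p) for p in db_paths])
scores = (db @ embed('query.jpg').T).squeeze(1)
print([db_paths[i] for i in scores.argsort(descending=True)])
```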
  • 4
    SVoice (Speech Voice Separation)

We provide a PyTorch implementation of the paper “Voice Separation with an Unknown Number of Multiple Speakers”

    SVoice is a PyTorch-based implementation of Facebook Research’s study on speaker voice separation as described in the paper “Voice Separation with an Unknown Number of Multiple Speakers.” This project presents a deep learning framework capable of separating mixed audio sequences where several people speak simultaneously, without prior knowledge of how many speakers are present. The model employs gated neural networks with recurrent processing blocks that disentangle voices over multiple computational steps, while maintaining speaker consistency across output channels. Separate models are trained for different speaker counts, and the largest-capacity model dynamically determines the actual number of speakers in a mixture. The repository includes all necessary scripts for training, dataset preparation, distributed training, evaluation, and audio separation.
    Downloads: 1 This Week
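    At the pattern level, separation is simple even though the model is not: a trained network maps one mixture waveform to one waveform per speaker. The sketch below illustrates that pattern only; the checkpoint format and loading step are assumptions, not the repository's actual API or CLI (the repo provides its own training and separation scripts).

```python
import torch
import torchaudio

# Assumption: the checkpoint holds a fully serialized nn.Module. SVoice's real
# checkpoints and entry points differ; see the repository's scripts for actual usage.
model = torch.load('svoice_2spk_checkpoint.pth', map_location='cpu')
model.eval()

mixture, sr = torchaudio.load('mixture.wav')     # [channels, samples]
mixture = mixture.mean(dim=0, keepdim=True)      # fold to mono: [1, samples]

with torch.no_grad():
    # Assumed output layout: [batch, num_speakers, samples], one channel per speaker
    estimates = model(mixture.unsqueeze(0)).squeeze(0)

for i, est in enumerate(estimates):
    torchaudio.save(f'speaker_{i}.wav', est.unsqueeze(0), sr)
```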
  • 5
    XNNPACK

    High-efficiency floating-point neural network inference operators

    XNNPACK is a highly optimized, low-level neural network inference library developed by Google for accelerating deep learning workloads across a variety of hardware architectures, including ARM, x86, WebAssembly, and RISC-V. Rather than serving as a standalone ML framework, XNNPACK provides high-performance computational primitives—such as convolutions, pooling, activation functions, and arithmetic operations—that are integrated into higher-level frameworks like TensorFlow Lite, PyTorch Mobile, ONNX Runtime, TensorFlow.js, and MediaPipe. The library is written in C/C++ and designed for maximum portability, efficiency, and performance, leveraging platform-specific instruction sets (e.g., NEON, AVX, SIMD) for optimized execution. It supports NHWC tensor layouts and allows flexible striding along the channel dimension to efficiently handle channel-split and concatenation operations without additional cost.
    Downloads: 1 This Week
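    Most applications reach XNNPACK through a host framework rather than calling its C primitives directly. As one illustration, recent TensorFlow Lite builds apply the XNNPACK delegate to supported float operators by default, so an ordinary interpreter run like the sketch below is already executed by XNNPACK kernels on the CPU (the model path, input shape, and thread count are placeholders).

```python
import numpy as np
import tensorflow as tf

# Recent TF Lite builds enable the XNNPACK delegate by default for supported float ops,
# so this plain interpreter invocation runs on XNNPACK's optimized CPU kernels.
interpreter = tf.lite.Interpreter(model_path='model.tflite', num_threads=4)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Float NHWC input matching the layout XNNPACK operates on internally
x = np.random.rand(*inp['shape']).astype(np.float32)
interpreter.set_tensor(inp['index'], x)
interpreter.invoke()
print(interpreter.get_tensor(out['index']).shape)
```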