Machine Learning Apps for Apple iPhone


Browse free open source Machine Learning apps and projects for Apple iPhone below. Use the toggles on the left to filter open source Machine Learning apps by OS, license, language, programming language, and project status.

  • 1
    YOLOv5

    YOLOv5 is the world's most loved vision AI

    Introducing Ultralytics YOLOv8, the latest version of the acclaimed real-time object detection and image segmentation model. YOLOv8 builds on cutting-edge advances in deep learning and computer vision, offering strong performance in both speed and accuracy. Its streamlined design makes it suitable for a wide range of applications and easily adaptable to different hardware platforms, from edge devices to cloud APIs. Explore the YOLOv8 Docs, a comprehensive resource designed to help you understand and use its features and capabilities. Whether you are a seasoned machine learning practitioner or new to the field, the documentation aims to help you get the most out of YOLOv8 in your projects (a minimal Python inference sketch appears after this list).
    Downloads: 121 This Week
  • 2
    Vosk Speech Recognition Toolkit

    Offline speech recognition API for Android, iOS, Raspberry Pi

    Vosk is an offline, open source speech recognition toolkit. It enables speech recognition for 20+ languages and dialects - English, Indian English, German, French, Spanish, Portuguese, Chinese, Russian, Turkish, Vietnamese, Italian, Dutch, Catalan, Arabic, Greek, Farsi, Filipino, Ukrainian, Kazakh, Swedish, Japanese, Esperanto, Hindi, Czech, Polish - with more to come. Vosk models are small (about 50 MB) yet provide continuous large-vocabulary transcription, zero-latency response through a streaming API, a reconfigurable vocabulary, and speaker identification. Speech recognition bindings are available for programming languages such as Python, Java, Node.js, C#, C++, Rust and Go. Vosk supplies speech recognition for chatbots, smart home appliances, and virtual assistants; it can also create subtitles for movies and transcriptions for lectures and interviews. Vosk scales from small devices like a Raspberry Pi or an Android smartphone to large clusters (a minimal Python streaming example appears after this list).
    Downloads: 13 This Week
  • 3
    MACE

    Deep learning inference framework optimized for mobile platforms

    Mobile AI Compute Engine (MACE for short) is a deep learning inference framework optimized for mobile heterogeneous computing on Android, iOS, Linux and Windows devices. The runtime is optimized with NEON, OpenCL and Hexagon, and the Winograd algorithm is used to speed up convolution operations. Initialization is also optimized to be faster. Chip-dependent power options such as big.LITTLE scheduling and Adreno GPU hints are exposed as advanced APIs. Guaranteeing UI responsiveness is sometimes obligatory while a model is running, so mechanisms such as automatically breaking OpenCL kernels into small units are introduced to allow better preemption for the UI rendering task. Graph-level memory allocation optimization and buffer reuse are supported, and the core library keeps external dependencies to a minimum so the library footprint stays small.
    Downloads: 4 This Week
  • 4
    MNN

    MNN is a blazing fast, lightweight deep learning framework

    MNN is a highly efficient and lightweight deep learning framework. It supports both inference and training of deep learning models and has industry-leading on-device performance for both. At present, MNN is integrated into more than 20 apps of Alibaba Inc., such as Taobao, Tmall, Youku, DingTalk and Xianyu, covering more than 70 usage scenarios such as live broadcast, short video capture, search recommendation, product search by image, interactive marketing, equity distribution, and security risk control. MNN is also used on embedded devices such as IoT hardware. MNN Workbench can be downloaded from MNN's homepage; it provides pretrained models, visualized training tools, and one-click deployment of models to devices. On Android, the core .so is about 400 KB, and the OpenCL and Vulkan .so libraries are each about 400 KB as well. MNN supports hybrid computing across multiple devices, currently CPU and GPU (a minimal Python inference sketch appears after this list).
    Downloads: 3 This Week
  • 5
    Bender

    Easily craft fast Neural Networks on iOS

    Bender allows you to easily define and run neural networks in your iOS apps; it uses Apple’s MetalPerformanceShaders under the hood. Bender provides the ease of use of Core ML with the flexibility of a modern ML framework. Bender lets you run trained models; you can use TensorFlow, Keras or Caffe, the choice is yours. Either freeze the graph or export the weights to files: you can import a frozen graph directly from supported platforms, or re-define the network structure and load the weights. Either way, it just takes a few minutes. Bender supports the most common ML nodes and layers, but it is also extensible, so you can write your own custom functions. With Core ML you can integrate trained machine learning models into your app, and it supports Caffe and Keras 1.2.2+ at the moment; Apple released conversion tools to create Core ML models, which can then be run easily. However, with Core ML there is no easy way to add additional pre- or post-processing layers that run on the GPU.
    Downloads: 0 This Week
  • 6

    DGRLVQ

    Dynamic Generalized Relevance Learning Vector Quantization

    A common problem for Learning Vector Quantization (LVQ) based methods is that, for multimodal data structures, one cannot optimally guess the number of prototypes needed for initialization; these algorithms are very sensitive to prototype initialization, and the optimal number of prototypes has to be fixed before running the algorithm. If a prototype, for some reason, lies ‘outside’ the cluster it should represent and there are points of different categories in between, those points act as a barrier and the prototype will not find its optimum position during training. Since the model complexity is often unknown, this problem is avoided by introducing a "dynamic" version of LVQ: Dynamic GRLVQ (DGRLVQ) adapts the model complexity to the given problem during training by adding or removing prototypes dynamically, one by one for each category, until satisfactory classification results are achieved (an illustrative Python sketch of the idea appears after this list).
    Downloads: 0 This Week
  • 7
    TNN

    Uniform deep learning inference framework for mobile

    TNN is a high-performance, lightweight neural network inference framework open sourced by Tencent Youtu Lab. It offers advantages such as cross-platform support, high performance, model compression, and code tailoring. Building on the original Rapidnet and ncnn frameworks, TNN further strengthens support for, and performance optimization on, mobile devices; it also draws on the high performance and good scalability of the industry's mainstream open source frameworks and extends support to x86 and NVIDIA GPUs. On mobile phones, TNN is already used by applications such as mobile QQ, Weishi, and Pitu. As a basic acceleration framework for Tencent Cloud AI, TNN has provided acceleration support for many production services. Everyone is welcome to participate in its collaborative development and help improve the TNN inference framework.
    Downloads: 0 This Week
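
For the YOLOv5/YOLOv8 entry above, the following is a minimal sketch of single-image inference with the ultralytics Python package, as described in the YOLOv8 Docs. The checkpoint name "yolov8n.pt" and the image path are illustrative placeholders.

```python
# Minimal sketch: YOLOv8 inference with the ultralytics package.
# Assumes `pip install ultralytics`; "yolov8n.pt" (a small pretrained
# detection checkpoint) and "image.jpg" are illustrative placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # load a pretrained detection model
results = model("image.jpg")    # run inference on one image

for result in results:
    for box in result.boxes:    # detected bounding boxes
        print(box.xyxy, box.conf, box.cls)
```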
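
For the Vosk entry, here is a minimal sketch of its Python streaming API transcribing a 16-bit PCM mono WAV file. The model directory name "model" and the file name "test.wav" are placeholders; a language model must be downloaded separately from the Vosk site.

```python
# Minimal sketch: offline transcription with Vosk's Python bindings.
# Assumes `pip install vosk`, an unpacked model in ./model, and a
# 16-bit PCM mono WAV file (placeholder name "test.wav").
import json
import wave

from vosk import Model, KaldiRecognizer

wf = wave.open("test.wav", "rb")
model = Model("model")
rec = KaldiRecognizer(model, wf.getframerate())

while True:
    data = wf.readframes(4000)          # stream audio in small chunks
    if len(data) == 0:
        break
    if rec.AcceptWaveform(data):        # a segment was finalized
        print(json.loads(rec.Result())["text"])

print(json.loads(rec.FinalResult())["text"])   # flush remaining audio
```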
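
For the MNN entry, the sketch below shows the general shape of on-device inference with MNN's session-style Python bindings. The model path, input shape, and tensor-layout constants are assumptions and may differ between MNN versions; treat this as an outline rather than a definitive example.

```python
# Sketch of MNN inference via its Python bindings (pip install MNN).
# "model.mnn" and the 1x3x224x224 input shape are illustrative assumptions.
import numpy as np
import MNN

interpreter = MNN.Interpreter("model.mnn")       # load a converted .mnn model
session = interpreter.createSession()
input_tensor = interpreter.getSessionInput(session)

# Fake input data standing in for a preprocessed image.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
tmp = MNN.Tensor((1, 3, 224, 224), MNN.Halide_Type_Float,
                 data, MNN.Tensor_DimensionType_Caffe)
input_tensor.copyFrom(tmp)

interpreter.runSession(session)
output = interpreter.getSessionOutput(session)
print(output.getData())                          # raw output values
```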
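
Finally, for the DGRLVQ entry, the following is an illustrative, hypothetical sketch of the idea described above - LVQ-style prototype updates combined with dynamic prototype insertion - written with NumPy. It is not the project's actual code; the function and parameter names are invented, and the relevance-learning part of GRLVQ is omitted for brevity.

```python
# Illustrative sketch (not the project's code): LVQ1-style updates plus
# dynamic prototype insertion until classification is satisfactory.
import numpy as np

def train_dynamic_lvq(X, y, lr=0.05, epochs=30, target_acc=0.95, max_protos=20):
    classes = np.unique(y)
    # Start with one prototype per class, placed at the class mean.
    protos = np.array([X[y == c].mean(axis=0) for c in classes])
    labels = classes.copy()

    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = np.argmin(np.linalg.norm(protos - xi, axis=1))  # nearest prototype
            sign = 1.0 if labels[j] == yi else -1.0              # attract or repel
            protos[j] += sign * lr * (xi - protos[j])

        # Evaluate the current prototype set.
        nearest = np.argmin(np.linalg.norm(X[:, None] - protos[None], axis=2), axis=1)
        pred = labels[nearest]
        if (pred == y).mean() >= target_acc or len(protos) >= max_protos:
            break

        # Dynamically add a prototype for the worst-classified class,
        # seeded at one of its misclassified samples.
        worst = min(classes, key=lambda c: (pred[y == c] == c).mean())
        seed = X[(y == worst) & (pred != y)][0]
        protos = np.vstack([protos, seed])
        labels = np.append(labels, worst)

    return protos, labels
```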