Name | Modified | Size |
---|---|---|
0.5.0 (14_12_2022) Inference on Android with ONNX Runtime source code.tar.gz | 2022-12-01 | 87.9 MB |
0.5.0 (14_12_2022) Inference on Android with ONNX Runtime source code.zip | 2022-12-01 | 88.5 MB |
README.md | 2022-12-01 | 6.2 kB |
Features:
* Added Android inference support
* Built Android artifacts for "impl", "onnx" and "visualization" modules #422
* Added Android-specific models to the model zoo
    * Classification #438:
        * EfficientNet4Lite
        * MobilenetV1
    * Object Detection:
        * SSDMobileNetV1 #440
        * EfficientDetLite0 #443
    * Pose Detection #442:
        * MoveNetSinglePoseLighting
        * MoveNetSinglePoseThunder
    * Face Detection #461:
        * UltraFace320
        * UltraFace640
    * Face Alignment #441:
        * Fan2d106
* Implemented preprocessing operations working on Android `Bitmap` #416 #478 (see the sketch after this list):
    * `Resize`
    * `Rotate`
    * `Crop`
    * `ConvertToFloatArray`
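As a rough illustration, these operations compose with the pipeline DSL introduced further down in these notes. A minimal sketch, assuming the 0.5.0 DSL shape; parameter names such as `outputWidth`, `degrees`, and `layout` are assumptions, not confirmed signatures:

```kotlin
import android.graphics.Bitmap

// KotlinDL imports omitted: package locations changed in 0.5.0 (see Breaking changes).
// Sketch only: the operation parameters below are assumptions based on these notes.
fun preprocess(bitmap: Bitmap) =
    pipeline<Bitmap>()
        .resize {
            outputWidth = 224
            outputHeight = 224
        }
        .rotate { degrees = 90f }                      // e.g. to match the camera sensor orientation
        .toFloatArray { layout = TensorLayout.NHWC }   // assumed layout enum
        .apply(bitmap)                                 // produces the float tensor data
```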
* Added utility functions to convert `ImageProxy` to `Bitmap` #458
* Added NNAPI execution provider #420
* Added an API to create `OnnxInferenceModel` from a `ByteArray` representation #415 (see the sketch below)
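On Android, for example, a model can be bundled as a raw resource and created from its bytes. A minimal sketch; the byte-array factory follows #415, and `R.raw.model` is a hypothetical resource:

```kotlin
// Sketch: creating an OnnxInferenceModel from a ByteArray (#415).
// R.raw.model is a hypothetical resource id; error handling omitted.
val modelBytes: ByteArray = context.resources.openRawResource(R.raw.model).readBytes()
val model = OnnxInferenceModel(modelBytes)
```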
* Introduced a Gradle task to download model hub models before the build #444
* Added utility functions to draw detection results on Android `Canvas` #450
* Implemented new preprocessing API #425 (a composition sketch follows this list)
    * Introduced an `Operation` interface to represent a preprocessing operation for any input and output
    * Added a `PreprocessingPipeline` class to combine operations together in a type-safe manner
    * Re-implemented old operations with the new API
    * Added convenience functions such as `pipeline` to start a new preprocessing pipeline, `call` to invoke operations defined elsewhere, and `onResult` to access intermediate preprocessing results
    * Converted the `ModelType#preprocessInput` function to an `Operation` #429
    * Converted common preprocessing functions for models trained on ImageNet to `Operation` #429
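A minimal sketch of how these conveniences might compose on desktop; the operation names follow these notes, while exact parameters and the model type used are assumptions:

```kotlin
import java.awt.image.BufferedImage
import java.io.File
import javax.imageio.ImageIO

// Sketch of pipeline / onResult / call composition (assumed signatures).
val preprocessing = pipeline<BufferedImage>()
    .resize {
        outputWidth = 224
        outputHeight = 224
    }
    .onResult { ImageIO.write(it, "jpg", File("resized.jpg")) } // peek at an intermediate result
    .toFloatArray { }
    .call(ONNXModels.CV.EfficientNet4Lite().preprocessor)       // reuse the model type's preprocessing
```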
* Added new ONNX features (see the sketch after this list)
    * Added execution providers support (CPU, CUDA, NNAPI) and convenient extensions for inference with them #386
    * Introduced the `OnnxInferenceModel#predictRaw` function, which allows custom `OrtSession.Result` processing, and extension functions to extract common data types from the result #465
    * Added validation of the input shape #385
    * Added an `Imagenet` enum to represent different ImageNet dataset labels and added support for zero-indexed COCO labels #438 #446
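Combining the two, inference with an explicit execution provider and custom result processing might look roughly as follows; `inferUsing`, `predictRaw`, and the result extension follow #386/#465, but treat the exact names and signatures as assumptions:

```kotlin
// Sketch: NNAPI-backed inference with raw result processing (assumed API shapes).
val logits = model.inferUsing(NNAPI()) {
    it.predictRaw(inputFloats) { ortResult ->
        ortResult.getFloatArray("output") // extension from #465; the output name is hypothetical
    }
}
```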
* Implemented unified summary printing for TensorFlow and ONNX models #368
* Added a `FlatShape` interface to allow manipulating the detected shapes in a unified way #480
* Introduced a `DataLoader` interface for loading and preprocessing data for dataset implementations #424
* Improved Swing visualization utilities #379 #388
* Simplified the `Layer` interface to leave only the `build` function to be implemented and removed explicit output shape computation #408
Breaking changes:
* Refactored module structure and packages #412 #469
* Extracted "tensorflow" module for learning and inference with Tensorflow backend
* Extracted "impl" module for implementation classes and utilities
* Moved preprocessing operation implementations to the "impl" module
* Removed dependency of "api" module on the "dataset" module
* Changed packages for "api", "impl", "dataset" and "onnx" so that they match the corresponding module name
* Preprocessing classes such as `Preprocessing`, `ImagePreprocessing`, `ImagePreprocessor`, `ImageSaver`, `ImageShape`, `TensorPreprocessing`, and `Preprocessor` were removed in favor of the new preprocessing API #425
* Removed the `Sharpen` preprocessor since the `ModelType#preprocessor` field was introduced, which can be used in the preprocessing pipeline via the `call` function #429 (see the migration sketch below)
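For migration, a sketch of replacing a former `Sharpen` step with the model type's preprocessor; the model type and parameters are illustrative assumptions:

```kotlin
// Before 0.5.0 (removed): a Sharpen(modelType) step inside the preprocessing declaration.
// From 0.5.0: plug the model type's preprocessor into the pipeline via call (#429).
val preprocessing = pipeline<BufferedImage>()
    .resize {
        outputWidth = 224
        outputHeight = 224
    }
    .toFloatArray { }
    .call(TFModels.CV.ResNet50().preprocessor)
```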
Bugfixes:
* Fixed loading of JPEG files not supported by the standard Java ImageIO #384
* Updated ONNX Runtime version to enable inference on M1 chips #361
* Fixed channel ordering for image recognition models #400
* Avoided warnings from the `loadWeightsForFrozenLayers` function for layers without parameters #382
New documentation and examples:
* Inference with KotlinDL and ONNX Runtime on desktop and Android
* KotlinDL ONNX Model Zoo
* Sample Android App
Thanks to our contributors:
* Nikita Ermolenko (@ermolenkodev)
* Julia Beliaeva (@juliabeliaeva)
* Burak Akgün (@mbakgun)
* Pavel Gorgulov (@devcrocod)