NNEngine - Neural Network Engine

Description

Demo video: Overview, Monocular depth estimation demo, Artistic style transfer demo

Tutorial video: Implement depth estimation

Documentation: Link

By simply calling a few Blueprint nodes, you can load and run cutting-edge AI.

This plugin supports ONNX (Open Neural Network Exchange), a widely used open-source machine learning model format.

Many ML frameworks such as PyTorch and TensorFlow can export their models in ONNX format.
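For example, a PyTorch model can be exported to ONNX with a few lines of Python (a minimal sketch; the ResNet-18 model and input shape are placeholders for illustration):

  import torch
  import torchvision

  # Placeholder model; any torch.nn.Module can be exported the same way.
  model = torchvision.models.resnet18().eval()
  dummy_input = torch.randn(1, 3, 224, 224)  # example input the model expects
  torch.onnx.export(model, dummy_input, "model.onnx", opset_version=12)

The resulting .onnx file is what this plugin loads.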

Many trained models are available in the ONNX Model Zoo.

Performance is our first consideration.

The plugin itself is optimized, and it also supports runtime model optimization and GPU acceleration on a variety of hardware.

The demo project contains practical examples of

  • Human detection

  • Human pose estimation

  • Face detection

  • Facial landmark estimation

  • Eye tracking

using a single RGB camera.

Example projects for additional use cases are also available.

Prerequisites for using CUDA and TensorRT

To use CUDA and TensorRT:

  • you need to disable the NNERuntimeORT plugin, which is enabled by default since UE 5.5 (this plugin and NNERuntimeORT use different versions of ONNX Runtime); a .uproject sketch is shown after this list.

  • you need to install the following versions of CUDA, cuDNN, and TensorRT.
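Disabling a plugin can be done from the editor's Plugins window, or by marking it as disabled in the project's .uproject file, roughly like this (a minimal sketch showing only the relevant entry):

  {
    "Plugins": [
      { "Name": "NNERuntimeORT", "Enabled": false }
    ]
  }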

Windows:

The required versions of cuDNN and TensorRT differ between the RTX30** series and other GPUs. We have only tested the GTX1080Ti, RTX2070, RTX3060Ti, and RTX3070; other GPUs are untested.

Versions for GPUs other than the RTX30** series (RTX20**, GTX10**)

  • CUDA: 11.0.3

  • cuDNN: 8.0.2 (July 24th, 2020), for CUDA 11.0

  • TensorRT: 7.1.3.4 for CUDA 11.0

Versions for RTX30** series

  • CUDA: 11.0.3

  • cuDNN: 8.0.5 (November 9th, 2020), for CUDA 11.0

  • TensorRT: 7.2.2.3 for CUDA 11.0

Ubuntu:

  • CUDA: 11.4.2 for Linux x86_64 Ubuntu 18.04

  • cuDNN: 8.2.4 (September 2nd, 2021), for CUDA 11.4, Linux x86_64

  • TensorRT: 8.2.3.0 (8.2 GA Update 2) for Linux x86_64, CUDA 11.0-11.5

To use TensorRT, it is recommended to set the following environment variables so that the TensorRT engine is cached:

  • "ORT_TENSORRT_ENGINE_CACHE_ENABLE", set to "1".

  • "ORT_TENSORRT_CACHE_PATH", set to any path where you want the cache to be saved, for example "C:\temp".

Included formats

  • Unreal Engine