NVIDIA DeepStream SDK 6.4 Release Notes

These release notes are for the NVIDIA® DeepStream SDK for NVIDIA® Tesla®, NVIDIA® Ampere®, NVIDIA® Hopper®, NVIDIA® Ada Lovelace®, NVIDIA® Jetson AGX Orin™, NVIDIA® Jetson Orin™ NX, and NVIDIA® Jetson Orin™ Nano.

1.0 About This Release

1.1 What's New

The following new features are supported in this DeepStream SDK release:

1.1.1 DS 6.4

Note: The Jetson version of DeepStream is based on JetPack 6.0 DP (Developer Preview). It is not intended for production use.

1.1.2 DS 6.3 (Previous Release)

DeepStream 6.3 applications can be migrated to DeepStream 6.4. Refer to the "Application Migration to DeepStream 6.4 from DeepStream 6.3" section in the NVIDIA DeepStream SDK Developer Guide 6.4 Release.

1.1.3 Graph Composer 3.1.0

Graph Composer 3.1.0 is primarily a compute stack update, adding support for Ubuntu 22.04.

1.2 Contents of This Release

This release includes the following:

1.3 Documentation in This Release

This release contains the following documentation:

1.4 Breaking Changes with Respect to DeepStream 6.3

1.5 Differences With DeepStream 6.1 and Above

The gstreamer1.0-libav, libav, OSS encoder/decoder plugins (x264/x265), and audioparsers packages have been removed from DeepStream dockers since DeepStream 6.1. You may install these packages as needed (gstreamer1.0-plugins-good, gstreamer1.0-plugins-bad, gstreamer1.0-plugins-ugly).

Specifically, for deepstream-nmos, deepstream-avsync-app, and the Python-based deepstream-imagedata-multistream app, you need to install gstreamer1.0-libav and gstreamer1.0-plugins-good, as sketched below.
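As a minimal sketch, the packages can be installed inside the container with apt (assuming the Ubuntu 22.04 base of the DeepStream 6.4 dockers):

  apt-get update
  # Required by deepstream-nmos, deepstream-avsync-app, and the Python
  # deepstream-imagedata-multistream app:
  apt-get install -y gstreamer1.0-libav gstreamer1.0-plugins-good
  # Optional, only if your pipeline needs elements from these sets:
  apt-get install -y gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly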

The Gst-nveglglessink plugin is deprecated. Use the Gst-nv3dsink plugin on Jetson instead; a minimal pipeline sketch follows.
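A hedged gst-launch sketch for rendering through nv3dsink on Jetson (videotestsrc and nvvideoconvert are standard elements; verify availability on your system with gst-inspect-1.0 nv3dsink):

  gst-launch-1.0 videotestsrc num-buffers=300 ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! nv3dsink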

2.0 Limitations

This section provides details about issues discovered during development and QA but not resolved in this release.

3.0 Notes

Note: OpenCV is deprecated and disabled by default. However, you can enable OpenCV in plugins such as nvinfer (nvdsinfer) and dsexample (gst-dsexample) by setting WITH_OPENCV=1 in the Makefile of those components. Refer to each component's README for instructions; a build sketch follows below. When using docker, make sure the libopencv-dev package is installed inside the container if the application requires it.
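A build sketch for gst-dsexample with OpenCV enabled (the install path and CUDA_VER value are assumptions for a default DeepStream 6.4 setup; check the component README for the exact steps):

  apt-get install -y libopencv-dev      # OpenCV headers needed for WITH_OPENCV=1
  cd /opt/nvidia/deepstream/deepstream-6.4/sources/gst-plugins/gst-dsexample
  export CUDA_VER=12.2                  # match the CUDA version on your system
  make WITH_OPENCV=1
  make install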

3.1 Applications May Be Deployed in a Docker Container

Applications built with DeepStream can be deployed using a Docker container, available on NGC (https://ngc.nvidia.com/). Sign up for an NVIDIA GPU Cloud account and look for DeepStream containers to get started.

As an example, you can use the DeepStream 6.4 docker containers on NGC and run the deepstream-test4-app sample application as an Azure edge runtime module on your edge device.
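For instance, a hedged sketch of pulling and starting the x86 container from NGC (display forwarding options depend on your host; --gpus all requires the NVIDIA Container Toolkit):

  docker pull nvcr.io/nvidia/deepstream:6.4-gc-triton-devel
  docker run --gpus all -it --rm --network host \
    -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
    nvcr.io/nvidia/deepstream:6.4-gc-triton-devel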

The following procedure deploys deepstream-test4-app:

Set up and install Azure IoT Edge on your system with the instructions provided in the Azure module client README file in the deepstream-6.4 package:

<deepstream-6.4_package>/sources/libs/azure_protocol_adaptor/module_client/README

Note: For the Jetson platform, omit installation of the Moby packages. Moby is currently incompatible with NVIDIA Container Runtime.

See the Azure documentation for information about prerequisites for creating an Azure edge device on the Azure portal: https://docs.microsoft.com/en-us/azure/iot-edge/how-to-deploy-modules-portal#prerequisites

To deploy deepstream-test4-app as an Azure IoT edge runtime module:

  1. On the Azure portal, click the IoT edge device you have created and click Set Modules.
  2. Enter these settings:
     Container Registry Settings:
       Name: NGC
       Address: nvcr.io
       User name: $oauthtoken
       Password: use the password or API key from your NGC account
     Deployment modules: Add a new module with the name ds.
     Image URI:
       For x86 dockers: docker pull nvcr.io/nvidia/deepstream:6.4-gc-triton-devel
       For Multi-Arch dockers (x86 and Jetson): docker pull nvcr.io/nvidia/deepstream:6.4-triton-multiarch
     Container Create options:

  3. Specify route options for the module:
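Once the deployment is applied, you can verify the module from the device shell. This is a hedged sketch using the standard Azure IoT Edge CLI; the module name ds matches the deployment above:

  iotedge list        # the ds module should be listed as "running"
  iotedge logs ds     # stream the module's logs to confirm startup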

3.2 Sample Applications Malfunction if Docker Environment Cannot Support Display

If the Docker environment cannot support display, the sample applications deepstream-test1, deepstream-test2, deepstream-test3, and deepstream-test4 do not work as expected.

Workaround: Recompile the test applications after replacing nveglglessink (on x86) or nv3dsink (on Jetson) with fakesink. With deepstream-test4, you can instead pass the --no-display command-line switch to select fakesink.

Alternatively, a virtual display can be used; see the sketch below. For more information, refer to the "How to visualize the output if the display is not attached to the system" section under "Quick Start Guide" in the NVIDIA DeepStream Developer Guide 6.4 Release.
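One common way to provide a virtual display is Xvfb. This is a hedged sketch, not the guide's prescribed method, and the application argument is a placeholder:

  apt-get install -y xvfb
  Xvfb :1 -screen 0 1920x1080x24 &      # virtual X server on display :1
  export DISPLAY=:1
  ./deepstream-test1-app <your H.264 elementary stream>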

3.3 Installing DeepStream on Jetson

  1. Download the NVIDIA SDK Manager to install JetPack 6.0 DP.
  2. Select all the JetPack 6.0 DP components except DeepStreamSDK from the "Additional SDKs" section.

Refer to the "Quick Start Guide" section in the NVIDIA DeepStream Developer Guide 6.4 Release to update additional BSP libraries if available. Continue with the DeepStream installation instructions after the BSP update.

Note: The NVIDIA Container Runtime package must be installed via JetPack 6.0 DP and is a prerequisite for all DeepStream L4T docker containers.
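A minimal sketch of starting the multi-arch container on Jetson (the image tag comes from section 3.4 below; --runtime nvidia relies on the NVIDIA Container Runtime noted above):

  docker pull nvcr.io/nvidia/deepstream:6.4-triton-multiarch
  docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/deepstream:6.4-triton-multiarch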

3.4 Triton Inference Server in DeepStream

Triton Inference Server (version 23.08) on dGPU is supported only via the deepstream:6.4-triton-multiarch docker for x86. On Jetson, version 23.11 is supported with or without docker.

Refer to the NVIDIA DeepStream Development Guide 6.4 Release for more details about Triton inference server.

Triton Inference Server supports the following frameworks:

Framework: TensorRT (Tesla: Yes, Jetson: Yes)
Notes / Limitations:
Supports TensorRT plan or engine files (*.plan, *.engine).
Triton model config.pbtxt for the TensorRT engine file format:
  platform: "tensorrt_plan"
  default_model_filename: "model.engine"
  input [...]
  output [...]
Triton TensorRT backend documentation: https://github.com/triton-inference-server/tensorrt_backend

Framework: TensorFlow (Tesla: Yes, Jetson: Yes)
Notes / Limitations:
Supports TensorFlow 2.x (TensorFlow 1.x is deprecated).
Supports TF-TensorRT optimization.
Supported model formats: GraphDef or SavedModel. Other TF formats such as checkpoint variables or estimators are not directly supported.
Triton model config.pbtxt for the GraphDef format:
  platform: "tensorflow_graphdef"
  default_model_filename: "model.graphdef"
Triton model config.pbtxt for the SavedModel format:
  platform: "tensorflow_savedmodel"
  default_model_filename: "model.savedmodel"
Triton TensorFlow backend documentation: https://github.com/triton-inference-server/tensorflow_backend

Framework: ONNX (Tesla: Yes, Jetson: Yes)
Notes / Limitations:
Supports ONNX models.
Supports ONNX TensorRT optimization.
Triton model config.pbtxt for ONNX:
  platform: "onnxruntime_onnx"
  default_model_filename: "model.onnx"
  # [optional: TensorRT optimization, disabled by default]
  optimization { execution_accelerators { gpu_execution_accelerator : [ { name : "tensorrt" parameters { key: "precision_mode" value: "FP16" } parameters { key: "max_workspace_size_bytes" value: "1073741824" } } ] } }
Triton ONNX Runtime backend documentation: https://github.com/triton-inference-server/onnxruntime_backend

Framework: PyTorch (TorchScript) (Tesla: Yes, Jetson: Yes)
Notes / Limitations:
Supports TorchScript models (file format *.pt); the PyTorch model must be traced and saved as a TorchScript model (.pt).
Triton model config.pbtxt for the TorchScript format:
  backend: "pytorch"
  platform: "pytorch_libtorch"
  default_model_filename: "model.pt"
  input [ { name: "INPUT__0" } ]
  output [ { name: "OUTPUT__1" }, { name: "OUTPUT__0" } ]
Triton PyTorch backend documentation: https://github.com/triton-inference-server/pytorch_backend

Framework: Python Backend (Tesla: Yes, Jetson: Yes)
Notes / Limitations:
Supports custom Triton Python backends.
Supports custom Python conda execution environments.
Triton model config.pbtxt for the Python file format:
  backend: "python"
  default_model_filename: "model.py"
  # [optional: custom conda env, disabled by default]
  parameters: { key: "EXECUTION_ENV_PATH", value: { string_value: "$$TRITON_MODEL_DIRECTORY/python3.6.tar.gz" } }
Triton Python backend documentation: https://github.com/triton-inference-server/python_backend

Framework: Ensemble Models (Tesla: Yes, Jetson: Yes)
Notes / Limitations:
Supports Triton ensemble models to connect multiple models into one inference graph.
Triton model config.pbtxt for an ensemble model:
  platform: "ensemble"
  input [...]
  output [...]
  ensemble_scheduling { step [ { model_name: "model_A" }, { model_name: "model_B" }, { model_name: "model_C" } ] }
Triton ensemble model documentation: https://github.com/triton-inference-server/server/blob/main/docs/user_guide/architecture.md#ensemble-models
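
To illustrate how these config.pbtxt files are laid out on disk, here is a hedged sketch of a minimal Triton model repository for a TensorRT engine (the model name and engine file are placeholders; the Gst-nvinferserver configuration points at the repository root):

  mkdir -p models/my_detector/1
  cp my_detector.engine models/my_detector/1/model.engine

with models/my_detector/config.pbtxt containing:

  platform: "tensorrt_plan"
  default_model_filename: "model.engine"
  max_batch_size: 1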

For more information refer to the following links:


Notice

THE INFORMATION IN THIS DOCUMENT AND ALL OTHER INFORMATION CONTAINED IN NVIDIA DOCUMENTATION REFERENCED IN THIS DOCUMENT IS PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE INFORMATION FOR THE PRODUCT, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA's aggregate and cumulative liability towards customer for the product described in this document shall be limited in accordance with the NVIDIA terms and conditions of sale for the product. THE NVIDIA PRODUCT DESCRIBED IN THIS DOCUMENT IS NOT FAULT TOLERANT AND IS NOT DESIGNED, MANUFACTURED OR INTENDED FOR USE IN CONNECTION WITH THE DESIGN, CONSTRUCTION, MAINTENANCE, AND/OR OPERATION OF ANY SYSTEM WHERE THE USE OR A FAILURE OF SUCH SYSTEM COULD RESULT IN A SITUATION THAT THREATENS THE SAFETY OF HUMAN LIFE OR SEVERE PHYSICAL HARM OR PROPERTY DAMAGE (INCLUDING, FOR EXAMPLE, USE IN CONNECTION WITH ANY NUCLEAR, AVIONICS, LIFE SUPPORT OR OTHER LIFE CRITICAL APPLICATION). NVIDIA EXPRESSLY DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY OF FITNESS FOR SUCH HIGH RISK USES. NVIDIA SHALL NOT BE LIABLE TO CUSTOMER OR ANY THIRD PARTY, IN WHOLE OR IN PART, FOR ANY CLAIMS OR DAMAGES ARISING FROM SUCH HIGH RISK USES.

NVIDIA makes no representation or warranty that the product described in this document will be suitable for any specified use without further testing or modification. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer's sole responsibility to ensure the product is suitable and fit for the application planned by customer and to do the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer's product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA does not accept any liability related to any default, damage, costs or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document, or (ii) customer product designs.

Other than the right for customer to use the information in this document with the product, no other license, either expressed or implied, is hereby granted by NVIDIA under this document. Reproduction of information in this document is permissible only if reproduction is approved by NVIDIA in writing, is reproduced without alteration, and is accompanied by all associated conditions, limitations, and notices.

Trademarks

NVIDIA, the NVIDIA logo, TensorRT, NVIDIA Ampere, NVIDIA Hopper and NVIDIA Tesla are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.

Copyright © 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

www.nvidia.com
