Application Report
Developing Multi-Camera Applications on AM6x
Jianzhong Xu, Qutaiba Saleh
ABSTRACT
This report describes application development using multiple CSI-2 cameras on the AM6x family of devices. A reference design of object detection with deep learning on 4 cameras on the AM62A SoC is presented with performance analysis. General principles of the design apply to other SoCs with a CSI-2 interface, such as AM62x and AM62P.
Introduction
Embedded cameras play a critical role in modern vision systems. Using multiple cameras in a system expands the capabilities of these systems and enables tasks that are not achievable with a single camera. Below are some examples of applications using multiple embedded cameras:
- Security Surveillance: Multiple strategically positioned cameras provide complete surveillance coverage. They deliver panoramic views, reduce blind spots, and enhance the accuracy of object tracking and recognition, improving overall security measures.
- Stereo Vision: Multiple cameras are used to create a stereo vision setup, enabling three-dimensional information and the estimation of depth. This is crucial for tasks such as obstacle detection in autonomous vehicles, precise object manipulation in robotics, and enhanced realism of augmented reality experiences.
- Cabin Recorder and Camera Mirror System: A car cabin recorder with multiple cameras can provide more coverage using a single processor. Similarly, a camera mirror system with two or more cameras can expand the driver's field of view and eliminate blind spots on all sides of the vehicle.
- Medical Imaging: Multiple cameras can be used in medical imaging for tasks like surgical navigation, providing surgeons with multiple perspectives for enhanced precision. In endoscopy, multiple cameras enable a thorough examination of internal organs.
- Drones and Aerial Imaging: Drones often come equipped with multiple cameras to capture high-resolution images or videos from different angles. This is useful in applications like aerial photography, agriculture monitoring, and land surveying.
With the advancement of microprocessors, multiple cameras can be integrated with a single system-on-chip (SoC) to provide compact and efficient solutions. The AM62Ax SoC, with high-performance video/vision processing and deep learning acceleration, is an ideal device for the above-mentioned use cases. Another AM6x device, the AM62P, is built for high-performance embedded 3D display applications. Equipped with 3D graphics acceleration, the AM62P can easily stitch together the images from multiple cameras and produce a high-resolution panoramic view. Detailed features of the AM62A/AM62P SoCs are described in various documents, such as [4], [5], and [6]. Table 1-1 shows the main differences between AM62A and AM62P with respect to image processing.
Table 1-1. Differences Between AM62A and AM62P in Image Processing

| SoC | AM62A | AM62P |
|---|---|---|
| Supported camera type | With or without built-in ISP | With built-in ISP |
| Camera output data | Raw/YUV/RGB | YUV/RGB |
| ISP HWA | Yes | No |
| Deep learning HWA | Yes | No |
| 3-D graphics HWA | No | Yes |
Connecting Multiple CSI-2 Cameras to the SoC
The camera subsystem on the AM6x SoC consists of the following components, as shown in Figure 2-1:
- MIPI D-PHY Receiver: receives video streams from external cameras, supporting up to 1.5 Gbps per data lane for 4 lanes.
- CSI-2 Receiver (RX): receives video streams from the D-PHY receiver and either directly sends the streams to the ISP or dumps the data to DDR memory. This module supports up to 16 virtual channels.
- SHIM: a DMA wrapper that enables sending the captured streams to memory over DMA. Multiple DMA contexts can be created by this wrapper, with each context corresponding to a virtual channel of the CSI-2 Receiver.
Multiple cameras can be supported on the AM6x through the use of virtual channels of CSI-2 RX, even though there is only one CSI-2 RX interface on the SoC. An external CSI-2 aggregating component is needed to combine multiple camera streams and send them to a single SoC. Two types of CSI-2 aggregating solutions can be used, described in the following sections.
CSI-2 Aggregator Using SerDes
One way of aggregating multiple camera streams is to use a serializer/deserializer (SerDes) solution. The CSI-2 data from each camera is serialized by a serializer and transferred over a cable. The deserializer receives the serialized data transferred over the cables (one cable per camera), converts the streams back to CSI-2 data, and then sends a single interleaved CSI-2 stream to the CSI-2 RX interface on the SoC. Each camera stream is identified by a unique virtual channel. This aggregating solution offers the additional benefit of allowing long-distance connections, up to 15 m, from the cameras to the SoC.
The FPD-Link or V3-Link serializers and deserializers (SerDes), supported in the AM6x Linux SDK, are the most popular technologies for this type of CSI-2 aggregating solution. The FPD-Link and V3-Link deserializers have back channels that can be used to send frame-sync signals to synchronize all the cameras, as described in [7].
Figure 2-2 shows an example of using SerDes to connect multiple cameras to a single AM6x SoC.
An example of this aggregating solution can be found in the Arducam V3Link Camera Solution Kit. This kit has a deserializer hub that aggregates 4 CSI-2 camera streams, as well as 4 pairs of V3Link serializers and IMX219 cameras, including FAKRA coaxial cables and 22-pin FPC cables. The reference design discussed later is built on this kit.
CSI-2 Aggregator without Using SerDes
This type of aggregator can directly interface with multiple MIPI CSI-2 cameras and aggregate the data from all cameras into a single CSI-2 output stream.
Figure 2-3 shows an example of such a system. This type of aggregating solution does not use any serializer/deserializer, but is limited by the maximum CSI-2 data-transfer distance, which is up to 30 cm. The AM6x Linux SDK does not support this type of CSI-2 aggregator.
Enabling Multiple Cameras in Software
Camera Subsystem Software Architecture
Figure 3-1 shows a high-level block diagram of the camera capture software in the AM62A/AM62P Linux SDK, corresponding to the hardware architecture shown in Figure 2-2.
This software architecture enables the SoC to receive multiple camera streams using SerDes, as shown in Figure 2-2. The FPD-Link/V3-Link SerDes provides a unique I2C address and virtual channel for each camera. A separate device tree overlay with the unique I2C address must be created for each camera (see the sketch below for how such an overlay is typically enabled). The CSI-2 RX driver identifies each camera by its unique virtual channel number and creates a DMA context for each camera stream. A video node is created for each DMA context. The data from each camera is received over DMA and stored in memory. User-space applications use the video nodes corresponding to each camera to capture the camera data. Examples of using this software architecture are given in Chapter 4 – Reference Design.

Sensor-specific drivers based on the V4L2 framework can be plugged in. Refer to [8] for details on integrating a new sensor driver into the Linux SDK.
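As an illustration, device tree overlays are typically enabled through uEnv.txt on the boot partition of the SD card. The overlay file name below is an assumption and varies across SDK versions and camera boards; verify the available overlays under /boot/dtb/ti/ on the target.

```
# Hypothetical example: enable a SerDes fusion-board overlay at boot by
# editing uEnv.txt on the BOOT partition. The .dtbo name is an assumption;
# list /boot/dtb/ti/*.dtbo on the target to find the overlay matching your
# SDK version and deserializer board.
name_overlays=ti/k3-am62a7-sk-fusion.dtbo
```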
Image Pipeline Software Architecture
The AM6x Linux SDK provides the GStreamer (GST) framework, which can be used in user space to integrate the image processing components for various applications. The hardware accelerators (HWA) on the SoC, such as the Vision Pre-processing Accelerator (VPAC) or ISP, video encoder/decoder, and deep learning compute engine, are accessed through GST plugins. The VPAC (ISP) itself has multiple blocks, including the Vision Imaging Sub-System (VISS), Lens Distortion Correction (LDC), and Multiscaler (MSC), each with a corresponding GST plugin.
Figure 3-2 shows the block diagram of a typical image pipeline from the camera to encoding or deep learning applications on AM62A. For more details about the end-to-end data flow, refer to the EdgeAI SDK documentation.
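As a concrete example of how these plugins fit together, the minimal single-camera sketch below captures raw Bayer frames from one IMX219 and runs them through the ISP to the display. The device node, sensor bit depth, and DCC file paths are assumptions that depend on the SDK version and camera setup.

```
# Minimal sketch (assumptions: /dev/video3, 8-bit Bayer output, IMX219 DCC
# files under /opt/imaging/imx219): one camera through the VPAC ISP to the
# screen.
gst-launch-1.0 v4l2src device=/dev/video3 io-mode=dmabuf-import ! \
  video/x-bayer,width=1920,height=1080,format=rggb ! \
  tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI \
    dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin \
    sensor-0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a.bin \
    format-msb=7 ! \
  video/x-raw,format=NV12 ! kmssink driver-name=tidss sync=false
```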
For AM62P, the image pipeline is simpler because there is no ISP on AM62P.
With a video node created for each camera, GStreamer pipelines can stream from multiple camera inputs (aggregated through the single CSI-2 RX interface) at the same time. A reference design using GStreamer for multi-camera applications is given in the next chapter.
Reference Design
This chapter presents a reference design for running multi-camera applications on the AM62A EVM, using the Arducam V3Link Camera Solution Kit to connect four CSI-2 cameras to AM62A and running object detection on all four cameras.
Supported Cameras
The Arducam V3Link kit works with both FPD-Link/V3-Link-based cameras and Raspberry Pi-compatible CSI-2 cameras. The following cameras have been tested:
- D3 Engineering D3RCM-IMX390-953
- Leopard Imaging LI-OV2312-FPDLINKIII-110H
- IMX219 cameras in the Arducam V3Link Camera Solution Kit
Setting up Four IMX219 Cameras
Follow the instructions provided in the AM62A Starter Kit EVM Quick Start Guide to set up the SK-AM62A-LP EVM (AM62A SK) and ArduCam V3Link Camera Solution Quick Start Guide to connect the cameras to AM62A SK through the V3Link kit. Make sure the pins on the flex cables, cameras, V3Link board, and AM62A SK are all aligned properly.
Figure 4-1 shows the setup used for the reference design in this report. The main components in the setup include:
- 1X SK-AM62A-LP EVM board
- 1X Arducam V3Link d-ch adapter board
- FPC cable connecting Arducam V3Link to SK-AM62A
- 4X V3Link camera adapters (serializers)
- 4X RF coaxial cables to connect V3Link serializers to V3Link d-ch kit
- 4X IMX219 Cameras
- 4X CSI-2 22-pin cables to connect cameras to serializers
- Cables: HDMI cable, USB-C power supply for SK-AM62A-LP, and 12V power source for the V3Link d-ch kit
- Other components not shown in Figure 4-1: micro-SD card, micro-USB cable to access SK-AM62A-LP, and Ethernet for streaming
Configuring Cameras and CSI-2 RX Interface
Set up the software according to the instructions provided in the Arducam V3Link Quick Start Guide. After running the camera setup script, setup-imx219.sh, the camera format, the CSI-2 RX interface format, and the routes from each camera to the corresponding video node are configured properly. Four video nodes are created for the four IMX219 cameras. The command "v4l2-ctl --list-devices" displays all the V4L2 video devices, as shown below:
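The listing below is illustrative; the actual device numbers depend on the boot order and SDK version.

```
root@am62axx-evm:~# v4l2-ctl --list-devices
...
ti-csi2rx (platform:30102000.ticsi2rx):
        /dev/video3
        /dev/video4
        /dev/video5
        /dev/video6
        /dev/video7
        /dev/video8
        /dev/media0
...
```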
There are 6 video nodes and 1 media node under tiscsi2rx. Each video node corresponds to a DMA context allocated by the CSI-2 RX driver. Of the 6 video nodes, 4 are used for the 4 IMX219 cameras, as shown in the media pipe topology below:
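The excerpt below is an illustrative sketch of the "media-ctl -p" output; entity numbering, pad layout, and context-node names are assumptions that vary with kernel and SDK version.

```
root@am62axx-evm:~# media-ctl -p
...
- entity 1: 30102000.ticsi2rx (6 pads, 6 links)
            type V4L2 subdev flags 0
        pad0: Sink
                <- "cdns_csi2rx.30101000.csi-bridge":1 [ENABLED,IMMUTABLE]
        pad1: Source
                -> "30102000.ticsi2rx context 0":0 [ENABLED,IMMUTABLE]
        pad2: Source
                -> "30102000.ticsi2rx context 1":0 [ENABLED,IMMUTABLE]
        pad3: Source
                -> "30102000.ticsi2rx context 2":0 [ENABLED,IMMUTABLE]
        pad4: Source
                -> "30102000.ticsi2rx context 3":0 [ENABLED,IMMUTABLE]
...
```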
As shown above, the media entity 30102000.ticsi2rx has 6 source pads, but only the first 4 are used, one for each IMX219. The media pipe topology can also be displayed graphically. Run the following command on the target to generate a dot file:
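A minimal sketch of the command (the output file name is arbitrary):

```
media-ctl --print-dot > multicam-topology.dot
```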
Then run the command below on a Linux host PC to generate a PNG file:
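For example, on a host with the Graphviz package installed:

```
dot -Tpng multicam-topology.dot -o multicam-topology.png
```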
Figure 4-2 shows the topology graph generated with the commands above. The components of the software architecture in Figure 3-1 can be identified in this diagram.
Streaming from Four Cameras
With the hardware set up and the software configured properly, multi-camera applications can be run from user space. For AM62A, the ISP must be tuned to achieve good image quality. Refer to the AM6xA ISP Tuning Guide for how to perform ISP tuning. The sections below show examples of streaming camera data to a display, streaming camera data over a network, and storing camera data to files.
Streaming Camera Data to Display
A basic application of this multi-camera system is streaming the videos from all cameras to a display attached to the same SoC. The following is an example GStreamer pipeline that streams four IMX219 cameras to a display (the video node numbers and v4l-subdev numbers in the pipeline can change from reboot to reboot).
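The sketch below shows the general shape of such a pipeline, with two of the four camera branches written out; the branches for cameras 3 and 4 are identical except for the device/subdev nodes and the mosaic positions. Device nodes, DCC paths, and the tiovxmosaic property syntax are assumptions based on the edgeai-gst-plugins conventions.

```
# Sketch: four IMX219 streams composited onto one 1920x1080 screen.
# Cameras 3 and 4 follow the same pattern on mosaic.sink_2/sink_3 at
# positions (0,540) and (960,540).
gst-launch-1.0 \
  tiovxmosaic name=mosaic \
    sink_0::startx="<0>"   sink_0::starty="<0>" \
    sink_0::widths="<960>" sink_0::heights="<540>" \
    sink_1::startx="<960>" sink_1::starty="<0>" \
    sink_1::widths="<960>" sink_1::heights="<540>" \
  ! video/x-raw,format=NV12,width=1920,height=1080 ! \
    kmssink driver-name=tidss sync=false \
  v4l2src device=/dev/video3 io-mode=dmabuf-import ! \
    video/x-bayer,width=1920,height=1080,format=rggb ! \
    tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI \
      dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin \
      sensor-0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a.bin \
      sensor-0::device=/dev/v4l-subdev2 format-msb=7 ! \
    video/x-raw,format=NV12 ! tiovxmultiscaler target=0 ! \
    video/x-raw,width=960,height=540 ! queue ! mosaic.sink_0 \
  v4l2src device=/dev/video4 io-mode=dmabuf-import ! \
    video/x-bayer,width=1920,height=1080,format=rggb ! \
    tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI \
      dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin \
      sensor-0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a.bin \
      sensor-0::device=/dev/v4l-subdev3 format-msb=7 ! \
    video/x-raw,format=NV12 ! tiovxmultiscaler target=0 ! \
    video/x-raw,width=960,height=540 ! queue ! mosaic.sink_1
```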
Streaming Camera Data through Ethernet
Instead of streaming to a display attached to the same SoC, the camera data can be streamed over Ethernet. The receiving side can be another AM62A/AM62P device or a host PC. The following is an example of streaming camera data over Ethernet, using two cameras for simplicity (note the encoder plugin used in the pipeline):
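A minimal sketch, assuming the hardware H.264 encoder is exposed as v4l2h264enc; the host IP address, UDP ports, and device/subdev nodes are assumptions.

```
# Sketch: two cameras, H.264-encoded on the SoC and sent as RTP over UDP on
# separate ports.
gst-launch-1.0 \
  v4l2src device=/dev/video3 io-mode=dmabuf-import ! \
    video/x-bayer,width=1920,height=1080,format=rggb ! \
    tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI \
      dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin \
      sensor-0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a.bin \
      sensor-0::device=/dev/v4l-subdev2 format-msb=7 ! \
    video/x-raw,format=NV12 ! queue ! v4l2h264enc ! h264parse ! \
    rtph264pay config-interval=1 ! udpsink host=192.168.1.100 port=5000 \
  v4l2src device=/dev/video4 io-mode=dmabuf-import ! \
    video/x-bayer,width=1920,height=1080,format=rggb ! \
    tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI \
      dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin \
      sensor-0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a.bin \
      sensor-0::device=/dev/v4l-subdev3 format-msb=7 ! \
    video/x-raw,format=NV12 ! queue ! v4l2h264enc ! h264parse ! \
    rtph264pay config-interval=1 ! udpsink host=192.168.1.100 port=5001
```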
The following is an example of receiving the camera data and streaming to a display on another AM62A/AM62P processor:
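A minimal receive-side sketch; the port and RTP caps must match the sender, and the decoder element assumes the device exposes a hardware H.264 decoder as v4l2h264dec.

```
# Sketch: receive one RTP/H.264 stream and show it on the local display.
gst-launch-1.0 udpsrc port=5000 \
  caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! \
  rtph264depay ! h264parse ! v4l2h264dec ! kmssink driver-name=tidss sync=false
```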
Storing Camera Data to Files
Instead of streaming to a display or through a network, the camera data can be stored in local files. The pipeline below stores the data from each camera to a file (using two cameras as an example for simplicity).
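A minimal sketch under the same assumptions as the previous examples (device/subdev nodes, DCC paths, and v4l2h264enc availability):

```
# Sketch: record two cameras to separate MP4 files. The -e flag makes
# gst-launch send EOS on Ctrl-C so mp4mux can finalize the files.
gst-launch-1.0 -e \
  v4l2src device=/dev/video3 io-mode=dmabuf-import ! \
    video/x-bayer,width=1920,height=1080,format=rggb ! \
    tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI \
      dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin \
      sensor-0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a.bin \
      sensor-0::device=/dev/v4l-subdev2 format-msb=7 ! \
    video/x-raw,format=NV12 ! queue ! v4l2h264enc ! h264parse ! \
    mp4mux ! filesink location=camera1.mp4 \
  v4l2src device=/dev/video4 io-mode=dmabuf-import ! \
    video/x-bayer,width=1920,height=1080,format=rggb ! \
    tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI \
      dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin \
      sensor-0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a.bin \
      sensor-0::device=/dev/v4l-subdev3 format-msb=7 ! \
    video/x-raw,format=NV12 ! queue ! v4l2h264enc ! h264parse ! \
    mp4mux ! filesink location=camera2.mp4
```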
Multicamera Deep Learning Inference
AM62A is equipped with a deep learning accelerator (C7x-MMA) delivering up to two TOPS, capable of running various deep learning models for classification, object detection, semantic segmentation, and more. This section shows how AM62A can run four deep learning inference instances on four camera feeds.
Model Selection
TI's EdgeAI-ModelZoo provides hundreds of state-of-the-art models, which are converted/exported from their original training frameworks to an embedded-friendly format so that they can be offloaded to the C7x-MMA deep learning accelerator. The cloud-based Edge AI Studio Model Analyzer provides an easy-to-use "Model Selection" tool. It is dynamically updated to include all models supported in TI EdgeAI-ModelZoo. The tool requires no previous experience and provides a simple interface to enter the features required in the desired model.
The TFL-OD-2000-ssd-mobV1-coco-mlperf model was selected for this multi-camera deep learning experiment. This multi-object detection model was developed in the TensorFlow framework with a 300×300 input resolution. Table 4-1 shows the important features of this model when trained on the COCO dataset with about 80 different classes.
Table 4-1. Features of the TFL-OD-2000-ssd-mobV1-coco-mlperf Model

| Model | Task | Resolution | FPS | mAP 50% Accuracy on COCO | Latency/Frame (ms) | DDR BW Utilization (MB/Frame) |
|---|---|---|---|---|---|---|
| TFL-OD-2000-ssd-mobV1-coco-mlperf | Multi-object detection | 300×300 | ~152 | 15.9 | 6.5 | 18.839 |
Pipeline Setup
Figure 4-3 shows the 4-camera deep learning GStreamer pipeline. TI provides a suite of GStreamer plugins that offload parts of the media processing and the deep learning inference to hardware accelerators. Some examples of these plugins include tiovxisp, tiovxmultiscaler, tiovxmosaic, and tidlinferer. The pipeline in Figure 4-3 contains all the plugins required for a multipath GStreamer pipeline with 4 camera inputs, each with media preprocessing, deep learning inference, and postprocessing. The plugins appear duplicated because each camera path is drawn separately in the diagram for easier illustration.
The available hardware resources are evenly distributed among the four camera paths. For instance, AM62A contains two image multiscalers: MSC0 and MSC1. The pipeline explicitly dedicates MSC0 to process camera 1 and camera 2 paths, while MSC1 is dedicated to camera 3 and camera 4.
The output of the four camera pipelines is scaled down and concatenated using the tiovxmosaic plugin, and the result is displayed on a single screen. Figure 4-4 shows the output of the four cameras with a deep learning model running object detection. Each pipeline (camera) runs at 30 FPS, for a total of 120 FPS.
The complete pipeline script for the multicamera deep learning use case follows the structure shown in Figure 4-3; a condensed sketch of one camera path is given below.
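The sketch shows one of the four camera paths; the other three are identical except for the device/subdev nodes, the multiscaler target (MSC0 for cameras 1-2, MSC1 for cameras 3-4), and the mosaic positions. The model path and the tiovxdlpreproc/tidlinferer/tidlpostproc property names are assumptions based on the edgeai-gst-plugins conventions. Note the two-stage downscale to 300×300, since the multiscaler supports at most a 1/4 downscale per pass.

```
# Condensed sketch: one camera path of the 4-camera deep learning pipeline.
MODEL=/opt/model_zoo/TFL-OD-2000-ssd-mobV1-coco-mlperf
gst-launch-1.0 \
  v4l2src device=/dev/video3 io-mode=dmabuf-import ! \
    video/x-bayer,width=1920,height=1080,format=rggb ! \
    tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI \
      dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin \
      sensor-0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a.bin \
      sensor-0::device=/dev/v4l-subdev2 format-msb=7 ! \
    video/x-raw,format=NV12 ! \
  tiovxmultiscaler name=split0 target=0 \
  split0. ! queue ! video/x-raw,width=640,height=360 ! \
    tiovxmultiscaler target=0 ! video/x-raw,width=300,height=300 ! \
    tiovxdlpreproc model=$MODEL out-pool-size=4 ! \
    application/x-tensor-tiovx ! \
    tidlinferer target=1 model=$MODEL ! post0.tensor \
  split0. ! queue ! video/x-raw,width=960,height=540 ! post0.sink \
  tidlpostproc name=post0 model=$MODEL viz-threshold=0.6 ! \
    queue ! mosaic.sink_0 \
  tiovxmosaic name=mosaic \
    sink_0::startx="<0>" sink_0::starty="<0>" \
    sink_0::widths="<960>" sink_0::heights="<540>" \
  ! video/x-raw,format=NV12,width=1920,height=1080 ! \
    kmssink driver-name=tidss sync=false
```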
Performance Analysis
The setup with four cameras using the V3Link board and the AM62A SK was tested in various application scenarios, including directly displaying on a screen, streaming over Ethernet (four UDP channels), recording to 4 separate files, and with deep learning inference. In each experiment, we monitored the frame rate and the utilization of CPU cores to explore the whole system’s capabilities.
As shown in Figure 4-4, the deep learning pipeline uses the tiperfoverlay GStreamer plugin to show CPU core loads as a bar graph at the bottom of the screen. By default, the graph is updated every two seconds to show the loads as utilization percentages. In addition to the tiperfoverlay GStreamer plugin, the perf_stats tool is a second option that shows core performance directly on the terminal, with an option for saving to a file. This tool is more accurate than tiperfoverlay, as the latter adds extra load on the Arm cores and the DDR to draw the graph and overlay it on the screen. The perf_stats tool was mainly used to collect the hardware utilization results in all of the test cases shown in this document. Some of the important processing cores and accelerators studied in these tests include the main processors (four A53 Arm cores @ 1.25 GHz), the deep learning accelerator (C7x-MMA @ 850 MHz), the VPAC (ISP) with VISS and multiscalers (MSC0 and MSC1), and DDR operations.
Table 5-1 shows the performance and resource utilization when using AM62A with four cameras for three use cases, including streaming four cameras to a display, streaming over Ethernet, and recording to four separate files. Two tests are implemented in each use case: with the cameras only and with deep learning inference. In addition, the first row in Table 5-1 shows the hardware utilization when only the operating system was running on AM62A without any user applications. This is used as a baseline against which the hardware utilization of the other test cases is evaluated. As shown in the table, the four cameras with deep learning and screen display operated at 30 FPS each, with a total of 120 FPS for the four cameras. This high frame rate is achieved with only 86% of the deep learning accelerator's (C7x-MMA) full capacity. In addition, it is important to note that the deep learning accelerator was clocked at 850 MHz instead of 1000 MHz in these experiments, which is only about 85% of its maximum performance.
Table 5-1. Performance (FPS) and Resource Utilization of AM62A When Used With 4 IMX219 Cameras for Screen Display, Ethernet Streaming, Recording to Files, and Performing Deep Learning Inferencing

| Application | Pipeline (operation) | Output | FPS avg per pipeline | FPS total | MPU 4×A53 @ 1.25 GHz [%] | MCU R5 [%] | DLA (C7x-MMA) @ 850 MHz [%] | VISS [%] | MSC0 [%] | MSC1 [%] | DDR Rd [MB/s] | DDR Wr [MB/s] | DDR Total [MB/s] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No app | Baseline: no operation | NA | NA | NA | 1.87 | 1 | 0 | 0 | 0 | 0 | 560 | 19 | 579 |
| Camera only | Stream to screen | Display | 30 | 120 | 12 | 12 | 0 | 70 | 61 | 60 | 1015 | 757 | 1782 |
| | Stream over Ethernet | UDP: 4 ports, 1920×1080 | 30 | 120 | 23 | 6 | 0 | 70 | 0 | 0 | 2071 | 1390 | 3461 |
| | Record to files | 4 files, 1920×1080 | 30 | 120 | 25 | 3 | 0 | 70 | 0 | 0 | 2100 | 1403 | 3503 |
| Camera with deep learning | Deep learning: object detection MobV1-coco | Display | 30 | 120 | 38 | 25 | 86 | 71 | 85 | 82 | 2926 | 1676 | 4602 |
| | Deep learning: object detection MobV1-coco, stream over Ethernet | UDP: 4 ports, 1920×1080 | 28 | 112 | 84 | 20 | 99 | 66 | 65 | 72 | 4157 | 2563 | 6720 |
| | Deep learning: object detection MobV1-coco, record to files | 4 files, 1920×1080 | 28 | 112 | 87 | 22 | 98 | 75 | 82 | 61 | 2024 | 2458 | 6482 |
Summary
This application report describes how to implement multi-camera applications on the AM6x family of devices. A reference design based on Arducam's V3Link Camera Solution Kit and the AM62A SK EVM is provided in the report, with several camera applications using four IMX219 cameras, such as streaming and object detection. Users are encouraged to acquire the V3Link Camera Solution Kit from Arducam and replicate these examples. The report also provides a detailed analysis of the performance of AM62A while using four cameras under various configurations, including displaying to a screen, streaming over Ethernet, and recording to files. It also shows AM62A's capability of performing deep learning inference on four separate camera streams in parallel. For any questions about running these examples, post a question on the TI E2E forum.
References
- AM62A Starter Kit EVM Quick Start Guide
- ArduCam V3Link Camera Solution Quick Start Guide
- Edge AI SDK documentation for AM62A
- Edge AI Smart Cameras Using Energy-Efficient AM62A Processor
- Camera Mirror Systems on AM62A
- Driver and Occupancy Monitoring Systems on AM62A
- Quad Channel Camera Application for Surround View and CMS Camera Systems
- AM62Ax Linux Academy on Enabling CSI-2 Sensor
- Edge AI ModelZoo
- Edge AI Studio
- Perf_stats tool
TI Parts Referenced in This Application Note:
- https://www.ti.com/product/AM62A7
- https://www.ti.com/product/AM62A7-Q1
- https://www.ti.com/product/AM62A3
- https://www.ti.com/product/AM62A3-Q1
- https://www.ti.com/product/AM62P
- https://www.ti.com/product/AM62P-Q1
- https://www.ti.com/product/DS90UB960-Q1
- https://www.ti.com/product/DS90UB953-Q1
- https://www.ti.com/product/TDES960
- https://www.ti.com/product/TSER953
IMPORTANT NOTICE AND DISCLAIMER
TI PROVIDES TECHNICAL AND RELIABILITY DATA (INCLUDING DATA SHEETS), DESIGN RESOURCES (INCLUDING REFERENCE DESIGNS), APPLICATION OR OTHER DESIGN ADVICE, WEB TOOLS, SAFETY INFORMATION, AND OTHER RESOURCES "AS IS" AND WITH ALL FAULTS, AND DISCLAIMS ALL WARRANTIES, EXPRESS AND IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT OF THIRD PARTY INTELLECTUAL PROPERTY RIGHTS.
These resources are intended for skilled developers designing with TI products. You are solely responsible for:
- selecting the appropriate TI products for your application,
- designing, validating, and testing your application, and
These resources are subject to change without notice. TI permits you to use these resources only for the development of an application that uses the TI products described in the resource. Other reproduction and display of these resources is prohibited. No license is granted to any other TI intellectual property right or to any third party intellectual property right. TI disclaims responsibility for, and you will fully indemnify TI and its representatives against, any claims, damages, costs, losses, and liabilities arising out of your use of these resources.
TI's products are provided subject to TI's Terms of Sale or other applicable terms available either on ti.com or provided in conjunction with such TI products. TI's provision of these resources does not expand or otherwise alter TI's applicable warranties or warranty disclaimers for TI products.
TI objects to and rejects any additional or different terms you may have proposed.
IMPORTANT NOTICE
- Mailing Address: Texas Instruments, Post Office Box 655303, Dallas, Texas 75265
- Copyright © 2024, Texas Instruments Incorporated