Introduction
With its powerful AI Image Analysis, Synology Deep Video Analytics (DVA) can instantly process large numbers of object attributes, filter out environmental interference, and deliver accurate detection results. Backed by Smart Tag technology and a comprehensive management interface, it allows users to take control of events with ease and efficiency.
Among the supported algorithms, People and Vehicle detection specializes in detecting people or vehicles that have entered a specific area. To accommodate different scenarios and security levels, you can determine what types of objects (people, vehicles, or both) to track and customize your own trigger times.
This guide aims to introduce the key factors of setting up People and Vehicle detection tasks for optimal precision. For best results, please follow the listed points as closely as possible.
System Requirements
- Surveillance Station version 8.2.9 or later.
- A Synology Deep Learning NVR with Synology Deep Video Analytics (DVA) installed by default.
Note: No additional licenses are required for People and Vehicle detection.
Quick Camera Installation
Step 1: Select Appropriate Camera
Stream Quality: 1920x1080@20 FPS or above.
Sunshield: (Optional) Added to outdoor cameras to avoid direct sunlight on the lens.
Step 2: Check Installation Environment
Minimum Illumination: 300 lux.
Step 3: Mounting Height and Angle
Installation Height: 2.5 ~ 4 meters.
Camera Tilt Angle: Less than 30 degrees.
Detection Area: The recommended length ranges from 10 meters to 15 meters.
*Please check Triggering Mechanism section for more details.
Diagram illustrating camera mounting. Shows a camera positioned at an 'Installation Height' of 2.5-4 meters, angled downwards with a 'Camera Tilt Angle' less than 30 degrees. The 'Detection Area' is depicted as a region on the ground extending from the camera, ranging from 10 to 15 meters, shown as a trapezoidal shape.
Do's and Don'ts
- ✔️ Do: Directly face the object.
- ❌ Don't: Let the object be blocked.
- ❌ Don't: Let the object be too far.
- ❌ Don't: Use panoramic cameras.
Improve Detection Accuracy
Even with thorough planning of camera placement and environment, objects may still go undetected or be wrongly recognized. The following situations can affect detection and tracking by the AI:
- Weather conditions: Rain and snow, changes of shadows, or differences between day and night can impact detection and recognition.
- Objects with similar appearances: Cardboard cut-outs or mirror reflections might be mistaken by the AI for real objects.
- Unstable network connection: May lead to incomplete or corrupt images. Wired connections are highly recommended.
- Lens condition: Dust, insects, or other stains can block the lens. Keep lenses clean for clear images.
- Camera stability: Cameras anchored to unstable surfaces can lead to blurry images.
- Reflective surfaces: Surfaces above, below, or in front of the camera (e.g., mirrors, shiny floors, and ceilings).
- Lighting: Light shining directly into the camera's lens.
Configure Software Settings
Once your cameras are mounted successfully, you can configure software settings for the DVA to suit your requirements. This chapter covers the essential settings for the People and Vehicle detection algorithm.
Select a Stream Profile
For optimal detection accuracy, select a resolution of at least 1920x1080@20FPS. Stream profiles are set by the Intelligent Video Analytics Recording settings of the paired camera. To edit stream profiles, go to IP Camera and select the camera you want to configure. Then click Edit > Edit > Recording > Advanced > Intelligent Video Analytics Recording to set the stream profile.
Define the Detection Zone
DVA allows usage of two types of zones: Inclusive and Exclusive. An Inclusive zone means that detection will occur within the defined zone. An Exclusive zone means that detection will occur outside the defined zone. Both are highly compatible with various scenarios, allowing you to cover the areas that truly matter.
Simply drag the nodes to adjust the position of the detection zone. You can left-click on a zone border to add nodes, or right-click on a node to delete it. The detection zone should not be too thin or too small; it should be at least twice the size of the smallest object you want to detect. Up to three zones can be configured on one screen. If the overlapping area between an object and a detection zone is insufficient, detection will fail.
*Please check Triggering Mechanism section for more details.
Screenshot of the 'Edit Deep Video Analytics Task' interface, showing zone configuration. The 'Parameters' section includes 'Zone type' (set to Inclusive), 'Zone count', 'Zone display', and 'Ignore small objects'. A visual representation shows a camera view with adjustable blue nodes defining a detection zone.
Triggering Mechanism
People and Vehicle Detection works by tracking the overlapping areas between objects and detection zones. An object will be recognized as lingering when it enters a zone and the overlapping area exceeds the threshold. You can configure events to be triggered when an object is detected, when an object lingers over the set time duration, or when the number of objects in the zone reaches a set threshold. (You can also configure events to be triggered when any or all rules are met.)
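The rule combination described above can be illustrated with a short Python sketch. The function and parameter names here are illustrative assumptions, not Synology's implementation:

```python
def should_trigger(object_detected, object_count, count_threshold,
                   max_dwell_seconds, dwell_threshold, mode="any"):
    """Combine the three trigger conditions (illustrative sketch).
    mode='any' fires when at least one rule is met; mode='all'
    requires every rule to be met."""
    rules = [
        object_detected,                       # an object is in the zone
        object_count >= count_threshold,       # object-count rule
        max_dwell_seconds >= dwell_threshold,  # lingering / occupancy rule
    ]
    return any(rules) if mode == "any" else all(rules)
```

For example, a single detected person satisfies the "any" mode immediately, while "all" mode also requires the count and lingering thresholds to be reached.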
People Detection
People detection events are triggered when 10% of the height from the bottom center of a person's bounding box enters the detection zone and meets one or more of the following pre-configured conditions:
- When at least one person is detected.
- When the number of people detected reaches the set number.
- When the occupancy time of at least one person reaches the set time.
Diagram illustrating person detection. A red rectangle represents the 'Bounding box' of a person. A green line indicates the detection zone. Text indicates the trigger point is '10% height from the bottom center' of the bounding box.
Image showing a person standing outside the defined detection area. A yellow bounding box is around the person, with a note stating 'The trigger point is not in the detection area'.
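The trigger-point check above can be sketched in a few lines of Python. This is a minimal illustration, assuming the trigger point is the spot 10% of the bounding box height above the bottom center, and that the zone is a polygon tested with standard ray casting; the actual implementation may differ:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def person_trigger_point(box):
    """Trigger point: 10% of the box height above the bottom center.
    box = (left, top, width, height); y grows downward (image coords)."""
    left, top, w, h = box
    return (left + w / 2, top + h - 0.1 * h)

def person_triggers(box, zone):
    return point_in_polygon(person_trigger_point(box), zone)
```

A person standing with the trigger point outside the polygon, as in the example image, would return False and not fire an event.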
Vehicle Detection
Vehicle detection events are triggered when 10% of a vehicle enters the detection zone and meets one or more of the following pre-configured conditions:
- When a vehicle is detected.
- When the occupancy time of a vehicle reaches the set time.
Diagram illustrating vehicle detection. A red rectangle represents a vehicle's bounding box. A green line indicates the detection zone. Text indicates that the 'Overlapped area is less than 10% of the vehicle' in this specific example, and a '10%' marker is shown relative to the vehicle.
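The 10% overlap rule can be sketched as follows. For simplicity this illustration assumes both the vehicle bounding box and the zone are axis-aligned rectangles; the real zone is a polygon and the names here are assumptions:

```python
def rect_overlap_fraction(vehicle, zone):
    """Fraction of the vehicle's bounding box covered by the zone.
    Both arguments are (left, top, width, height) rectangles;
    the zone is simplified to an axis-aligned rectangle here."""
    vl, vt, vw, vh = vehicle
    zl, zt, zw, zh = zone
    ix = max(0.0, min(vl + vw, zl + zw) - max(vl, zl))  # x-overlap
    iy = max(0.0, min(vt + vh, zt + zh) - max(vt, zt))  # y-overlap
    return (ix * iy) / (vw * vh)

def vehicle_triggers(vehicle, zone, threshold=0.10):
    """Fire when at least 10% of the vehicle lies inside the zone."""
    return rect_overlap_fraction(vehicle, zone) >= threshold
```

In the diagram above, the overlapped area is under 10% of the vehicle, so no event would be triggered.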
Recognized Vehicle Types
People and Vehicle Detection recognizes the following vehicle types: car, pickup, van, truck, bus, and motorbike. However, vehicles with a unique appearance may still go undetected or be wrongly recognized.
License Plate Recognition (LPR)
Enable License plate recognition to record all identified vehicle license plates and label them in recognition results. Enhanced detection accuracy is available for specific regions.
To compare recognition results with the license plate database, enable 'Add Allow or Block icons to detection results'. Click 'License Plate Database' to manage license plates: you can add identified plates, and edit or remove existing ones.
Screenshot of the 'Edit Deep Video Analytics Task' interface, showing 'Vehicles' and 'License plate recognition' settings. Under LPR, a 'Region' dropdown is shown (e.g., 'Taiwan'), and an option to 'Add Allow or Block icons to detection results' is available to enable comparison with a license plate database.
For more information, refer to the Administrator's Guide for License Plate Recognition.
Ignore Small Objects
It is important to fine-tune the minimum object size to filter out false positives from small objects. In the Parameters page, click the 'Edit' button and adjust the blue object frame to define the minimum object size. (The percentage refers to the size of the object in relation to the camera image size.) Moving objects that are smaller than the defined object size will be filtered out.
Using the image below as an example, the minimum object size has been set to 0.2%. Objects smaller than this size, lingering or not, will be ignored by the system.
Screenshot of the 'Edit Deep Video Analytics Task' interface, showing the 'Parameters' section. The 'Ignore small objects' setting is enabled with a value of '0.2%'. A preview image displays a camera view with a defined detection zone.
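The size filter can be illustrated with a short Python sketch, assuming the percentage compares the bounding box area to the total frame area (names and exact formula are illustrative assumptions):

```python
def object_size_percent(box, frame_w, frame_h):
    """Bounding-box area as a percentage of the frame area.
    box = (left, top, width, height)."""
    _, _, w, h = box
    return 100.0 * (w * h) / (frame_w * frame_h)

def keep_object(box, frame_w, frame_h, min_percent=0.2):
    """Keep only objects at or above the configured minimum size."""
    return object_size_percent(box, frame_w, frame_h) >= min_percent
```

Under this assumption, on a 1920x1080 frame a 0.2% minimum corresponds to about 4,147 square pixels, roughly a 64x64-pixel box.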
Utilize Advanced Features
Besides detailed configuration options, DVA also offers labeling features for easy file management and a Parameter Adjuster that helps fine-tune parameters.
Label and Comment
Labels and descriptions can be added to DVA detection results. For example, you can add the "People" label to mark all videos in which people are detected.
Screenshot of the 'Deep Video Analytics' interface, displaying 'Detection Results'. A list of detected events is shown with timestamps, labels like 'People' and 'Intrusion Detection', and server information. Options are available to 'Manage license plate labels' and 'Add description'.
Optimize Parameter Settings
The 'Parameter Adjuster' allows you to use previous camera recordings or DVA detection results to fine-tune task parameters. This helps fit your actual usage scenarios.
Select a clip from the 'Video Source' panel and drag the nodes to adjust the detection zones according to your needs using the clip as a guide. Basic settings and parameters in the left panel can be edited as well.
Screenshot of the 'Parameter Adjuster' interface. It shows 'Parameters' such as 'Specific objects', 'Detection direction', and 'Ignore small objects'. A 'Video Source' panel displays a timeline with recorded clips (e.g., 'School Fence' recordings), allowing users to select a clip for adjustment.
Find Your Information
Synology publishes a wide range of supporting documentation.
In Knowledge Center, you will find useful Help and FAQ articles, as well as video tutorials breaking processes down into handy steps. You can also find User's Guides, Solution Guides, brochures, and White Papers. Experienced users and administrators will find answers and guidance in technical Administrator's Guides and Developer Guides.
Got a problem and unable to find the solution in our official documentation? Search hundreds of answers by users and support staff in Synology Community or reach Synology Support through the web form, email or telephone.
Contact and Legal Information
SYNOLOGY INC.
9F, No. 1, Yuandong Rd., Banqiao Dist., New Taipei City 220545, Taiwan
Tel: +886 2 2955 1814
SYNOLOGY AMERICA CORP.
3535 Factoria Blvd SE, Suite #200, Bellevue, WA 98006, USA
Tel: +1 425 818 1587
SYNOLOGY UK LTD.
Unit 5 Danbury Court, Linford Wood, Milton Keynes, MK14 6PL, United Kingdom
Tel: +44 (0)1908048029
SYNOLOGY FRANCE
102 Terrasse Boieldieu (TOUR W), 92800 Puteaux, France
Tel: +33 147 176288
SYNOLOGY GMBH
Grafenberger Allee 295, 40237 Düsseldorf, Deutschland
Tel: +49 211 9666 9666
SYNOLOGY SHANGHAI
200070, Room 201, No. 511 Tianmu W. Rd., Jingan Dist., Shanghai, China
SYNOLOGY JAPAN CO., LTD.
4F, No. 3-1-2, Higashikanda, Chiyoda-ku, Tokyo, 101-0031, Japan
Synology may make changes to specifications and product descriptions at any time, without notice. Copyright © 2022 Synology Inc. All rights reserved. Synology and other names of Synology Products are proprietary marks or registered trademarks of Synology Inc. Other products and company names mentioned herein are trademarks of their respective holders.
Visit us at: synology.com