AI Photogrammetry Advanced User Manual
for Artec Studio 20
Introduction
Congratulations on the purchase of Artec Studio.
This user manual provides detailed instructions for utilizing the new AI Photogrammetry feature in Artec Studio 20. It covers the entire process, from capturing photos and videos to creating a final 3D model from the captured data and post-processing it.
NOTICE
We recommend compiling the neural networks during the first run of Artec Studio after installation. Do not skip this step.
The content of this document is subject to change without prior notice. Ensure that the product is used in accordance with the latest version of this document.
Trademarks
Windows® is a registered trademark of Microsoft Corporation in the United States and other countries. All other trademarks are the property of their respective owners.
Artec 3D is a registered trademark of ARTEC EUROPE S.à r.l. in the European Union, the USA and other countries.
Customer support
If you have any questions regarding the use of Artec Studio, contact the Artec 3D Support Team or fill out the question form available here.
Available documentation
Name | Function |
---|---|
AI Photogrammetry Advanced User Manual for Artec Studio 20 | All instructions required for creating a final 3D model from a set of photos or videos, as well as troubleshooting and data capture instructions. |
AI Photogrammetry Quick Start Guide for Artec Studio 20 | A brief overview of the feature, including data capture, preparation for model creation, and the essential steps to get started. |
Working principle
AI Photogrammetry leverages advanced algorithms to convert captured photos into detailed, feature-rich 3D models. The process begins by importing a set of photos into Artec Studio and creating a preview where the images are positioned in 3D space, producing a Photo scan object for further processing. Next, a triangular mesh is generated using specialized algorithms based on the scene type. This mesh can be processed and textured within Artec Studio, allowing users to create accurate and visually rich 3D representations, making 3D modeling accessible to everyone without the need for a 3D scanner.
AI Photogrammetry overview
1.1 Definitions of use
- Importing photos and videos
- Positioning images in 3D space
- Aligning photo sets
- Generating triangular meshes
- Creating feature-rich 3D models
- Texturing models
- Processing models with algorithms in Artec Studio
1.2 Types of Photogrammetry algorithms
The Photo Reconstruction pipeline in Artec Studio is divided into two consecutive stages:
Step 1. Create Preview: A set of photos imported into Artec Studio is processed so that the images are positioned in 3D space. The output is a Photo Scan in the Workspace, representing the aligned images ready for further processing.
Step 2. Create Model: This stage creates a triangular mesh that can then be processed and textured in Artec Studio in the usual way. Depending on the scene type, there are two types of algorithms:
- Separate object
- Whole scene
Both algorithms generate a mesh, but each is suited to different purposes, conditions, and scene types. While some scenes can be processed by either algorithm, others are better handled by one of them.
Separate object
The Separate object algorithm is best suited for handling individual objects, such as a controller, a statuette, a pen, or a chair. To enhance its quality, a specialized object-detection algorithm processes all photos and generates a mask for each one. For optimal results, ensure the entire object is fully captured within the frame and well separated from the background. This clear separation is essential for the algorithm to create accurate masks and avoid potential reconstruction failures.
Whole scene
In this photogrammetric scenario, there is no requirement for a strong separation between the object and the background; it can work either with or without masks. This type of reconstruction works best for feature-rich scenes, such as aerial or drone captures, or objects like stones, statues, architectural structures, etc.
General pipeline
2.1 Overview
Here is the general pipeline for processing photogrammetry data in Artec Studio. You can follow these instructions when performing your first reconstruction.
2.2 Install software
AI Photogrammetry requires Artec Studio 19 or later, which allows you to import sets of photos or videos for creating a 3D model. Artec Studio must be purchased separately. To obtain the software, contact the Artec 3D Sales Team or an authorized reseller. Note that an Internet connection is required to download and license the software.
- Open my.artec3d.com.
- Once you have registered, your account manager or reseller will assign the licensed software to your account.
- Download Artec Installation Center.
- Install Artec Installation Center on your computer and launch it.
- Click Install next to the Artec Studio label, as described in the Artec Studio manual.
- Finish installation.
2.3 Use scale references
2.3.1 Definition of scale reference usage
Scale references are used to ensure that the 3D model is created with accurate real-world dimensions. Without them, the scale of the model will be arbitrary. By using scale references, such as scale bars or scale crosses, you can define the correct scale for your model.
[Image of scale bars and scale crosses]
2.3.2 Scale reference types
Scale references come in two types: Scale bars and Scale crosses.
Scale bar
A Scale bar allows you to obtain the correct scale for your model by using the distance between two targets. It defines the scale of an object but only along a single axis, making it useful for determining size but providing no information about orientation in 3D space.
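To illustrate the underlying idea (this is a conceptual sketch, not Artec Studio functionality): once the two targets of a scale bar are located in the reconstruction, the ratio between the known distance and the reconstructed distance gives a uniform scale factor for the whole model. All names and values below are illustrative.

```python
import numpy as np

def scale_factor_from_bar(target_a, target_b, known_distance_mm):
    """Uniform scale factor that maps the reconstructed distance between
    two scale-bar targets onto their known real-world distance (in mm)."""
    measured = np.linalg.norm(np.asarray(target_b, float) - np.asarray(target_a, float))
    return known_distance_mm / measured

# Example: targets reconstructed 0.82 units apart, real distance 500 mm.
model_vertices = np.random.rand(1000, 3)            # stand-in for model geometry
factor = scale_factor_from_bar([0, 0, 0], [0.82, 0, 0], 500.0)
scaled_vertices = model_vertices * factor           # model is now in millimetres
```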
Scale cross
In contrast, a Scale cross provides a reference not only for the scale of an object but also for its position and orientation in 3D space. It consists of two intersecting scale bars. This comprehensive reference is particularly valuable when you need to determine both the size and alignment of an object relative to the scene.
If you do not have a physical scale cross, you can use a printed version, available as a PDF document at C:\Program Files\Artec\Artec Studio [version]. The file names are ASC A4.pdf for Europe and ASC US Letter.pdf for the USA.
[Image of a scale cross]
2.3.3 Creating scale references in Artec Studio
To recover the real dimensions of the object, add scale references in Artec Studio before running the Create Preview algorithm.
First, open the scale reference creation dialog in Artec Studio. You have two options:
- Go to File → Coded targets and scalebars.
- Alternatively, ensure that a photoset is selected in Workspace. Then, go to Tools → AI Photogrammetry → Create Preview → Settings. Click the Edit button in the Scaled reference section.
Adding scale bar
1. Define the IDs of the two targets, the distance between them (in millimeters), and the name of the scale bar. Note that the IDs must be unique and fall within the range of 1 to 516.
2. Finally, click the Create reference button.
The newly created scale bar will appear in the list of all references on the left.
[Screenshot of the Scale references dialog for adding a scale bar]
Adding scale cross
1. Define the IDs of the two pairs of targets, the distance (in millimeters) between the targets in each pair, and the name of the scale cross. Note that the IDs must be unique and fall within the range of 1 to 516.
2. Finally, click the Create reference button.
The newly created scale cross will appear in the list of all references on the left.
[Screenshot of the Scale references dialog for adding a scale cross]
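Conceptually, the two intersecting bars of a scale cross fix a full coordinate frame, not just a scale. The sketch below is a simplified illustration (not an Artec Studio API) of how an origin and orthonormal axes could be derived from the reconstructed positions of the two target pairs; all names and values are assumptions made for the example.

```python
import numpy as np

def frame_from_scale_cross(a1, a2, b1, b2):
    """Origin and orthonormal axes derived from two intersecting scale bars.

    a1, a2 -- reconstructed positions of the first bar's targets
    b1, b2 -- reconstructed positions of the second bar's targets
    Returns (origin, 3x3 matrix whose columns are the frame axes).
    """
    a1, a2, b1, b2 = (np.asarray(p, float) for p in (a1, a2, b1, b2))
    x = a2 - a1
    x /= np.linalg.norm(x)
    # Remove the component of the second bar along the first (Gram-Schmidt).
    y = (b2 - b1) - np.dot(b2 - b1, x) * x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    # Approximate intersection point, assuming the bars cross near their midpoints.
    origin = (a1 + a2 + b1 + b2) / 4.0
    return origin, np.column_stack([x, y, z])

origin, axes = frame_from_scale_cross([0, 0, 0], [1, 0, 0], [0.5, -0.5, 0], [0.5, 0.5, 0])
```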
2.4 Capture data
Photos
In Artec Studio 20, keep in mind the following recommendations and limitations related to photo acquisition.
- Try to capture your object in a well-lit environment. Aim for strong ambient light. The best lighting conditions are typically achieved by capturing outside on a cloudy day.
- Ensure that the entire object is distinctly in focus, with no blurred areas. If you notice any blur, add more light to the scene, close the lens aperture slightly, or do both.
- When capturing data for Create Preview (Separate object), ensure that each photo captures the entire object within the camera frame, separated from the background. Avoid scenarios where the object covers most of the frame with only small parts of the background visible, as this may confuse the object detector.
Note that instead of photos, you can record a video of your object, considering the points mentioned above. Videos are treated as a set of frames and can be imported into Artec Studio in the same way as photos.
Good photos for the algorithm:
[Image of good photos for photogrammetry]
Photos which may confuse the object detector:
Several objects within the camera frame
[Image of photos with multiple objects confusing the detector]
Close-ups, where part of the object could be mistaken for the background
[Image of close-up photos confusing the detector]
[Image of overloaded background confusing the detector]
- When capturing a whole scene, you may disregard the previous recommendation about separating the object from the background.
- You can use multiple cameras to capture the same object. When importing the photos, Artec Studio 20 will create a single Photos object for all images, regardless of which camera captured them.
Recommendations for camera selection
There are no strict restrictions on using different cameras, but we recommend avoiding significant differences in the field of view (FOV). Ideally, the FOV difference should not exceed a factor of 7 to ensure consistent results; a quick way to compare FOVs is sketched after the examples below.
In some scenarios, using different types of cameras can be beneficial:
- Drone and ground photography: Capturing aerial views with a drone and detailed ground shots with a regular camera provides comprehensive coverage of the object.
- Wide-angle and standard lenses: A wide-angle lens can efficiently capture a general scene, such as an entire room, while a standard lens can be used to capture detailed shots of specific elements, like a statue in the center of the room.
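For reference, a camera's horizontal FOV follows from its focal length and sensor width via the pinhole relation FOV = 2·atan(w / 2f). The sketch below is a quick, illustrative way to check whether two cameras stay within the recommended FOV ratio; the example focal lengths and sensor widths are assumptions, not values taken from Artec Studio.

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm):
    """Horizontal field of view of a pinhole camera, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def fov_ratio_ok(camera_a, camera_b, max_ratio=7.0):
    """True if the two cameras' FOVs differ by no more than max_ratio."""
    fov_a = horizontal_fov_deg(*camera_a)
    fov_b = horizontal_fov_deg(*camera_b)
    return max(fov_a, fov_b) / min(fov_a, fov_b) <= max_ratio

# Example: a 24 mm wide-angle and an 85 mm lens on a 36 mm-wide sensor.
print(fov_ratio_ok((24.0, 36.0), (85.0, 36.0)))   # True: well within the recommended limit
```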
- Try to capture your object from all directions so that the algorithm receives a wide variety of views. A good practice is to imagine a virtual sphere around the object and capture images from different points on it.
- You can turn the object over and repeat the capture to get a full 3D reconstruction. In that case, make sure that the images from each object orientation are imported into Artec Studio as a separate photoset.
[Diagram illustrating capturing an object from multiple angles]
- If your object lacks texture, ensure that the background contains many features.
- For Create Preview (Separate object), 50 to 150 photos are typically enough to achieve good quality.
Videos
When recording a video for photogrammetry, ensure that the entire object remains fully visible within the frame at all times. Additionally, avoid changing the camera orientation during recording (e.g., switching between portrait and landscape modes).
2.5 Import photos/videos
To import photos/videos into Artec Studio:
- Drag and drop a folder with photos or video files into Artec Studio.
- Alternatively, use the File menu via File → Import → Photos and videos.
If a video file is imported, Artec Studio will create a photo set from it in the Workspace. Specify the frame rate at which photos will be extracted from the video file by entering the desired value in the Frames per second option of the Import video popup window. The default value is 3.
[Screenshot of the Import video dialog with Frames per second option]
Selected files will be added to the Workspace as a new object of the Photos type. Video files will also be added as separate objects of the Photos type.
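For context, the sketch below shows how extracting frames from a video at a fixed rate works in principle, using OpenCV. It is not Artec Studio's importer; it only illustrates what the Frames per second setting controls, and the file names are hypothetical.

```python
import cv2  # pip install opencv-python

def extract_frames(video_path, out_pattern, fps=3.0):
    """Save frames from a video at roughly `fps` frames per second.

    out_pattern is a printf-style path such as 'frames/img_%04d.jpg'.
    """
    cap = cv2.VideoCapture(video_path)
    video_fps = cap.get(cv2.CAP_PROP_FPS) or fps     # fall back if metadata is missing
    step = max(int(round(video_fps / fps)), 1)       # keep every step-th frame
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(out_pattern % saved, frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Example: extract roughly 3 frames per second from a hypothetical recording.
# extract_frames("object_capture.mp4", "frames/img_%04d.jpg", fps=3.0)
```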
View photos
To view an imported photoset:
- Double-click the created object in the Workspace, then double-click a frame to open it.
- Alternatively, right-click the photoset and select the Open photo viewer option.
[Screenshot of the photo viewer dialog showing an image and navigation controls]
Use the left and right arrows in the photo viewer dialog to browse through the photos.
2.6 Run Create Preview
The Create Preview algorithm registers photos by determining their position in space, resulting in a Photo Scan with aligned photos.
Select the imported photos in the Workspace. Open the Tools panel and click the gear icon of the Create Preview algorithm to open its settings window.
Basic settings
- Scene type: Allows users to choose the optimal algorithm for processing either separate objects or complex scenes.
- Object orientation: Defines the direction the object is facing, which can be manually adjusted by selecting one of the available options.
- Optimize for: Offers two optimization priorities: Speed or Quality.
- Scale references: Detect: Enables recreating the object's original dimensions based on the added scale references. For more information on how to add scale references, refer to section 2.3 Use scale references.
Advanced settings
- Object position: Specifies the object's position relative to its background.
- Same in all photos: Choose this if the object's position is the same across all photos.
- Changes between photos: Select this when the object's position changes within the same photoset. This is typical for cases where the object appears in different orientations relative to the background in each photo, such as when using a turntable.
- Changes between photosets: Use this option when the object's position remains consistent within a single photoset but varies between different photosets.
- Default FOV: Specifies the camera's field of view, used when this information is missing or unreadable from photo metadata. The default value is 60°.
[Screenshot of the Create Preview settings window]
- Camera grouping mode: Defines how the software interprets which photos were taken by the same camera. This helps the algorithm handle variations in camera parameters, especially when metadata is missing or unreliable.
- Auto: Assumes that all photos were taken with the same camera. Suitable when there is no metadata and the images are likely from a single device without focal length changes.
- Shared per photoset: Treats each photoset as taken with a different camera or with significantly different camera settings. Recommended when switching phones, lenses, or focal lengths between photosets.
- Individual: Considers every photo as taken with a different camera. Best used when camera parameters, such as focal length, may vary between individual photos.
Max reprojection error
- Frame: Specifies the maximum allowable deviation for matching points between individual frames or photos. It limits how much point positions can vary within a photoset; if the reprojection error exceeds this value, the program may mark such frames as mismatched. The default value is 4.000 px.
- Feature: Sets the maximum error for matching object features, such as contours or textures; lower values lead to more precise reconstruction of object details. The default value is 4.000 px. A sketch of what a reprojection error measures follows the screenshot below.
- Increase feature sensitivity: Enhances the algorithm's sensitivity to fine object features, allowing it to more accurately recognize and account for small elements during reconstruction. This can improve model quality but may slow down the process or increase demands on photo quality.
- Target color scheme: Defines the color scheme of the targets for detection, with options for white on black or black on white.
[Screenshot of the Create Preview settings window with advanced options]
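To clarify what a reprojection error in pixels means: a reconstructed 3D point is projected back into a photo through the estimated camera, and the error is the pixel distance between that projection and the point's observed position. The minimal pinhole-camera sketch below is illustrative only; the intrinsic matrix and pose in the example are assumptions, not values used by Artec Studio.

```python
import numpy as np

def reprojection_error_px(point_3d, observed_px, K, R, t):
    """Pixel distance between an observed image point and the projection of
    its reconstructed 3D point through a pinhole camera (intrinsics K, pose R, t)."""
    p_cam = R @ np.asarray(point_3d, float) + t        # world -> camera coordinates
    p_img = K @ p_cam                                  # camera -> homogeneous pixels
    projected = p_img[:2] / p_img[2]                   # perspective divide
    return float(np.linalg.norm(projected - np.asarray(observed_px, float)))

# Example with an identity pose and a simple intrinsic matrix (illustrative values).
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
error = reprojection_error_px([0.1, 0.0, 2.0], [1012.0, 540.0], K, np.eye(3), np.zeros(3))
# error is 2.0 px here, which would pass the default 4.000 px threshold
```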
Once calculation is finished, a Photo Scan object appears in the Workspace. This photo scan is colored so you can see the general shape of your object.
2.7 Run Create Model
Preparation
Double-click on the newly created Photo Scan object in the Workspace and modify the cropping box around the object to adjust the region of reconstruction.
The cropping box is required because it narrows the region of reconstruction. Align it with the main directions of the object and enclose the object tightly, while still leaving some space between the object and the cropping box (a simple sketch of this idea follows the diagram below).
[Diagram showing the cropping box adjustment]
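As a simple illustration of the "tight, but with some space" advice (not how Artec Studio actually computes its cropping box), the sketch below builds an axis-aligned box around a point cloud and pads it by a relative margin. Artec Studio additionally lets you rotate the box to follow the object's main directions, which this sketch does not cover.

```python
import numpy as np

def padded_bounding_box(points, margin=0.05):
    """Axis-aligned bounding box around a point cloud, padded on every side
    by a fraction of the box size (5 % by default)."""
    pts = np.asarray(points, float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    pad = (hi - lo) * margin
    return lo - pad, hi + pad

# Example: a tight box with a 5 % margin around a stand-in point cloud.
cloud = np.random.rand(500, 3)
box_min, box_max = padded_bounding_box(cloud)
```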
Masks inspection
Inspect the masks in either of these two cases:
- When using Create Model (Separate object)
- If you encounter poor results or suspect that you did not adhere to our guidelines during capture
Note: For Create Model (Separate object), masks are consistently used throughout the process.
Inspect masks by left-clicking the gear icon and enabling the masks view. Alternatively, you can use hotkeys for faster navigation (a short illustration of the masked view follows the screenshot below):
- Press 1 for images
- Press 2 for masks
- Press 3 for masked photos
[Screenshot of the Masks inspection interface]
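For clarity on the three views: a mask is a binary image marking object pixels, and a masked photo is simply the photo with background pixels suppressed. The sketch below illustrates that relationship only; it is not the viewer's actual implementation, and the image sizes are arbitrary.

```python
import numpy as np

def apply_mask(photo, mask):
    """Return the photo with background pixels zeroed out.

    photo -- H x W x 3 image array
    mask  -- H x W binary array (1 = object, 0 = background)
    """
    return photo * mask[..., np.newaxis].astype(photo.dtype)

# Example with a dummy grey image and a rectangular mask over its centre.
photo = np.full((480, 640, 3), 200, dtype=np.uint8)
mask = np.zeros((480, 640), dtype=np.uint8)
mask[120:360, 160:480] = 1
masked_photo = apply_mask(photo, mask)
```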
Ensure that the masks are generally correct. If a mask is entirely inaccurate, you can exclude that photo from Create Model (Separate object).
Note that if you plan to use Create Model (Whole scene) and find that most of the masks are highly inaccurate, simply disable the Use Masks option in this algorithm. Manually turning off individual masks is unnecessary, as it will not improve the results.
At times, the object detector may fail to detect the central object due to the complexity of the scene or additional objects appearing close to the scanned one. If this is the case, disable the photo entirely. Disabled photos will be skipped during the Create Model (Separate object) algorithm.
To do this, select a photo and press the P key, or use the button in the left corner of the image thumbnail.
[Screenshot showing how to exclude a photo]
If a mask includes a stand or part of the object that extends beyond the cropping box, it can potentially lead to artifacts after Create Model. In this case, try expanding the cropping box to encompass both the object and the stand entirely.
Create Model (Separate object) settings
Return to the Workspace using the arrow in the Workspace window header. Now, deselect everything except the Photo Scan object.
Open the Tools panel and click the gear icon of the Create Model algorithm to open its settings window.
Scene type: Allows users to choose the optimal algorithm for processing either separate objects or complex scenes.
When reconstructing an object that is well separated from its background, switch to the Separate object option of the Scene type setting. The object should be captured so that it is fully within each frame and distinct from the background.
- Detail: Choose between the Normal and High options. In most cases, the Normal option is enough. Use the High option if you need an extra level of detail or better reconstruction of the object's thin structures. Note that the High option might produce a more detailed but noisier reconstruction compared to the Normal option. It also takes longer to calculate.
- Use sparse point cloud: Utilizes preliminary geometric data to assist in reconstructing concave areas and cutting out holes where necessary. However, for highly reflective objects, it may introduce artifacts such as unwanted holes in the surface, so it's advisable to disable this option and retry the reconstruction if issues arise.
- Make object watertight: Toggles between creating a model with filled holes when enabled or leaving them open when disabled. Enabling this option ensures that the model is fully enclosed.
- Show preview: Enables a real-time preview.
[Screenshot of the Create Model settings for Separate object]
Create Model (Whole scene) settings
When reconstructing scenes or large unbounded objects, switch to the Whole scene reconstruction by changing the Scene type option to Scene (Beta).
Here you can adjust several parameters:
- 3D resolution: Defines the smoothness of the resulting surface.
- Depth map resolution: Defines the maximum image resolution used during model creation. Higher values result in higher quality at the cost of increased processing time.
- Depth map compression: Enables lossless compression of depth maps, which may slow down calculations due to the additional processing time for compression and decompression. However, it reduces disk space usage, making it beneficial for systems with slow disks (HDD or network storage). A rough storage estimate is given after the screenshot below.
- Use masks: Defines whether masks are used during the reconstruction. Using masks can greatly improve speed and quality, but the option should be disabled for scenes or aerial scans.
[Screenshot of the Create Model settings for Whole scene]
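As a rough, back-of-the-envelope way to see why depth map resolution and compression affect disk usage: an uncompressed depth map stores one depth value per pixel. The assumptions below (32-bit values, no overhead, no other intermediate data) are illustrative and do not describe Artec Studio's actual storage format.

```python
def depth_map_storage_gb(n_photos, width_px, height_px, bytes_per_value=4):
    """Rough size of uncompressed depth maps for a photoset, in gigabytes,
    assuming one 32-bit depth value per pixel and no overhead."""
    return n_photos * width_px * height_px * bytes_per_value / 1e9

# Example: 300 photos with 4000 x 3000 depth maps -> roughly 14 GB of raw
# depth data, before compression or any other intermediate files.
print(round(depth_map_storage_gb(300, 4000, 3000), 1))
```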
2.8 Project masks
Starting from Artec Studio 19.2, you can project masks based on a reconstructed object.
A mask defines which part of the photo belongs to the object you want to turn into a 3D model. It separates the object from the background. In AI Photogrammetry, masks help the algorithm focus only on the object, providing the high quality of the 3D model.
The masks projection feature enhances the accuracy of 3D mesh reconstruction by leveraging masks created from the registered preview (Photo scan). This tool is particularly useful when the initial mask detection during the Create Preview algorithm fails or produces inaccurate results. By projecting new masks from the created 3D model onto the photos, users can exclude unwanted details, such as background elements, and regenerate a cleaner, more precise 3D model.
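Conceptually, projecting masks means rendering the silhouette of the already reconstructed model into each registered camera. The sketch below is a deliberately simplified illustration using vertex projection and a convex hull; a real implementation rasterizes the mesh and handles visibility, and none of these names are Artec Studio APIs.

```python
import numpy as np
import cv2  # pip install opencv-python

def project_silhouette_mask(vertices, K, R, t, image_size):
    """Approximate object mask for one registered camera: project the model
    vertices into the image and fill their convex hull."""
    width, height = image_size
    pts_cam = (R @ np.asarray(vertices, float).T).T + t     # world -> camera
    pts_img = (K @ pts_cam.T).T                             # camera -> homogeneous pixels
    pixels = pts_img[:, :2] / pts_img[:, 2:3]               # perspective divide
    hull = cv2.convexHull(pixels.astype(np.float32))
    mask = np.zeros((height, width), dtype=np.uint8)
    cv2.fillConvexPoly(mask, hull.astype(np.int32), 255)    # white = object pixels
    return mask
```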
Typical use case
- Run the Create Preview algorithm to generate a Photo scan.
- Identify inaccurate or faulty masks detected on certain images and exclude these images from the model creation process.
To disable these masks:
- Double-click the created Photo scan in the Workspace panel
- Click a mask that you need to disable or Shift-click to select multiple masks.
- Right-click on the selected masks.
- Select the Disable mask option
- Run the Create Model algorithm to generate a 3D model.
- Project masks onto the excluded images based on the 3D model.
To project masks:
- Ensure that the created 3D model and the Photo scan are selected in the Workspace panel.
- Double-click the Photo scan in the Workspace panel
- Enable the previously disabled masks by selecting them and choosing the Enable mask option from the right-click context menu. For a single mask, click the crossed rectangle icon in the upper-right corner of its preview.
- Analyze the other masks and select the Project masks option from the right-click context menu
- Recreate the 3D model with improved quality and accuracy by running the Create Model algorithm.
Example of masks before and after projection:
[Images showing projection masks before and after]
Additional guidelines
3.1 Limitations
There are certain limitations and caveats that you should be aware of:
- Reconstruction speed is an area for improvement. For now, we do not recommend processing large datasets (more than 1,000 photos) in the current version of Artec Studio.
- The time required for the Separate object algorithm does not depend on the number of photos in the dataset; it depends on:
- Video card used (modern NVIDIA cards are required).
- Selected profile: Normal or High resolution. The latter is 1.5 to 2 times slower.
- The time required for the Create Model (Whole Scene) algorithm depends on:
- Number of photos
- Video card, SSD speed, and CPU of your computer
- Selected resolution
- Graphics Card requirements:
- We highly recommend using a modern NVIDIA card (other graphics cards are not supported)
- We highly recommend having at least 8 GB of Video RAM
- We highly recommend updating your graphics card drivers
- The Separate object algorithm typically takes 10 to 30 minutes when operating with the Normal resolution.
- Disk requirements
- During Create Model (Whole Scene), a lot of disk space is needed to process the data. The amount of disk space required depends on the resolution of the photos and the selected resolution setting. This stage can consume approximately 15 GB of disk space per 100 photos; for example, a dataset near the recommended 1,000-photo limit can require on the order of 150 GB of temporary space. It is highly recommended to have 100 to 200 GB of free disk space on the disk where the Artec Studio Temp folder is located.
- Whenever you encounter a shortage of free space on your system, do not hesitate to clear up some room by clicking the Clear Artec Studio temporary files button on the General tab of Settings (F10).
- In any case, it is advisable to set your Temp folder in the Artec Studio settings to the disk with the highest speed and ample free space.
To set the Temp folder, open Settings (F10) and browse to the new destination.
Temporary folder: C:\Users\Artec\AppData\Local\Temp [Browse...]
3.2 Troubleshooting
| Element | Possible Cause(s) | Suggested Remedies |
|---|---|---|
| General processing failure | No compiled models | Ensure models are properly compiled before processing. Click the Set up neural network button in the Create Preview algorithm popup or access it via Settings → Performance → Neural Network Setup |
| Photo registration issues | Insufficient number of photos or poor overlap between photos | Capture more photos with overlapping views to ensure smooth registration. |
| | Object captured too close (too many close-ups) | Maintain a distance that allows the entire object to fit within the frame. |
| | Object flipped or rotated without corresponding setting adjustments | Set the Object Position setting appropriately: Same in all photos, Changes between photos, or Changes between photosets. |
| | Inadequate photo coverage during object rotation | Ensure sufficient photo coverage from different angles with overlapping views. |
| Mask generation issues | Multiple objects visible in a single photo | Ensure only the target object appears in each photo. Use mask projection to refine the mask. |
| | Close-up photos causing indistinct separation between object and background | Maintain enough distance to clearly distinguish the object from the background. |
| | Busy or cluttered background leaking into object masks | Use a clean, uniform background to avoid interference with mask generation. |
| Target detection issues | Scale bar or cross not clearly visible in photos | Ensure targets are well-lit and visible in all photos for accurate detection. |
| | Targets obscured or poorly captured in photos | Ensure the targets are visible and evenly distributed throughout the scene. |
| Poor image quality | Overexposed or blurry photos | Avoid overexposed or out-of-focus images by adjusting camera settings and using proper lighting. |