Inspection Tool

Use this tool to analyze dependencies among cameras in a scene.

There are two inspection methods you can use:

Misalignment detection

This tool enables you to find misaligned cameras or weak spots in your scene.

Inspection-Misalignment

Misaligned points threshold Marks with a color the given percentage of points that are the most likely misaligned. If you choose, for example, a value of 50%, the misalignment tool marks the 50% most misaligned points.
Misaligned camera pairs threshold Displays the given percentage of the most misaligned camera pairs for each misaligned camera group. If you choose, for example, a value of 50%, the misalignment tool marks the 50% most misaligned camera relations for each detected camera group.
Misalignment detector sensitivity Sets the sensitivity of the misalignment detector. Higher sensitivity takes more computational time but generates more data to work with. In some cases, higher sensitivity can create false positives.
Point size multiplier Use a higher value to enlarge the displayed points and make them more visible.
Display only misaligned points Shows only points treated as misaligned. Setting this to Yes hides all correctly aligned points.
Display only misaligned cameras Displays only cameras treated as misaligned. Setting this to Yes hides all correctly aligned cameras.
Misaligned components count Filters the number of misaligned camera components that are displayed. Only this number of the worst components will be shown.
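
A percentage threshold such as Misaligned points threshold can be thought of as a percentile cut over per-point misalignment scores. The sketch below is purely illustrative (the scoring function and names are ours, not RealityCapture's internals):

```python
# Illustrative sketch: applying a "misaligned points threshold" of N percent
# by marking the N% of points with the highest misalignment score.
# The scores here are hypothetical per-point error values.

def mark_misaligned(scores, threshold_pct):
    """Return the indices of the threshold_pct percent of points with the
    highest misalignment score, i.e. those most likely misaligned."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    count = round(len(scores) * threshold_pct / 100)
    return set(order[:count])

scores = [0.1, 0.9, 0.3, 0.7]       # hypothetical per-point error scores
marked = mark_misaligned(scores, 50)  # a 50% threshold marks the worst half
```

With a 50% threshold, exactly half of the points (the highest-scoring half) would be colored as misaligned, matching the behavior described above.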

Inspection-Misalignment_example

After Misalignment detection is selected, the points in the 3Ds view are split into multiple colors. Blue points represent the correctly aligned points, while red points, or points of any color other than blue, are the ones RealityCapture calculated as misaligned.

Each set of points represents a "misaligned component", which cannot be seen in the 1Ds view but is related to the Inspection tool. With this information you can use control points to correct the alignment and create a single component with higher accuracy.

Randomly colored cameras (misaligned cameras) are also displayed in the 3Ds view. These cameras should be larger than the cameras which represent the inputs. If you click on one of the misaligned cameras, the cameras related to it will be selected. Their color represents their accuracy: light blue cameras are the ones RealityCapture deems to have lower accuracy, while grey cameras should have good accuracy. Cameras may be connected with white lines, which usually head toward the problematic camera.

Scene uncertainty and camera relations

Inspection-Tool-Panel

Camera relations

When using this approach, cameras are connected into virtual components, based on their mutual connections and the defined settings. The cameras included in the same virtual components are marked with the same color.
If several virtual (color) components are created, the connection among the cameras from these separate components is not very stable, and we recommend adding images in these areas.

Component connectivity The minimal number of connections with neighboring cameras/images. If there are at least this many edges among cameras, those cameras are connected into one component (cameras marked with the same color). The edges among the cameras are created based on the following settings.
Apical angle The minimal apical angle among neighboring cameras/images. It is the angle created by two projection lines (from two cameras), with the tie point as the vertex of the angle. The smaller the number, the better.
Feature consistency The minimal consistency of a feature; it defines the number of cameras that need to 'see' that feature. If the feature is visible in this number of images, and if the angle among the pairs of cameras is bigger than the angle defined by Apical angle, the Matches count for the edge is raised by one.
Matches count Here you can specify the desired overall number of connections among images. Two cameras are connected into one virtual component only if there is at least this number of common features (matches) between those two cameras. Matches are features that meet the conditions defined by Apical angle and Feature consistency.
Minimal matches Here you can specify the desired smallest number of connections between two images. Edges with a number of features lower than this value are not displayed at all. Edges with exactly this number of features are displayed in a dark blue color.
Maximal matches Use this to specify the desired greatest number of connections between two images. Type 0 for no limit. The edge with the highest number of features is displayed in a dark red color. If you set a specific value, it is treated as the greatest number of connections between two images. The color scale is recomputed according to the minimal and maximal matches.
Show edges The type of edges to be shown: internal / external / both. Internal edges are edges inside the virtual components. External edges are edges outside the virtual components, i.e. the ones connecting individual virtual components. If you choose both, you can see all the created edges (grey ones are internal, colored ones are external).
Analyze selection Set this to True to restrict the connectivity analysis to a camera/cloud selection.
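
To clarify how the settings above interact, here is a rough sketch of the grouping idea: an edge between two cameras is kept only if their number of matches (features that satisfy the Apical angle and Feature consistency conditions) reaches Matches count, and cameras joined by kept edges form one virtual component. This is our own simplified model using union-find, not RealityCapture's actual implementation:

```python
# Hypothetical sketch: cameras whose pairwise match count reaches the
# "Matches count" threshold are merged into the same virtual component
# (same color in the 3Ds view). Uses a simple union-find structure.

def virtual_components(num_cameras, pair_matches, matches_count):
    """pair_matches: {(cam_a, cam_b): number_of_common_matches}.
    Returns one component id per camera (same id = same color)."""
    parent = list(range(num_cameras))

    def find(x):                      # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (a, b), matches in pair_matches.items():
        if matches >= matches_count:  # edge is strong enough -> merge
            parent[find(a)] = find(b)

    return [find(c) for c in range(num_cameras)]

# Cameras 0, 1 and 2 are well connected; camera 3 is only weakly connected
# (below the threshold), so it lands in its own component - exactly the kind
# of spot where adding more images is recommended.
comps = virtual_components(4, {(0, 1): 120, (1, 2): 95, (2, 3): 10}, 40)
```

In this toy scene, cameras 0, 1 and 2 share one component id while camera 3 gets a different one, mirroring the separate-color cameras described below.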

In the picture below, you can see the majority of cameras connected into one virtual component, three pink cameras connected into a second virtual component (in the red box), and separate cameras marked with different colors that are not connected to any component. In such disconnected areas, it is recommended to add more images in order to make the connections more stable.

inspection_example

The scale for the edges (connections among cameras) is as follows:

inspection_scale
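
As a rough sketch of how such a scale can behave, an edge's match count can be mapped linearly onto a dark-blue-to-dark-red gradient between Minimal matches and Maximal matches. The RGB endpoints below are assumptions chosen for illustration, not the exact colors RealityCapture uses:

```python
# Illustrative sketch: edges at "Minimal matches" map to dark blue, edges at
# "Maximal matches" to dark red, with a linear interpolation in between.
# The endpoint RGB values are assumed for the example.

def edge_color(matches, minimal, maximal):
    """Clamped linear dark-blue -> dark-red interpolation over match count."""
    t = (matches - minimal) / (maximal - minimal)
    t = max(0.0, min(1.0, t))
    blue, red = (0, 0, 139), (139, 0, 0)   # dark blue / dark red endpoints
    return tuple(round(b + t * (r - b)) for b, r in zip(blue, red))

weakest = edge_color(10, 10, 100)    # dark blue: at the minimal-matches end
strongest = edge_color(100, 10, 100)  # dark red: at the maximal-matches end
```

Changing Minimal matches or Maximal matches shifts the endpoints of this interpolation, which is why the documentation notes that the color scale is recomputed when those values change.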

Scene structure uncertainty

Uncertainty enumeration method Defines the method used for calculating the uncertainty of tie points. This tool is applicable to points that are displayed with respect to the chosen Track length in the 3D SCENE context tab. Use the slider to restrict the points.

Uncertainty line multiplier A scale factor for the displayed uncertainty lines. The higher the value, the longer and more visible the lines in the 3Ds view.
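
The multiplier can be read as a simple linear scale on the drawn line length, so uncertain areas become easier to spot at higher values. The linear mapping below is an assumption for illustration, not RealityCapture's exact formula:

```python
# Illustrative sketch (assumed linear mapping): the displayed uncertainty
# line length is the point's uncertainty scaled by the multiplier.

def display_length(uncertainty, multiplier):
    return uncertainty * multiplier

# Doubling the multiplier doubles every displayed line length.
lengths = [display_length(u, 2.0) for u in (0.5, 1.0, 3.0)]
print(lengths)  # [1.0, 2.0, 6.0]
```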

In the following picture, the areas with more red lines are considered to have higher uncertainty and therefore lower precision. Add more images in such areas to improve the precision:

inspection_example2

Uncertainty of points is displayed in this color scale:

uncertainty_scale

See also: