View_Integrator exploits the correspondence between fiducial points (markers) in adjacent views. The procedure requires the user to interactively select corresponding markers in the views to be aligned. It then estimates, with sub-pixel accuracy, the 3D position of the centre of each marker and minimizes the sum of the distances between corresponding centres until a preset threshold is reached. The type of marker used to define the fiducial points depends on the shape of the surface.
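The text does not specify how the minimization is carried out; the sketch below shows one standard way to align two sets of corresponding 3D marker centres, namely the closed-form least-squares rigid transform (Kabsch/Procrustes solution). The function names and the residual check are illustrative and are not part of View_Integrator.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src centres onto dst centres.

    src, dst: (N, 3) arrays of corresponding 3D marker centres.
    Returns R (3x3) and t (3,) minimising sum ||R @ src_i + t - dst_i||^2.
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def alignment_residual(src, dst, R, t):
    """Mean distance between transformed src centres and dst centres."""
    return np.linalg.norm((src @ R.T + t) - dst, axis=1).mean()
```

In use, one view is aligned onto the other with `R, t = rigid_align(centres_view2, centres_view1)`, and `alignment_residual` is compared against the preset threshold to decide whether the alignment is acceptable.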
Placement of ‘hard’ markers
In some cases, circular markers are physically placed on the surface. This approach has the advantage that the object can be moved freely with respect to the optical head and that all the views needed for complete coverage can be acquired, with the only constraint that the overlapping regions contain the same set of markers. However, the markers remain visible in the range data and introduce additional noise. Fig. 1 and Fig. 2 illustrate this experimental case. The object under test is a mannequin head. Fig. 1 shows the marker selection; Fig. 2 presents the corresponding 3D range images and their alignment.
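For circular ‘hard’ markers, the sub-pixel centre mentioned above is commonly obtained as an intensity-weighted centroid of the segmented marker blob; the 2D centre is then mapped to a 3D centre through the co-registered range image. A minimal sketch, assuming the blob has already been segmented (the segmentation step and the function name are illustrative):

```python
import numpy as np

def subpixel_centroid(image, blob_mask):
    """Intensity-weighted centroid of one segmented circular marker blob.

    image:     2D grayscale array containing the marker.
    blob_mask: boolean array, True on the pixels belonging to the marker.
    Returns (row, col) with sub-pixel precision.
    """
    rows, cols = np.nonzero(blob_mask)
    weights = image[rows, cols].astype(float)
    weights = weights - weights.min() + 1e-6     # keep all weights positive
    r = np.sum(rows * weights) / np.sum(weights)
    c = np.sum(cols * weights) / np.sum(weights)
    return r, c
```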
Placement of ‘soft’ markers
As shown in Fig. 3, the projection of the markers can be turned off during the range measurement and turned on for the acquisition of the color/texture information. In this way, the markers do not corrupt the measurement of the surface, and the alignment can be performed more accurately.
Fig. 4 illustrates the View_Integrator interface during the selection of the markers, and the result of the alignment is presented in Fig. 5.
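With ‘soft’ markers, the centres are extracted from the color/texture image acquired while the projection is on. A minimal sketch of such an extraction, assuming OpenCV and that the projected spots are brighter than the surrounding texture; the thresholding strategy and area limits are illustrative:

```python
import cv2

def detect_projected_markers(texture_gray, min_area=10, max_area=500):
    """Find the centres of projected ('soft') markers in the texture image.

    texture_gray: 8-bit grayscale image acquired with the marker projection on.
    Returns a list of (row, col) blob centres.
    """
    # Projected spots are assumed brighter than the surface texture.
    _, binary = cv2.threshold(texture_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    centres = []
    for i in range(1, n):                        # label 0 is the background
        if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area:
            cx, cy = centroids[i]
            centres.append((cy, cx))             # return as (row, col)
    return centres
```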
Feature-based selection of the markers
The last set of figures illustrates how the alignment of the views is performed when neither ‘hard’ nor ‘soft’ markers are used. In this case, the selection of the fiducial points relies on the choice of corresponding features in the images. This task is very time-consuming and critical for the operator, especially when the number of partial views to be aligned is high and when the color information superimposed on the range data does not help, as in the two views shown in Fig. 6.
Our approach to this problem is to process the range information with the Canny edge detector. As shown in Fig. 7, the 3D images contain significant edges that are well enhanced by the filter and that dramatically simplify the operator's work. Fig. 8 shows the effect of the Canny edge detector, and Fig. 9 the matching between the views.
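A minimal sketch of this pre-processing step, assuming OpenCV's Canny implementation applied to a range image normalised to 8 bits; the smoothing kernel and hysteresis thresholds are illustrative:

```python
import cv2
import numpy as np

def range_edges(range_image, low=50, high=150):
    """Canny edge map of a range image, to highlight features for point picking.

    range_image: 2D float array of depth values (invalid pixels as NaN).
    low, high:   Canny hysteresis thresholds (illustrative values).
    """
    depth = np.nan_to_num(range_image, nan=np.nanmin(range_image))
    # Normalise depth to 8 bits, since cv2.Canny expects an 8-bit image.
    depth_8u = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    depth_8u = cv2.GaussianBlur(depth_8u, (5, 5), 0)   # suppress range noise
    return cv2.Canny(depth_8u, low, high)
```

The resulting edge map can be displayed alongside the range view so that the operator picks corresponding fiducial points on the enhanced edges rather than on the raw range data.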