Single Grating Phase-Shift (SGPS): a novel approach to phase demodulation of fringes

This project was developed in the context of the Italian national project Low-cost 3D Imaging and Modeling Automatic system (LIMA3D). The Laboratory's task was to design, develop and metrologically characterize a low-cost optical digitizer based on the projection of a single grating of non-coherent light.

SGPS (Single Grating Phase-Shift) is a whole-field profilometer based on the projection of a single pattern of Ronchi fringes: a simple slide projector can be used instead of sophisticated, very expensive devices, meeting the low-cost requirement.

A novel approach to the phase demodulation of the fringes has been developed to obtain phase values monotonically increasing along the direction perpendicular to the fringe orientation. As a result, the optical head can be calibrated in an absolute way, very dense point clouds expressed in an absolute reference system are obtained, the system set-up is very easy, the device is portable and reconfigurable to the measurement problem, and multi-view acquisition is easily performed.
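The details of the demodulation algorithm are beyond this summary; as a minimal illustration of the underlying principle, standard 1D phase unwrapping (here with NumPy and a hypothetical fringe period) turns a wrapped fringe phase into the monotonically increasing profile mentioned above:

```python
import numpy as np

# Illustration only: standard 1D phase unwrapping along the direction
# perpendicular to the fringes. The wrapped phase jumps by 2*pi at each
# fringe; unwrapping yields a monotonically increasing phase profile.
period_px = 16                     # hypothetical fringe period in pixels
x = np.arange(128)
true_phase = 2 * np.pi * x / period_px        # monotonic "absolute" phase
wrapped = np.angle(np.exp(1j * true_phase))   # wrapped into (-pi, pi]
unwrapped = np.unwrap(wrapped)                # remove the 2*pi jumps

# The unwrapped phase is monotonic and matches the true phase up to
# a constant 2*pi offset.
offset = true_phase[0] - unwrapped[0]
print(np.allclose(unwrapped + offset, true_phase))
```

Monotonic absolute phase is what makes the absolute calibration of the optical head possible.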


DFGM: combining steady fringe patterns to obtain high definition point clouds

The objective of this research project was to reduce the measurement time, and to make it possible to use a simple slide projector instead of one based on LCD or DLP matrices.

Fig. 1 shows the concept: the projection of a number of fringe patterns, typical of the GCM, GCPS and PSM techniques, is replaced by the projection of a single pattern, as in the SGM approach.

Fig. 1 - Examples of multiple fringe patterns projected onto surfaces, and the proposed single-pattern projection.

As depicted in Fig. 2, the optical head is simplified and, in principle, the instrument can be designed so as to obtain a very compact, low-cost optical head.

The slide projector typically projects steady patterns such as the one shown in Fig. 3.

The elaboration follows the DFGM approach and results in two phase maps whose sensitivity to height variations is proportional to the period of the two components. The pattern with the higher spatial period is used to guarantee the measurement range, and the pattern with the lower period is used to increase the resolution (Fig. 4).

Fig. 4 - Scheme of the combination of the two patterns with high and low period.
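The combination of the two patterns can be sketched as a standard hierarchical (coarse-to-fine) unwrapping step; this is an illustration under assumed periods, not necessarily the exact DFGM algorithm:

```python
import numpy as np

# Sketch (not the authors' exact algorithm): hierarchical combination of
# two phase maps. The coarse phase (long period) guarantees the
# measurement range; the fine phase (short period) supplies the resolution.
p_coarse, p_fine = 64.0, 8.0       # hypothetical periods in pixels
x = np.linspace(0, 60, 200)        # positions within one coarse period

phi_coarse = 2 * np.pi * x / p_coarse                         # unambiguous
phi_fine_wrapped = np.mod(2 * np.pi * x / p_fine, 2 * np.pi)  # wrapped

# Fringe order of the fine pattern, predicted from the coarse phase:
order = np.round((phi_coarse * p_coarse / p_fine - phi_fine_wrapped)
                 / (2 * np.pi))
phi_fine = phi_fine_wrapped + 2 * np.pi * order               # absolute

print(np.allclose(phi_fine, 2 * np.pi * x / p_fine))
```

The coarse map resolves the 2π ambiguity of the fine map, so the final phase has the range of the former and the resolution of the latter.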

The information from the two phase maps is fed to the triangulation formula to compute the height of the object. The triangulation is performed on a relative basis: the geometrical parameters of the optical head must be accurately determined and given as input to the system, and the resulting shape map is relative to a reference surface (a plane).
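As an illustration, a textbook phase-to-height triangulation relation for projected fringes is sketched below; the symbols and values are assumptions, not the system's calibrated parameters:

```python
import numpy as np

# Textbook phase-to-height relation for projected-fringe triangulation
# (hedged: the exact formula of the system is not given in the text).
#   h = L * dphi / (dphi + 2*pi*d/p), with
#   L    : distance from the optical head to the reference plane
#   d    : baseline between projector and camera
#   p    : fringe period observed on the reference plane
#   dphi : phase difference between object and reference plane
def phase_to_height(dphi, L=1000.0, d=300.0, p=10.0):
    return L * dphi / (dphi + 2 * np.pi * d / p)

# Zero phase difference maps to zero height; larger differences map to
# larger heights.
print(phase_to_height(0.0))
print(phase_to_height(np.pi / 2))
```

All distances are in the same (assumed) units, e.g. millimetres.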

Fig. 5 shows an example of the quality of the obtained point clouds. Typical measurement errors are in the range 0.2 mm – 0.3 mm over an illumination area of 300 mm x 230 mm, with a measurement interval of up to 100 mm.

Fig. 5 - Point Cloud example obtained by the system.

3D-Optolab: optical digitization based on active triangulation and non-coherent light projection

As shown in Fig. 1, the system is composed of the optical head and two moving stages. The optical head exploits active stereo vision. The LCD-based projection device projects onto the target object two-dimensional patterns of non-coherent light suitable for implementing the SGM, PSM, GCM and GCPS light-coding techniques. Each point cloud is expressed in the reference system of the optical head (XM, YM, ZM).

The optical head is mounted on a moving stage that allows translation along the vertical and horizontal directions (Tv, To); as shown in Fig. 2, areas of up to two square meters can be scanned, and large surfaces can be measured by acquiring and aligning a number of separate patches. Objects presenting circular symmetry can be placed on a rotation stage, which enables the acquisition of views at different values of the rotation angle.

The system is equipped with stepper motors to fully control the rotation and the translation of the optical head. Each view is aligned by means of roto-translation matrices in the coordinate system (XR, YR, ZR), centred on the rotation stage. The sequence of acquisition of the range images is decided on the basis of the dimension and shape of the object, and is performed automatically.
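The mapping of a view into the turntable-centred system (XR, YR, ZR) can be sketched with a homogeneous roto-translation matrix (illustrative axes and values):

```python
import numpy as np

# Sketch of how each view could be mapped into the turntable-centred
# system (XR, YR, ZR): a 4x4 roto-translation built from the stage
# rotation angle and the head translation (illustrative values only).
def rototranslation(theta_deg, t):
    th = np.radians(theta_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(th), -np.sin(th), 0],
                 [np.sin(th),  np.cos(th), 0],
                 [0, 0, 1]]          # rotation about the vertical axis
    T[:3, 3] = t
    return T

points = np.array([[100.0, 0.0, 50.0]])                 # view coordinates
hom = np.hstack([points, np.ones((len(points), 1))])    # homogeneous
aligned = (rototranslation(90.0, [0.0, 0.0, 0.0]) @ hom.T).T[:, :3]
print(np.round(aligned, 6))   # [[0., 100., 50.]]
```

Each view gets its own matrix, built from the encoder readings of the stages, before the point clouds are merged.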

Fast calibration is provided, both for the optical head and for the rotation stage.

Fig. 1 - Schematization and image of the prototype.
Fig. 2 - Example of acquisitions of a large object (top) and a smaller one (bottom).

System Performance

  • Optical, non-contact digitization based on active triangulation and non-coherent light projection;
  • Adjustable measuring area, from 100 x 100 mm to 400 x 400 mm (single view);
  • Measuring error from 0.07 mm to 0.2 mm, scaled with the field of view;
  • Automatic scanning, alignment and merging, mechanically controlled;
  • Color/texture acquisition;
  • PC-based system, with a 333 MHz Pentium II, Windows 98 and Visual C++ software;
  • Import/export formats for CAD, rapid prototyping, 3D viewers and graphic platforms;
  • Software interface for the handling, processing and visualization of both partial and complete point clouds.

OPL-3D: a portable system for point cloud acquisition

OPL-3D has been specifically designed for applications of reverse engineering and rapid prototyping, as well as for applications of measurement and quality control.

The system exploits active stereo vision (the absolute approach is implemented) using time-multiplexing based on the Gray-Code-Phase-Shifting method.
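A minimal sketch of Gray-Code-Phase-Shifting decoding is given below for a single pixel (a simplified illustration, not the system's implementation): the Gray code supplies the integer fringe order, while a 4-step phase shift supplies the wrapped phase.

```python
import numpy as np

# Minimal Gray-Code-Phase-Shifting decoding sketch (illustrative).
def gray_to_binary(g):
    # Convert Gray-code bits (MSB first) to plain binary bits.
    b = g.copy()
    for i in range(1, g.shape[-1]):
        b[..., i] = np.bitwise_xor(b[..., i - 1], g[..., i])
    return b

def decode(I0, I1, I2, I3, gray_bits):
    # 4-step phase shift: I_n = A + B*cos(phi + n*pi/2)
    phi = np.arctan2(I3 - I1, I0 - I2)          # wrapped phase
    bits = gray_to_binary(gray_bits)
    k = np.squeeze(bits @ (2 ** np.arange(bits.shape[-1])[::-1]))
    return phi + 2 * np.pi * k                  # absolute phase

# One pixel with fringe order 3 (Gray code '10') and phase pi/4:
phi_true, k = np.pi / 4, 3
I = [1 + 0.5 * np.cos(phi_true + n * np.pi / 2) for n in range(4)]
gray = np.array([[1, 0]])                        # Gray code for 3
print(np.isclose(decode(*I, gray), phi_true + 2 * np.pi * k))
```

The Gray code makes the fringe order robust to single-bit errors at stripe transitions, which is why it is preferred over plain binary coding.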

The projector-camera pair

OPL-3D can host a wide variety of projectors. In the left-hand image of Fig. 1 the device is the ABW LCD 320: a microprocessor-controlled, column-driven projector specifically intended for this class of systems. Alternatively, devices currently available for video projection can be successfully used, such as the one shown in the right-hand image of Fig. 1 (a Kodak DP 900, based on DLP technology).

The detector is a commercial CCD video camera. In the configurations shown in Fig. 1, the camera is an inexpensive colour Hitachi KP D50 with standard resolution (752 x 582 px). However, any type of camera (black/white or colour, with single or multiple CCDs for colour separation, and with different pixel densities) can be mounted on the system, depending on the application and on the projector used. In Fig. 2, for example, a 1300 x 1030 px digital video camera (a Basler model) is mounted, to acquire large fields of view at the required resolution.

The mount

The projector and the camera are mounted onto a rigid bar that can be easily moved around the scene by means of a tripod and that holds the adjustment units for proper orientation. The mount is fully reconfigurable: all parameters can be varied according to the distance from the target, the required measurement resolution and the field of view (Fig. 3).

Since sophisticated calibration procedures allow the system to finely estimate the operating parameters, no accurate positioning equipment (micropositioners, microrotators) is required; the only requirement is the stability of the mount during the measurement procedure.

Fig. 4 shows two examples of on-site measurements of complex shapes where the full flexibility of the system was mandatory to perform the acquisition.

Fig. 3 - Images of the tripods used and of the equipment of the prototype.
Fig. 4 - Two on-site acquisition campaigns carried out by the Laboratory: the Winged Victory point cloud acquisition (left) and the Ferrari point cloud acquisition (right).

The electronic hardware

OPL-3D is equipped with a PC that (i) drives the projector with the appropriate pattern sequence, (ii) acquires the image sequences of the target, and (iii) elaborates the images. In addition, it provides all the features needed to perform sophisticated set-up and reconfiguration procedures.

In the current configuration the PC is a 900 MHz Pentium III with 1 GB of RAM, equipped with a Matrox Meteor II frame grabber. The projector is operated by the PC through the serial connector.

Performance

OPL-3D exhibits low measurement uncertainty (120 µm) over large measurement areas (450 x 340 mm), linearly scalable to smaller areas. Special care has been devoted to flexibility of use, in-field measurement setting, reconfigurability, and robustness against environmental light changes and surface colour and texture.

Fig. 5 shows the acquisition of the blue car already seen in Fig. 2. Multi-view alignment and registration is performed either by purposely designed software or by commercially available products, depending on the complexity of the process.

Fig. 5 - Point Cloud obtained with every acquisition aligned to form a complete and dense reconstruction.

Technology transfer

OPL-3D has been brought to market by Open Technologies s.r.l., Italy, a start-up company of the University of Brescia, under the trade name 3DShape, in a range of versions including sophisticated software for multi-view combination, point cloud manipulation and transformation, up to surface generation.

Commercial software for Reverse Engineering

The software suites

Two commercial software suites are successfully used by the Group to carry out the reverse engineering of very complex objects. These are the Polyworks 7.0 suite and the Raindrop Geomagic Studio 3.1 suite of programs.

Polyworks is specifically designed to obtain triangle meshes from point clouds. The IM-Align module is very powerful and allows us to perform the multi-view alignment even when the number of point clouds is very high (from 30 to 500). The IM-Merge, IM-Edit and IM-Compress modules are used to create the triangle models, depending on the level of accuracy of the original point cloud and on the accuracy required of the polygonal mesh. The work environment allows the operator to finely adjust, smooth, fill, join and close the final model by means of a considerable number of functions.

The Geomagic environment is designed to produce, from the original point cloud, both triangle models and NURBS models; the latter are obtained starting from the triangle meshes. The software privileges the automation of the whole process over the fine, local adjustment of the surfaces.

In the work carried out until now, the Polyworks suite has been preferred when (i) the measurement targets are characterised by a high level of complexity and by the presence of small details, (ii) the acquired point clouds contain a high number of invalid points and the quality of the measurement is not optimal, and (iii) the reverse engineering process requires only the generation of triangles. This was the case for the experimental work carried out in the summer of 2001 at the Civici Musei of Brescia, dealing with the modelling of the ‘Winged Victory’.

On the other hand, the Geomagic suite is used when (i) the shapes are generally regular and can be efficiently elaborated (edited, filtered, topologically controlled) in an automatic way, (ii) the processing time has to be kept low, and (iii) a CAD model is required. The reverse engineering of the Ferrari 250MM was performed in spring 2002 using this software environment.

A Reverse Engineering example

The example reported here fully documents the reverse engineering process of the object in Fig. 1 carried out by using both the mentioned software products.
It is a 1:4 scale model of an F333 (by courtesy of Ferrari and Officine Michelotto). The following figures illustrate the main steps of the test. These are:

  1. the optical acquisition by means of OPL-3D (Fig. 1);
  2. the alignment process to obtain the point cloud of the whole object (Fig. 2). It has been performed by using the IM-Align module;
  3. the generation of the triangle model (Fig. 3). IM-Merge has been used in this step: it allowed the creation of a number of models at different levels of detail;
  4. the generation of the CAD model (Fig. 4). It has been obtained by exporting the triangle model from the Polyworks environment to the Geomagic environment (using the STL format), and by exploiting the powerful tools for the generation of the patch layout and the mathematics of the surfaces available in Geomagic Studio 3.0. 

Fig. 5 shows the rendered view of the CAD model.

Fig.1 - The acquisition of the F333 by means of OPL-3D.

OptoSurfacer: an optical digitizer for reverse engineering of free-form surfaces

What is OptoSurfacer?

The purpose of this activity is the development of descriptive 3D models from the point clouds acquired by the optical digitisers developed at the Laboratory, for the implementation of the reverse engineering of complex shapes in applications that privilege the efficiency of the whole process over its accuracy.

Typical fields are the production of prototypes and moulds within collaborative design and copying applications, the restitution of cultural heritage, and Virtual Reality.

A further objective is the implementation of an alternative path with respect to the traditional CAD-based process, allowing the user to model physical shapes by means of meshes of simple geometrical elements, without requiring specialised knowledge and background, while at the same time providing full compatibility with the higher-performance, higher-cost software environments available on the market for CAD and copying applications.

The activity resulted in the development of a software tool called OptoSurfacer, with the following characteristics:

  1. importing and ordering of dense and sparse, optically acquired point clouds;
  2. detection and editing of undercuts and outliers;
  3. scaling, mirroring and translation of the entities;
  4. automatic definition of the meshes that model the original measurement data;
  5. flexible trimming of the mesh topology depending on the local curvature of the object;
  6. coding of the models in the IGES format to guarantee their usability in commercially available CAD and CAM environments.

How to obtain the meshes?

The flow-chart in Fig. 1 describes the tasks performed by OptoSurfacer. They are illustrated for the study case of the object shown in Fig. 2 (a roof tile). The corresponding point cloud, shown in Fig. 3, has been acquired by means of the prototype DFGM (see the Prototypes page), and is characterised by a measurement variability of about 200 microns.

Fig. 3 - Corresponding point cloud of the roof tile obtained by means of the prototype DFGM.

OptoSurfacer automatically performs the ordering of the points by creating a regular reference grid and by using the surface shown in Fig. 4 as the basic geometrical element of the mesh. For the roof tile, the shapes have been modelled as shown in Fig. 5, and the resulting mesh is presented in Fig. 6. The irregularities clearly visible in this figure mainly depend on the roughness and porosity of the material.

Fig. 4 - Basic geometrical element of the mesh.
Fig. 5 - Reference model to model the roof tile.
Fig. 6 - Resulting mesh of the roof tile obtained.
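The ordering step onto a regular reference grid can be sketched, for example, by binning the scattered points into grid cells and averaging the heights (an illustrative stand-in for OptoSurfacer's actual procedure):

```python
import numpy as np

# Sketch of ordering a scattered point cloud onto a regular XY grid:
# each cell of the grid stores the mean Z of the points falling inside it.
def order_on_grid(points, step):
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = np.floor((xy - origin) / step).astype(int)
    nx, ny = idx.max(axis=0) + 1
    z_sum = np.zeros((nx, ny))
    count = np.zeros((nx, ny))
    np.add.at(z_sum, (idx[:, 0], idx[:, 1]), points[:, 2])
    np.add.at(count, (idx[:, 0], idx[:, 1]), 1)
    with np.errstate(invalid="ignore"):
        return z_sum / count                  # NaN where a cell is empty

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, (500, 3))            # synthetic scattered points
grid = order_on_grid(pts, step=1.0)
print(grid.shape)
```

The grid step plays the role of the sampling resolution; empty cells can later be filled by interpolation before the basic elements are fitted.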

The solid model of the object has been obtained from the mesh representation of Fig. 6. OptoSurfacer generated the sections presented in Fig. 7 and, by blending them, the mathematics of the object. The final solid model is shown in Fig. 8: it is saved in the IGES format and is fully compatible with a wide number of commercially available CAD-CAM products.

Relevant Publications

Sansoni, G.; Docchio, F., “In-field performance of an optical digitizer for the reverse engineering of free-form surfaces”, The International Journal of Advanced Manufacturing Technology, Vol. 26, No. 11–12, pp. 1353–1361, 2005.

OPL_Align: a software for point cloud alignment

OPL_Align is used to increase the quality of the alignment, especially for very accurate point clouds that View_Integrator cannot align to within the measurement accuracy. In this case too, pair-wise matching is used, based on the Iterative Closest Point (ICP) approach. Fig. 1 shows an example of the performance of this tool.
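A minimal pair-wise ICP sketch is given below (brute-force nearest neighbours plus a closed-form rigid fit via SVD); this is a simplified stand-in for OPL_Align's implementation:

```python
import numpy as np

def best_rigid(P, Q):
    # Least-squares rotation/translation mapping P onto Q (Kabsch).
    cP, cQ = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cQ - R @ cP

def icp(P, Q, iters=20):
    for _ in range(iters):
        # nearest neighbour in Q for every point of P (brute force)
        nn = np.argmin(((P[:, None] - Q[None]) ** 2).sum(-1), axis=1)
        R, t = best_rigid(P, Q[nn])
        P = P @ R.T + t
    return P

rng = np.random.default_rng(1)
Q = rng.normal(size=(60, 3))               # reference patch
theta = np.radians(5)
R0 = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
P = Q @ R0.T + [0.05, -0.02, 0.01]         # slightly misaligned copy
aligned = icp(P, Q)
print(np.abs(aligned - Q).max())
```

Real implementations add point-to-plane metrics, outlier rejection and acceleration structures, but the fixed-point iteration above is the core of the approach.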

The bas-relief of Fig. 1 is chosen as the test object. The two partial views in the frames are shown in the OPL_Align environment in Fig. 2, where the common region used to perform the alignment is also highlighted. The top part of Fig. 3 shows the relative position of the two patches (Vista 1 and Vista 2, respectively) before and after the alignment; the bottom part of Fig. 3 shows the performance of the alignment.

Fig. 3 - Relative position between the two patches (top); performance of the alignment (bottom).

View_Integrator: an interactive way to align point cloud views

View_Integrator exploits the correspondence between fiduciary points (markers) in adjacent views. The procedure requires the user to interactively select corresponding markers in the views to be aligned. It then estimates, with sub-pixel accuracy, the 3D position of the centre of each marker and minimizes the sum of the distances between the estimated centres until a preset threshold is reached. The surface shape suggests the type of markers used to determine the coordinates of the fiduciary points.
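The sub-pixel estimation of a marker centre can be sketched, for instance, as an intensity centroid (an illustration; the actual estimator used by View_Integrator is not specified here):

```python
import numpy as np

# Sub-pixel marker-centre estimation by intensity centroid (sketch).
def marker_centre(img):
    ys, xs = np.indices(img.shape)
    w = img.sum()
    return np.array([(xs * img).sum() / w, (ys * img).sum() / w])

# Synthetic circular marker centred at the sub-pixel position (20.3, 14.7)
ys, xs = np.indices((32, 40))
img = np.exp(-(((xs - 20.3) ** 2 + (ys - 14.7) ** 2) / 8.0))
cx, cy = marker_centre(img)
print(round(cx, 2), round(cy, 2))
```

The centroid recovers the centre to a small fraction of a pixel as long as the marker blob is well inside the image and the background is subtracted.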

Placement of ‘hard’ markers

In some cases, markers of circular shape are physically placed on the surface. The advantage of this approach is that the object can be freely moved with respect to the optical head and all the views needed to acquire it completely can be captured, the only constraint being that the overlapping regions contain the same set of markers. However, the markers remain in the range information, introducing additional noise. Fig. 1 and Fig. 2 illustrate this experimental case. The object under test is a mannequin head. Fig. 1 shows the marker selection; Fig. 2 presents the corresponding 3D range images and their alignment.

Fig. 1 - Marker selection on the mannequin head.
Fig. 2 - Alignment of the two views.

Placement of ‘soft’ markers

As shown in Fig. 3, we can turn off the projection of the markers during the measurement, and turn it on for the acquisition of the color/texture information. In this way, the markers do not disturb the surface, and the alignment can be performed more accurately.
Fig. 4 illustrates the View-Integrator interface during the selection of the markers, and the result of the alignment is presented in Fig. 5.

Fig. 5 - Result of the alignment of different views in a completed mesh.

Feature based selection of the markers

The last set of figures illustrates how the alignment of the views is performed when neither ‘hard’ nor ‘soft’ markers are used. In this situation, the selection of the fiduciary points is based on the choice of corresponding features in the images; however, this task is very time-consuming and critical for the operator, especially when the number of partial views to be aligned is high and when the colour information superimposed on the range data does not help the operator, as in the case of the two views shown in Fig. 6.

Our approach to this problem is the elaboration of the range information by means of the Canny edge detector. As shown in Fig. 7, the 3D images present significant edges that are well enhanced by the filter and dramatically simplify the operator's work. Fig. 8 shows the effect of the Canny edge detector and Fig. 9 the matching between the views.
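As a simplified stand-in for the Canny step (which also includes smoothing, non-maximum suppression and hysteresis), a plain gradient-magnitude threshold already enhances the step edges of a range image:

```python
import numpy as np

# Gradient-magnitude edge enhancement of a range image (NumPy only;
# a simplified stand-in for the full Canny detector used in the text).
def edge_map(z, thresh):
    gy, gx = np.gradient(z.astype(float))
    mag = np.hypot(gx, gy)
    return mag > thresh

# Synthetic range image: a raised square on a flat background.
z = np.zeros((20, 20))
z[5:15, 5:15] = 10.0
edges = edge_map(z, thresh=2.0)
print(edges.sum() > 0 and not edges[0, 0])   # edges only at the step
```

On real range data the Gaussian smoothing and hysteresis thresholding of Canny are what keep the edge map clean despite measurement noise.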

Fig. 6 - Dense views of a bas-relief.
Fig. 7 - The edges of the views are quite significant and can be used to enhance the alignment.

OPL_Viewer: a useful tool to visualize point clouds

This module has been developed to visualize the point clouds.

The main features of the tool are:

  1. visualization by means of regular grids;
  2. availability of basic functions such as rotation, translation and scaling of the point clouds;
  3. variable setting of the sampling step (i.e., of the resolution);
  4. visualization of the colour information;
  5. setting of the visualization parameters (grid colour, frame colour, point dimension, point colour, etc.);
  6. multi-document environment, for the visualization of different views of the same point cloud and/or of different point clouds;
  7. filtering of isolated points (outliers);
  8. editing of groups of points;
  9. compatibility with the ROT (rotate), OPL (Optolab), OPT (Open Technologies s.r.l.) and PIF (InnovMetric Inc.) file formats;
  10. availability of an interactive, user-friendly working environment, developed on the OpenGL library.
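Item 7 above (the filtering of isolated points) can be sketched, for example, as a neighbour-count test: a point is kept only if it has at least k neighbours within a given radius (brute-force, illustrative code).

```python
import numpy as np

# Keep a point only if it has at least k neighbours within `radius`.
def filter_isolated(points, radius=1.0, k=3):
    d = np.linalg.norm(points[:, None] - points[None], axis=-1)
    neighbours = (d < radius).sum(axis=1) - 1    # exclude the point itself
    return points[neighbours >= k]

rng = np.random.default_rng(2)
cloud = rng.uniform(0, 1, (200, 3))              # dense cluster
outliers = np.array([[10.0, 10.0, 10.0], [-5.0, 0.0, 3.0]])
pts = np.vstack([cloud, outliers])
kept = filter_isolated(pts, radius=0.5, k=3)
print(len(pts) - len(kept) >= 2)                 # the isolated points go
```

For large clouds a spatial index (k-d tree, voxel grid) replaces the quadratic distance matrix, but the criterion is the same.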

Fig. 1 shows an example of the facilities available in the module. The same point cloud is visualized in two independent windows, at different resolutions, with the colour information (left view) and with only the range information (right view). The visualization of multiple views is presented in Fig. 2.

Fig. 1 - Visualization of the same point cloud in two independent windows.
Fig. 2 - Visualization of multiple views.

The BARMAN project

The aim of this project is to add 2D vision to the BARMAN demonstrator shown in Fig. 1. The BARMAN is composed of two DENSO robots. In its basic release it picks up bottles, uncorks them and places them on the rotating table. It then rotates the table, so that people can pick them up and drink.

The tasks of the Barman are summarized here:

  1. to survey the foreground and check whether empty glasses are present;
  2. to rotate the table and move the glasses to the background;
  3. to monitor the conveyor for a bottle, recognize it, pick it up, uncork it and fill the glasses;
  4. to rotate the table to move the glasses back to the foreground zone.

These simple operations require that suitable image processing be developed and validated. The software environment is the Halcon Library 9.0; the whole project is developed in VB2005. The robot platform is ORiN 2 (from DENSO).

The work performed so far implements the basic functions described below.

Calibration of both cameras and robot

The aim is to define an absolute reference system for space point coordinates, in which the camera coordinates can be mapped to the robot coordinates and vice versa. To perform this task, a predefined master is used, acquired under different perspectives (see Fig. 1 and Fig. 2).

The acquired images are elaborated by the Halcon calibration functions, and both the extrinsic and the intrinsic camera parameters are estimated. In parallel, special procedures for robot calibration have been reproduced and combined with the parameters estimated for the cameras (Fig. 3).
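Once the extrinsic camera parameters and the robot calibration are known with respect to a common world frame, mapping a point from camera to robot coordinates is a composition of homogeneous transforms; the sketch below uses assumed names and values, not the Halcon/ORiN output:

```python
import numpy as np

# Sketch of the camera-to-robot mapping (illustrative values). A point
# seen in camera coordinates is mapped to robot coordinates through the
# common world frame.
def to_hom(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

T_cam = to_hom(np.eye(3), [0.0, 0.0, 500.0])      # world -> camera
T_rob = to_hom(np.eye(3), [100.0, 0.0, 0.0])      # world -> robot

# camera -> robot = (world -> robot) @ (camera -> world)
T_cam_to_rob = T_rob @ np.linalg.inv(T_cam)
p_cam = np.array([0.0, 0.0, 500.0, 1.0])          # a point, camera frame
p_rob = T_cam_to_rob @ p_cam
print(p_rob[:3])            # [100.   0.   0.]
```

This shared frame is what makes the filling operation "safe": positions detected by the camera are directly meaningful to the robot controller.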

Fig. 1 - Acquisition of the master in correspondence with the background zone (left); corresponding image in the Halcon environment (right).
Fig. 2 - Images of the master before (left) and after (right) the elaboration.
Fig. 3 - Calibration process of the robot.

Detection of empty glasses in the foreground

Flexible detection has been implemented to monitor the number of empty glasses. In addition, some special cases have been taken into account. Some examples are shown in the following images (Fig. 4 and Fig. 5).

Fig. 4 - Detection of a single glass (left); detection of three glasses (right).
Fig. 5 - Detection of four glasses very close to each other (left); detection in the presence of glasses turned upside down (right).

Detection of glasses in the background

The positions of the glasses in the background are calculated very precisely, since the camera is calibrated. In addition, it is possible to recognize semi-empty glasses and glasses turned upside down. This detection is mandatory to guarantee that the filling operation is performed correctly. Fig. 6 shows some significant examples.

Fig. 6 - Detection of the position of the glasses in the background. The system detects the presence of semi-empty glasses (centre image) and turned-over glasses, and does not mark them as available for subsequent operations.

Bottle detection

Three different types of bottles are recognized as “good” bottles by the system, which is also able to detect any other object that does not match these cases. It can recognize whether the “unknown” object can be picked up and disposed of by the Barman, or whether it must be removed manually.

Fig. 8 - Detection of unmatched objects.

Filling of the glasses

The robot is moved to the positions detected by the camera (see Fig. 6), and fills the glasses. Since both the cameras and the robots share the same reference system, this operation is “safe”: the robot knows where the glasses are.

Fig. 9 - Filling operation.

RAI Optolab Interview TG Neapolis 12/05/2010