Student Projects 2018/2019

The projects presented by the students to complete the exam are described here. Each group was composed of two randomly selected members.

Group 1: Extrinsic calibration of 2 Kinects using both a sphere-based calibration and a skeletonization-based calibration

The group task was to compare the extrinsic calibration results obtained from two approaches: a calibration based on a green sphere and a custom algorithm developed by the University of Trento, and a calibration based on a skeletonization algorithm developed by our group.

Group 2: Intrinsic calibration evaluation

The group task was to empirically determine the best way to perform an intrinsic calibration of Kinect v2 cameras using a chessboard target (i.e. how many acquisitions? At which distances and inclinations?). They then calibrated four different Kinect v2 cameras and analyzed the calibration results for each of them.
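For a rough idea of the procedure, the sketch below performs a chessboard-based intrinsic calibration with OpenCV. The board geometry and file names are placeholder assumptions, not the group's actual settings; the RMS reprojection error it prints is the kind of figure of merit that can be compared across acquisition strategies and cameras.

```python
import glob
import cv2
import numpy as np

# Hypothetical parameters: a 9x6 inner-corner chessboard with 25 mm squares.
PATTERN = (9, 6)
SQUARE_MM = 25.0

# 3D corner coordinates in the board frame (z = 0 plane).
obj = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in glob.glob("kinect_rgb_*.png"):  # placeholder file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(obj)
        img_points.append(corners)

assert obj_points, "no chessboard images found"

# The RMS reprojection error is the figure of merit to compare.
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Camera matrix:\n", K)
```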

Group 3: Evaluation of a trabecular structure point cloud acquisition (1)

The group task was to compare two different acquisitions of a small trabecular structure 3D printed in titanium, obtained (i) from the 3D digitizer Vivid-910 and (ii) from the 2D/3D Profile Sensor Wenglor MLWL132.

Group 4: People tracking system evaluation

The group task was to create a simple people tracking algorithm based on the 3D point cloud acquired from a RealSense D435 camera mounted on the ceiling. The performance evaluation focused on how well the developed algorithm tracked the path of the person compared to the theoretical path.
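A minimal sketch of the idea behind such a tracker, assuming the cloud is expressed in a frame with z pointing up from a known floor plane (the group's actual algorithm and thresholds are not reproduced here):

```python
import numpy as np

def track_person(cloud_xyz, floor_z=0.0, min_height=0.5):
    """Return the (x, y) centroid of the points well above the floor.

    Simplifying assumption: with the camera on the ceiling and the cloud
    expressed in a frame with z pointing up from the floor, any point
    higher than `min_height` above the floor belongs to the person.
    """
    above = cloud_xyz[cloud_xyz[:, 2] > floor_z + min_height]
    if len(above) == 0:
        return None  # nobody in the field of view
    return above[:, :2].mean(axis=0)

# The estimated path is the sequence of centroids over the frames; its
# deviation from the known walked path gives the tracking error:
# path = [track_person(frame) for frame in frames]   # `frames`: hypothetical
```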

Group 5: Extrinsic calibration of 3 Kinects using a skeletonization algorithm

The group task was to analyze the point cloud alignment obtained from the extrinsic calibrations performed. The group tested different configurations with 2 and 3 Kinects in different positions, used the skeletonization algorithm to obtain the roto-translation matrices, and finally analyzed the resulting alignments in PolyWorks.
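For illustration, applying the roto-translation matrices to bring the clouds into a common frame can be sketched as follows (the cloud arrays and the 4 x 4 matrices are hypothetical names):

```python
import numpy as np

def apply_rototranslation(cloud_xyz, T):
    """Map an N x 3 point cloud into the common frame using a 4 x 4
    homogeneous roto-translation matrix T, such as those produced by
    the skeleton-based extrinsic calibration."""
    homog = np.hstack([cloud_xyz, np.ones((len(cloud_xyz), 1))])
    return (T @ homog.T).T[:, :3]

# Clouds from the second and third Kinect are mapped into the frame of
# the first one before inspecting the alignment in PolyWorks:
# merged = np.vstack([cloud_1,
#                     apply_rototranslation(cloud_2, T_21),
#                     apply_rototranslation(cloud_3, T_31)])
```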

Group 6: Evaluation of a trabecular structure point cloud acquisition (2)

The group task was to compare two different acquisitions of a small trabecular structure 3D printed in titanium, obtained (i) from the 3D digitizer Vivid-910 and (ii) from the 2D/3D Profile Sensor Wenglor MLWL132. The trabecular structure used is different from the one used by Group 3.

Group 7: Instrumented crutches for gait analysis results evaluation

The group task was to perform some acquisitions of a person walking with a pair of our instrumented crutches in different outdoor set-ups (uphill, downhill, flat). The acquisitions were processed by our software to analyze gait phases, and the group's task was to choose the best set-up conditions and filtering options according to the results of the algorithm.

Best Paper Award at IEEE I2MTC 2019

Congratulations to our team for winning the Best Paper Award for the paper “Qualification of Additive Manufactured Trabecular Structures Using a Multi-Instrumental Approach” presented at the 2019 IEEE International Instrumentation and Measurement Technology Conference!

This research is part of a Progetto di Ricerca di Interesse Nazionale (PRIN) carried out in collaboration with the University of Brescia, the University of Perugia, the Polytechnic University of Marche and the University of Messina.

To read more about the project, check out this page!

Optical analysis of Trabecular structures

Rapid prototyping, also known as 3D printing or Additive Manufacturing, is a process that allows the creation of 3D objects by depositing material layer by layer. The materials used vary: plastic polymers, metals, ceramics or glass, depending on the principle used by the machine for prototyping, such as the deposition of molten material or the welding of powder particles by means of high-power lasers.

This technique allows the creation of objects of extreme complexity, including the so-called “trabecular structures”, which have very advantageous mechanical and physical properties (Fig. 1). They are in fact lightweight and at the same time very resistant, and these characteristics have led them, in recent years, to be increasingly studied and used in application areas such as the biomedical and automotive fields.

Despite the high flexibility of prototyping machines, the complexity of these structures often generates differences between the designed structure and the final result of 3D printing. It is therefore necessary to design and build measuring benches that can detect such differences. The study of these differences is the subject of a Progetto di Ricerca di Interesse Nazionale (PRIN Prot. 2015BNWJZT), which adopts a multi-competence and multidisciplinary approach through the collaboration of several universities: the University of Brescia, the University of Perugia, the Polytechnic University of Marche and the University of Messina.

The aim of this thesis was to study the possible measurement set-ups involving both 2D and 3D vision. The solutions identified for the surface dimensional measurement of the prototyped object (shown in Fig. 2) are:

  1. a 3D measurement set-up with a light profile sensor;
  2. a 2D measurement set-up with cameras, telecentric optics and collimated backlight.

In addition, a dimensional survey of the internal structure of the object was carried out thanks to a tomographic scan of the structure made by a selected company.

Fig. 1 - Example of a Trabecular Structure.
Fig. 2 - The prototyped object studied in this thesis.

The 3D measurement set-up

The experimental set-up involves the WENGLOR MLWL132 light profile sensor. The object was mounted on a micrometric slide to better perform the acquisitions (Fig. 3).
The point cloud is acquired by the sensor using custom-made LabVIEW software. The whole object is scanned and the point cloud is then analyzed in PolyWorks. Fig. 4 shows an example of acquisition, while Fig. 5 shows the errors between the point cloud obtained and the CAD model of the object (the comparison principle is sketched after the figures).
Fig. 3 - 3D experimental set-up.
Fig. 4 - Example of acquisition using the light profile sensor.
Fig. 5 - Errors between the measured point cloud and the CAD model.
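The core of such a cloud-to-CAD comparison can be approximated by nearest-neighbour distances against a densely sampled CAD surface. PolyWorks computes true point-to-surface deviations, so the following is only a simplified sketch on placeholder data:

```python
import numpy as np
from scipy.spatial import cKDTree

# Placeholder stand-ins for the real data: `measured` is the scanned
# cloud (aligned to the CAD frame), `cad_samples` is a cloud densely
# sampled from the CAD surface.
measured = np.random.rand(50_000, 3)
cad_samples = np.random.rand(200_000, 3)

tree = cKDTree(cad_samples)
errors, _ = tree.query(measured)  # nearest CAD sample per scan point

# Point-to-point distances slightly overestimate point-to-surface ones,
# but give the same qualitative error map.
print(f"mean deviation: {errors.mean():.3f} mm")
print(f"max deviation:  {errors.max():.3f} mm")
```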

The 2D measurement set-up

The experimental set-up involving telecentric lenses is shown in Fig. 6. Telecentric lenses are fundamental to avoid camera distortion, especially when high resolution is required for small-dimension measurements. The camera used is an iDS UI-1460SE, the telecentric lens is an OPTO-ENGINEERING TC23036, and the collimated backlight illuminator is an OPTO-ENGINEERING LTCLHP036-R (red light). In this set-up a spot was also dedicated to the calibration master required for the calibration of the camera.
The acquisitions differ according to the use of the backlight illuminator. Figs. 7, 8 and 9 show some examples of the acquisitions conducted.
Finally, the measured object was compared to the tomography obtained from a selected company, resulting in the error map in Fig. 10.


Fig. 6 - 2D experimental set-up.
Fig. 10 - Error map obtained comparing the measured object to the tomography.

If you are interested in the project and want to read more about the procedure carried out in this thesis work, as well as the resulting measurements, download the presentation below.

Vision and safety for collaborative robotics

The communication and collaboration between humans and robots is one of the main principles of the fourth industrial revolution (Industry 4.0). In the coming years, robots and humans will become co-workers, sharing the same working space and helping each other. A robot intended for collaboration with humans has to be equipped with safety components different from the standard ones (cages, laser scanners, etc.).

In this project, a safety system for applications of human-robot collaboration has been developed. The system is able to:

  • recognize and track the robot;
  • recognize and track the human operator;
  • measure the distance between them;
  • discriminate between safe and unsafe situations.

The safety system is based on two Microsoft Kinect v2 Time-Of-Flight (TOF) cameras. Each TOF camera measures the 3D position of each point in the scene by evaluating the time-of-flight of a light signal emitted by the camera and reflected by the point itself. The cameras are placed on the safety cage of a robotic cell (Figure 1) so that their combined field of view covers the entire robotic working space. The 3D point clouds acquired by the TOF cameras are aligned with respect to a common reference system using a suitable calibration procedure [1].

Figure 1 - Positions of the TOF cameras on the robotic cell.

Robot and human detection are performed by analyzing the RGB-D images (Figure 2) acquired by the cameras. These images contain both the RGB information and the depth information of each point in the scene.

Figure 2 - RGB-D images captured by the two TOF cameras.

Robot recognition and tracking (Figure 3) are based on a KLT (Kanade-Lucas-Tomasi) algorithm, which uses the RGB data to detect the moving elements in a sequence of images [2]. The algorithm analyzes the RGB-D images and finds feature points such as edges and corners (see the green crosses in Figure 3). The 3D position of the robot (represented by the red triangle in Figure 3) is finally computed by averaging the 3D positions of the feature points; a minimal sketch of the tracking step follows the figure.

Figure 3 - Robot recognition and tracking.
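OpenCV exposes the building blocks of this step. The sketch below (with a placeholder video source) tracks corner features with pyramidal Lucas-Kanade and averages them in 2D; in the real system the depth channel lifts the average to 3D:

```python
import cv2

cap = cv2.VideoCapture("robot_cell.avi")  # placeholder video source
ok, frame = cap.read()
assert ok, "could not read the video source"
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Corner-like feature points (the green crosses in Figure 3).
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                              minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # KLT: track the features into the new frame.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
    good = nxt[status.flatten() == 1]
    # The 2D robot position estimate is the mean of the tracked features.
    center = good.reshape(-1, 2).mean(axis=0)
    prev, pts = gray, good.reshape(-1, 1, 2)
```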

Human recognition and tracking (Figure 4) are based on the HOG (Histogram of Oriented Gradients) algorithm [3]. The algorithm computes the 3D human position by analyzing the gradient orientations of portions of the RGB-D images and feeding them to a trained support vector machine (SVM). The human operator is framed in a yellow box after being detected, and his 3D center of mass is computed (see the red square in Figure 4); a minimal sketch of the detection step follows the figure.

Figure 4 - Human recognition and tracking.
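OpenCV ships a HOG descriptor with a default SVM-based people detector; the sketch below illustrates the detection step on a placeholder frame:

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("tof_rgb_frame.png")  # placeholder image
assert frame is not None, "could not read the frame"
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))

for (x, y, w, h) in boxes:
    # Yellow box around the detected operator, as in Figure 4.
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 2)
    # 2D box center; the depth channel gives the 3D center of mass.
    cx, cy = x + w // 2, y + h // 2
```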

Three different safety strategies have been developed. The first strategy is based on the definition of suitable comfort zones of both the human operator and the robotic device. The second strategy implements virtual barriers separating the robot from the operator. The third strategy is based on the combined use of the comfort zones and of the virtual barriers.

In the first strategy, a sphere and a cylinder are defined around the robot and the human respectively, and the distance between them is computed. Three different situations may occur (Figure 5; a minimal sketch of the classification follows the figure):

  1. Safe situation (Figure 5.a): the distance is greater than zero and the sphere and the cylinder are far from each other;
  2. Warning situation (Figure 5.b): the distance decreases toward zero and the sphere and the cylinder are very close;
  3. Unsafe situation (Figure 5.c): the distance is negative and the sphere and the cylinder collide.
Figure 5 - Monitored situations in the comfort zones strategy. Safe situation (a), warning situation (b), and unsafe situation (c).
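A minimal sketch of this classification, under the simplifying assumption that the cylinder is vertical and tall enough that only the horizontal clearance matters (radii and margins are placeholder values, not the system's actual geometry):

```python
import numpy as np

def classify(robot_center, r_sphere, human_xy, r_cyl, warn_margin=0.3):
    """Signed clearance between the robot sphere and the human cylinder.

    Negative clearance means the two comfort zones intersect. This is a
    simplified stand-in for the system's exact geometry.
    """
    d_axis = np.linalg.norm(np.asarray(robot_center[:2]) - np.asarray(human_xy))
    clearance = d_axis - r_sphere - r_cyl
    if clearance <= 0:
        return "UNSAFE"    # Figure 5.c: sphere and cylinder collide
    if clearance < warn_margin:
        return "WARNING"   # Figure 5.b: distance decreasing toward zero
    return "SAFE"          # Figure 5.a: the zones are far from each other

print(classify(robot_center=(1.2, 0.5, 0.8), r_sphere=0.6,
               human_xy=(2.5, 0.4), r_cyl=0.4))  # -> SAFE
```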

In the second strategy, two virtual barriers are defined (Figure 6). The first barrier (displayed in green in Figure 6) defines the limit between the safe zone (i.e. the zone where the human can move safely and the robot cannot hit him) and the warning zone (i.e. the zone where contact between human and robot can happen). The second barrier (displayed in red in Figure 6) defines the limit between the warning zone and the error zone (i.e. the zone where the robot works and can easily hit the operator). A minimal classification sketch follows the figure.

Figure 6 - Virtual barriers defined in the second strategy.
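A correspondingly minimal sketch, modeling the two barriers as planes at fixed coordinates (the placement values are placeholders):

```python
def zone(person_x, green_x=3.0, red_x=1.5):
    """Classify the operator position against the two virtual barriers,
    modeled here as planes at fixed x coordinates (in meters)."""
    if person_x > green_x:
        return "SAFE"      # beyond the green barrier
    if person_x > red_x:
        return "WARNING"   # between the green and the red barrier
    return "ERROR"         # inside the robot working zone

print(zone(2.1))  # -> WARNING
```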

The third strategy is a combination of comfort zones and virtual barriers (Figure 7). This strategy gives redundant information: both the human-robot distance and their positions are considered.

Figure 7 - Redundant safety strategy: combination of comfort zones and virtual barriers.

Conclusions

The safety system shows good performance:
  • The robotic device is always recognized;
  • The human operator is recognized when he moves frontally with respect to the TOF cameras. The human recognition must be improved (for example by increasing the number of TOF cameras) for the case in which the human operator moves transversally with respect to the TOF cameras;
  • The safety situations are always identified correctly. The algorithm classifies the safety situations with an average delay of 0.86 ± 0.63 s (k=1). This can be improved using real-time hardware.

Related Publications

Pasinetti, S.; Nuzzi, C.; Lancini, M.; Sansoni, G.; Docchio, F.; Fornaser, A. “Development and characterization of a Safety System for Robotic Cells based on Multiple Time of Flight (TOF) cameras and Point Cloud Analysis“, Workshop on Metrology for Industry 4.0 and IoT, pp. 1-6. 2018

Gesture control of robotic arm using the Kinect Module

The aim of this project is to create a remote control system for a robotic arm based on the Kinect v2 sensor, which tracks the movements of the user's arm without any additional measurement points (marker-less modality).

The Kinect camera acquires a 3D point cloud of the body, and a skeleton representation of the gesture/pose is obtained using the SDK library provided with the Kinect. The skeleton joints are tracked and used to estimate the joint angles.

Figure 1 - The point cloud acquired by the Kinect, and the skeleton. Points A, B, and C are the joints.

Point A is the joint of the wrist, point B is the joint of the elbow and point C is the joint of the shoulder. In three-dimensional space, vectors BA and BC are calculated using the spatial coordinates of points A, B and C, which are taken from the skeleton. Angle α is calculated using the dot product of the two vectors, as sketched below.
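The project software is written in C#; the same dot-product geometry can be sketched in a few lines of NumPy (the joint coordinates in the example are made up):

```python
import numpy as np

def elbow_angle(A, B, C):
    """Angle at the elbow B between vectors BA (toward the wrist A) and
    BC (toward the shoulder C), in degrees."""
    BA = np.asarray(A) - np.asarray(B)
    BC = np.asarray(C) - np.asarray(B)
    cos_a = BA @ BC / (np.linalg.norm(BA) * np.linalg.norm(BC))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical joint coordinates (meters, camera frame):
print(elbow_angle(A=(0.4, 0.1, 1.9), B=(0.3, 0.3, 2.0), C=(0.3, 0.6, 2.1)))
```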

The software has been developed in C# in Visual Studio 2015.

Figure 2 - Elbow angle geometry.

Related Publications

Sarikaya, Y.; Bodini, I.; Pasinetti, S.; Lancini, M.; Docchio, F.; Sansoni, G. “Remote control system for 3D printed robotic arm based on Kinect camera“, Congresso Nazionale delle Misure Elettriche ed Elettroniche GMEE-GMMT. 2017

The Smart Gym Concept

Goals

Subjects with complete spinal cord injury (SCI) experience several limitations in their daily activities. Rehabilitative gait training by means of powered gait orthoses (PGOs) has been shown to decrease the risk of secondary pathologies (e.g. skin injuries, osteoporosis, cardiovascular issues) and can significantly improve the quality of life, provided that the orthoses are used regularly and correctly.
The traditional training is based on three steps: gait trial, data analysis, and gait correction. Gait is monitored using force platforms and dedicated instrumentation applied directly to the patient (EMGs, IMUs, markers). These devices take time to position and demand care from the patient while walking.
Data analysis and gait correction depend entirely on the therapist's experience. The process must be performed in specialized centers and is very time consuming. This leads to a high cost for the community health services.

Costs and efforts needed for longer training sessions could be reduced by the availability of a SMART GYM environment to:

  1. provide real-time feedback directly to the patient about his/her gait performance;
  2. provide information to clinicians and therapists to remotely monitor the patient;
  3. involve the patient in personalized gait exercises depending on his/her performance over time;
  4. allow the patient to train on his/her own, along paths longer than those permitted by the limited room available in gait laboratories.
Figure 1 - The smart gym objectives.

The SMART GYM components

  • BMM (BioMechanical Model): this component evaluates patient posture and motion in real time. The BMM is fed with (i) the spatiotemporal gait parameters, (ii) the kinematics of the lower limbs, and (iii) the kinematics of the upper limbs;
  • IC (Instrumented Crutches): this system is specifically designed to measure the kinematics of the lower limbs and the spatiotemporal gait parameters;
  • OMT (Optical Motion Tracking): this system estimates the upper limb kinematics using a suitable set of Kinect devices;
  • VR (Virtual Reality): this system carries out the self-training of the patient;
  • TD (Therapist Dashboard): a dedicated software service that collects the data from the BMM and allows the therapist to follow the patient's progress remotely and to plan new training patterns.
Figure 2 - The SMART GYM components.
Figure 3 - The therapist dashboard architecture.

Novelty of the Project

  • The IC system, based on “A Vision System that walks with the patient” (challenging, but very promising);
  • The OMT based on multiple Kinect devices is new with respect to state of the art, since most applications are performed using a single device.
  • The combination of the OMT in the VR system will lead to the realization of a novel human machine interface, able to accurately render the posture of the whole body.
  • The paradigm of adaptiveness that underlies the BMM system is novel, as it requires new approaches to the estimation of the gait and posture indices.
  • The smart gym project is highly interdisciplinary: knowledge and expertise from the mechanical and electronic measurement communities will allow the development of sensors (both contact and contact-less), models (both in the biomechanics and in the artificial vision contexts), measurement procedures, virtual reality scenarios and clinical experimentation, which will be fused together and integrated using ICT technologies. This aspect fits well with one of the most significant Horizon 2020 priorities, i.e., transversality.

Impact of the Project

  • The SMART GYM represents a new environment where the patient can practice autonomously and yet under the therapist control;
  • The SMART GYM will reduce community costs;
  • The SMART GYM will be valuable for elderly people;
  • The SMART GYM will be valuable even for healthy people.

A depth-from-defocus (DFD) measurement system using a liquid lens objective for extended depth range

A novel Depth From Defocus (DFD) measurement system has been developed, in which the measurement range is extended using an emerging technology based on liquid lenses. A suitable set of different focal lengths, obtained by properly changing the liquid lens supply voltage, provides multiple camera settings without duplicating the system elements or using moving parts.

A simple and compact setup, with a single camera/illuminator coaxial assembly, is obtained. The measurement is based on an active DFD technique using modulation measurement profilometry (MMP) for the estimation of the contrast at each image point as a function of the depth range.

A suitable combination of multiple contrast curves, each one derived at a specific focal length, is proposed to extend the measurement range and to improve the measurement performance with respect to the state of the art; a simplified sketch of the underlying modulation computation is shown below.
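For reference, with four π/2 phase-shifted fringe images the per-pixel modulation follows the standard four-step phase-shifting formula. The sketch below is a simplified view of the contrast measure, not the paper's exact processing chain:

```python
import numpy as np

def modulation(I1, I2, I3, I4):
    """Per-pixel fringe modulation from four pi/2 phase-shifted images
    (float arrays). In MMP this contrast measure peaks where the surface
    is in focus for the current focal length."""
    return 0.5 * np.sqrt((I4 - I2) ** 2 + (I1 - I3) ** 2)

# One modulation map is computed per liquid-lens focal length; depth is
# then read from the calibrated contrast-vs-depth curves, and combining
# the curves extends the measurement range.
```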

The system measurement error is 0.53 mm over an extended measurement depth range of 135 mm, corresponding to 0.39 % of the depth range: an improved performance with respect to state-of-the-art DFD systems, whose typical values are in the range of 0.7-1.6 % of the depth range.

Related publications

Pasinetti, S.; Bodini, I.; Lancini, M.; Docchio, F.; Sansoni, G. “A Depth From Defocus Measurement System Using a Liquid Lens Objective for Extended Depth Range“, IEEE Transactions on Instrumentation and Measurement, Vol 66, no. 3, pp. 441-450. 2017

A novel optical apparatus for the study of rolling contact wear/fatigue based on a high-speed camera and multiple-source laser illumination

Rolling contact wear/fatigue tests on wheel/rail specimens are important to produce wheels and rails of new materials with improved lifetime and performance, able to work in harsh environments and at high rolling speeds. We have developed a novel non-invasive, all-optical system, based on a high-speed video camera and multiple laser illumination sources, which is able to continuously monitor the dynamics of the specimens used to test wheel and rail materials on a laboratory test bench.

The 3D macro-topography and the angular position of the specimen are measured simultaneously, together with the acquisition of the surface micro-topography, at speeds up to 500 rpm, making use of a fast camera and image processing algorithms. Synthetic indexes for surface micro-topography classification are defined; the 3D macro-topography is measured with a standard uncertainty down to 0.019 mm, and the angular position is measured on a purposely developed analog encoder with a standard uncertainty of 2.9°. Operation at very small camera exposure times enables blur-free images with excellent definition. The system is described with the aid of end-of-cycle specimens, as well as of in-test specimens.

Related Publications

Bodini, I.; Sansoni, G.; Lancini, M.; Pasinetti, S.; Docchio, F. “A novel optical apparatus for the study of rolling contact wear/fatigue based on a high-speed camera and multiple-source laser illumination“, Review of Scientific Instruments, Vol 87. 2016

Bodini, I.; Sansoni, G.; Lancini, M.; Pasinetti, S.; Docchio, F. “Feasibility study of a vision system for on-line monitoring of rolling contact fatigue tests“, Journal of Physics: Conference Series, Vol 778. 2017

Bodini, I.; Petrogalli, C.; Mazzù, A.; Faccoli, M.; Lancini, M.; Pasinetti, S.; Sansoni, G.; Docchio, F. “On-line 2D monitoring of rolling contact fatigue/wear phenomena in dry tests“, Journal of Physics: Conference Series, Vol. 882. 2017

Biomechanical models of human posture based on the dynamics of a disturbance

The objective of this research activity is to analyze and model the behavior of healthy subjects during equilibrium perturbations, to understand how the posture strategies of a human subject change from a static environment to a dynamic situation.

The balance function is a complex mechanism that involves, in particular, the nervous system. Diseases of the nervous system affect the structures involved in equilibrium, with a consequent reduction in gait and balance capacity. Clinicians have different tools to characterize instability. The most used is static posturography, which assesses balance by recording the position of the centre of pressure of a subject standing on a force platform.

The sensitivity of static posturography can be improved by placing the subject in conditions of unstable equilibrium or at the stability limit, thus increasing the oscillation of the centre of pressure. This technique, called dynamic posturography, measures the capability of the subject to maintain balance during different tests performed on a motorized platform. Using vision systems and force platforms, both the kinematic and the dynamic behaviour of the subject can be studied; the centre of pressure computation is sketched below.
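For illustration, the centre of pressure can be obtained from the force-platform signals with the standard plate equations; the sketch below neglects the horizontal force terms and the plate-surface offset for simplicity:

```python
import numpy as np

def centre_of_pressure(Fz, Mx, My):
    """Centre of pressure from the vertical force Fz and the moments
    Mx, My about the plate origin (simplified plate equations)."""
    return -My / Fz, Mx / Fz

def sway_path(cop_xy):
    """Total length of the COP trajectory (N x 2 array of samples),
    one of the indices used to summarize postural sway."""
    return np.sum(np.linalg.norm(np.diff(cop_xy, axis=0), axis=1))
```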

Related Publications

Pasinetti, S.; Lancini, M.; Pasqui, V. “Development of an optomechanical measurement system for dynamic stability analysis“, 2015 6th International Workshop on Advances in Sensors and Interfaces (IWASI), pp. 199-203. 2015

Automotive applications: Reverse engineering of a Ferrari MM

This project was performed to demonstrate the feasibility of using an optical 3D range sensor based on fringe projection (OPL-3D) to acquire the shell of the Ferrari Mille Miglia shown in the figure. The point clouds were merged and the whole mesh was obtained. A scaled copy of the shell was prototyped.

Related Publications