University, companies, and a school to create companies

The C-Lab of the University of Brescia has been officially launched by its organizers and is now taking its first steps. The first task: letting people know what it is, what it will do, with whom, and with what means and objectives. This is the Contamination Lab, a structure created by the University of Brescia, which has provided its initial funding and keeps it under its wing.

Watch the video and read the full article here!

Extrinsic calibration procedure using human skeletonization

Metrology techniques based on Industrial Vision are increasingly used both in research and in industry. These are contactless techniques characterized by the use of optical devices, such as RGB and depth cameras. In particular, multi-camera systems, i.e. systems composed of several cameras, are used to carry out three-dimensional measurements of various types. In addition to mechanical measurements, these systems are often used to monitor the movements of human subjects within a given space, as well as for the reconstruction of three-dimensional objects and shapes.

To perform an accurate measurement, multi-camera systems must be carefully calibrated. Calibration is a fundamental step in Computer Vision applications that involve image-based measurements. In fact, vision system calibration is a key factor in obtaining reliable performance, as it establishes the correspondence between the workspace and the points on the images acquired by the cameras.

Calibrating a multi-camera system means obtaining the geometric parameters related to the distortions, positions and orientations of each camera in the system. It is therefore necessary to identify a mathematical model that takes into account both the internal behaviour of the device and the relationship between the camera and the external world.

The calibration process consists of two parts:

  1. The intrinsic calibration, necessary to model each device in terms of focal length, principal point coordinates and distortion coefficients of the acquired image;
  2. The extrinsic calibration, necessary to determine the position and orientation of each device with respect to an absolute reference system. By means of extrinsic calibration it is possible to obtain the position of the camera reference system with respect to the absolute reference system in terms of rotation and translation.

Traditional calibration methods are based on the use of a calibration target, i.e. a known object, usually planar, carrying calibration points that are unambiguously recognizable and have known coordinates. These targets are automatically recognized by the system in order to calculate the position of the object with respect to the camera and thus obtain the rotation and translation of the camera with respect to the global reference system (Fig. 1).
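
To give an idea of how target-based extrinsic calibration works in practice, here is a minimal sketch using OpenCV's checkerboard detector; the pattern size, square size and file names are illustrative assumptions, not the actual targets or tooling used in the thesis.

```python
# Hypothetical sketch of target-based extrinsic calibration with OpenCV.
import cv2
import numpy as np

PATTERN = (9, 6)     # inner corners of the checkerboard (assumption)
SQUARE = 0.025       # square size in metres (assumption)

# 3D coordinates of the corners in the target reference system (Z = 0 plane)
obj_pts = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

# Intrinsic parameters of this camera, obtained beforehand (illustrative file names)
K = np.load("cam1_K.npy")
dist = np.load("cam1_dist.npy")

img = cv2.imread("cam1_target_view.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(img, PATTERN)
if found:
    # Pose of the target with respect to the camera: rotation (rvec) and translation (tvec)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix
```

Repeating this for every camera that sees the same target placement gives each camera's pose with respect to the target, and therefore the relative poses between the cameras.
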
Fig. 1 - Examples of traditional calibration masters.

Methods based on the recognition of three-dimensional shapes rely on the geometric coherence of a 3D object positioned in the field of view (FoV) of the various cameras. Each device records only a part of the target object; by matching each view with the portion actually recorded by the camera, the relative displacement of the sensor with respect to the object is evaluated, thus obtaining the rotation and translation of the acquisition device itself (Fig. 2).

Fig. 2 - Example of a known object used as the calibration master, in this case a green sphere of a known diameter.
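
As a rough illustration of the shape-based approach, the partial views of a known object acquired by two cameras can be aligned with ICP, which directly returns the relative pose between the sensors. This is only a sketch assuming Open3D is available and that a reasonable initial guess exists (the need for such an initialization is exactly the limitation discussed below); file names are illustrative.

```python
# Illustrative sketch: align the partial view of a known object seen by camera 2
# onto the view seen by camera 1 using point-to-point ICP (Open3D assumed).
import numpy as np
import open3d as o3d

view_cam1 = o3d.io.read_point_cloud("target_cam1.ply")
view_cam2 = o3d.io.read_point_cloud("target_cam2.ply")

result = o3d.pipelines.registration.registration_icp(
    view_cam2, view_cam1,
    0.02,                 # maximum correspondence distance (2 cm, assumption)
    np.eye(4),            # initial guess: ICP needs this to be reasonably close
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

T_cam1_cam2 = result.transformation   # pose of camera 2 expressed in camera 1 coordinates
```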

However, both traditional calibration methods and those based on the recognition of three-dimensional shapes have the disadvantage of being complex and computationally expensive. The former rely on a generally flat calibration target, which limits the positioning of the target itself because, under certain conditions and positions, simultaneous views cannot be obtained. The latter need to converge to a solution, so the operator must provide a good initialization of the 3D recognition parameters, resulting in low reliability and limited automation.

Finally, methods based on the recognition of the human skeleton use as targets the skeleton joints of a human positioned within the FoV of the cameras. Skeleton-based methods therefore represent an evolution of the 3D shape matching methods, since the human is considered as the target object (Fig. 3).

In this thesis work, we exploited skeleton-based methods to create a new calibration method which is (i) easier to use than known methods, since users only need to stand in front of a camera and the system takes care of everything, and (ii) faster and computationally inexpensive.

Fig. 3 - Example of a skeleton obtained by connecting the joints acquired from a Kinect camera (orange dots) and from a calibration set up based on multiple cameras (red dots).

STEP 1: Measurement set-up

The proposed set-up is very simple, as it is composed of a pair of Kinect v2 cameras intrinsically calibrated using known procedures, a step required to correctly align the color information with the depth information acquired by each camera.

The cameras are placed so as to acquire the measurement area in a suitable way. After the calibration images have been taken, they are processed by a skeletonization algorithm based on OpenPose, which also takes into account the depth information to correctly place the skeleton in the 3D world [1]. Joint coordinates are therefore extracted for every pair of color-depth images correctly aligned both spatially and temporally.

To obtain accurate joint positions, we then run an optimization procedure that correctly places the joints on the human figure according to an error minimization procedure written in MATLAB.
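
The core of the skeleton-based approach can be sketched as follows: given the 3D positions of the same joints seen simultaneously by two cameras, the rigid transform between the two camera frames is the least-squares rotation and translation between the two point sets. This is a minimal numpy sketch of that step, not the actual MATLAB implementation.

```python
# Minimal sketch: rigid transform between two corresponding sets of 3D joints.
import numpy as np

def rigid_transform(joints_cam2: np.ndarray, joints_cam1: np.ndarray) -> np.ndarray:
    """Return the 4x4 matrix mapping camera-2 coordinates into camera-1 coordinates.

    Both inputs are N x 3 arrays with the same joint ordering and the same frames.
    """
    c2, c1 = joints_cam2.mean(axis=0), joints_cam1.mean(axis=0)
    H = (joints_cam2 - c2).T @ (joints_cam1 - c1)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c1 - R @ c2
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

Accumulating joints over several frames (and several poses of the person) makes the estimate more robust, which is what the optimization step above refines.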

Fig. 4 shows the abovementioned steps, while Fig. 5 shows a detail of the skeletonization algorithm used.

Fig. 4 - Steps required for the project. First, we acquire both RGB and depth frames from every Kinect; second, we perform the skeletonization of the human using the skeletonization algorithm of choice; finally, we use it to obtain the extrinsic calibration matrix for each Kinect, in order to reproject them to a known reference system.
Fig. 5 - Skeletonization procedure used by the algorithm.

STEP 2: Validation of the proposed procedure

To validate the system we used a mannequin kept still in 3 different positions (2 m, 3.5 m, 5 m), showing both its front and its back to the cameras. In fact, skeletonization procedures are usually more robust when humans are in a frontal position, since the face keypoints are also visible. The validation positions are shown in Fig. 6, while the joint positions calculated by the procedure are shown in Figs. 7 and 8.

Fig. 6 - Validation positions of the mannequin. Only a single camera was used in this phase.
Fig. 7 - Joints calculated by the system when the mannequin is positioned in frontal position at (a) 2 m, (b) 3.5 m and (c) 5 m.
Fig. 8 - Joints calculated by the system when the mannequin is positioned in back position at (a) 2 m, (b) 3.5 m and (c) 5 m.

STEP 3: Calibration experiments

We tested the system in 3 configurations shown in Fig. 9: (a) when the two cameras are placed in front of the operator, one next to the other; (b) when a camera is positioned in front of the operator and the other is positioned laterally with an angle between them of 90°; (c) when the two cameras are positioned with an angle of 180° between them (one in front of the other, operator in the middle).

We first obtained the calibration matrix using our procedure, where the target is the human subject placed in the middle of the scene (at position zero). Then, we compared the calibration matrix obtained in this way with the one obtained from another algorithm developed by the University of Trento [2]. This algorithm is based on the recognition of a known 3D object, in this case the green sphere shown in Fig. 2.

The calibration obtained with both methods is evaluated using a 3D object, a cylinder of known shape, placed in 7 different positions in the scene as shown in Fig. 10. The exact same positions have been used for each configuration.
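
The comparison between the two calibrations can be sketched as follows: the cylinder cloud acquired by the second Kinect is brought into the common reference frame with each 4x4 extrinsic matrix, and the residual distance to the cloud acquired by the first Kinect measures the quality of the alignment. This is an illustrative sketch (numpy and scipy assumed, variable names hypothetical), not the actual PolyWorks workflow.

```python
# Sketch of the evaluation idea: overlap error after applying an extrinsic matrix.
import numpy as np
from scipy.spatial import cKDTree

def apply_extrinsics(T: np.ndarray, cloud: np.ndarray) -> np.ndarray:
    """Transform an N x 3 point cloud with a 4x4 roto-translation matrix."""
    return (T[:3, :3] @ cloud.T).T + T[:3, 3]

def overlap_rms(cloud_ref: np.ndarray, cloud_moved: np.ndarray) -> float:
    """RMS nearest-neighbour distance between the transformed cloud and the reference."""
    d, _ = cKDTree(cloud_ref).query(cloud_moved)
    return float(np.sqrt(np.mean(d ** 2)))

# rms_skeleton = overlap_rms(cyl_kinect1, apply_extrinsics(T_skeleton, cyl_kinect2))
# rms_sphere   = overlap_rms(cyl_kinect1, apply_extrinsics(T_sphere,   cyl_kinect2))
```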

Fig. 9 - Configurations used for the calibration experiments.
Fig. 10 - Cylinder positions in the scene.

Finally, we compared the results by aligning the point clouds obtained with the two calibration matrices. The results have been processed in PolyWorks for better visualization and are shown in the presentation below. Feel free to download it to learn more about the project and to view the results!

Gait Phases analysis by means of wireless instrumented crutches and Decision Trees

In recent years there has been a significant increase in the development of walking aid systems and in particular of robotic exoskeletons for the lower limbs, used to make up for the loss of walking ability following injuries to the spine (Fig. 1).

However, in the initial training phase the patient is still forced to use the exoskeleton in specialized laboratories for assisted walking, usually equipped with vision systems, accelerometers, force platforms and other measurement systems to monitor the patient's training (Fig. 2).

Fig. 1 - Example of a patient wearing the ReWalk exoskeleton. The crutches are needed as an aid for patients, since they tend to tilt forward when using the exoskeleton.

To overcome the limitations given both by the measuring environment and by the instruments themselves, the University of Brescia joined forces with other research laboratories and created a pair of instrumented wireless crutches able to assess the loads on the upper limbs through a suitable biomechanical model that measures the load exchanged between the ground and the crutch.

Besides greatly helping the physiotherapist to assess walking quality and reduce the risk of injury to the upper limbs caused by the use of the exoskeleton, the real advantage of the developed device lies in the fact that it is totally wireless, allowing its use in outdoor environments where the user can feel more comfortable. It is in fact known in the literature that the behavior of a subject during a walk varies depending on the environment in which it is performed, both because of space limitations and because of the range of the measurement instrumentation, which constrains the trajectory the patient performs in a laboratory compared to the one performed outdoors without strict limitations.

It is also important to relate and evaluate the measured quantities by averaging them on a percentage basis associated with the phases of the gait cycle instead of time, in order to compare them with the physiological behavior of the subject, which is usually expressed as a function of the gait phases (swing and stance, Fig. 3).

Fig. 3 - Detail of a gait cycle: the stance phase is when the foot is in contact with the ground and supports the body, the swing phase is when the foot leaves the ground and swings forward.

STEP 1: Instrumented wireless crutches design

A first prototype of instrumented crutches had already been developed to measure upper limb forces. In this project, however, we wanted to create a new prototype able to measure and predict the gait phases without the need for an indoor laboratory. To do this, we mounted two Raspberry Pi boards and two PicoFlexx Time-of-Flight cameras on the crutches, pointed at the feet during walking (Fig. 4 and 5).
Fig. 4 - Detail of the proposed instrumented crutches. The two Raspberry Pi boards send the acquired images wirelessly to an Ubuntu PC.
Fig. 5 - (1) PicoFlexx ToF cameras, (2) Raspberry Pi boards, (3) powerbanks for the Raspberry Pi boards.

STEP 2: Supervised Machine Learning algorithm for gait phases prediction

We chose a supervised algorithm (Decision Trees) to predict the gait phases from the depth images of the feet acquired during walking.
Each image was processed offline to (i) perform a distance filtering, (ii) fit a ground plane to the measured points and (iii) calculate the distance between the ground plane and the feet. The latter values are used as features for the algorithm (Fig. 6 and 7).
Fig. 6 - Detail of the supervised algorithm of choice: a set of features are selected and used for the training of the model, which is then used to predict the gait phases.
Fig. 7 - Detail of the processing of the images needed to extract the features used by the model.
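
As an illustration of steps (ii) and (iii) and of the training stage, the sketch below fits a ground plane to candidate ground points, uses the feet-to-plane distances as features and trains a Decision Tree on manually labelled stance/swing frames. scikit-learn and the file names are assumptions; the thesis processing was carried out offline with its own pipeline.

```python
# Illustrative sketch: ground-plane fitting, distance features and Decision Tree training.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_ground_plane(points: np.ndarray):
    """Least-squares plane z = a*x + b*y + c through candidate ground points (N x 3)."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return a, b, c

def foot_plane_distance(foot: np.ndarray, a: float, b: float, c: float) -> float:
    """Perpendicular distance of a 3D foot point from the fitted plane."""
    return abs(a * foot[0] + b * foot[1] + c - foot[2]) / np.sqrt(a * a + b * b + 1.0)

# X: one row per frame, e.g. [left_foot_distance, right_foot_distance]
# y: labelled gait phase for that frame (0 = stance, 1 = swing)
X = np.load("features.npy")
y = np.load("labels.npy")
clf = DecisionTreeClassifier(max_depth=5).fit(X, y)
print(clf.predict(X[:1]))   # predicted phase for a new frame
```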

STEP 3: Experiments

The proposed system has been tested on 3 healthy subjects. Each subject was tested on an indoor walk and on an outdoor walk, different for every subject.

For each subject we trained a person-specific model. This choice was driven by the fact that every person walks in their own way, so it is best to capture each subject's specific style and use it to monitor their performance while walking.

To learn more about the project and to view the results, feel free to download the presentation below!

Performance Benchmark between an Embedded GPU and an FPGA

Nowadays it is impossible to think of a device without considering it “intelligent” to some extent. If ten years ago “intelligent” systems were carefully designed and could only be used in specific fields (e.g. industry, defense or research), today smart sensors as big as a button are everywhere, from cellphones to refrigerators, from vacuum cleaners to industrial machines.
If you think that’s it, you’re wrong: the future is tiny.

Embedded systems are more and more needed in industry as well, because they are capable of performing complex computations on board, sometimes comparable to the ones carried out by standard PCs. Small, portable, flexible and smart: it is not hard to understand why they are increasingly used!

A plethora of embedded systems is available on the market, according to the needs of the client. One important characteristic to check is whether the embedded platform of choice can be “smart”, which nowadays means whether it is able to run a Deep Learning model with the same performance obtained when running it on a PC. The reason is that Deep Learning models require a lot of resources to perform well, and running them on CPUs usually means losing accuracy and speed compared to running them on GPUs.

To address this issue, some companies started to produce embedded platforms with GPUs on board. While the architecture of these systems still differs from that of PCs, they are a considerable improvement. Another type of embedded system is the FPGA: these platforms led the market for a while before embedded GPUs became common; they are programmed at a low level and for that reason usually perform very well.

In this thesis work, conducted in collaboration with Tattile, we performed a benchmark between the Nvidia Jetson TX2 embedded platform and the Xilinx FPGA Evaluation Board ZCU104.

STEP 1: Determine the BASELINE model

To perform our benchmark we selected an example model. We kept it simple by choosing the well-known VGG Net model, which we trained on a host machine equipped with a GPU on the standard CIFAR-10 dataset. This dataset is composed of 10 classes of different objects (dogs, cats, airplanes, cars…) with a standard image size of 32×32 px.

The network was trained in Caffe for 40,000 iterations, reaching an average accuracy of 86.1%. Note that the model obtained after this step is represented in 32-bit floating point (FP32), which allows a refined and accurate representation of weights and activations.

Fig. 3 - CIFAR-10 dataset examples for each class.

STEP 2: Performances on Nvidia Jetson TX2

This board is equipped with a GPU, thus allowing a floating point representation. Even though FP32 is supported, we chose to perform a quantization procedure to reduce the representation of the trained model to FP16. This choice was driven by the fact that the literature reports the best performance with this representation.

We used TensorRT, which is natively installed on the board, to perform the quantization procedure. After this process the model obtained an average accuracy of 85.8%.
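
For reference, this is roughly what an FP16 build looks like with the TensorRT Python API; it is a hedged sketch that assumes a recent TensorRT version and an ONNX export of the network (the thesis used the Caffe workflow available on the Jetson), and the file names are illustrative.

```python
# Hypothetical sketch: building an FP16 TensorRT engine from an ONNX model.
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)

def build_fp16_engine(onnx_path: str, engine_path: str) -> None:
    builder = trt.Builder(LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)     # allow FP16 kernels during optimization

    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)

build_fp16_engine("vgg_cifar10.onnx", "vgg_cifar10_fp16.engine")
```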

STEP 3: Performances on Xilinx FPGA Zynq ZCU104

The FPGA does not support floating point representations. The toolbox used to modify the original model and adapt it to the board is a proprietary one, called DNNDK.

Two configurations were tested. In the first, the original model was quantized from FP32 to INT8, thus losing the floating point representation and drastically reducing the size of the network to a few MB. The average accuracy obtained in this case is 86.6%, slightly better than the baseline, probably because the coarser representation turned some borderline predictions into correct ones.
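
The idea behind the INT8 representation can be shown with a few lines of numpy: each floating point tensor is mapped to 8-bit integers plus a single scale factor. This is only a conceptual illustration of symmetric per-tensor quantization, not the DNNDK procedure.

```python
# Conceptual illustration of symmetric INT8 quantization of a weight tensor.
import numpy as np

def quantize_int8(weights: np.ndarray):
    scale = np.abs(weights).max() / 127.0                   # map the largest weight to +/-127
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(64, 3, 3, 3).astype(np.float32)         # a fake convolution kernel
q, s = quantize_int8(w)
print("max reconstruction error:", np.abs(dequantize(q, s) - w).max())
```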

The second configuration applies a pruning process after the quantization procedure, thus removing redundant parts of the network. In this case the average accuracy reached is 84.2%, as expected after the combination of both processes.

STEP 4: Performance benchmark of the boards

Finally we compared the performances obtained by the two boards when running inference in real-time. The results can be found in the presentation below: if you’re interested, feel free to download it!

The thesis document is also available on request.

Real-Time robot command system based on hand gesture recognition

With the Industry 4.0 paradigm, the industrial world has faced a technological revolution. Manufacturing environments in particular are required to be smart and to integrate automatic processes and robots into the production plant. To achieve this smart manufacturing, it is necessary to re-think the production process in order to create a true collaboration between human operators and robots. Robotic cells usually have safety cages to protect the operators from any harm that direct contact can cause, thus limiting the interaction between the two. Only collaborative robots can really share the same workspace as humans without risks, thanks to their design. They pose another problem, though: in order not to endanger human safety, they must operate at low velocities and forces, so their operations are slow, roughly comparable to those of a human operator. In practice, collaborative robots hardly find a place in a real industrial environment with high production rates.

In this context, this thesis work presents an innovative command system to be used in a collaborative workstation, allowing humans to work alongside robots in a more natural and straightforward way and reducing the time needed to properly command the robot on the fly. Recent techniques of Computer Vision, Image Processing and Deep Learning are used to create the intelligence behind the system, which is in charge of properly recognizing the gestures performed by the operator in real-time.

Step 1: Creation of the gesture recognition system

A number of suitable algorithms and models are available in the literature for this purpose. An Object Detector in particular has been chosen for the job: the Faster Region-based Convolutional Neural Network, or Faster R-CNN, developed in MATLAB.

Object Detectors are especially suited to the task of gesture recognition because they are capable of (i) finding the objects in the image and (ii) classifying them, thus recognizing which objects they are. Figure 1 shows this concept: the object “number three” is shown in the figure, and the algorithm has to find it.

Fig. 1 - The process undergone by Object Detectors in general. Two networks elaborate the image in different steps: first the region proposals are extracted, i.e. the positions of the objects of interest found in the image. Then, the proposals are evaluated by the classification network, which finally outputs both the position of the object (the bounding box) and the name of the object class.
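
To make the detect-and-classify loop concrete, here is a rough Python sketch using torchvision's Faster R-CNN; note that the thesis model was trained in MATLAB, so this is only an illustrative equivalent (a recent torchvision and the file name are assumptions).

```python
# Illustrative Faster R-CNN inference loop (torchvision stand-in for the MATLAB model).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()   # pretrained weights, for illustration

img = to_tensor(Image.open("frame.png").convert("RGB"))
with torch.no_grad():
    detections = model([img])[0]

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.8:                                          # keep confident detections only
        print(int(label), [round(v, 1) for v in box.tolist()])  # class id and bounding box
```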

After a careful selection of gestures, purposely acquired by means of different mobile phones, and a preliminary study to understand whether the model was able to differentiate between the left and right hand and, at the same time, between the palm and the back of the hand, the final gestures proposed and their meaning in the control system are shown in Fig. 2.

Fig. 2 - Definitive gesture commands used in the command system.

Step 2: Creation of the command system

The proposed command system is structured as in Fig. 3: the images are acquired in real-time by a Kinect v2 camera connected to the master PC and processed in MATLAB in order to obtain the gesture commands frame by frame. The commands are then sent to the ROS node in charge of translating the numerical command into an operation for the robot. It is the ROS node, by means of a purposely developed driver for the robot used, that sends the movement positions to the robot controller. Finally, the robot receives the ROS packets of the desired trajectory and executes the movements. Fig. 4 shows how the data are sent to the robot.

Fig. 3 - Overview of the complete system, composed of the acquisition system, the elaboration system and the actuator system.
Fig. 4 - The data are sent to the "PUB_Joint" ROS topic, elaborated by the Robox Driver which uses ROS Industrial and finally sent to the controller to move the robot.
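
The ROS side of Fig. 4 can be sketched as a simple publisher on the "PUB_Joint" topic. This is a hedged sketch only: the message type (JointState), the node name and the source of the joint targets are assumptions, since the actual driver developed for the Robox controller is not shown here.

```python
# Hypothetical sketch of the command publisher towards the robot driver.
import rospy
from sensor_msgs.msg import JointState

def next_target_from_gesture():
    # Placeholder: in the real system this comes from the MATLAB gesture interface.
    return [0.0, -1.57, 1.57, 0.0, 0.0, 0.0]

rospy.init_node("gesture_command_bridge")
pub = rospy.Publisher("PUB_Joint", JointState, queue_size=10)
rate = rospy.Rate(10)                      # 10 Hz command stream (assumption)

while not rospy.is_shutdown():
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.name = ["joint_%d" % i for i in range(6)]
    msg.position = next_target_from_gesture()
    pub.publish(msg)
    rate.sleep()
```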

Four modalities have been developed for the interface, by means of a State Machine developed in MATLAB (a rough sketch of the logic follows the list):

  1. Points definition state
  2. Collaborative operation state
  3. Loop operation state
  4. Jog state
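
A rough sketch of how such a state machine can be organized is shown below; the gesture names and the transitions are hypothetical, only the four states come from the thesis.

```python
# Hypothetical sketch of the four-state interface logic.
STATES = ("POINTS_DEFINITION", "COLLABORATIVE", "LOOP", "JOG")

TRANSITIONS = {                     # (current state, recognised gesture) -> next state
    ("IDLE", "open_palm"): "POINTS_DEFINITION",
    ("POINTS_DEFINITION", "fist"): "COLLABORATIVE",
    ("COLLABORATIVE", "two_fingers"): "LOOP",
    ("LOOP", "thumb_up"): "JOG",
}

def step(state: str, gesture: str) -> str:
    """Move to the next state, or stay put if the gesture is not mapped."""
    return TRANSITIONS.get((state, gesture), state)

state = "IDLE"
for gesture in ["open_palm", "fist", "unknown", "two_fingers"]:
    state = step(state, gesture)
    print(gesture, "->", state)
```
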
Below you can see the initialization of the system, needed to correctly address the light conditions of the working area and the areas where the hands will most probably be found, according to the barycenter calibration performed by the initialization procedure.
 
If you are interested in the project, download the presentation by clicking the button below. The thesis document is also available on request.

Related Publications

Nuzzi, C.; Pasinetti, S.; Lancini, M.; Docchio, M.; Sansoni, G. “Deep Learning based Machine Vision: first steps towards a hand gesture recognition set up for Collaborative Robots“, Workshop on Metrology for Industry 4.0 and IoT, pp. 28-33. 2018

Nuzzi, C.; Pasinetti, S.; Lancini, M.; Docchio, M.; Sansoni, G. “Deep learning-based hand gesture recognition for collaborative robots“, IEEE Instrumentation & Measurement Magazine 22 (2), pp. 44-51. 2019

Best Paper Award at IEEE I2MTC 2019

Congratulations to our team for winning the Best Paper Award for the paper “Qualification of Additive Manufactured Trabecular Structures Using a Multi-Instrumental Approach”, presented at the 2019 IEEE International Instrumentation and Measurement Technology Conference!

This research is part of a Progetto di Ricerca di Interesse Nazionale (PRIN) carried out in collaboration with the University of Brescia, the University of Perugia, the Polytechnic University of Marche and the University of Messina.

To read more about the project, check out this page!

Optical analysis of Trabecular structures

Rapid prototyping, also known as 3D printing or Additive Manufacturing, is a process that allows the creation of 3D objects by depositing material layer by layer. The materials used vary: plastic polymers, metals, ceramics or glass, depending on the principle used by the machine for prototyping, such as the deposition of molten material or the laser sintering of powder particles of the material itself by means of high-power lasers. This technique allows the creation of objects of extreme complexity, including the so-called “trabecular structures”, which have very advantageous mechanical and physical properties (Fig. 1). They are in fact lightweight and at the same time very resistant, and these characteristics have led them, in recent years, to be increasingly studied and used in application areas such as the biomedical and automotive research fields.

Despite the high flexibility of prototyping machines, the complexity of these structures often generates differences between the designed structure and the final result of 3D printing. It is therefore necessary to design and build measuring benches that can detect such differences. The study of these differences is the subject of a Progetto di Ricerca di Interesse Nazionale (PRIN Prot. 2015BNWJZT), which provides a multi-competence and multidisciplinary approach, through the collaboration of various universities: the University of Brescia, the University of Perugia, the Polytechnic University of Marche and the University of Messina.

The aim of this thesis was to study possible measurement set-ups involving both 2D and 3D vision. The solutions identified for the surface dimensional characterization of the prototyped object (shown in Fig. 2) are:

  1. a 3D measurement set-up with a light profile sensor;
  2. a 2D measurement set-up with cameras, telecentric optics and collimated backlight.

In addition, a dimensional survey of the internal structure of the object was carried out thanks to a tomographic scan of the structure made by a selected company.

Fig. 1 - Example of a Trabecular Structure.
Fig. 2 - The prototyped object studied in this thesis.

The 3D measurement set-up

The experimental set-up involved a WENGLOR MLWL132 light profile sensor. The object was mounted on a micrometric slide to better perform the acquisitions (Fig. 3).
The point cloud is acquired by the sensor using custom-made LabVIEW software. The whole object is scanned and the point cloud is then analyzed using PolyWorks. Fig. 4 shows an example of an acquisition, while Fig. 5 shows the errors between the obtained point cloud and the CAD model of the object.
Fig. 3 - 3D experimental set-up.
Fig. 4 - Example of acquisition using the light profile sensor.
Fig. 5 - Errors between the measured point cloud and the CAD model.
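
The essence of the comparison in Fig. 5 is a point-to-reference deviation analysis, which can be sketched as follows (numpy/scipy assumed, file names illustrative; the reference is a dense point sampling of the CAD surface, while the thesis used PolyWorks for this step).

```python
# Illustrative sketch: deviation of the measured point cloud from the CAD reference.
import numpy as np
from scipy.spatial import cKDTree

measured = np.loadtxt("scan_points.xyz")      # N x 3 points from the light profile sensor
reference = np.loadtxt("cad_points.xyz")      # dense sampling of the CAD surface

dist, _ = cKDTree(reference).query(measured)  # per-point deviation from the CAD
print("mean deviation  : %.3f mm" % dist.mean())
print("95th percentile : %.3f mm" % np.percentile(dist, 95))
```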

The 2D measurement set-up

The experimental set-up involving telecentric lenses is shown in Fig. 6. Telecentric lenses are fundamental to avoid camera distortion, especially when high resolution is required for small-dimension measurements. The camera used is an iDS UI-1460SE, the telecentric lens is an OPTO-ENGINEERING TC23036 and the backlight illuminator is an OPTO-ENGINEERING LTCLHP036-R (red light). In this set-up a spot was also dedicated to the calibration master required for the calibration of the camera.
The acquisitions obtained differ according to the use of the backlight illuminator. Fig. 7, 8 and 9 show some examples of the acquisitions conducted.
Finally, the measured object was compared to the tomography obtained from a selected company, resulting in the error map in Fig. 10.
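
A minimal sketch of the kind of 2D measurement enabled by this set-up is given below: the backlit silhouette is segmented, its contour extracted and pixel dimensions converted to millimetres with the scale factor provided by the camera calibration. The scale value and file name are illustrative assumptions.

```python
# Illustrative sketch: measuring a backlit silhouette acquired with telecentric optics.
import cv2

MM_PER_PX = 0.0055                       # scale factor from the calibration master (assumption)

img = cv2.imread("backlit_view.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
silhouette = max(contours, key=cv2.contourArea)   # largest blob = the object silhouette
x, y, w, h = cv2.boundingRect(silhouette)
print("width  = %.3f mm" % (w * MM_PER_PX))
print("height = %.3f mm" % (h * MM_PER_PX))
```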

Fig. 6 - 2D experimental set-up.
Fig. 10 - Error map obtained comparing the measured object to the tomography.

If you are interested in the project and want to read more about the procedure carried out in this thesis work, as well as the resulting measurments, download the presentation below.

OpenPTrack software metrological evaluation

Smart tracking systems are nowadays a necessity in different fields, especially the industrial one. A very interesting and successful open source software package, called OpenPTrack, has been developed by the University of Padua. The software, based on ROS (Robot Operating System), is capable of tracking humans in the scene, leveraging well-known tracking algorithms that use 3D point cloud information, and also objects, leveraging the colour information as well.

Amazed by the capabilities of the software, we decided to study its performance further. This is the aim of this thesis project: to carefully characterize the measurement performance of OpenPTrack both on humans and on objects, using a set of Kinect v2 sensors.

Step 1: Calibration of the sensors

It is of utmost importance to correctly calibrate the sensors when performing a multi-sensor acquisition. 

Two types of calibration are necessary: (i) the intrinsic calibration, to align the colour (or grayscale/IR, as in the case of OpenPTrack) information with the depth information (Fig. 1), and (ii) the extrinsic calibration, to align the different views obtained by the different cameras to a common reference system (Fig. 2).
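
The extrinsic chain of Fig. 2 boils down to a composition of roto-translations, which a short numeric example can clarify (all values below are made up for illustration).

```python
# Numeric illustration of composing extrinsic transforms: K2 -> K1 -> world (Wd).
import numpy as np

def pose(R: np.ndarray, t) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

T_W_K1 = pose(np.eye(3), [0.0, 0.0, 1.0])      # pose of Kinect 1 in the world frame (example)
theta = np.deg2rad(30)
R_K1_K2 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
T_K1_K2 = pose(R_K1_K2, [1.5, 0.0, 0.0])       # pose of Kinect 2 relative to Kinect 1 (example)

T_W_K2 = T_W_K1 @ T_K1_K2                      # Kinect 2 referred to the absolute system Wd
p_K2 = np.array([0.2, 0.0, 2.0, 1.0])          # a point seen by Kinect 2 (homogeneous)
print(T_W_K2 @ p_K2)                           # the same point expressed in Wd
```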

The software provides suitable tools to perform these steps, and also a tool to further refine the extrinsic calibration obtained (Fig. 3). In this case, a human operator has to walk around the scene: their trajectory is acquired by every sensor and, at the end of the recording, the procedure aligns the trajectories more precisely.

Each of these calibration processes is completely automatic and performed by the software.

Fig. 1 - Examples of intrinsic calibration images. (a) RGB HD image, (b) IR image, (c) synchronized calibration of the RGB and IR streams.
Fig. 2 - Scheme of an extrinsic calibration procedure. The second camera K2 must be referred to the first one, K1; finally, the two must be referred to an absolute reference system called Wd.
Fig. 3 - Examples of the calibration refinement. (a) Trajectories obtained by two Kinect not refined, (b) trajectories correctly aligned after the refinement procedure.

Step 2: Definition of the measurement area

Two Kinect v2 sensors were used for the project, mounted on tripods and placed so as to acquire the largest FoV possible (Fig. 4). A total of 31 positions were defined in the area: these are the spots where the targets to be measured were placed in the two experiments, in order to cover all the available FoV. Note that not every spot lies in a region acquired by both Kinects, and that there are 3 performance regions highlighted in the figure: the overall best-performing one (light green) and the single-camera best-performing ones, where only one camera (the closer one) sees the target with good performance.

Fig. 4 - FoV acquired by the two Kinects. The numbers represent the different acquisition positions (31 in total) where the targets were placed in order to perform a stable acquisition and characterization of the measurement.

Step 3: Evaluation of Human Detection Algorithms

To evaluate the detection algorithms of OpenPTrack, a mannequin firmly placed on the different spots was used as the measuring target. Its orientation differs for every acquisition (N, S, W, E), in order to better understand whether the algorithm is able to correctly detect the barycenter of the mannequin even when it is rotated (Fig. 5).

The performances were evaluated using 4 parameters (a small computation sketch follows the list):

  • MOTA (Multiple Object Tracking Accuracy), to measure if the algorithm was able to detect the human in the scene;
  • MOTP (Multiple Object Tracking Precision), to measure the accuracy of the barycenter estimation relative to the human figure;
  • (Ex, Ey, Ez), the mean error between the estimated barycenter position and the known reference barycenter position, for every spatial dimension (x, y, z);
  • (Sx, Sy, Sz), the error variability, measuring the repeatability of the measurements for every spatial dimension (x, y, z).
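
As referenced above, here is a small sketch of how the per-axis statistics can be computed from the estimated and reference barycenter positions (arrays and file names are illustrative).

```python
# Illustrative computation of the per-axis error statistics.
import numpy as np

estimated = np.load("barycenters_estimated.npy")   # N x 3, one row per acquisition
reference = np.load("barycenters_reference.npy")   # N x 3, known positions

err = estimated - reference
Ex, Ey, Ez = err.mean(axis=0)            # mean error per spatial dimension
Sx, Sy, Sz = err.std(axis=0, ddof=1)     # error variability (repeatability)
print(Ex, Ey, Ez, Sx, Sy, Sz)
```
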
Fig. 5 - Different orientations of the mannequin in the same spot.

Step 4: Evaluation of Object Detection algorithms

A different target has been used to evaluate the performance of the Object Detection algorithms, shown in Fig. 6: it is a structure on which three spheres have been positioned on top of three rigid arms. The spheres are of different colours (R, G, B), to estimate how much the algorithm is colour-dependent, and of different dimensions (200 mm, 160 mm, 100 mm), to estimate how much the algorithm is dimension-dependent. In this case, to estimate the performance of the algorithm, the relative position between the spheres has been used as the reference measure (Fig. 7).

The performances were evaluated using the same parameters used for the Human Detection algorithm, but referred to the tracked object instead. 

Fig. 6 - The different targets: in the first three images the spheres have different colours and, within each image, the same dimension (200 mm, 160 mm and 100 mm respectively), while in the last image the spheres are all of the same colour (green) but of different dimensions.
Fig. 7 - Example of the reference positions of the spheres used.

If you want to know more about the project and the results obtained, please download the master thesis below.

The Lab at EMVA 2018

The Laboratory staff and the students currently involved in thesis projects with us participated in the 2018 European Machine Vision Forum, held in Bologna from the 5th to the 7th of September 2018. The theme of this edition was “Vision for Industry 4.0 and beyond”.

Below you can find the two posters that have been presented at the conference.