Despite the uncertain weather conditions, the TRAP-Lab and BRaIN teams had an intense day of testing. The new motion capture system, installed experimentally at the TRAP CONCAVERDE field in Lonato del Garda, makes it possible to measure the kinematics and dynamics of the athlete both before and during the shooting action. This exploratory test will let us evaluate the system’s reliability and the accuracy of its measurements.
This month we presented our work on Machine Learning for assessing Rolling Contact Fatigue damage progression at two different conferences: IMEKO (International Measurement Confederation) and ISMA (International Noise and Vibration Engineering Conference).
At the IMEKO in Belfast we presented the results of our project focused on vibrations as a tool to characterize materials.
At the ISMA in Leuven we introduced the preliminary results of adding a high-speed camera to our measurement system, to monitor the specimen surface in real time.
For more information on this project: email@example.com
In the first year of the Master’s course in “rehabilitation” within Human Movement Sciences at the University of Groningen, students are invited to experience first-hand the tools that will be central to their studies, in a fun yet pragmatic way, so as to understand the difficulties faced daily by the people who use these tools, also in sports.
Our students Paolo and Ridi, who are working there, had the pleasure of taking part and getting to know this context in a conscientious yet recreational way.
First-hand experience of what you are working on is important: whenever you design a measurement system, it is best to understand the subject deeply beforehand.
The Laboratory staff and the Students currently involved in Thesis Projects with us participated in the 2018 European Machine Vision Forum, held in Bologna from the 5th to the 7th of September 2018. The theme of this edition was “Vision for Industry 4.0 and beyond”.
Below you can find the two posters that have been presented at the conference.
During the summer our staff, including our Bachelor Student Marcello, went to Madrid to work on a joint project on exoskeleton evaluation with the Cajal Institute. Here you can find some photos of our preliminary tests; more information on the project will be posted in the coming days.
Communication and collaboration between humans and robots is one of the main principles of the fourth industrial revolution (Industry 4.0). In the coming years, robots and humans will become co-workers, sharing the same working space and helping each other. A robot intended for collaboration with humans has to be equipped with safety components different from the standard ones (cages, laser scanners, etc.).
In this project, a safety system for applications of human-robot collaboration has been developed. The system is able to:
recognize and track the robot;
recognize and track the human operator;
measure the distance between them;
discriminate between safe and unsafe situations.
The safety system is based on two Microsoft Kinect v2 Time-Of-Flight (TOF) cameras. Each TOF camera measures the 3D position of every point in the scene by evaluating the time of flight of a light signal emitted by the camera and reflected by the point. The cameras are placed on the safety cage of a robotic cell (Figure 1) so that their fields of view cover the entire robotic working space. The 3D point clouds acquired by the TOF cameras are aligned to a common reference system using a suitable calibration procedure.
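The alignment step above amounts to estimating a rigid transform (rotation plus translation) between corresponding 3D points seen by the two cameras, for instance markers on a shared calibration target. The SVD-based sketch below is only a minimal illustration of that idea, not the calibration procedure actually used in the project; the function name and the test points are ours.

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate R and t so that R @ src_i + t ~= dst_i (least squares).

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. the same
    calibration-target markers measured by the two TOF cameras.
    Uses the SVD-based (Kabsch) closed-form solution.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection), if one is returned
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Once R and t are known for each camera with respect to the common frame, every acquired point cloud can be mapped into that frame before the distance computations described below.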
Robot and human detection are performed by analyzing the RGB-D images (Figure 2) acquired by the cameras. These images contain both the RGB and the depth information for each point in the scene.
Robot recognition and tracking (Figure 3) is based on the KLT (Kanade-Lucas-Tomasi) algorithm, which uses the RGB data to detect the moving elements in a sequence of images. The algorithm analyzes the RGB-D images and finds feature points such as edges and corners (the green crosses in figure 3). The 3D position of the robot (the red triangle in figure 3) is finally computed by averaging the 3D positions of the feature points.
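The last step of this pipeline, turning tracked feature pixels plus their depth values into a single 3D robot position, can be sketched as a pinhole back-projection followed by averaging. The intrinsics below are illustrative placeholders, not the Kinect v2 factory calibration, and the helper names are ours.

```python
import numpy as np

# Assumed pinhole intrinsics (illustrative values, not the real calibration)
FX, FY = 365.0, 365.0   # focal lengths in pixels
CX, CY = 256.0, 212.0   # principal point in pixels

def backproject(u, v, z):
    """Map a pixel (u, v) with depth z [m] to a 3D camera-frame point."""
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

def robot_position(features, depth):
    """Average the 3D positions of tracked feature points.

    features: (N, 2) array of (u, v) pixel coordinates (e.g. KLT tracks)
    depth:    depth image indexed as depth[v, u], in metres
    """
    pts = np.array([backproject(u, v, depth[int(v), int(u)])
                    for u, v in features])
    return pts.mean(axis=0)
```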
Human recognition and tracking (figure 4) is based on the HOG (Histogram of Oriented Gradients) algorithm. The algorithm computes the 3D position of the human by analyzing the gradient orientations of portions of the RGB-D images and feeding them to a trained support vector machine (SVM). Once detected, the human operator is framed in a yellow box and his 3D center of mass is computed (the red square in figure 4).
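The core building block of HOG is the orientation histogram computed over small image cells; these per-cell histograms are then normalized, concatenated, and fed to the SVM. The snippet below is a minimal, assumption-laden sketch of a single cell's histogram (central-difference gradients, unsigned 0–180° orientations, 9 bins), not the detector used in the project.

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Histogram of gradient orientations for one HOG cell.

    patch: 2D grayscale array. Gradients are taken with central
    differences; each pixel votes into an unsigned-orientation bin
    (0-180 degrees), weighted by its gradient magnitude.
    """
    gx = np.zeros_like(patch, dtype=float)
    gy = np.zeros_like(patch, dtype=float)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    return np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
```

For example, a cell containing a single vertical edge produces purely horizontal gradients, so all the magnitude falls into the 0° bin.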
Three different safety strategies have been developed. The first strategy is based on the definition of suitable comfort zones of both the human operator and the robotic device. The second strategy implements virtual barriers separating the robot from the operator. The third strategy is based on the combined use of the comfort zones and of the virtual barriers.
In the first strategy, a sphere and a cylinder are defined around the robot and the human respectively, and the distance between them is computed. Three different situations may occur (figure 5):
Safe situation (figure 5.a): the distance is greater than zero and the sphere and the cylinder are far apart;
Warning situation (figure 5.b): the distance approaches zero and the sphere and the cylinder are very close;
Unsafe situation (figure 5.c): the distance is negative and the sphere and the cylinder collide.
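The three situations above reduce to a sign and threshold test on the clearance between the two comfort zones. The sketch below makes the simplifying assumption that the operator's cylinder is vertical and tall enough that only the horizontal separation matters; the radii and warning margin are invented placeholder values, not the project's parameters.

```python
import numpy as np

SPHERE_R = 1.0     # comfort radius around the robot [m] (assumed value)
CYL_R = 0.4        # comfort radius around the operator [m] (assumed value)
WARN_MARGIN = 0.5  # clearance below which a warning is raised [m] (assumed)

def classify(robot_pos, human_pos):
    """Classify the situation from the sphere/cylinder clearance.

    Simplification: the cylinder is vertical and tall, so only the
    horizontal (x, y) separation of the two centres is considered.
    Returns "safe", "warning", or "unsafe" (negative clearance = collision).
    """
    horiz = np.linalg.norm(np.asarray(robot_pos[:2]) - np.asarray(human_pos[:2]))
    clearance = horiz - SPHERE_R - CYL_R
    if clearance <= 0:
        return "unsafe"
    if clearance < WARN_MARGIN:
        return "warning"
    return "safe"
```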
In the second strategy, two virtual barriers are defined (Figure 6). The first (displayed in green in figure 6) marks the limit between the safe zone (i.e. the zone where the human can move safely and the robot cannot hit him) and the warning zone (i.e. the zone where contact between human and robot can happen). The second barrier (displayed in red in figure 6) marks the limit between the warning zone and the error zone (i.e. the zone where the robot works and can easily hit the operator).
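If both barriers are modelled, for the sake of a sketch, as thresholds on the operator's horizontal distance from the robot base, the zone classification becomes two comparisons. The barrier distances below are invented placeholders, not the values used in the project.

```python
GREEN_BARRIER = 2.0  # distance of the green barrier from the robot base [m] (assumed)
RED_BARRIER = 1.0    # distance of the red barrier from the robot base [m] (assumed)

def barrier_zone(dist_to_robot_base):
    """Classify the operator's zone from a single approach distance.

    Simplification: each virtual barrier is treated as a threshold on
    the horizontal operator-to-robot-base distance.
    Returns "safe", "warning", or "error".
    """
    if dist_to_robot_base > GREEN_BARRIER:
        return "safe"
    if dist_to_robot_base > RED_BARRIER:
        return "warning"
    return "error"
```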
The third strategy is a combination of comfort zones and virtual barriers (figure 7). This strategy gives redundant information: both the human-robot distance and positions are considered.
The safety system shows good performance:
The robotic device is always recognized;
The human operator is recognized when moving frontally with respect to the TOF cameras. Human recognition must be improved (for example, by increasing the number of TOF cameras) for the case where the operator moves transversally with respect to the cameras;
The safety situations are always identified correctly. The algorithm classifies them with an average delay of 0.86 ± 0.63 s (k = 1), which could be reduced by using real-time hardware.