Mostafa Derraz, Faouzya El Farissi and Abdellatif Ben Abdellah
Human-machine interaction is one of the most influential factors shaping the future of robotics, driven by the need to improve robot input and to move away from command lines toward sensors and controllers. Human-machine interaction (HMI) refers to the communication between a human and a machine through a user interface. Nowadays, natural user interfaces such as gestures are attracting growing attention, since they allow humans to control machines through natural and intuitive behaviors. In gesture-based HMI, for example, a sensor and a camera capture human postures and movements and recognize the user's face; this information is then processed to control a machine. The key task of gesture-based HMI is to recognize significant facial expressions and movements from the data provided by the camera and sensor, including RGB (red, green, blue), depth and skeleton information.

Many facial recognition algorithms rely on feature-based methods that detect a set of geometric features on the face, such as the eyes, eyebrows, nose and mouth. Properties and relationships, such as areas, distances and angles between characteristic points (minutiae), serve as descriptors for facial recognition. Typically, 30 to 60 characteristic points are needed to describe a face robustly. The performance of facial recognition based on geometric features depends on the accuracy of the feature-location algorithm and on the geometric theorems and formulas employed. However, there is no universal answer to the questions of how many points give the best performance, which features are most important, and how to extract them automatically. The working assumption is that the overall geometric configuration of the facial features is sufficient for recognition. As mentioned above, there are many approaches to facial recognition; one of them is based on facial characteristic points.
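To make the geometric descriptors concrete, the following is a minimal sketch of how distances and angles between characteristic points could be computed. The landmark coordinates here are hypothetical placeholders, not output of any detector named in this paper:

```python
import numpy as np

def pairwise_distances(points):
    """Euclidean distance between every pair of landmark points."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def angle_at(a, b, c):
    """Angle (in radians) at vertex b of the triangle a-b-c."""
    u, v = a - b, c - b
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

# Hypothetical 2-D landmark coordinates (eye corners, nose tip, mouth corners).
landmarks = np.array([[30.0, 40.0],   # left eye
                      [70.0, 40.0],   # right eye
                      [50.0, 60.0],   # nose tip
                      [35.0, 80.0],   # left mouth corner
                      [65.0, 80.0]])  # right mouth corner

D = pairwise_distances(landmarks)                            # distance matrix
theta = angle_at(landmarks[0], landmarks[2], landmarks[1])   # angle at nose tip
```

In a full system the 30 to 60 points mentioned above would replace this toy set, and the resulting distance/angle vector would serve as the face descriptor.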
In this case, the inputs are digital images of a frontal portrait. It takes 30 to 60 points to describe a face robustly, and the location of some of these points depends on facial expression. Two problems arise: defining and extracting the most invariant points, and finding the optimal set of geometric features for facial recognition. Ten years ago, we developed a theorem that restated the sine function from a different standpoint; five years later, we published it under the name "The General Sinus". In the General Sinus paper, we discussed the results, the context and the background, and showed how to generalize the sine function. The general sinus is defined as Sin(x, y), with two parameters, and can be applied to any n-gon, not only to a rectangle. We applied the general sinus function to n-gons in order to determine all the intrinsic properties of an n-gon from a minimal and reasonable amount of data, with no conditions imposed on the nature of the n-gon. We have proved that this formula is the most general of its kind in Euclidean geometry. Based on the general sinus theorem, we can improve the performance of facial recognition algorithms: applying the general sinus formulas makes it possible to process more characteristic points and to obtain more precise information, such as the distances and angles between the points, while also reducing the processing time of the algorithm.
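The Sin(x, y) formula itself is given in the General Sinus paper and is not reproduced here; as a baseline illustration of the idea it generalizes, the classical law of sines already lets us recover all sides of a triangle of landmark points from minimal data (two angles and one side). The function name and the example values below are illustrative assumptions:

```python
import math

def solve_triangle_asa(angle_a, angle_b, side_c):
    """Classical law of sines (the special case the general sinus extends
    to arbitrary n-gons): given angles A and B and the included side c,
    recover sides a and b.  C = pi - A - B, and a/sin(A) = c/sin(C)."""
    angle_c = math.pi - angle_a - angle_b
    ratio = side_c / math.sin(angle_c)
    return ratio * math.sin(angle_a), ratio * math.sin(angle_b)

# Illustrative case: an equilateral triangle of landmarks, angles 60 degrees,
# one measured side of length 1 -> both remaining sides must also be 1.
a, b = solve_triangle_asa(math.pi / 3, math.pi / 3, 1.0)
```

The appeal of the generalized formula for recognition is exactly this economy: deriving the full set of distances and angles of an n-gon of characteristic points from a minimal measured subset, rather than measuring every quantity directly.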