In this article, the authors explore the use of deep convolutional neural networks (CNNs) for air signature recognition. An air signature is unique to the individual and is produced by moving the hand through the air while signing. The absence of a firm writing plane and of visual feedback makes such signatures difficult to recognize accurately.
To address this challenge, the authors propose using CNNs to analyze 3D hand movements captured by stereo cameras. Each signature is represented by four files generated from the stereo frames, one per coloured ball per view; each file stores an (x, y, r) triple per frame, giving the ball centre and radius: (xgl, ygl, rgl) for the green ball in the left frame, (xrl, yrl, rrl) for the orange ball in the left frame, (xgr, ygr, rgr) for the green ball in the right frame, and the corresponding sequence for the orange ball in the right frame.
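To make this layout concrete, the following minimal Python/NumPy sketch stacks four such (x, y, r) streams into one per-frame feature matrix. The function name `build_signature_features`, the stream names, and the sequence length are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def build_signature_features(green_left, orange_left, green_right, orange_right):
    """Stack four (x, y, r) streams into one sequence of feature vectors.

    Each argument is an array of shape (T, 3) holding the ball centre
    (x, y) and radius r for every stereo frame, so a T-frame signature
    becomes a T x 12 matrix (4 streams x 3 values).
    """
    streams = [green_left, orange_left, green_right, orange_right]
    return np.concatenate([np.asarray(s, dtype=np.float32) for s in streams], axis=1)

# Example: a hypothetical 100-frame signature yields a (100, 12) feature matrix.
T = 100
features = build_signature_features(
    np.random.rand(T, 3), np.random.rand(T, 3),
    np.random.rand(T, 3), np.random.rand(T, 3),
)
print(features.shape)  # (100, 12)
```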
The authors normalize the features by dividing the x-coordinates by the frame width and both the y-coordinates and the radii by the frame height. Frames in which a ball is occluded are represented as (-1, -1, -1) in the corresponding left or right file.
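A minimal sketch of this normalization step is shown below, assuming a hypothetical camera resolution and a per-frame occlusion mask; the constants and the helper name `normalize_stream` are assumptions, not the authors' code.

```python
import numpy as np

FRAME_WIDTH, FRAME_HEIGHT = 640.0, 480.0  # assumed camera resolution

def normalize_stream(xyr, occluded):
    """Scale (x, y, r) per frame and mark occluded frames as (-1, -1, -1).

    xyr: (T, 3) array of raw pixel coordinates and radii.
    occluded: (T,) boolean mask, True where the ball was not detected.
    """
    out = np.asarray(xyr, dtype=np.float32).copy()
    out[:, 0] /= FRAME_WIDTH    # x divided by frame width
    out[:, 1] /= FRAME_HEIGHT   # y divided by frame height
    out[:, 2] /= FRAME_HEIGHT   # radius divided by frame height
    out[occluded] = -1.0        # occluded frames become (-1, -1, -1)
    return out
```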
The authors feed these features to a sequential CNN model to recognize signatures. They evaluate the approach on several datasets, including one created specifically for this study, and report that it outperforms existing methods in both accuracy and efficiency.
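As an illustration of what a sequential CNN over such feature sequences could look like, here is a small Keras sketch. The layer sizes, sequence length, feature width, and number of signers are all assumed for the example and are not the architecture reported in the article.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed dimensions: T frames per signature, 12 features per frame,
# and one output class per enrolled signer.
T, FEATURES, N_CLASSES = 100, 12, 40

model = models.Sequential([
    layers.Input(shape=(T, FEATURES)),
    layers.Conv1D(32, kernel_size=5, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),  # one class per signer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```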
In summary, this article presents a novel approach to air signature recognition using deep CNNs. By analyzing 3D hand movements captured by stereo cameras, the proposed method can accurately recognize signatures even without a firm writing plane or visual feedback. The sequential model and the normalization scheme help improve both the accuracy and the efficiency of the approach.