In this article, we explore the concept of inverse biometrics, which involves reconstructing a subject's biometric data (for instance, the original sample) from the information a biometric system stores or exposes, such as templates or comparison scores. The author presents three main approaches to achieve this goal: transformation methods, combination methods, and feature extraction methods.
Transformation methods start from one or more genuine samples of a given subject and apply different transformations to produce synthetic (or transformed) samples that still belong to the same subject. These methods are commonly used for face, 3D face model, signature, and handwriting synthesis.
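To make this concrete, here is a minimal sketch of a transformation-based generator, assuming the genuine sample is a grayscale image held in a NumPy array; the transform_sample helper and the specific jitter ranges are illustrative choices, not details taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def transform_sample(sample: np.ndarray, n_variants: int = 5) -> list:
    """Produce synthetic variants of one genuine sample (e.g. a grayscale
    face crop) via small, identity-preserving perturbations: translation,
    brightness/contrast jitter, and additive sensor-like noise."""
    variants = []
    for _ in range(n_variants):
        s = sample.astype(np.float64)
        # small random shift, mimicking acquisition misalignment
        dy, dx = rng.integers(-3, 4, size=2)
        s = np.roll(s, shift=(dy, dx), axis=(0, 1))
        # mild illumination change
        s = s * rng.uniform(0.9, 1.1) + rng.uniform(-5.0, 5.0)
        # low-level noise
        s = s + rng.normal(0.0, 2.0, size=s.shape)
        variants.append(np.clip(s, 0, 255).astype(np.uint8))
    return variants

# Example: five synthetic samples derived from one genuine 64x64 crop
genuine = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
synthetic = transform_sample(genuine)
```

Real systems would use richer, modality-specific transformations (for example pose changes for faces or stroke variations for signatures), but the principle of deriving many samples from a few genuine ones is the same.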
Combination methods draw on a pool of bona fide units, such as n-phones in speech or n-grams in handwriting, and combine or concatenate these units to form the synthetic sample. Most speech, signature, and handwriting synthesizers follow this approach.
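As an illustration of the combination idea, the sketch below concatenates bona fide units to assemble a synthetic handwriting-like sample; the dictionary-of-glyph-images pool, the fixed unit height, and the absence of any smoothing at unit boundaries are simplifying assumptions rather than details from the article.

```python
from typing import Dict, List
import numpy as np

rng = np.random.default_rng(0)

def synthesize_by_combination(target_text: str,
                              unit_pool: Dict[str, List[np.ndarray]],
                              n: int = 2) -> np.ndarray:
    """Form a synthetic handwriting-like sample by concatenating bona fide
    units (here: equal-height glyph images of n-grams) collected from
    genuine writers."""
    pieces = []
    for i in range(0, len(target_text), n):
        unit = target_text[i:i + n]
        candidates = unit_pool.get(unit)
        if not candidates:
            raise KeyError(f"no bona fide unit available for {unit!r}")
        # pick one genuine realisation of this n-gram at random
        pieces.append(candidates[rng.integers(len(candidates))])
    # concatenate the chosen units side by side to form the sample
    return np.concatenate(pieces, axis=1)

# Example: a pool of 2-gram glyph images (32 px high) and a target word
pool = {"he": [rng.integers(0, 256, (32, 24))],
        "ll": [rng.integers(0, 256, (32, 24))],
        "o":  [rng.integers(0, 256, (32, 12))]}
sample = synthesize_by_combination("hello", pool)  # shape (32, 60)
```

A real synthesizer would additionally blend the unit boundaries and model duration or pressure, but the core step is this selection and concatenation of genuine units.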
To understand the potential risks associated with these methods, the author categorizes them based on the level of knowledge required to use them. The four groups considered in the classification are: (1) knowledge of the template format; (2) knowledge of the similarity scores; (3) knowledge of the similarity scores and the comparison function; and (4) knowledge of the feature extraction method.
The author emphasizes that the less knowledge of the attacked system a method requires, the easier that knowledge is to obtain and, consequently, the higher the threat the method poses. Understanding these methods and their risks helps us appreciate the complexity of inverse biometrics and the importance of protecting the privacy and security of the biometric data a system stores and processes.
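One way to make the classification tangible is a small data-structure sketch; the enum names and the numeric ordering (a lower value standing for less required system knowledge, following the article's numbering) are illustrative labels, not notation from the article.

```python
from enum import IntEnum

class RequiredKnowledge(IntEnum):
    """Knowledge of the attacked system that an inverse-biometrics method
    needs, one value per group in the article's classification. The numeric
    ordering (lower = less knowledge required) is an illustrative assumption."""
    TEMPLATE_FORMAT = 1          # group 1: template format only
    SIMILARITY_SCORES = 2        # group 2: similarity scores
    SCORES_AND_COMPARATOR = 3    # group 3: scores plus comparison function
    FEATURE_EXTRACTOR = 4        # group 4: feature extraction method

def relative_threat(a: RequiredKnowledge, b: RequiredKnowledge) -> str:
    """Compare two methods: the one needing less system knowledge is easier
    to mount and therefore considered the higher threat."""
    if a == b:
        return "comparable threat"
    return "method A is the higher threat" if a < b else "method B is the higher threat"

print(relative_threat(RequiredKnowledge.TEMPLATE_FORMAT,
                      RequiredKnowledge.FEATURE_EXTRACTOR))
```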
In summary, inverse biometrics involves reconstructing biometric data from the information a system stores or exposes, through methods such as the transformation and combination approaches. Understanding these methods and their potential risks is crucial for ensuring the privacy and security of biometric data throughout its collection, storage, and use.