Face classification, which has been an active research area over the last decade, has the advantages of being nonintrusive and well accepted by users. However, variations in illumination, pose, and facial expression degrade the performance of face classification systems. Although many efforts have been directed towards improving the classification rate, face classification in an uncontrolled environment remains a challenging problem.
To address this problem, a group of researchers led by Prof. YANG Jun at the Key Laboratory of Noise and Vibration Research, Institute of Acoustics, conducted a series of studies investigating human face classification using ultrasonic sonar imaging.
Based on Freedman's "image pulse" model, they employed the scattering center model to simplify the complex geometry of the human face into a series of scattering centers, and proposed an effective feature extraction algorithm for improving the classification rate. A chirp signal is used to probe the human face because of its high range resolution and large signal-to-noise ratio. Ultrasonic sonar images, also known as high-resolution range profiles, are obtained by demodulating the echoes with a reference chirp signal. Features directly related to the geometry of the human face are extracted from the ultrasonic sonar images and verified in experiments with different configurations of transmitter-receiver (TR) pairs. Experimental results indicate that the improved feature extraction method achieves a recognition rate of over 99% when the ultrasonic transmitters are angled at 45 degrees above and orthogonal to the face, improving on the previous results for ultrasonic face recognition. This research was published in the Japanese Journal of Applied Physics.
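The core signal-processing step described above, obtaining a high-resolution range profile by correlating chirp echoes with a reference chirp, can be sketched as follows. This is an illustrative example, not the authors' code; the sample rate, chirp band, and scattering-center ranges are all hypothetical values chosen for demonstration.

```python
import numpy as np

# Hypothetical parameters (not from the paper)
fs = 192_000                 # sample rate (Hz)
T = 2e-3                     # chirp duration (s)
f0, f1 = 20_000, 60_000      # chirp frequency band (Hz)
c = 343.0                    # speed of sound in air (m/s)

# Reference chirp: linear frequency sweep from f0 to f1 over T seconds
t = np.arange(0, T, 1 / fs)
k = (f1 - f0) / T            # chirp rate (Hz/s)
chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))

# Simulate echoes from a few scattering centers (e.g. nose, cheeks),
# each contributing a delayed, attenuated copy of the chirp
ranges = np.array([0.30, 0.33, 0.37])   # metres, hypothetical
amps = np.array([1.0, 0.6, 0.4])
n_total = int(fs * 2 * ranges.max() / c) + len(chirp) + 64
echo = np.zeros(n_total)
for r, a in zip(ranges, amps):
    d = int(round(fs * 2 * r / c))      # two-way travel delay in samples
    echo[d:d + len(chirp)] += a * chirp

# Matched filter (pulse compression): correlate the echo with the
# reference chirp; peaks in the resulting range profile mark the
# scattering-center ranges
profile = np.abs(np.correlate(echo, chirp, mode="valid"))
profile /= profile.max()

peak = int(np.argmax(profile))
est_range = peak * c / (2 * fs)
print(f"strongest scatterer at ~{est_range:.3f} m")
```

The matched filter compresses each delayed chirp into a narrow peak, so the range resolution is set by the chirp bandwidth (roughly c / 2B, a few millimetres here) rather than the pulse duration, which is why a long chirp can deliver both fine range resolution and a large signal-to-noise ratio.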