Image redundancy reduction for neural network classification using discrete cosine transforms
High information redundancy and strong correlations in face images make them inefficient to use directly in recognition tasks. In this paper, the discrete cosine transform (DCT) is used to reduce image information redundancy, since only a subset of the transform coefficients is needed to preserve the most important facial features, such as the hair outline, eyes and mouth. We demonstrate experimentally that when DCT coefficients are fed into a backpropagation neural network for classification, high recognition rates can be achieved using only a small proportion (0.19%) of the available transform components. This makes DCT-based face recognition more than two orders of magnitude faster than other approaches.
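The pipeline the abstract describes can be sketched as follows: take the 2D DCT of each face image, keep only a small block of low-frequency coefficients, and train a backpropagation network on those features. The sketch below is not the authors' implementation; it uses an MLP classifier as the backpropagation network, and the image sizes, number of retained coefficients, and network parameters are illustrative assumptions.

```python
# Minimal sketch of a DCT-based face recognition pipeline (assumed parameters).
import numpy as np
from scipy.fft import dctn
from sklearn.neural_network import MLPClassifier

def dct_features(image: np.ndarray, k: int = 8) -> np.ndarray:
    """Return the k x k low-frequency 2D DCT coefficients of an image, flattened."""
    coeffs = dctn(image, norm="ortho")   # full 2D DCT of the image
    return coeffs[:k, :k].ravel()        # keep only the top-left (low-frequency) block

# Hypothetical data: an array of grayscale face images and their subject labels.
rng = np.random.default_rng(0)
faces = rng.random((100, 112, 92))       # e.g. 100 images of 112 x 92 pixels
labels = rng.integers(0, 10, size=100)   # 10 subjects

# 64 features per image instead of 112 * 92 = 10304 pixels.
X = np.array([dct_features(img, k=8) for img in faces])

# Backpropagation-trained network on the reduced representation.
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

Note that the retained fraction here (64 of 10304 coefficients, about 0.6%) is purely illustrative; the paper reports results with an even smaller proportion (0.19%) of the transform components.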
Item Type | Monograph (UNSPECIFIED)
---|---
Uncontrolled Keywords | backpropagation; face recognition
Date Deposited | 14 Nov 2024 10:28
Last Modified | 14 Nov 2024 10:28
Download: 903609.pdf