Abstract:
It is widely believed in the pattern recognition field that when a fixed number of training samples is used to design a classifier, the generalization error of the classifier tends to increase as the number of features gets larger. In this paper, we discuss the generalization error of artificial neural network (ANN) classifiers in high-dimensional spaces, under the practical condition that the ratio of the training sample size to the dimensionality is small. Experimental results show that the generalization error of ANN classifiers appears to be much less sensitive to the number of features than that of 1-NN, Parzen, and quadratic classifiers.
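A minimal sketch of this kind of comparison, not the paper's experiment: it uses scikit-learn's MLPClassifier, KNeighborsClassifier, and QuadraticDiscriminantAnalysis as stand-ins for the ANN, 1-NN, and quadratic classifiers (the Parzen classifier is omitted), and the synthetic Gaussian data, sample sizes, and hyperparameters are illustrative assumptions.

```python
# Sketch: hold the training sample size fixed and grow the dimensionality,
# then compare test error of an MLP, a 1-NN, and a quadratic classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
n_train_per_class = 20   # small, fixed training set (small n/d ratio at high d)
n_test_per_class = 500

def make_data(dim, n_per_class):
    # Two Gaussian classes whose means differ only in the first few features,
    # so additional dimensions contribute mostly noise.
    mean_shift = np.zeros(dim)
    mean_shift[: min(5, dim)] = 1.0
    X0 = rng.normal(0.0, 1.0, size=(n_per_class, dim))
    X1 = rng.normal(0.0, 1.0, size=(n_per_class, dim)) + mean_shift
    X = np.vstack([X0, X1])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

for dim in (5, 20, 50, 100):
    X_tr, y_tr = make_data(dim, n_train_per_class)
    X_te, y_te = make_data(dim, n_test_per_class)
    classifiers = {
        "ANN (MLP)": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                   random_state=0),
        "1-NN": KNeighborsClassifier(n_neighbors=1),
        # reg_param keeps the class covariance estimates invertible when
        # the dimensionality exceeds the per-class sample size.
        "Quadratic": QuadraticDiscriminantAnalysis(reg_param=0.1),
    }
    errors = {}
    for name, clf in classifiers.items():
        clf.fit(X_tr, y_tr)
        errors[name] = 1.0 - clf.score(X_te, y_te)
    print(f"dim={dim:4d}  " + "  ".join(f"{k}: {v:.3f}" for k, v in errors.items()))
```

Running such a sketch typically shows the distance-based and quadratic classifiers degrading faster than the MLP as noise dimensions are added, which is the qualitative effect the abstract describes.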
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: 18, Issue: 5, May 1996)
DOI: 10.1109/34.494648