Classifier training based on synthetically generated samples


  • Hélène Hoessler
  • Christian Wöhler
  • Frank Lindner
  • Ulrich Kreßel



Classifier training, Synthetic samples, Real-time vision, Driver assistance systems


In most image classification systems, the amount and quality of the training samples used to represent the different pattern classes are important factors governing the recognition performance. Hence, a representative set of training samples usually has to be acquired in real-world environments. Such acquisition campaigns may require considerable effort and often yield a training set that is unbalanced with respect to the number of available samples per class. In this contribution we consider classification tasks in which each real-world training sample is derived from an ideal class representative that undergoes a geometric and photometric transformation. This transformation depends on system-specific influencing quantities of the image formation process, such as illumination, the characteristics of the sensor and optical system, or camera motion. The parameters of the transformation model are learned from object classes for which a large number of real-world samples is available: for each individual real-world sample, a set of model parameters is obtained by fitting the transformed ideal sample to the observed sample. The resulting probability distribution of model parameters is then used to generate synthetic sample sets for all pattern classes under consideration. This training approach is applied to a vehicle-based vision system for traffic sign recognition. Our experimental evaluation on a large set of real-world test data demonstrates that the classification rates obtained with classifiers trained on synthetic samples are comparable to those obtained with real-world training data.
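The pipeline sketched in the abstract can be illustrated with a minimal NumPy example. The paper's actual transformation model (covering geometric warping, sensor and optics characteristics, and camera motion) is not detailed here, so the sketch below assumes a deliberately simplified photometric model with three parameters (gain, offset, noise level); the function names and the Gaussian parameter distribution are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_transform(template, params):
    # Simplified photometric model (assumption): gain, offset, additive
    # noise. The paper's model also includes geometric effects.
    gain, offset, noise_std = params
    sample = gain * template + offset
    sample = sample + rng.normal(0.0, noise_std, size=template.shape)
    return np.clip(sample, 0.0, 1.0)

def fit_params(template, observed):
    # Fit gain and offset by least squares so that the transformed ideal
    # sample matches the observed real-world sample; estimate the noise
    # level from the residual.
    A = np.stack([template.ravel(), np.ones(template.size)], axis=1)
    (gain, offset), *_ = np.linalg.lstsq(A, observed.ravel(), rcond=None)
    residual = observed - (gain * template + offset)
    return gain, offset, residual.std()

# Stand-in "real" observations of one well-covered class.
template = np.linspace(0.0, 1.0, 64).reshape(8, 8)   # ideal class representative
true_params = [(0.8, 0.10, 0.02), (0.9, 0.05, 0.03), (0.7, 0.15, 0.02)]
observed = [apply_transform(template, p) for p in true_params]

# Learn the parameter distribution from the per-sample fits
# (here a simple per-parameter Gaussian is assumed).
fitted = np.array([fit_params(template, o) for o in observed])
mu, sigma = fitted.mean(axis=0), fitted.std(axis=0)

# Draw new parameters and render a synthetic sample for a class
# with few (or no) real-world samples.
other_template = 1.0 - template   # stand-in for an under-represented class
drawn = rng.normal(mu, sigma)
drawn[2] = abs(drawn[2])          # noise level must be non-negative
synthetic = apply_transform(other_template, drawn)
print(synthetic.shape)
```

The key point is that the parameter distribution is estimated once, on classes with many real samples, and then reused to synthesize balanced training sets for every class from its ideal template.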






The 5th International Conference on Computer Vision Systems