*Figure: Reference vectors during the ordering process, square array. The numbers at the lower right-hand corner indicate learning cycles.*
**Calibration**. When a sufficient number of input samples x(t) have been presented and the m_i(t), in the process defined by (3.1) and (3.2), have converged to practically stationary values, the next step is *calibration* of the map, in order to locate images of different input data items on it. In practical applications for which such maps are used, it may be self-evident how a particular input data set ought to be interpreted and labeled. By inputting a number of typical, manually analyzed data sets, observing where the best matches on the map according to Eq. (3.1) lie, and labeling the map units correspondingly, the map becomes calibrated. Since this mapping is assumed to be continuous along a hypothetical "elastic surface", unknown input data are approximated by the closest reference vectors, as in Vector Quantization.
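A minimal sketch of this calibration step, assuming Eq. (3.1) is the usual best-match rule c = argmin_i ‖x − m_i‖ over the reference vectors; all names here (`codebook`, `best_match`, `calibrate`) are illustrative and not from the source:

```python
import numpy as np

def best_match(x, codebook):
    """Index c of the reference vector m_i closest to x (best-match rule,
    assumed form of Eq. (3.1))."""
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

def calibrate(codebook, samples, sample_labels):
    """Label map units by the manually analyzed samples that map onto them.

    codebook: (N, d) array of converged reference vectors m_i
    samples: (M, d) array of typical, already-labeled inputs
    sample_labels: sequence of M labels
    Returns a dict {unit index: label}; units hit by no sample stay unlabeled.
    """
    hits = {}
    for x, label in zip(samples, sample_labels):
        c = best_match(x, codebook)
        hits.setdefault(c, []).append(label)
    # Majority vote among the samples landing on each unit.
    return {c: max(set(ls), key=ls.count) for c, ls in hits.items()}
```

On a real map the unit index c would correspond to a position on the two-dimensional grid; the dictionary above simply stands in for the labeled map, and new inputs are then interpreted via the label of their best-matching unit.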
**Comment**. An "optimal mapping" might be one that projects the probability density function p(x) in the most "faithful" fashion, trying to preserve at least the local structures of p(x) in the output plane. (You might think of p(x) as a flower that is pressed!)
It has to be emphasized, however, that the description of the exact form of p(x) by the SOM is not the most important task. It will be pointed out that the SOM automatically finds those dimensions and domains in the signal space where x has significant amounts of sample values, conforming to the usual philosophy in regression problems.
~
KOHONEN, Teuvo, 1995. *Self-Organizing Maps*. Berlin; New York: Springer. Springer Series in Information Sciences, 30. ISBN 978-3-540-58600-5, p. 82.
⇒ “The Magic TV”