CONCLUSION
Though I did not have time to do an exhaustive study, the network clearly
appeared to train as expected, usually within about 5 cycles (240 patterns).
The first pattern, three clusters linearly distributed, appears on the
resultant 5 x 5 grid much as anticipated, with most A, B, and C data
points grouped together. The mapped points approximate a two-dimensional
distribution similar to the linear distribution on the unit sphere indicated
by the selected ideal data points. This result was almost completely evident
by the 5th cycle.
The second pattern, three clusters in a triangular arrangement, shows up on
the resultant grid in three clumps, as expected. Though more testing is
called for, the small sample shown suggests that this pattern may take longer
to learn than the first.
The third pattern, four clusters linearly distributed, is clearly reflected
in the resultant grid by the 12th iteration. Interestingly, the 30th iteration
looks less "correct" than the 12th; I would like to vary the parameters to
understand this further. Is this an aberration, or is there an error
somewhere?
The fourth pattern, four clusters that are not coplanar, shows up on the grid
with the four data points grouped together, roughly in a square-like pattern.
I am not exactly sure what this should look like (I thought the clusters
might not converge at all, but this was not the case), but the result does
not seem surprising.
It would be interesting to try the simulation again using a larger grid size,
as well as trying other iteration counts and different "fuzz factors."
The program reduces the learning rates (both the alpha and beta values, for
the "winner" and its neighbors respectively) over time. These factors could
also be adjusted experimentally, and further tests run.
I was pleased that, in the short amount of time available, I was able to
achieve such coherent results. With more time to tune the model, I would have
more confidence in concluding that the simulation behaves as expected.