On Mon May 26 09:02:01 EDT 2008
Marc Weber [marco-oweber at gmx.de] wrote:

>Searching haskell.org only gives one match:
>http://jpmoresmau.blogspot.com/2007/06/very-dumb-neural-network-in-haskell.html
>(I haven't read it)
>http://hackage.haskell.org/packages/archive/pkg-list.html
>is the other most commonly used source of finding aready existing code
>(There is package about nn)
>
>Should your nn also support kind of learning by giving feedback?

Marc, thanks for explaining how to use State and the links!
As for your question about my NN's learning, I need to explain a bit more about how it all works.

My classifier NN (Memory Tree, or MT, as I call it), which I described previously, can work in two modes:

1) Unsupervised learning. In this mode each node learns new categories as it encounters input vectors that the classifier has not seen before. Each such input vector becomes a new category and is stored in the node's category memory (CM). When the MT starts in this mode, all nodes have empty CMs. The size of the CM defines how many categories a node can learn.

2) Supervised learning. In this mode the MT also has a category node, which feeds category numbers to the top-level node. Learning is done with a training data set prepared in advance. The training data consists of pairs (input vector, category number). The bottom nodes read the input vector from each pair, while the category node simultaneously sends the corresponding category number to the top-level node. Thus, by the time the input reaches the top node, that node already knows which category the input belongs to. After the MT consumes all the training data, it is believed to be fully trained and ready to infer categories for the 'work' data set.
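To make the two modes concrete, here is a minimal Haskell sketch of a node's category memory and of the supervised training of a top-level node. All names here (Node, learnU, train, infer) are my own illustrative assumptions, not your actual MT code, and I use exact vector equality where a real node would presumably use some similarity measure:

```haskell
import qualified Data.Map as Map

type InputVec = [Double]
type Category = Int

-- (1) Unsupervised mode: a node stores unseen input vectors as new
-- categories in its category memory (CM), up to cmSize entries.
data Node = Node
  { cmSize :: Int        -- CM capacity: how many categories the node can learn
  , cm     :: [InputVec] -- learned categories, in order of discovery
  } deriving Show

-- Exact equality stands in for whatever similarity test a real node uses.
learnU :: Node -> InputVec -> Node
learnU node v
  | v `elem` cm node               = node                        -- already known
  | length (cm node) < cmSize node = node { cm = cm node ++ [v] }
  | otherwise                      = node                        -- CM full

-- (2) Supervised mode: the top-level node receives each input vector
-- together with the category number fed in by the category node.
type TopNode = Map.Map InputVec Category

train :: TopNode -> (InputVec, Category) -> TopNode
train top (v, c) = Map.insert v c top

-- After training, infer a category for 'work' data; Nothing means this
-- exact vector was never seen during training.
infer :: TopNode -> InputVec -> Maybe Category
infer = flip Map.lookup
```

In GHCi, folding learnU over a stream of vectors fills the CM until capacity, and folding train over the (vector, category) pairs builds the trained top node that infer then queries.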

Hope this answers your question about learning.

--
Dmitri O. Kondratiev
dokondr@gmail.com
http://www.geocities.com/dkondr