On Mon, Jun 15, 2009 at 5:00 PM, Trin Trin <trin.cz@gmail.com> wrote:
> Hi Alp,
> - even with correctly programmed back-propagation, it is usually hard to make the net converge.

Yeah, I know, that's why we're training it until the quadratic error goes under 0.1.
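Just to make that stopping criterion concrete, here is a minimal Haskell sketch of what I mean (quadError and trainUntil are hypothetical names, not the actual code):

-- quadratic (sum-of-squares) error over all samples and all outputs
quadError :: [([Double], [Double])]   -- (input, target) pairs
          -> ([Double] -> [Double])   -- forward pass of the net
          -> Double
quadError samples run =
  sum [ (t - o) ^ (2 :: Int)
      | (inp, targets) <- samples
      , (t, o) <- zip targets (run inp) ]

-- keep applying one back-propagation pass until the error goes under 0.1
trainUntil :: (net -> net)                  -- one back-propagation pass
           -> (net -> [Double] -> [Double]) -- forward pass of a given net
           -> [([Double], [Double])]        -- training samples
           -> net -> net
trainUntil step run samples = go
  where
    go net
      | quadError samples (run net) < 0.1 = net
      | otherwise                         = go (step net)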
 
> - usually you initialize neuron weights with somewhat random values, when working with back-propagation.

Yeah, that'll be done too, once the algorithm is ready. I'll provide fancy and easy functions to create a neural net just by giving the number of layers and their sizes.
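Roughly like this, assuming the net is just a list of weight matrices (again a hypothetical sketch, not the real API):

import System.Random (newStdGen, randomRs)

type Layer = [[Double]]   -- one weight row per neuron, last entry = bias

-- layer sizes given as [inputs, hidden, ..., outputs]
randomNet :: [Int] -> IO [Layer]
randomNet sizes = do
  gen <- newStdGen
  let ws = randomRs (-0.5, 0.5) gen          -- infinite stream of small random weights
  return (build (zip sizes (tail sizes)) ws)
  where
    build [] _ = []
    build ((nIn, nOut) : rest) stream =
      let needed        = nOut * (nIn + 1)   -- +1 weight per neuron for the bias
          (here, there) = splitAt needed stream
      in chunk (nIn + 1) here : build rest there
    chunk _ [] = []
    chunk n xs = let (a, b) = splitAt n xs in a : chunk n b

So randomNet [2, 2, 1] would give the xor net: 2 inputs, 2 hidden neurons, 1 output neuron, all weights initialized to small random values.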
 
> - do some debug prints of the net error while training to see how it is going

Good idea, yeah.
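Something along these lines is what I will add (sketch only, helper names are mine): run a fixed number of epochs and print the quadratic error every 100 of them, to watch whether it actually converges.

import Control.Monad (foldM, when)
import Text.Printf   (printf)

traceTraining :: Int                -- number of epochs
              -> (net -> net)       -- one back-propagation pass
              -> (net -> Double)    -- quadratic error of a given net
              -> net -> IO net
traceTraining epochs step err net0 = foldM go net0 [1 .. epochs]
  where
    go net epoch = do
      let net' = step net
      when (epoch `mod` 100 == 0) $
        printf "epoch %5d : error = %.6f\n" epoch (err net')
      return net'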
 
> - the xor function cannot be trained with a single-layer neural net!!!

That's why there are 2 layers there, one hidden and the output one. I consider the "inputs" as ... inputs, not as a first layer of the NN.
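Purely for illustration, the shape and training set I mean (the names are made up): the 2 input values feed a hidden layer of 2 neurons, which feeds 1 output neuron, so the net itself has 2 layers.

-- xor training set: 2 input values, 1 expected output value
xorSamples :: [([Double], [Double])]
xorSamples = [ ([0, 0], [0])
             , ([0, 1], [1])
             , ([1, 0], [1])
             , ([1, 1], [0]) ]

-- [inputs, hidden neurons, output neurons]; the inputs are not counted
-- as a layer of the net
xorSizes :: [Int]
xorSizes = [2, 2, 1]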

Thanks for your time. If you spot anything while reading the code, don't hesitate to tell me, of course.

--
Alp Mestan
http://blog.mestan.fr/
http://alp.developpez.com/