
I'm building a library of components for artificial neural networks. I'm used to object-oriented languages, so I'm struggling a bit to figure out how to do a good design in a functional programming language like Haskell.

Q1: I've come up with two designs, and would appreciate any advice on improvements and what approach to take.

===== Design #1 =====

class Neuron n where
  activate :: [Double] -> n -> Double
  train :: [Double] -> Double -> n -> n

...and then I would have instances of this typeclass. For example:

data Perceptron = Perceptron {
    weights :: [Double],
    threshold :: Double,
    learningRate :: Double
  } deriving (Show)

instance Neuron Perceptron where
  activate inputs perceptron = ...
  train inputs target perceptron = ...

The disadvantage of this approach is that I need to define and name each instance of neuron before I can use it. I'd rather create a neuron on-the-fly by calling a general-purpose constructor and telling it what functions to use for activation and training. I think that would make it easier to re-use activation and training functions in all sorts of different combinations. So I came up with...

===== Design #2 =====

data Neuron = Neuron {
    weights :: [Double],
    activate :: [Double] -> Double,
    train :: [Double] -> Double -> Neuron
  }

Q2: I thought there might be some way to define a function type, but the following doesn't work. Is there something along these lines that would work?

type activationFunction = [Double] -> Double

On Mon, Dec 28, 2009 at 05:14:53PM +0000, Amy de Buitléir wrote:
> I'm building a library of components for artificial neural networks. I'm
> used to object-oriented languages, so I'm struggling a bit to figure out
> how to do a good design in a functional programming language like Haskell.
>
> Q1: I've come up with two designs, and would appreciate any advice on
> improvements and what approach to take.
>
> ===== Design #1 =====
>
> class Neuron n where
>   activate :: [Double] -> n -> Double
>   train :: [Double] -> Double -> n -> n
Looks reasonable.
> The disadvantage of this approach is that I need to define and name each
> instance of neuron before I can use it. I'd rather create a neuron
> on-the-fly by calling a general-purpose constructor and telling it what
> functions to use for activation and training.
Indeed, if you want to create new types of neurons on-the-fly you should use your second design.
> ===== Design #2 =====
>
> data Neuron = Neuron {
>     weights :: [Double],
>     activate :: [Double] -> Double,
>     train :: [Double] -> Double -> Neuron
>   }
Mostly makes sense. Note that this is really making explicit what the type class mechanism is doing: a type class instance corresponds to a "dictionary", a record of type class methods, which is implicitly passed around. Here you are just explicitly declaring a dictionary type for neurons.

The one thing that confuses me is why you included "weights" in the explicit dictionary but not in the Neuron type class. I would think you'd want it in both or neither; I don't see any reason you'd need it in one but not the other.
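To make the dictionary idea concrete, here is a minimal sketch of the kind of general-purpose constructor you describe, building on your Design #2 record. The names mkNeuron and perceptron, the update rule, and the 0.1 learning rate are all illustrative choices of mine, not from any library:

  -- Build a neuron on-the-fly from any activation / weight-update pair.
  mkNeuron :: ([Double] -> [Double] -> Double)             -- activation, given weights
           -> ([Double] -> [Double] -> Double -> [Double]) -- weight update rule
           -> [Double]                                      -- initial weights
           -> Neuron
  mkNeuron act upd ws = Neuron
    { weights  = ws
    , activate = act ws
    , train    = \inputs target -> mkNeuron act upd (upd ws inputs target)
    }

  -- Example: a thresholded perceptron assembled from reusable pieces.
  perceptron :: [Double] -> Neuron
  perceptron = mkNeuron act upd
    where
      act ws xs = if sum (zipWith (*) ws xs) > 0 then 1 else 0
      upd ws xs target =
        let err  = target - act ws xs
            rate = 0.1                  -- arbitrary learning rate
        in  zipWith (\w x -> w + rate * err * x) ws xs

Because mkNeuron closes over the activation and update functions, training just rebuilds the record with new weights, and the same functions can be recombined freely.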
> Q2: I thought there might be some way to define a function type, but the
> following doesn't work. Is there something along these lines that would
> work?
>
> type activationFunction = [Double] -> Double
Type names must start with an uppercase letter.

type ActivationFunction = [Double] -> Double

should work just fine.

-Brent

On Monday 28 December 2009 18:14:53, Amy de Buitléir wrote:
> I'm building a library of components for artificial neural networks. I'm
> used to object-oriented languages, so I'm struggling a bit to figure out
> how to do a good design in a functional programming language like Haskell.
I can't say what design would be best to implement neural networks in Haskell, but
> Q2: I thought there might be some way to define a function type, but the
> following doesn't work. Is there something along these lines that would
> work?
>
> type activationFunction = [Double] -> Double
if the type name starts with a lower case letter, it's a type variable, so you'd want

type ActivationFunction = [Double] -> Double
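For example, with the synonym in place, the Design #2 record can use it directly (this is just the earlier declaration restated, nothing new):

  type ActivationFunction = [Double] -> Double

  data Neuron = Neuron
    { weights  :: [Double]
    , activate :: ActivationFunction
    , train    :: [Double] -> Double -> Neuron
    }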

Sorry for my late answer... Check this e-mail wren ng thornton sent to me (and the list) on Nov 5, 2009. Sorry I'm not giving you the direct address to the whole discussion, but I'm sending you this, and the e-mail, from my cellphone... The original discussion was on the haskell-cafe mailing list, and the title was: Memory Leak - Artificial Neural Network
Also, there's a package somebody uploaded to Hackage a few days ago on ANNs. It's called hnn-0.1, a Haskell neural network library.
I hope this can be useful to you.
Hector Guilarte
Here's wren ng thornton's e-mail:
As a more general high-level suggestion, the most efficient way to implement feedforward ANNs is to treat them as matrix multiplication problems and use matrices/arrays rather than lists. For a three-layer network of N, M, and O nodes we would:

* start with an N-wide vector of inputs
* multiply by the N*M matrix of weights, to get an M-vector
* map sigmoid or some other activation function over it
* multiply by the M*O matrix of weights for the next layer, to get an O-vector
* apply some interpretation (e.g. winner-take-all) to the output

There are various libraries for optimized matrix multiplication, but even just using an unboxed array for the matrices will make it much faster to traverse through things.
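As a minimal sketch of that pipeline, here is one way it might look using the hmatrix library (my choice of library, not one named in the e-mail), storing each weight matrix with one row per output node so the matrix-vector products line up:

  import Numeric.LinearAlgebra

  sigmoid :: Double -> Double
  sigmoid x = 1 / (1 + exp (-x))

  -- w1 is M x N (hidden x input) and w2 is O x M (output x hidden),
  -- so (#>) takes an N-vector to an M-vector and then to an O-vector.
  feedForward :: Matrix Double -> Matrix Double -> Vector Double -> Vector Double
  feedForward w1 w2 = (w2 #>) . cmap sigmoid . (w1 #>)

  -- Winner-take-all: the index of the largest output.
  classify :: Matrix Double -> Matrix Double -> Vector Double -> Int
  classify w1 w2 = maxIndex . feedForward w1 w2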
participants (4):
- Amy de Buitléir
- Brent Yorgey
- Daniel Fischer
- Hector Guilarte