
Hi all,

I'm developing a back end for GHC and I have the following problem: my program is throwing an "empty list exception" due to head [], and I need to compile GHC with -prof -auto-all in order to see the stack trace when running it with +RTS -xc -RTS. I changed the makefile, but the option +RTS -xc -RTS was not recognized as an available RTS option. Does anyone have any idea about how I can do that?

Thanks in advance,

__________________________________________________
Monique Louise Monteiro
MSc student in Computer Science
Centro de Informatica - CIn - UFPE

I'm developing a back end for GHC and I have the following problem: my program is throwing an "empty list exception" due to head [], and I need to compile GHC with -prof -auto-all in order to see the stack trace when running it with +RTS -xc -RTS. I changed the makefile, but the option +RTS -xc -RTS was not recognized as an available RTS option.
Does anyone have any idea about how I can do that?
Hi,
no direct answer to your question, but a general comment on the original
problem (speaking from bad experience;-): things like "head" have no place
in a Haskell program of any non-trivial size, because of their useless error
messages.
Whenever you read some code that includes an unguarded call to
head (where the non-emptiness of the list is not immediately obvious),
you are looking at a nasty trouble spot - get rid of it now!
At the very least, use something like Control.Exception.assert to guard
uses of partial functions like head, or define your own "safeHead":
safeHead msg [] = error msg
safeHead msg (h:_) = h
and replace unguarded uses of head with calls to safeHead, with a message that identifies the call site.
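For illustration, here is a minimal sketch of both options; the function firstItem and the example values are invented for this sketch.

import Control.Exception (assert)

safeHead :: String -> [a] -> a
safeHead msg []    = error msg
safeHead _   (h:_) = h

-- assert reports the file and line of the failing call site
-- (asserts are dropped under -O unless -fno-ignore-asserts is given)
firstItem :: [Int] -> Int
firstItem xs = assert (not (null xs)) (head xs)

main :: IO ()
main = do
  print (firstItem [1,2,3])
  print (safeHead "lookupDefault: no candidates left" [42 :: Int])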

"Claus Reinke"
no direct answer to your question, but a general comment on the original problem (speaking from bad experience;-): things like "head" have no place in a Haskell program of any non-trivial size, because of their useless error messages.
I must say I liked John Meacham's description of his "magic underscore". My solution to this problem is redefining the troublesome functions as cpp macros, e.g.:

#define BUG(C_,M_) (error ("Program error - '"++C_++"' failed: "++M_++". Location: '"++__FILE__++"' line "++show __LINE__))
#define head (\xs -> case xs of { (x:_) -> x ; _ -> BUG("head","empty list")})

Ideally, I think something like this should be the default behavior for these functions.

-kzm

PS: If anybody wants, I can mail my additional cpp definitions for 'head', 'at' (array indexing, I couldn't work out how to redefine the bang), 'read' and 'fromJust'.

--
If I haven't seen further, it is by standing in the footprints of giants

On Tue, 26 Apr 2005, Ketil Malde wrote:
"Claus Reinke"
writes: no direct answer to your question, but a general comment on the original problem (speaking from bad experience;-): things like "head" have no place in a Haskell program of any non-trivial size, because of their useless error messages.
I must say I liked John Meacham's description of his "magic underscore". My solution to this problem is redefining the troublesome functions as cpp macros, e.g:
#define BUG(C_,M_) (error ("Program error - '"++C_++"' failed: "++M_++". Location: '"++__FILE__++"' line "++show __LINE__))
#define head (\xs -> case xs of { (x:_) -> x ; _ -> BUG("head","empty list")})
Ideally, I think something like this should be the default behavior for these functions.
But something like this should happen for any function, shouldn't it? I mean, ideally when we write a large program, we try to write many functions that each support some general operation and are called many times throughout the program at various levels, and maybe have some potential to fail.

The ideal situation is when I can tell, from the top level error handler output emailed to me by the person who ran into the problem, who called who all the way from "main" to "head", because the key function is going to be one somewhere in the middle. Anything less general than this seems less than ideal to me.

If it's obvious that it isn't good enough for head to announce that the error came from "head", and instead we need to also identify head's caller, then it should be obvious that this requirement is recursive.

Donn Cave, donn@drizzle.com

Donn Cave
Ideally, I think something like this should be the default behavior for these functions.
But something like this should happen for any function, shouldn't it?
Any function where pattern match could fail, yes. (Or should that be any partial function?) The burden is on the programmer to prove that they won't fail, and some of these tend, IME, to be more troublesome than others.
[I want to know] who called who all the way from "main" to "head", because the key function is going to be one somewhere in the middle.
Perhaps. I am told stack backtraces are difficult with non-strict semantics. (Are you being ironic?) File names and line numbers are a compromise that happens to be easy to implement.
Anything less general than this seems less than ideal to me.
I guess that's why we still have to suffer nearly information-free error messages, when the 90% solution is fairly trivial. -kzm -- If I haven't seen further, it is by standing in the footprints of giants

On Wed, 2005-04-27 at 07:45 +0200, Ketil Malde wrote:
[I want to know] who called who all the way from "main" to "head", because the key function is going to be one somewhere in the middle.
Perhaps. I am told stack backtraces are difficult with non-strict semantics.
This is true, at least for _lazy_ implementations of non-strict semantics. The reason is that the (graph) context in which a function application is constructed can be very different to the context in which it is reduced. Partial application of functions introduces a similar problem.

This is not a problem in first-order eager languages because the construction of a (saturated) function application is followed immediately by its reduction. Thus the contexts of construction and reduction are the same.

Debugging tools like Hat, Freya and Buddha "remember" the construction context of an application, so you can get call graphs that reflect the dependencies between symbols in the source code. Thus you can construct a meaningful backtrace etc. Actually, Hat remembers quite a bit more context than Freya and Buddha, but that's another story.

Another way around the problem is to opt for a non-lazy, but still non-strict, evaluation order, such as optimistic evaluation. Think: mostly eager, with the occasional suspension. HsDebug is based on this idea. (Though it doesn't solve the problem with partial applications.)

Cheers,
Bernie.
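A tiny made-up example of the construction/reduction mismatch: the thunk for the bad call to head is built inside mkBad, but it is only forced later inside render, so the evaluation context at the moment of failure never mentions mkBad at all.

-- mkBad returns an unevaluated thunk for 'head xs'; nothing fails here
mkBad :: [Int] -> Int
mkBad xs = head xs

-- the thunk is forced only when 'show' needs the value, long after
-- mkBad has returned, so the failure appears to come from render/main
render :: Int -> String
render n = "value: " ++ show n

main :: IO ()
main = putStrLn (render (mkBad []))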

On Wed, Apr 27, 2005 at 05:15:30PM +1000, Bernard Pope wrote:
On Wed, 2005-04-27 at 07:45 +0200, Ketil Malde wrote:
[I want to know] who called who all the way from "main" to "head", because the key function is going to be one somewhere in the middle.
Perhaps. I am told stack backtraces are difficult with non-strict semantics.
This is true, at least for _lazy_ implementations of non-strict semantics.
The reason is that the (graph) context in which a function application is constructed can be very different to the context in which it is reduced.
Is it that backtraces are difficult, or just require a lot of overhead? It doesn't seem very hard to me, at least in principle. Add a "stack trace" argument to every function. Every time a function is called, the source location of the call is prepended to the "stack trace". I'm not familiar with the implementation of functional programming languages, though. (It seems like if the operation of GHC or GHCI could be parametrized by an arbitrary monad, here a Reader, then transformations like the above wouldn't be so difficult, and compiler compatibility for debuggers wouldn't be so much of an issue.)
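As a rough sketch of what such a hand-written transformation could look like (every name and source location below is invented for illustration):

type CallSite = String            -- e.g. "Main.hs:12 (firstWord)"
type Trace    = [CallSite]

-- a head that takes the explicit "stack trace" argument described above
headT :: Trace -> [a] -> a
headT trace [] = error ("head: empty list\ncall trace:\n"
                        ++ unlines (map ("  " ++) trace))
headT _ (x:_)  = x

firstWord :: Trace -> String -> String
firstWord trace = headT ("firstWord, Main.hs:12" : trace) . words

main :: IO ()
main = putStrLn (firstWord ["main, Main.hs:16"] "")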
Partial application of functions introduces a similar problem.
This is not a problem in first-order eager languages because the construction of a (saturated) function application is followed immediately by its reduction. Thus the contexts of construction and reduction are the same.
Debugging tools like Hat, Freya and Buddha, "remember" the construction context of an application, so you can get call graphs that reflect the dependencies between symbols in the source code. Thus you can construct a meaningful backtrace etc. Actually, Hat remembers quite a bit more context than Freya and Buddha, but that's another story.
Are the following correct?

1. Hat requires users to restrict themselves to a certain small subset of the standard libraries, and to use hmake.
2. Buddha doesn't work with GHC 6.4.
3. I can't find Freya.
4. I can't find HsDebug. Maybe it's part of the fptools cvs repository? But solander.dcs.gla.ac.uk seems to be down :(

But getting a stack backtrace when there is an error should be a pretty basic feature. It's very hard to debug a large program when you can randomly get messages like "*** Exception: Prelude.head: empty list" and have no idea where they came from. So GHC's many features become much less useful when there is no debugger which supports a program that has been written with them.

Furthermore, in my opinion, this sort of error location information is much more valuable for debugging a program than being able to step through its execution, which is the more difficult problem that a lot of the debuggers seem to be aimed at solving. So maybe it would be good if GHC had basic stack trace support built in? It could be a compiler option, which would produce slower but more debuggable code...

Frederik

--
http://ofb.net/~frederik/

On Thu, 2005-09-01 at 14:48 -0700, Frederik Eaton wrote:
Is it that backtraces are difficult, or just require a lot of overhead? It doesn't seem very hard to me, at least in principle. Add a "stack trace" argument to every function. Every time a function is called, the source location of the call is prepended to the "stack trace". I'm not familiar with the implementation of functional programming languages, though.
Adding an extra argument to record the application context is one part of the transformation employed by buddha.
Are the following correct?
1. Hat requires users to restrict themselves to a certain small subset of the standard libraries, and to use hmake
Depends what you mean by standard libraries. As far as I know it supports all the libraries which are specified in the Haskell 98 Report. I believe it also supports some libraries in the new hierarchy that come with the compilers. Also, many libraries can be used by Hat, if you include them in your own source tree. Supporting all libraries that come packed with GHC would be nice, but there are numerous hurdles that need to be jumped over to get to that point. For instance, some libraries do not use portable Haskell code. Also the issue of how libraries are distributed in Haskell is a little bit in flux at the moment, since Cabal is still being polished.
2. Buddha doesn't work with GHC 6.4
True. This is a Cabal issue that I haven't had time to resolve. buddha is limited to even fewer libraries than Hat, so if your program doesn't work with Hat it will probably not work with buddha. I realise this is a big problem.

I'll be the first to admit that buddha is still an experimental system. It works fine for some small programs, and might be useful to beginner programmers. I'm trying to finish my thesis at the moment, so development has stopped, but I have plenty of ideas to try out later on.
3. I can't find Freya
You can get a binary for Sparc off Henrik Nilsson's homepage: http://www.cs.nott.ac.uk/~nhn/ Note that Freya supports a subset of Haskell. From memory, no IO functions, and no classes. Probably none of the extensions of GHC.
4. I can't find HsDebug. Maybe it's part of the fptools cvs repository? But solander.dcs.gla.ac.uk seems to be down :(
I don't know about the status of HsDebug. I believe it is not being maintained. It relies on optimistic evaluation, which is in an experimental branch of GHC.
But getting a stack backtrace when there is an error should be a pretty basic feature. It's very hard to debug a large program when you can randomly get messages like "*** Exception: Prelude.head: empty list" and have no idea where they came from. So GHC's many features become much less useful when there is no debugger which supports a program that has been written with them.
I agree with you. Note that debugging lazy functional languages is a notoriously difficult problem. Work is being done, but the Haskell community is small, and there is a definite shortage of labour. Cheers, Bernie.

... It's very hard to debug a large program when you can randomly get messages like "*** Exception: Prelude.head: empty list" and have no idea where they came from.
As a purely pragmatic suggestion: don't use head, fromJust, last, or any other function that is likely to fail in an impossible-to-find way, at least not directly. In GHC, you can wrap or replace them with irrefutable patterns, which are almost as easy to write and will give you a sensible error message if they fail. Example:

replace  x = head xx      with  (x:_) = xx
replace  x = fromJust mX  with  (Just x) = mX
replace  x = last xx      with  y@(_:_) = xx
                                x = last y

Ben.
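For concreteness, a small sketch of the first two replacements (the bindings and values are made up); when such a pattern binding fails, GHC reports the file and line of the binding rather than an anonymous Prelude message.

xx :: [Int]
xx = []

mX :: Maybe Int
mX = Nothing

-- instead of x = head xx
(x:_) = xx

-- instead of y = fromJust mX
Just y = mX

main :: IO ()
main = print (x, y)
-- fails with a located message along the lines of:
--   Main.hs: Irrefutable pattern failed for pattern x : _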

On Fri, Sep 02, 2005 at 05:10:35PM +1000, Ben Lippmeier wrote:
... It's very hard to debug a large program when you can randomly get messages like "*** Exception: Prelude.head: empty list" and have no idea where they came from.
As a purely pragmatic suggestion: don't use head, fromJust, last, or any other function that is likely to fail in an impossible-to-find way, at least not directly.
In GHC, you can wrap or replace them with irrefutable patterns which are almost as easy to write, and will give you a sensible error message if they fail.
That's a good suggestion. One can also use the C preprocessor to get decent error messages:

#define fromJust (\m_fromJust_funny_name -> case m_fromJust_funny_name of {Nothing -> bug ("fromJust error at "++__FILE__++":"++show (__LINE__ :: Int)++" compiled "++__TIME__++" "++__DATE__); Just x -> x})

Due to the use of the C preprocessor, this is likely to fail if you've got variable names something like x', but apart from that it works nicely, and allows you to do stuff like

foo = head . tail . sort . head

which could be ugly when written in terms of irrefutable patterns.

--
David Roundy
http://www.darcs.net

Bernard Pope
On Thu, 2005-09-01 at 14:48 -0700, Frederik Eaton wrote: (snip)
Are the following correct?
1. Hat requires users to restrict themselves to a certain small subset of the standard libraries, and to use hmake
Depends what you mean by standard libraries. As far as I know it supports all the libraries which are specified in the Haskell 98 Report. I believe it also supports some libraries in the new hierarchy that come with the compilers. Also, many libraries can be used by Hat, if you include them in your own source tree. Supporting all libraries that come packed with GHC would be nice, but there are numerous hurdles that need to be jumped over to get to that point. For instance, some libraries do not use portable Haskell code. Also the issue of how libraries are distributed in Haskell is a little bit in flux at the moment, since Cabal is still being polished.
This doesn't really have anything to do with Cabal as far as I know. Hat comes with pre-translated libraries for a subset of the GHC libraries. It's true that the libraries that come with the compilers may change in the future, partly due to Cabal, but I don't think this is the reason that Hat doesn't come with all the libraries. Hat doesn't even use Cabal, AFAIK, but hmake.
2. Buddha doesn't work with GHC 6.4
True. This is a cabal issue, that I haven't had time to resolve. buddha is limited to even fewer libraries than Hat.
Why is this a Cabal issue? Are you interested in adding Buddha support to Cabal?

peace,

isaac

Isaac Jones
1. Hat requires users to restrict themselves to a certain small subset of the standard libraries, and to use hmake
Also the issue of how libraries are distributed in Haskell is a little bit in flux at the moment, since Cabal is still being polished.
This doesn't really have anything to do with Cabal as far as I know. Hat comes with pre-translated libraries for a subset of the GHC libraries. It's true that the libraries that come with the compilers may change in the future, partly due to Cabal, but I don't think this is the reason that Hat doesn't come with all the libraries. Hat doesn't even use Cabal, AFAIK, but hmake.
Well, the hope is that, eventually, Hat should be able to take any Cabal-ised library and transparently build it for tracing. Or maybe it will be Cabal that will support building for tracing as one "way" amongst others (profiling, etc). In any case, the point is that Hat pre-dates Cabal and so has no support for it, that Cabal is still under development, and that eventually there should be a good story for using Cabal+Hat together, but it isn't there right now.
2. Buddha doesn't work with GHC 6.4
True. This is a cabal issue, that I haven't had time to resolve. buddha is limited to even fewer libraries than Hat.
Why is this a Cabal issue? Are you interested in adding Buddha support to Cabal?
I think what Bernie is referring to is that ghc-pkg-6.4 uses an input file format very similar to Cabal's file format, for registering a new package. I would guess that Buddha needs to register a "buddha" package with ghc, but for now doesn't have the right syntax. The file formats of Cabal and ghc-pkg are so similar that many people think they are the same thing, hence he can be forgiven for referring to it as a Cabal issue, rather than a ghc-pkg issue. Regards, Malcolm

Malcolm Wallace
Isaac Jones
writes: 1. Hat requires users to restrict themselves to a certain small subset of the standard libraries, and to use hmake
Also the issue of how libraries are distributed in Haskell is a little bit in flux at the moment, since Cabal is still being polished.
This doesn't really have anything to do with Cabal as far as I know. Hat comes with pre-translated libraries for a subset of the GHC libraries. It's true that the libraries that come with the compilers may change in the future, partly due to Cabal, but I don't think this is the reason that Hat doesn't come with all the libraries. Hat doesn't even use Cabal, AFAIK, but hmake.
Well, the hope is that, eventually, Hat should be able to take any Cabal-ised library and transparently build it for tracing. Or maybe it will be Cabal that will support building for tracing as one "way" amongst others (profiling, etc). In any case, the point is that Hat pre-dates Cabal and so has no support for it, that Cabal is still under development, and that eventually there should be a good story for using Cabal+Hat together, but it isn't there right now.
I think the latter is the way to go: add a "way" to cabal to build hat-enabled libraries. Cabal has all the information: the list of source files, extensions, which compiler you're building for (does that matter to hat?). This would be a great feature to add to Cabal :)

But we don't really yet have a way to build a set of libraries in a particular "way". It's _less_ painful now to build profiling libraries, but you still have to go through and build each one. Maybe cabal-get can help with that.

One problem for GHC is that ghc-pkg doesn't have any sense of "way" afaik... if I build package A without profiling support, and package B depends on package A, cabal's configure stage for package B can't detect whether or not A is built with profiling support... well, maybe it could go look for the profiling libraries or something. We might have a similar problem w/ hat-enabled libraries, and maybe slightly worse...

I know that GHC profiling libraries can live alongside non-profiling versions; if you build package B with profiling, GHC just looks for the right version of package A's library in a standard place. But for Hat, I'd guess we want to keep a separate hierarchy for Hat-enabled libraries, maybe even a different package database (hmmm!). In fact, that's probably what should happen for profiling and any other "way" which requires that all packages be built the same "way".

peace,

isaac

On Mon, 2005-09-05 at 11:12 +0100, Malcolm Wallace wrote:
Why is this a Cabal issue? Are you interested in adding Buddha support to Cabal?
I think what Bernie is referring to is that ghc-pkg-6.4 uses an input file format very similar to Cabal's file format, for registering a new package. I would guess that Buddha needs to register a "buddha" package with ghc, but for now doesn't have the right syntax. The file formats of Cabal and ghc-pkg are so similar that many people think they are the same thing, hence he can be forgiven for referring to it as a Cabal issue, rather than a ghc-pkg issue.
Malcolm is right. I have a ghc-pkg problem, not a cabal one. I was looking in the wrong place (cabal docs), when I should have been looking in the ghc docs. Thanks Malcolm. Cheers, Bernie.

On Thu, 01 Sep 2005, Frederik Eaton
But getting a stack backtrace when there is an error should be a pretty basic feature. It's very hard to debug a large program when you can randomly get messages like "*** Exception: Prelude.head: empty list" and have no idea where they came from.
From the GHC FAQ:
My program is failing with head [], or an array bounds error, or some other random error, and I have no idea how to find the bug. Can you help?

Compile your program with -prof -auto-all (make sure you have the profiling libraries installed), and run it with +RTS -xc -RTS to get a “stack trace” at the point at which the exception was raised. See Section 4.14.4, “RTS options for hackers, debuggers, and over-interested souls” for more details.

I tried this out under GHC 6.4/Linux and got a segmentation fault instead of a stack trace. Under GHC 6.2.2 it seemed to work, though.

--
/NAD

Nils Anders Danielsson
My program is failing with head [], or an array bounds error, or some other random error, and I have no idea how to find the bug. Can you help?
Compile your program with -prof -auto-all (make sure you have the profiling libraries installed), and run it with +RTS -xc -RTS to
I also have experienced - ahem - varying results with -xc. My solution is to use 'ghc -cpp' instead, and something like the following:

import Prelude hiding (head,read)  /* ugly, but a real function would block subsequent imports */

#define BUG(C_,M_) (error ("Program error - '"++C_++"' failed: "++M_++". Location: "++__FILE__++" line: "++ show __LINE__))
#define head (\xs -> case xs of { (x:_) -> x ; _ -> BUG("head","empty list")})
#define at (let {at_ (y:_) 0 = y; at_ (y:ys) n = if n>0 then at_ ys (n-1) else BUG("at","negative index"); at_ _ _ = BUG ("at","index too large")} in \a x -> at_ a x)
#define read (\s -> case [ x | (x,t) <- reads s, ("","") <- lex t] of { [x] -> x ; [] -> BUG("read","no parse"); _ -> BUG("read","ambiguous parse")})
#define fromJust (\x -> case x of Just a -> a; Nothing -> BUG("fromJust","Nothing"))
#define undefined (error ("Hit 'undefined' in "++__FILE__++", "++show __LINE__))

This redefines a bunch of "difficult" functions to report file name and line number, instead of just an anonymous error message. It won't work for (infix, non-alpha) operators -- like array indexing -- or identifiers with apostrophes, unfortunately.

-k
--
If I haven't seen further, it is by standing in the footprints of giants
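For what it's worth, here is a minimal sketch of a file using two of these macros; the module name, the line number and the exact wording of the failure message are illustrative guesses. Note that the import has to come before the #define, otherwise cpp would also rewrite the name inside the hiding clause.

{- compile with: ghc -cpp Main.hs -}
import Prelude hiding (head)

#define BUG(C_,M_) (error ("Program error - '"++C_++"' failed: "++M_++". Location: "++__FILE__++" line: "++ show __LINE__))
#define head (\xs -> case xs of { (x:_) -> x ; _ -> BUG("head","empty list")})

main :: IO ()
main = print (head ([] :: [Int]))
-- the call above then fails with something like:
--   Program error - ... failed: empty list. Location: Main.hs line: 8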

Hello Nils,

Friday, September 02, 2005, 10:47:05 AM, you wrote:

NAD> Compile your program with -prof -auto-all (make sure you have the
NAD> I tried this out under GHC 6.4/Linux and got a segmentation fault
NAD> instead of a stack trace. Under GHC 6.2.2 it seemed to work, though.

this error is already fixed in current pre-6.4.1 version

--
Best regards,
Bulat                          mailto:bulatz@HotPOP.com

On Fri, Sep 02, 2005 at 04:40:05PM +0400, Bulat Ziganshin wrote:
Hello Nils,
Friday, September 02, 2005, 10:47:05 AM, you wrote:
NAD> Compile your program with -prof -auto-all (make sure you have the
NAD> I tried this out under GHC 6.4/Linux and got a segmentation fault NAD> instead of a stack trace. Under GHC 6.2.2 it seemed to work, though.
this error is already fixed in current pre-6.4.1 version
I'm using a 2005/9/3 version of 6.4.1 and running into situations where the "stack trace" has function A calling function B, but when I look at the code, A never calls B. Is this normal? Is it some side-effect of laziness? It sure makes the traces a lot less useful.

Frederik

--
http://ofb.net/~frederik/

Quoting Nils Anders Danielsson
On Thu, 01 Sep 2005, Frederik Eaton
wrote: But getting a stack backtrace when there is an error should be a pretty basic feature. It's very hard to debug a large program when you can randomly get messages like "*** Exception: Prelude.head: empty list" and have no idea where they came from.
From the GHC FAQ:
My program is failing with head [], or an array bounds error, or some other random error, and I have no idea how to find the bug. Can you help?
Compile your program with -prof -auto-all (make sure you have the profiling libraries installed), and run it with +RTS -xc -RTS to get a “stack trace” at the point at which the exception was raised. See Section 4.14.4, “RTS options for hackers, debuggers, and over-interested souls” for more details.
I tried this out under GHC 6.4/Linux and got a segmentation fault instead of a stack trace. Under GHC 6.2.2 it seemed to work, though.
I was trying to debug a smallish program where I was getting this exact error and the trick with profiling did "work" on my system, but I remember it being almost useless for me. What did end up working for me was:

myhead :: [a] -> String -> a
myhead [] s = error s
myhead xs _ = head xs

And then I did a M-x occur head <RET>, replaced all the calls to head with |myhead xs "myhead callsite n"| and incremented n appropriately.

This technique is pedestrian, but it generalizes quite well and it will work in any situation given enough time to do all the search/replace. There are certainly better methods and it's not a replacement for a real debugger, but this one is easy for a beginner to come up with and it does work. Maybe you'll find it useful.

Jason

On Fri, 2 Sep 2005 dagit@eecs.oregonstate.edu wrote: ...
I was trying to debug a smallish program where I was getting this exact error and the trick with profiling did "work" on my system, but I remember it being almost useless for me. What did end up working for me was:

myhead :: [a] -> String -> a
myhead [] s = error s
myhead xs _ = head xs
And then I did a M-x occur head <RET>, replaced all the calls to head with |myhead xs "myhead callsite n"| and incremented n appropriately.
This technique is pedestrian, but it generalizes quite well and it will work in any situation given enough time to do all the search/replace. There are certainly better methods and it's not a replacement for a real debugger, but this one is easy for a beginner to come up with and it does work.
Just more or less as an aside, at its origin in April (!) this thread didn't mention any debugger - the question was just how to build ghc so that a stack trace would come out. A real debugger is no replacement for that (because you have to be on hand and know how to repeat the problem to get anywhere with a debugger), but that's just my opinion. Donn Cave, donn@drizzle.com

Just more or less as an aside, at its origin in April (!) this thread didn't mention any debugger - the question was just how to build ghc so that a stack trace would come out. A real debugger is no replacement for that (because you have to be on hand and know how to repeat the problem to get anywhere with a debugger), but that's just my opinion.
Donn Cave, donn@drizzle.com
I agree. Some could argue that "stack traces are no replacement for a debugger" but it is also true that "a debugger is no replacement for stack traces". :) I will try the "+RTS -xc -RTS" method, many thanks to everybody for the advice. Frederik -- http://ofb.net/~frederik/

Thanks, Ketil, your suggestion really helped me! Thanks to Claus for the tips!
On 4/26/05, Ketil Malde
"Claus Reinke"
writes: no direct answer to your question, but a general comment on the original problem (speaking from bad experience;-): things like "head" have no place in a Haskell program of any non-trivial size, because of their useless error messages.
I must say I liked John Meacham's description of his "magic underscore". My solution to this problem is redefining the troublesome functions as cpp macros, e.g:
#define BUG(C_,M_) (error ("Program error - '"++C_++"' failed: "++M_++". Location: '"++__FILE__++"' line "++show __LINE__))
#define head (\xs -> case xs of { (x:_) -> x ; _ -> BUG("head","empty list")})
Ideally, I think something like this should be the default behavior for these functions.
-kzm
PS: If anybody wants, I can mail my additional cpp definitions for 'head', 'at' (array indexing, I couldn't work out how to redefine the bang), 'read' and 'fromJust'. -- If I haven't seen further, it is by standing in the footprints of giants
--
________________________________
Monique Louise B. Monteiro
MSc Student in Computer Science
Center of Informatics
Federal University of Pernambuco
participants (14)
- Ben Lippmeier
- Bernard Pope
- Bulat Ziganshin
- Claus Reinke
- dagit@eecs.oregonstate.edu
- David Roundy
- Donn Cave
- Frederik Eaton
- Isaac Jones
- Ketil Malde
- Malcolm Wallace
- Monique Louise
- Monique Louise de Barros Monteiro
- Nils Anders Danielsson