
On 11/9/06, Brandon Moore wrote:
Looks nice, especially if you're just getting started.
The overall structure looks good; I've just made a bunch of little changes to the details. Mostly I found repeated patterns to replace with library functions or extract as helper functions.
Thanks very much! I really appreciate you taking the time to look through this code and perform the refactoring you did. Now, I hope you don't mind me asking a lot of questions about it :)
Getting a little fancier, defining the fold over your expression type captures the recursion pattern in eval and generate. It's fairly handy for defining constant folding too, if you want that.
Do you have any tips for recognizing these patterns? It's still hard for me to see them. Is there a general way to think about them? Comparing the two pieces of code, I can see how the structure of the recursion was similar, but not the same. Is there a "pattern" for which pieces are common and which are unique? For example, I can think of foldl as "folding" a function over a list, with a given base case. Is there something similar for thinking about recursion?
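For instance, is the general recipe "one replacement function per constructor, with the recursion over the recursive fields handled once inside the fold"? To check my understanding I tried writing it out for a made-up Tree type (all of the names below are just mine for the example):

data Tree a = Leaf a | Branch (Tree a) (Tree a)

foldTree :: (a -> b) -> (b -> b -> b) -> Tree a -> b
foldTree leaf branch = go
  where go (Leaf x)     = leaf x                -- Leaf replaced by 'leaf'
        go (Branch l r) = branch (go l) (go r)  -- Branch replaced by 'branch'

-- two different traversals, neither needing explicit recursion
sumTree :: Num a => Tree a -> a
sumTree = foldTree id (+)

depth :: Tree a -> Int
depth = foldTree (const 1) (\l r -> 1 + max l r)

Is that the right way to think about it, or is there more to spotting these patterns than counting constructors?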
foldExpression val stmt = f
  where f (Val n)            = val n
        f (Statement op l r) = stmt op (f l) (f r)
This is great. It took me a while to realize that 'val' is a function for translating values, and 'stmt' is for translating statements. Really cool!
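To test my understanding, I tried writing eval with the same fold (this is just my guess, assuming Val wraps an Integer and the operators are the same Plus/Minus/... constructors that generate uses below). Is this roughly what yours looks like?

-- 'id' handles Val; applyOp turns each operator into the matching
-- function, and the fold supplies the already-evaluated l and r.
eval :: Expression -> Integer
eval = foldExpression id applyOp
  where applyOp Plus  = (+)
        applyOp Minus = (-)
        applyOp Mult  = (*)
        applyOp Div   = div
        applyOp Mod   = mod
        applyOp Pow   = (^)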
number = fmap (Val . read) (many1 digit) <?> "number"
How is this working? I read it as 'map (Val (read)) (string)' ('map', because it's applying the List version of fmap). Is that correct? How does 'read' get the string argument? I would assume read is evaluated, and then its result and the string would be passed as arguments to Val. Clearly that's not right - can you correct me?
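To see the data flow I tried rewriting it with do-notation (using the same Parsec imports as the quoted code). If I have it right, many1 digit produces the String, read turns that into the number, and Val wraps the result, so (Val . read) gets applied to the parser's result - and the fmap here would be the Parser one, not the list one. Is this version equivalent?

number' :: Parser Expression
number' = (do digits <- many1 digit           -- digits :: String
              return (Val (read digits)))     -- read digits :: Integer, then Val
          <?> "number"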
-- Takes an AST and turns it into a byte code list
generate = foldExpression generateVal (\op l r -> l ++ r ++ generateOp op)
  where generateVal n = if abs n > 2^(2*8)-1
                          then [CONST n]
                          else [LCONST n]
        generateOp op = case op of
                          Plus  -> [ADD]
                          Minus -> [SUB]
                          Mult  -> [MUL]
                          Div   -> [DIV]
                          Mod   -> [MOD]
                          Pow   -> [POW]
This is what clued me into how foldExpression was working. I especially like how the lambda works to generate the correct bytecode for the operator, and how "l" and "r" are already recursively evaluated by the "f" function returned from foldExpression. I just wonder how I'll ever spot similar patterns ;)
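Along those lines, is the constant folding you mentioned something like this? (Just a guess, reusing the quoted foldExpression and the same Operator constructors; here the fold rebuilds an Expression instead of producing a number or a bytecode list.)

constFold :: Expression -> Expression
constFold = foldExpression Val fold1
  where fold1 op (Val l) (Val r) = Val (applyOp op l r)  -- both sides are literals: compute now
        fold1 op l       r       = Statement op l r      -- otherwise rebuild the node unchanged
        applyOp Plus  = (+)
        applyOp Minus = (-)
        applyOp Mult  = (*)
        applyOp Div   = div
        applyOp Mod   = mod
        applyOp Pow   = (^)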
eval_tests = suiteResults (checkResult (eval . parse))
generate_tests = suiteResults (showResult (generate . parse))
interpret_tests = suiteResults (checkResult (fromIntegral . interpret [] . compile))
The above are all more examples of partially applied functions and function composition. I understand the first concept, but function composition escapes me somehow. What are the rules for how partially applied functions get their arguments when they are eventually supplied? For example, in 'interpret_tests' I can see that the function (fromIntegral . interpret [] . compile) gets applied to the statement via 'checkResult', but it seems to me that fromIntegral should get the argument (i.e. because I read it as '(fromIntegral (interpret (compile)))'). Clearly, I'm wrong. Do arguments get consumed by partially applied functions regardless of their "depth"?

Thanks again for your time looking at this code and maybe even answering these questions. I've already learned a ton just from seeing the refactor.

Justin
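P.S. Here is the toy example I have been staring at while trying to answer my own composition question. It seems to say that the argument enters at the rightmost function first, and that a partially applied function in the middle just waits for the value flowing through - so in interpret_tests the test input would go to compile, then to interpret already given its [] argument, and fromIntegral would only see the final number. I don't fully trust my reading yet, though:

addPair :: Int -> Int -> Int
addPair x y = x + y

-- negate runs first, its result goes to the partially applied
-- 'addPair 10', and that result goes to show
pipeline :: Int -> String
pipeline = show . addPair 10 . negate

main :: IO ()
main = putStrLn (pipeline 3)   -- show (addPair 10 (negate 3)) == "7"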