Just to clarify the point of my earlier comment...
I suggest that the "separation of concerns" (SOC) principle has many applications. A common use shows up in the advice to represent each distinct concept in exactly one place in the code, and to do so in a way that supports orthogonality (the freedom to mix and match).
In this case, I used it to separate the thought process of designing the code from the lexical layout of the code.
I have no business legislating the order in which someone else thinks about the cases (sometimes more than two!) encountered in decomposing a problem. However, in my experience, the order in which I think about parts of the code, and the order in which they are laid out in the source file, are separate concerns. And I have often found it useful to consider them separately.
For example, in some problems (and language implementations) it may help performance to ensure that the most frequent case is considered first, especially when there are multiple cases to consider or when the distinguishing conditions are costly to evaluate.
I find that making my guards (conditions) explicit allows me the freedom to order the alternatives in whatever way I find useful, without having to worry about introducing a defect in the code.
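For instance (a made-up sketch of my own, not code from the earlier posts), with both guards written out explicitly the two alternatives can be listed in either order without changing the meaning:

    -- illustrative only: each alternative carries its own explicit guard,
    -- so swapping the two clauses gives exactly the same function
    myLength :: [a] -> Int
    myLength xs
      | not (null xs) = 1 + myLength (tail xs)  -- the (perhaps more frequent) recursive case, listed first
      | null xs       = 0                       -- the base case, listed last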
Incidentally, I also find it interesting to see the subtle effects that our terminology has on the way we approach problems. Thinking of a list as "it may be empty or not" takes my thoughts in a different direction than if I think "it may have a head or not".
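As a purely illustrative pair (sumA and sumB are names I just made up), the first framing tends to lead me toward an emptiness test, the second toward a pattern that names the head:

    -- "it may be empty or not": test for emptiness, then reach for head and tail
    sumA :: [Int] -> Int
    sumA xs = if null xs then 0 else head xs + sumA (tail xs)

    -- "it may have a head or not": let the pattern name the head directly
    sumB :: [Int] -> Int
    sumB (x:xs) = x + sumB xs
    sumB []     = 0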
By all means, think about your recursive functions any way you wish! Just please don't tell me that I must place them in a specific order in my code.
Regards,
-jn-
On Thu, Feb 19, 2015 at 3:02 AM, Dudley Brooks <dbrooks@runforyourlife.org> wrote:
On 2/18/15 5:29 PM, Mike Meyer wrote:
On Wed, Feb 18, 2015 at 7:16 PM, Dudley Brooks <dbrooks@runforyourlife.org> wrote:
Hmm. Well, I'd say that that's a feature of, specifically, Haskell's pattern-matching strategy and list-description syntax, rather than of recursion in general or the structure of this particular problem. In other languages with recursion you might have no choice except to start with the base case, even for this problem, or else you'd get the same kind of error you mention below (depending on the language). I think it's good when you're *learning* recursion to always start with the base case(s).
I disagree that this is a Haskell-specific feature. Any else-if like structure will have this property, no matter what language it's in. That Haskell provides a syntax as part of the function declaration is special, but that doesn't let you avoid the else-if construct when the problem requires it.
I don't understand. I don't believe I said anything about avoiding else-if, or about not avoiding it. But I'm not quite sure what you mean. Are you referring to
    if condition1
    then instruction1
    elseif condition2
    then instruction2
?
But what is condition1? Wouldn't it probably be the base case, and instruction1 the procedure on the base case?
Is there something special about "elseif" that guarantees that instruction1 *before* it won't crash if condition1 isn't the base case??? I'm probably totally missing your intention here.
But anyway, isn't it actually just Haskell's syntax "x:xs" that lets the pattern be tested and bypassed without crashing on an empty list, so that it *can* fall through to the base case at the end? If Haskell only had the syntax "head xs", then that *would* crash on the empty list if the empty list had not previously been taken care of as a base case, as Joel Neely pointed out.
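To make that concrete (a sketch of my own; sumP and sumBad are made-up names, not Mr Neely's code):

    -- the (x:xs) pattern is simply tried and bypassed on [], so control
    -- falls through safely to the base case at the end
    sumP :: [Int] -> Int
    sumP (x:xs) = x + sumP xs
    sumP []     = 0

    -- written with head/tail and no prior handling of the empty list,
    -- the recursion eventually applies head to [] and crashes:
    -- sumBad [1,2,3]  ==>  *** Exception: Prelude.head: empty list
    sumBad :: [Int] -> Int
    sumBad xs = head xs + sumBad (tail xs)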
I didn't mean that *no* other language might have such a syntactical construction. (I didn't mean "specifically Haskell", I meant "specifically the pattern matching". Sorry about the ambiguity.) So if some other language has such a construction, then it's still the *syntax* that lets you cheat on the base case; it's not the structure of the problem itself nor the basic underlying notion of recursion.
I would also argue that in Mr Neely's first example, while the *explicit* base case [] is at the end, nevertheless the first line still *implicitly* refers to the base case: pattern matching on "x:xs" says "*if* the data has the structure x:xs", i.e. "if it is not a bunch of other stuff ... including *not the empty list*!". Certainly you couldn't merely do the recursive step first without a condition like this particular one. The reason this syntax *seems* to let you avoid thinking about the base case first is because secretly it says "only try this step if we're not looking at the base case"!
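Spelled out (rev and rev' are stand-in names, not the original example), the clause that matches on x:xs behaves like a case expression whose first branch is taken only when the list is non-empty:

    -- the usual two-clause form
    rev :: [a] -> [a]
    rev (x:xs) = rev xs ++ [x]
    rev []     = []

    -- the same function with the hidden test written out
    rev' :: [a] -> [a]
    rev' ys = case ys of
                (x:xs) -> rev' xs ++ [x]   -- "only if ys is not the empty list"
                []     -> []               -- the base case, reached only when the first pattern fails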
I was a math major, not a CS major, so I'm also prejudiced in favor of base case first. And, as stated above, I think we *are* actually *considering* the base case first! (We're merely putting off telling what to *do* with that base case.) I think that the "syntactic sugar" of some languages obscures (intentionally, for purposes of convenience) what's really happening mathematically.
It may be my fondness for proof by induction, but I think doing the base case first is a good idea for another reason. The code for the recursive cases assumes that you can correctly handle all the "smaller" cases. If that's wrong because some assumption about the base case turns out to be false when you actually write it, then you have to rewrite the recursive cases for the correct base case. So it's better to make sure your base case is going to work before you start writing the code that's going to use it.
--
Beauty of style and harmony and grace and good rhythm depend on simplicity. - Plato