
On 20.09.20 at 14:02, Ignat Insarov wrote:
> The Darcs code you show illustrates the point Chris Done speaks for as well. Observe top level names: `displayPatch`, `commuteConflicting`, `cleanMerge` — quite German!
Yes, top-level functions are typical candidates for longer names. I am not opposed to the "german notation" in any way; I just don't think it is always appropriate to use this style for every variable, including function parameters, as the blog suggests.
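To illustrate with a hypothetical sketch (nothing from the actual Darcs sources): the top-level name can be as descriptive as you like, while the locally scoped parameters keep the short, conventional names that longer ones would only obscure.

    -- Hypothetical example: a descriptive top-level name, but short
    -- conventional names (p, f, x) for the locally scoped parameters.
    applyToMatchingElements :: (a -> Bool) -> (a -> a) -> [a] -> [a]
    applyToMatchingElements p f = map (\x -> if p x then f x else x)

    -- applyToMatchingElements even (* 2) [1, 2, 3, 4] == [1, 4, 3, 8]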
> Then there is `ctxAddInvFL` and `mapFL_FL`, but those are from other modules.
Well, sometimes you have to compromise between legibility and conciseness, especially when distinguishing between variants. The FL and RL sequence types are ubiquitous in our code base, and the convention of suffixing a function name with them to indicate what type of parameter it takes is well established. I wouldn't want to write out "monoidConcat" instead of "mconcat" everywhere. (Or would that have to be "semigroupConcat" nowadays? Thankfully we could avoid bikeshedding this to death...) Or "foldRight" or even "foldAssociatingRightwards" instead of "foldr".
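For context, a simplified sketch of that convention (the real Darcs types additionally carry type witnesses, which I leave out here): FL is a forward sequence, RL a reverse one, and the FL_FL suffix of mapFL_FL records that the function both consumes and produces an FL.

    -- Simplified sketch only; the actual Darcs definitions use GADTs
    -- with type witnesses.
    infixr 5 :>:
    infixl 5 :<:

    data FL a = NilFL | a :>: FL a    -- forward list
    data RL a = NilRL | RL a :<: a    -- reverse list

    mapFL_FL :: (a -> b) -> FL a -> FL b
    mapFL_FL _ NilFL      = NilFL
    mapFL_FL f (x :>: xs) = f x :>: mapFL_FL f xs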
> Finally, I tried to find out what `Prim` stands for; I went as far as the index of `darcs` on Hackage[3], but no luck. And `prim` is the most frequent word in the module, with 125 occurrences (case-normalized). Primitive? Primary? Prime? Primavera?
Your first guess was correct ;-) Though I doubt that knowing this helps you understand the code better. Knowing that 'log' is short for 'logarithm' doesn't help you understand a formula containing 'log' unless you already know what 'logarithm' means. Long "german" names don't relieve you from the task of familiarizing yourself with the problem domain and its concepts and conventions.

As regards typesetting and unicode symbols, I am not a great fan of that stuff. IMO it makes no sense to mimic mathematical style in any literal sense. The point of a formula is not that it contains fancy special notation. Rather, the point is to avoid distracting the reader with irrelevant details. The only difference between a mathematical formula and a (functional) program is that the latter can be (efficiently) executed by a machine, *in addition* to being read and understood by humans.

Besides, a lot of notational conventions in mathematics are not well suited to programming or to formally proving things. Many (if not most) constructs that traditionally have special notation in math (e.g. sums, integrals) are subsumed by the concept of a higher-order function (a small sketch of this is appended at the end of this message). This has been well known for several decades now, but the mathematical community is extremely conservative with its established notation. My personal explanation for this phenomenon is that all the existing work in math (books, papers) serves as a giant "standard library for the math language", and changing established notation would mean a huge effort in refactoring (i.e. re-writing) most of that existing work.

That said, there are cases where a graphical notation is actually better suited to abstracting away irrelevant details than the equivalent textual formula. The best-known example of this is category theory with its arrow diagrams. As I found out a few years ago, patch theory is another instance where an arrow diagram is often more succinct and less cluttered than the textual formula. Ever since, I have wished I could include such diagrams directly in the code. Here is an example where I used ASCII graphics to explain a fairly complicated piece of code:

https://hub.darcs.net/darcs/darcs-screened/browse/src/Darcs/Repository/Merge...

This crutch clearly has limits: to picture a three-way merge you need to move from squares to cubes, which gets quite annoying to do in ASCII.

Cheers
Ben
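To make the higher-order-function remark above concrete, here is a minimal sketch using only the Prelude (the names bigSum and f are purely illustrative, nothing Darcs-specific):

    -- The summation sign over i from m to n of f(i) is subsumed by an
    -- ordinary higher-order function: a fold over the range [m .. n].
    bigSum :: Num a => Int -> Int -> (Int -> a) -> a
    bigSum m n f = foldr ((+) . f) 0 [m .. n]

    -- bigSum 1 10 (\i -> i * i) == 385, the sum of the first ten squares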