
Hello cafe,

This is just a small thought, but it's been bugging me. We have these things called type classes for a reason (I like to think). When making a new data type 'Data', it is not productive to avoid type classes such as 'Show' and export a 'showData' function. Examples of what I'm talking about include showHtml, showTrie, showInstalledPackageInfo...

I know the default derived (and thus generally accepted) instance of Show isn't pretty, but to me that just means we need either more methods within the Show type class or to start using the prettyclass package more. If the problem is an API issue, let's fix Pretty or Show. But this show* stuff should disappear in the long run.

Tom
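To make the contrast concrete, here is a minimal sketch (the 'Color' type and its names are invented for illustration; the thread's real examples are showHtml, showTrie, and showInstalledPackageInfo):

```haskell
-- A small example data type.
data Color = Red | Green | Blue

-- The style the post argues against: a standalone, ad-hoc function.
-- Generic code (print, show on containers, ...) cannot find it.
showColor :: Color -> String
showColor Red   = "red"
showColor Green = "green"
showColor Blue  = "blue"

-- The style the post argues for: implement the type class, so any
-- Show-polymorphic code works with Color for free.
instance Show Color where
  show Red   = "red"
  show Green = "green"
  show Blue  = "blue"

main :: IO ()
main = print [Red, Green, Blue]  -- works only via the instance
</imports>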

On Fri, 26 Dec 2008, Thomas DuBuisson wrote:
Hello cafe, This is just a small thought, but it's been bugging me. We have these things called type classes for a reason (I like to think). When making a new data type 'Data', it is not productive to avoid type classes such as 'Show' and export a 'showData' function.
Examples of what I'm talking about include showHtml, showTrie, showInstalledPackageInfo...
I know the default derivation (and thus generally accepted) instance of Show isn't pretty, but that just means to me that we need either more methods within the Show type class or start using the prettyclass package more.
If the problem is an API issue, let's fix Pretty or Show. But this show* stuff should disappear in the long run.
I disagree:

http://www.haskell.org/haskellwiki/Slim_instance_declaration
http://www.haskell.org/pipermail/libraries/2006-September/005791.html

There is not much to fix in Show (except the showList issue), since it is for showing Haskell expressions. One could, however, blame developers for calling pretty-printing functions 'show*'. :-)

On Fri, 2008-12-26 at 11:51 +0000, Thomas DuBuisson wrote:
Hello cafe, This is just a small thought, but its been bugging me. We have these things called type classes for a reason (I like to think).
Type classes were invented for two reasons:

1) To imitate mathematical convention. Addition, in full, is written as

    x +_{A} y

where A is a mathematical structure supplying addition. However, the convention is that the subscript may be omitted `when no ambiguity may arise'. Programming languages (generally) take this and run with it, allowing

    x + y

to mean (depending on the language) pretty much anything. Type classes are an ingenious step back toward the mathematical convention, where the operation + must come from some complete structure.

2) To allow conversion from structured data into strings to be treated as a single operation. Most languages support this in some form, but I am increasingly failing to see why. There are usually several different ways in which a given piece of structured data can meaningfully be `shown'; in languages which try to do the right thing when given

    print "string", 'c', (2 + 2), [true, false, :maybe]

you *still* end up defining (multiple!) special-purpose output or conversion-to-string functions so you can print the same data in multiple ways.

I think making Show a type class was a mistake.

jcc
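The "several different ways to show the same data" point can be illustrated with a short sketch (function names here are invented):

```haskell
import Data.List (intercalate)

-- Two equally reasonable string renderings of the same value;
-- a single privileged 'show' cannot be both at once.
csv :: [Int] -> String
csv = intercalate "," . map show   -- e.g. "1,2,3"

bracketed :: [Int] -> String
bracketed = show                   -- e.g. "[1,2,3]"
```
</imports>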

I don't think that making Show a type class was a mistake. I think
that we have long since overloaded the meaning of Show and made it
ambiguous. There are multiple distinct reasons people use Show, and
this gets confusing. It would be good if we as a community tried to
nail down these different meanings that people tend to attach to Show
and fork out new type classes that each encompass those meanings.
Text is useful and often ignored as a means of debugging, inspecting,
logging, and serializing.
Off the top of my head, I would say that the traditional meaning of
Show could be changed to Serial, where serial encompasses both Read
and Show -- possibly we could find a more efficient read function,
several have been proposed. Then a separate class could be made for
HumanReadable (or Loggable) where the point would be that we write
something that can be read by humans without conforming to a
particular grammar that Haskell could read back in.
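A minimal sketch of that split might look like this (the class and method names are hypothetical, not an existing library API):

```haskell
-- Machine round-trippable: replaces the Read/Show pairing,
-- with a total, checkable parse instead of a partial 'read'.
class Serial a where
  serialize   :: a -> String
  deserialize :: String -> Maybe a

-- Free-form, for humans and logs; no grammar promised.
class HumanReadable a where
  describe :: a -> String

data Temp = Temp Double

instance Serial Temp where
  serialize (Temp d) = show d
  deserialize s = case reads s of
                    [(d, "")] -> Just (Temp d)
                    _         -> Nothing

instance HumanReadable Temp where
  describe (Temp d) = show d ++ " degrees"
```
</imports>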
-- Jeff
On Fri, Dec 26, 2008 at 1:31 PM, Jonathan Cast
On Fri, 2008-12-26 at 11:51 +0000, Thomas DuBuisson wrote:
Hello cafe, This is just a small thought, but its been bugging me. We have these things called type classes for a reason (I like to think).
Type classes were invented for two reasons:
1) To imitate mathematical convention. Addition, in full, is written as
x +_{A} y
where A is a mathematical structure supplying addition. However, the convention is that the subscript may be omitted `when no ambiguity may arise'. Programming languages (generally) take this and run with it, allowing
x + y
to mean (depending on the language) pretty much anything. Type classes are an ingenious step back toward the mathematical convention, where the operation + must come from some complete structure.
2) To allow conversion from structured data into strings to be treated as a single operation. Most languages support this in some form, but I am increasingly failing to see why. There are usually several different ways in which a given piece of structured data can meaningfully be `shown'; in languages which try to do the right thing when given
print "string", 'c', (2 + 2), [true, false, :maybe]
you *still* end up defining (multiple!) special-purpose output or conversion-to-string functions so you can print the same data in multiple ways. I think making Show a type class was a mistake.
jcc
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

On Fri, 26 Dec 2008, Jeff Heard wrote:
I don't think that making Show a type class was a mistake. I think that we have long since overloaded the meaning of Show and made it ambiguous. There are multiple distinct reasons people use Show, and this gets confusing. It would be good if we as a community tried to nail down these different meanings that people tend to attach to Show and fork out new type classes that each encompass those meanings.
+1 See also http://www.haskell.org/haskellwiki/Show_and_Read_instance

On Fri, Dec 26, 2008 at 1:55 PM, Jeff Heard
Off the top of my head, I would say that the traditional meaning of Show could be changed to Serial, where serial encompasses both Read and Show -- possibly we could find a more efficient read function, several have been proposed. Then a separate class could be made for HumanReadable (or Loggable) where the point would be that we write something that can be read by humans without conforming to a particular grammar that Haskell could read back in.
+1. 'Show' sounds like it's for what Java's 'toString', Ruby's 'to_s', etc., do, and that's not right.

On Fri, 2008-12-26 at 13:55 -0600, Jeff Heard wrote:
I don't think that making Show a type class was a mistake. I think that we have long since overloaded the meaning of Show and made it ambiguous. There are multiple distinct reasons people use Show, and this gets confusing. It would be good if we as a community tried to nail down these different meanings that people tend to attach to Show and fork out new type classes that each encompass those meanings. Text is useful and often ignored as a means of debugging, inspecting, logging, and serializing.
True. Although I predict it will be difficult to find a description of a function for doing any/all of the above which is precise enough that there is guaranteed to be a *single* unique such function of type tau -> String for each type tau.
Off the top of my head, I would say that the traditional meaning of Show could be changed to Serial,
The traditional meaning of Show comes from the fact that Hugs/ghci use it, right? So why do they do that? It's not hard to come up with scenarios (involving qualified import of modules, say, which is considered good practice) where the result of Show is *not* acceptable as input to ghci. That is, where typing an expression at the ghci prompt and then cutting-and-pasting the response back in gives a syntax error. (Data.Map, for example).
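The qualified-import scenario described here is easy to reproduce; this sketch assumes only Data.Map from the containers package:

```haskell
import qualified Data.Map as M

demo :: String
demo = show (M.fromList [(1 :: Int, "a")])
-- Produces the string: fromList [(1,"a")]
-- Pasting that back at a ghci prompt in this module is a scope
-- error: only 'M.fromList' is in scope, not bare 'fromList'.

main :: IO ()
main = putStrLn demo
```
</imports>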
where serial encompasses both Read and Show -- possibly we could find a more efficient read function, several have been proposed. Then a separate class could be made for HumanReadable (or Loggable) where the point would bet that we write something that can be read by humans without conforming to a particular grammar that Haskell could read back in.
When I work with my (forthcoming) interpreter, depending on my mood, I may want to see an expression in any of three forms:

* The derived Show instance (yes, sometimes it's useful)
* The pretty-printed form of the expression
* A parser function applied to a Haskell string representation of the second form --- still (somewhat) readable, but also acceptable to ghci as input.

Which version do I make the instance of which class? (In fact, I *do* import modules qualified, internally within my interpreter, so the derived Show instance does not produce acceptable input for ghci, but the application listed third does.)

jcc

G'day all.
Quoting Jeff Heard
I don't think that making Show a type class was a mistake.
I don't either. Two main reasons:

1. [Char] should not be shown ['l','i','k','e',' ','t','h','i','s'].

2. Default implementations of Show can break abstractions by leaking implementation details.
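Point 2 can be illustrated with a hypothetical smart-constructor type (the names here are invented):

```haskell
-- Imagine a module exporting 'mkEven' but hiding the 'Even'
-- constructor, so the evenness invariant cannot be violated.
newtype Even = Even Int deriving Show

mkEven :: Int -> Maybe Even
mkEven n | even n    = Just (Even n)
         | otherwise = Nothing

-- 'show' on an Even value prints "Even 4", exposing the hidden
-- constructor; a derived Read would even let clients bypass mkEven.
main :: IO ()
main = print (mkEven 4)
```
</imports>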
I think that we have long since overloaded the meaning of Show and made it ambiguous. There are multiple distinct reasons people use Show, and this gets confusing. It would be good if we as a community tried to nail down these different meanings that people tend to attach to Show and fork out new type classes that each encompass those meanings. Text is useful and often ignored as a means of debugging, inspecting, logging, and serializing.
I tend to agree. Some thoughts:

- Show is what print outputs and what GHCi reports. Therefore, to most programmers, it's primarily for human-readability regardless of what the standard says.

- Read is barely useful as-is. Don't get me wrong; the "read" function has a very handy interface, especially if all you need is to convert a String into an Integer. But I'd wager that the majority of the most expert of expert Haskellers couldn't write a custom Read instance without constantly referring to the documentation and/or example code. In addition, very few people are aware of the performance characteristics of "reads".

- If you want serialisation and deserialisation, Show and Read are poorly suited for it. A real solution requires handling tricky cases like versioning, redundancy (e.g. computed fields), smart constructors, etc.

- If what you actually want is parsing and/or pretty-printing, we have some great solutions for that.

Cheers,
Andrew Bromage
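The point about hand-written Read instances being fiddly shows up even for a tiny invented example type; a matching showsPrec/readsPrec pair looks like this:

```haskell
data Pair = Pair Int Int

-- showParen/showsPrec 11 handle precedence so negative fields
-- come out parenthesised, e.g. "Pair (-1) 2".
instance Show Pair where
  showsPrec d (Pair x y) = showParen (d > 10) $
    showString "Pair " . showsPrec 11 x . showString " " . showsPrec 11 y

-- The inverse: list-comprehension style over partial parses,
-- threading the leftover string through each step.
instance Read Pair where
  readsPrec d = readParen (d > 10) $ \s ->
    [ (Pair x y, s3)
    | ("Pair", s1) <- lex s
    , (x, s2)      <- readsPrec 11 s1
    , (y, s3)      <- readsPrec 11 s2 ]
```
</imports>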
participants (6)
- ajb@spamcop.net
- brian
- Henning Thielemann
- Jeff Heard
- Jonathan Cast
- Thomas DuBuisson