It is not impossible, but it is a lot of work. And if you want to do it correctly, you would have to support UTF-16 (BE or LE) and UTF-32 (BE or LE) as well. You can't expect someone to start writing UTF encoders and decoders every time they need a fast parser.
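(For reference: RFC 4627 notes that since the first two characters of a JSON text are always ASCII, a parser can detect which of the five Unicode encodings is in use from the pattern of NUL bytes in the first four octets. A minimal sketch of that detection, assuming the bytestring package; the function name is illustrative:)

```haskell
import qualified Data.ByteString as B

-- Guess a JSON document's Unicode encoding from its first four bytes,
-- following the NUL-byte patterns described in RFC 4627. Anything that
-- matches no pattern (including short input) is treated as UTF-8.
detectEncoding :: B.ByteString -> String
detectEncoding bs =
  case map (== 0) (B.unpack (B.take 4 bs)) of
    [True,  True,  True,  False] -> "UTF-32BE"
    [True,  False, True,  False] -> "UTF-16BE"
    [False, True,  True,  True ] -> "UTF-32LE"
    [False, True,  False, True ] -> "UTF-16LE"
    _                            -> "UTF-8"
```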

Sjoerd

On Jan 14, 2009, at 12:42 AM, Luke Palmer wrote:

On Tue, Jan 13, 2009 at 4:39 PM, Sjoerd Visscher <sjoerd@w3future.com> wrote:
JSON is a Unicode format, like any modern format today. ByteStrings are not going to work.

I don't understand this statement.  Why can one not make a parser from ByteStrings that can decode UTF-8?

Luke
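(A ByteString parser can indeed do the UTF-8 decoding itself as it consumes input. A minimal sketch of such a decoder step, assuming the bytestring package; the function name is illustrative, and a real parser would also validate for overlong forms and surrogates:)

```haskell
import qualified Data.ByteString as B
import Data.Bits ((.&.), (.|.), shiftL)
import Data.Char (chr)

-- Decode one UTF-8 code point from the front of a ByteString, returning
-- the character and the remaining input, or Nothing on empty/malformed
-- input. A parser can call this incrementally instead of converting the
-- whole input to a String up front.
decodeChar :: B.ByteString -> Maybe (Char, B.ByteString)
decodeChar bs = do
  (b0, rest) <- B.uncons bs
  case () of
    _ | b0 < 0x80           -> Just (chr (fromIntegral b0), rest)
      | b0 .&. 0xE0 == 0xC0 -> go 1 (fromIntegral (b0 .&. 0x1F)) rest
      | b0 .&. 0xF0 == 0xE0 -> go 2 (fromIntegral (b0 .&. 0x0F)) rest
      | b0 .&. 0xF8 == 0xF0 -> go 3 (fromIntegral (b0 .&. 0x07)) rest
      | otherwise           -> Nothing
  where
    -- Consume n continuation bytes (10xxxxxx), accumulating the code point.
    go :: Int -> Int -> B.ByteString -> Maybe (Char, B.ByteString)
    go 0 acc rest = Just (chr acc, rest)
    go n acc rest = do
      (b, rest') <- B.uncons rest
      if b .&. 0xC0 == 0x80
        then go (n - 1) ((acc `shiftL` 6) .|. fromIntegral (b .&. 0x3F)) rest'
        else Nothing
```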
 


If everybody starts yelling "ByteString" every time String performance is an issue, I don't see how Haskell is ever going to be a "real world programming language".


On Jan 13, 2009, at 4:00 PM, Don Stewart wrote:

ketil:
"Levi Greenspan" <greenspan.levi@googlemail.com> writes:

Now I wonder why Text.JSON is so slow in comparison and what can be
done about it. Any ideas? Or is the test case invalid?

I haven't used JSON, but at first glance, I'd blame String IO.  Can't
you decode from ByteString?


Text.JSON was never optimised for performance; it was designed for small
JSON objects. For inputs above 1 MB I'd suggest using Data.Binary (or a
quick JSON encoding over bytestrings). It shouldn't be too hard to prepare.

-- Don
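(A "quick JSON encoding over bytestrings" in the sense suggested above could start from something like the following sketch. The JValue type and encode function are illustrative, not part of any library; string escaping is deliberately naive, relying on show, and a real encoder would escape and UTF-8-encode properly:)

```haskell
import qualified Data.ByteString.Char8 as C
import Data.List (intersperse)

-- A minimal JSON value type and a ByteString encoder for it.
data JValue
  = JNull
  | JBool Bool
  | JNumber Double
  | JString String
  | JArray [JValue]
  | JObject [(String, JValue)]

encode :: JValue -> C.ByteString
encode JNull        = C.pack "null"
encode (JBool b)    = C.pack (if b then "true" else "false")
encode (JNumber n)  = C.pack (show n)
encode (JString s)  = C.pack (show s)  -- show quotes and escapes, approximately
encode (JArray xs)  =
  C.concat ([C.pack "["] ++ intersperse (C.pack ",") (map encode xs) ++ [C.pack "]"])
encode (JObject ps) =
  C.concat ([C.pack "{"] ++ intersperse (C.pack ",") (map pair ps) ++ [C.pack "}"])
  where pair (k, v) = C.concat [C.pack (show k), C.pack ":", encode v]
```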
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

--
Sjoerd Visscher
sjoerd@w3future.com






--
Sjoerd Visscher