OK, here is what I have found out so far. First, I tested three HTML generation libraries to see whether they do any escaping on the arguments passed to href (Text.Html, Text.XHtml, and HSP):

{-# OPTIONS -F -pgmFtrhsx #-}
module Main where

import System.IO
import qualified Text.Html as H
import qualified Text.XHtml as X
import HSP
import HSP.Identity
import HSP.HTML

main :: IO ()
main = 
  do hSetEncoding stdout utf8
     let nihongo = "日本語"
     putStrLn nihongo
     putStrLn $ H.renderHtml $ H.anchor H.! [H.href nihongo] H.<< (H.toHtml "nihongo")
     putStrLn $ X.renderHtml $ X.anchor X.! [X.href nihongo] X.<< (X.toHtml "nihongo")     
     putStrLn $ renderAsHTML $ evalIdentity $ <a href=nihongo>nihongo</a>

The output produced was:

*Main Text.Html System.IO> main
日本語
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 FINAL//EN">
<!--Rendered using the Haskell Html Library v0.2-->
<HTML
><A HREF = "日本語"
  >nihongo</A
  ></HTML
>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
><a href="&#26085;&#26412;&#35486;"
  >nihongo</a
  ></html
>

<a href="日本語"
>nihongo</a
>

So, none of them attempted to convert the String into a valid URL. The XHtml library did attempt to encode the string (as numeric character references), but that encoding does not make it a valid URL either. (The other two UTF-8 encoded the string, simply because they UTF-8 encoded the whole document -- which is the correct thing to do.)

The behavior of these libraries seems correct -- if they attempted to do more URL encoding, I think that would just make things worse.

Next, there is the question of what you are supposed to do with non-ASCII characters in a URI. This is described in section 2.1 of RFC 2396:

http://www.ietf.org/rfc/rfc2396.txt

   The relationship between URI and characters has been a source of
   confusion for characters that are not part of US-ASCII. To describe
   the relationship, it is useful to distinguish between a "character"
   (as a distinguishable semantic entity) and an "octet" (an 8-bit
   byte). There are two mappings, one from URI characters to octets, and
   a second from octets to original characters:

   URI character sequence->octet sequence->original character sequence

   A URI is represented as a sequence of characters, not as a sequence
   of octets. That is because URI might be "transported" by means that
   are not through a computer network, e.g., printed on paper, read over
   the radio, etc.

So a URI is a character sequence (drawn from a restricted set of characters found in ASCII). A URI does not have a 'binary representation', because it could be transmitted in non-binary forms, such as on a business card. It is the characters that matter. A URI that has been UTF-8 encoded and the same URI UTF-16 encoded are still the same URI, because the characters represented by those encodings are the same.

So, there is actually another little piece missing in that sequence when data is transmitted via the computer. Namely, extracting the URI from the raw octets.

 raw octets for uri -> URI character sequence -> octet sequence -> original character sequence

For example, let's pretend a web page was sent as: Content-type: text/html; charset=utf-32

The UTF-32 octets representing the URI must first be decoded to characters (a.k.a. the URI character sequence). That stage seems outside the scope of URLT: it should be done before URLT gets the data, because it requires looking at HTTP headers, the http-equiv meta tag, etc. Next, we can convert the URI character sequence into a new sequence of octets. That is done by converting ordinary ASCII characters to their 8-bit ASCII equivalents, and by converting percent-encoded values to their equivalent 8-bit values: the character 'a' in the URI is converted to 0x61, and the sequence %35 is converted to 0x35. Finally, that octet sequence is decoded to the original character sequence.
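
In code, Network.URI's unEscapeString (the same function used in the code further down) performs exactly that middle step, leaving one octet per Char. A tiny sketch (uriCharsToOctets is a hypothetical name for illustration):

import Network.URI (unEscapeString)

-- "URI character sequence -> octet sequence", one octet per Char:
-- ordinary characters pass through ('a' -> 0x61), percent sequences
-- are decoded ("%35" -> 0x35, i.e. '5').
uriCharsToOctets :: String -> String
uriCharsToOctets = unEscapeString

-- uriCharsToOctets "a%35" == "a5"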

There are a few things that make this tricky. 

 1. The encoding of the octet sequence in the middle is not specified in the URI. So when you are converting back to the original character sequence, you don't know whether the octet sequence represents ASCII, UTF-8, or something else.

 2. normalization and reserved characters

  Every character *can* be percent-encoded, though you are only supposed to percent-encode a limited set. URL normalization dictates that the following three URIs are equivalent:

      http://example.com:80/~smith/home.html
      http://EXAMPLE.com/%7Esmith/home.html
      http://EXAMPLE.com:/%7esmith/home.html
 
 The %7E and ~ are equal, because ~ is *not* a reserved character. But 

   /foo/bar/baz/
   /foo%2Fbar/baz/

 are *not* equal because / is a reserved character.
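
To make the distinction concrete with Network.URI (the library used in the code below): decoding %7E is always safe, while decoding %2F destroys the data/delimiter distinction.

import Network.URI (unEscapeString)

-- Unreserved: both spellings denote the same URI.
tildeSafe :: Bool
tildeSafe = unEscapeString "/%7Esmith/home.html" == "/~smith/home.html"  -- True

-- Reserved: after decoding, the data slash in "foo/bar" is
-- indistinguishable from the surrounding path delimiters.
slashUnsafe :: Bool
slashUnsafe = unEscapeString "/foo%2Fbar/baz/" == "/foo/bar/baz/"        -- True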

RFC 3986 has this to say about when to encode and decode:

2.4.  When to Encode or Decode

   Under normal circumstances, the only time when octets within a URI
   are percent-encoded is during the process of producing the URI from
   its component parts.  This is when an implementation determines which
   of the reserved characters are to be used as subcomponent delimiters
   and which can be safely used as data.  Once produced, a URI is always
   in its percent-encoded form.

   When a URI is dereferenced, the components and subcomponents
   significant to the scheme-specific dereferencing process (if any)
   must be parsed and separated before the percent-encoded octets within
   those components can be safely decoded, as otherwise the data may be
   mistaken for component delimiters.  The only exception is for
   percent-encoded octets corresponding to characters in the unreserved
   set, which can be decoded at any time.  For example, the octet
   corresponding to the tilde ("~") character is often encoded as "%7E"
   by older URI processing implementations; the "%7E" can be replaced by
   "~" without changing its interpretation.

   Because the percent ("%") character serves as the indicator for
   percent-encoded octets, it must be percent-encoded as "%25" for that
   octet to be used as data within a URI.  Implementations must not
   percent-encode or decode the same string more than once, as decoding
   an already decoded string might lead to misinterpreting a percent
   data octet as the beginning of a percent-encoding, or vice versa in
   the case of percent-encoding an already percent-encoded string.
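
The "never encode or decode the same string twice" rule is easy to demonstrate (a sketch with Network.URI):

import Network.URI (escapeURIString, isUnreserved, unEscapeString)

once, twice :: String
once  = escapeURIString isUnreserved "50%"   -- "50%25"
twice = escapeURIString isUnreserved once    -- "50%2525"

-- unEscapeString twice == "50%25", and decoding *that* again gives "50%";
-- each extra round trip changes the meaning of the data.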


It also has this to say about encoding Unicode data:

   When a new URI scheme defines a component that represents textual
   data consisting of characters from the Universal Character Set [UCS],
   the data should first be encoded as octets according to the UTF-8
   character encoding [STD63]; then only those octets that do not
   correspond to characters in the unreserved set should be percent-
   encoded.  For example, the character A would be represented as "A",
   the character LATIN CAPITAL LETTER A WITH GRAVE would be represented
   as "%C3%80", and the character KATAKANA LETTER A would be represented
   as "%E3%82%A2".

I can't find an official stamp of approval, but I believe the http scheme now specifies that the octets in the middle step are UTF-8 encoded.

So, here is a starting example of what I think should happen for encoding, and then decoding.

1. We start with a list of path components: ["foo/bar", "baz"]
2. We then convert each component to a String containing the UTF-8 encoded octets (a String, not a ByteString)
3. We percent-encode everything that is not an unreserved character
4. We add the delimiters

We now have a proper URI. Note that we have a String and that the URI is made up of the characters in that String. The final step happens when the URI is actually used:

 5. The URI is inserted into an HTML document (etc.). The document is then encoded according to whatever encoding the document is supposed to have (which could be anything), converting the URI along with the rest of the text.

So a URI is actually encoded twice. We use a similar process to decode the URI. Here is some code that does what I described:

import Codec.Binary.UTF8.String (encodeString, decodeString)
import Network.URI
import System.FilePath.Posix (joinPath, splitDirectories)

encodeUrl :: [String] -> String
encodeUrl paths =
  let step1 = map encodeString paths                   -- UTF-8 encode the characters in each path component (no delimiters yet)
      step2 = map (escapeURIString isUnreserved) step1 -- percent-encode everything that is not unreserved
      step3 = joinPath step2                           -- add the delimiters
  in step3

decodeUrl :: String -> [String]
decodeUrl str =
  let step1 = splitDirectories str      -- split the path on the delimiters
      step2 = map unEscapeString step1  -- decode the percent-encoded octets
      step3 = map decodeString step2    -- UTF-8 decode the octets
  in step3

f :: String
f = encodeString "日本語"

test :: (Bool, [String], String, [String])
test =
  let p = ["foo/bar", "日本語"]
      e = encodeUrl p
      d = decodeUrl e
  in (d == p, p, e, d)
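
For reference, evaluating test in GHCi should print something like this (assuming the utf8-string and network-uri packages; GHCi shows the non-ASCII characters as numeric escapes):

*Main> test
(True,["foo/bar","\26085\26412\35486"],"foo%2Fbar/%E6%97%A5%E6%9C%AC%E8%AA%9E",["foo/bar","\26085\26412\35486"])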

The problem with using [String] is that it assumes the only delimiter we care about is '/'. But we might also want to deal with the other delimiters, such as ':', '#', and '?' (for example, if we want the urlt system to generate the query string as well as the path). [String] does not give us a way to do that. Instead, it seems we need a type that lets us specify the path, the query string, the fragment, etc. -- namely, a real URI type. Perhaps there is something on Hackage we can leverage.
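
For what it's worth, Network.URI (already used above) exports a full URI record along roughly these lines, which might be worth evaluating:

-- from Network.URI:
-- data URI = URI
--   { uriScheme    :: String        -- e.g. "http:"
--   , uriAuthority :: Maybe URIAuth -- user info, host name, port
--   , uriPath      :: String       -- e.g. "/foo/bar"
--   , uriQuery     :: String       -- e.g. "?key=val"
--   , uriFragment  :: String       -- e.g. "#frag"
--   }

Though note that its fields hold already-escaped URI text, which is the opposite of what is proposed for the URL type below.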

I think that having each individual set of toUrl / fromUrl functions deal with the encoding / decoding is not a good way to go; it makes it too easy to get wrong. Having it all done correctly in one place makes life easier for people adding new instances or new methods of generating instances.

I think that urlt dealing with ByteString or [ByteString] is never the right thing. The only time the URI is a 'byte string' is when it is encoded in an HTML document or in the HTTP headers. But at the URLT level, we don't know what that encoding is. Instead, we want the ByteString decoded, so that we receive a 'URI character sequence'. Or we want to hand a 'URI character sequence' to the HTML library and let it worry about the encoding of the document.

At present, I think I am still ok with the fromURL and toURL functions producing and consuming String values. But, what we need is an intermediate URL type like:

data URL = URL { paths :: [String], queryString :: String, frag :: String }

and functions encodeURL :: URL -> String and decodeURL :: String -> URL that do the encoding and decoding properly.
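
Here is a minimal sketch of what encodeURL might look like, reusing the encodeUrl defined earlier and treating the query string and fragment as opaque text (a real version would use the component-specific reserved sets from RFC 3986 rather than over-escaping with isUnreserved):

import Network.URI (escapeURIString, isUnreserved)
import Codec.Binary.UTF8.String (encodeString)

encodeURL :: URL -> String
encodeURL (URL ps q f) =
    encodeUrl ps                            -- the path encoder from above
    ++ (if null q then "" else '?' : esc q)
    ++ (if null f then "" else '#' : esc f)
  where
    -- conservative: UTF-8 encode, then percent-encode everything that is
    -- not unreserved; safe, but escapes more than RFC 3986 requires
    esc = escapeURIString isUnreserved . encodeString

decodeURL would invert these steps in the same order.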

The AsURL class would look like:

class AsURL u where
  toURLC :: u -> URL
  fromURLC :: URL -> Failing u

instance AsURL URL where
  toURLC = id
  fromURLC = Success

And then toURL / fromURL would be like:

toURL :: (AsURL u) => u -> String
toURL = encodeURL . toURLC

fromURL :: (AsURL u) => String -> Failing u
fromURL = fromURLC . decodeURL

The Strings in the URL type would not require any special encoding/decoding. The encoding / decoding would be handled by the encodeURL / decodeURL functions.

In other words, when the user creates a URL value by hand, they do not have to know anything about URL encoding rules; it just happens like magic. That should make it much easier to write AsURL instances by hand.
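
For example, a hand-written instance for a small site type might look like this (a sketch; SitePage is a hypothetical type, and Failing is assumed to have Success and Failure constructors as used above):

data SitePage = Home | Article String

instance AsURL SitePage where
  toURLC Home        = URL [] "" ""
  toURLC (Article t) = URL ["article", t] "" ""   -- t is plain text; no escaping needed here
  fromURLC (URL [] _ _)             = Success Home
  fromURLC (URL ["article", t] _ _) = Success (Article t)
  fromURLC _                        = Failure ["unrecognized URL"]

The title in Article can contain slashes or any other character; encodeURL / decodeURL take care of the escaping.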

Does this make sense to you?

The key now is seeing whether someone has already created a suitable URL type that we can use...

- jeremy

On Fri, Mar 19, 2010 at 5:55 PM, Michael Snoyman <michael@snoyman.com> wrote:
http://www.ietf.org/rfc/rfc2396.txt

On Fri, Mar 19, 2010 at 2:41 PM, Jeremy Shaw <jeremy@n-heptane.com> wrote:
On Fri, Mar 19, 2010 at 5:22 PM, Michael Snoyman <michael@snoyman.com> wrote:
I am not going to have time to look at this again until Saturday or Sunday. There are a few minor details that have been swept under the rug that need to be addressed. For example, when exactly should URL encoding / decoding take place? It's not good if that happens twice, or not at all.


Just to confuse the topic even more: if we do real URL encoding/decoding, I believe we would have to assume a certain character set. I had to deal with a site that was encoded in non-UTF-8 just a bit ago, and dealing with query parameters is not fun.

That said, perhaps we should consider making the type of PathInfo "PathInfo ByteString" so we make it clear that we're doing no character encoding.

Yeah. I dunno. I just know it needs to be solved :) 
 
Another issue in the same vein is dealing with leading and trailing slashes, though I think this is fairly simple in practice: the web app knows what to do about the trailing slashes, and each plugin should always pass a leading slash.

I am not quite sure what you mean by 'each plugin should always pass a leading slash'. Pass to whom?

If we have:

data MySite = MyHome   | MyBlog MyBlog
data MyBlog = BlogHome | BlogPost String

Then I would expect something like this:

formatMySite MyHome = "MyHome"
formatMySite (MyBlog blog) = "MyBlog/" ++ formatMyBlog blog

formatMyBlog BlogHome = "BlogHome"
formatMyBlog (BlogPost title) = "BlogPost/" ++ title

mkAbs = ("http://localhost:3000/" ++)

(ignoring any escaping that needs to happen in title, and ignoring the AbsPath / PathInfo stuff).

But we could, of course, do it the other way:


formatMySite MyHome = "/MyHome"
formatMySite (MyBlog blog) = "/MyBlog" ++ formatMyBlog blog

formatMyBlog BlogHome = "/BlogHome"
formatMyBlog (BlogPost title) = "/BlogPost/" ++ title

mkAbs = ("http://localhost:3000" ++)

There definitely needs to be some policy.

- jeremy

Then here's a proposal for both issues at once:

* PathInfo is a ByteString
* handleWai strips the leading slash from the path-info
* every component parses and generates URLs without a leading slash. Trailing slash is application's choice.

Regarding URL encoding, let me point out that the following are two different URLs (just try clicking on them):
http://www.snoyman.com/blog/entry/persistent-plugs/
http://www.snoyman.com/blog/entry%2Fpersistent-plugs/

In other words, if we ever URL-decode the string before it reaches the application, we will have conflated unique URLs. I see two options here:

* We specify that PathInfo contains URL-encoded values. Any fromUrl/toUrl functions must be aware of this fact.
* We change the type of PathInfo to [ByteString], where we split the PathInfo by slashes, and specify that the pieces are *not* URL-encoded. In order to preserve the original value perfectly, we should not combine adjacent delimiters (see the sketch after the examples below). In other words:

/foo/bar/baz/ -> ["foo", "bar", "baz", ""] -- note the trailing empty string
/foo/bar/baz -> ["foo", "bar", "baz"] -- we don't need a leading empty string; *every* pathinfo begins with a slash
/foo%2Fbar/baz/ -> ["foo/bar", "baz", ""]
/foo//bar/baz -> ["foo", "", "bar", "baz"]
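
Here is a sketch of that splitting rule (plain String for readability, though the proposal uses ByteString; mapping unEscapeString over the pieces afterwards recovers the %2F case):

import Network.URI (unEscapeString)

-- Split on every slash, keeping empty pieces so adjacent delimiters
-- round-trip; the leading slash is required and dropped, a trailing
-- slash shows up as a final empty string.
splitPathInfo :: String -> [String]
splitPathInfo ('/':rest) = map unEscapeString (go rest)
  where
    go s = case break (== '/') s of
             (piece, [])     -> [piece]
             (piece, _:more) -> piece : go more
splitPathInfo other = error ("path-info should begin with a slash: " ++ other)

-- splitPathInfo "/foo/bar/baz/"   == ["foo","bar","baz",""]
-- splitPathInfo "/foo%2Fbar/baz/" == ["foo/bar","baz",""]
-- splitPathInfo "/foo//bar/baz"   == ["foo","","bar","baz"]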

I'm not strongly attached to any of this. Also, my original motivation for breaking up the pieces (easier pattern matching) will be mitigated by the usage of ByteStrings.

Michael