
Hi folks,

The Hackathon, ICFP and Haskell Workshop are fast approaching, and we promised to have 6.6 out before then. This means we're on a pretty tight schedule, and some corners will have to be cut in order to get there. But that may not be a bad thing - without hard deadlines the release can easily drag on.

So we propose the following schedule for the release:

Release candidate: 25 August
Release: 8 September

giving me a couple of days before I have to fly out to Portland in case of serious mishaps in the release.

6.6 will be an alpha-quality release, mainly because we won't have time to fix all the bugs in the database currently scheduled for 6.6. However, we do expect it to pass the vast majority of the testsuite, and for most uses it'll work fine. We do expect to see more than the usual amount of churn between 6.6 and 6.6.1 while we shake things down, though.

Before the release we will be focussing on things that can't be deferred until 6.6.1, and that means API changes (because patchlevels don't modify APIs). But we'll also be redefining the core set of packages that come with GHC, so the API stability will be restricted to just these:

base, haskell98, template-haskell, readline, Cabal, unix, Win32

We will probably still ship binary distributions with more packages (at the option of the distribution builder), but in general other packages should be considered independent of GHC. You'll be able to upgrade them separately from GHC.

I'm aware we still possibly have threading-related problems on MacOS X, Solaris and FreeBSD. We'll do our best to sort these out before the release, but we can't hold up the release for them.

We could really use some help. In particular, I'd like to see test reports for platforms that we don't run nightly builds on.
If you have the time to take one of the 6.6 bugs, please go ahead:

http://hackage.haskell.org/trac/ghc/query?status=new&status=assigned&status=reopened&milestone=6.6&order=priority

If you plan to look into a bug, either assign it to yourself (if you have a developer account on the Trac), or else drop us a note saying so. I'll be going through the bug list and prioritising in the next day or two. Many of these bugs will be pushed back to 6.6.1.

Cheers, Simon

Would a new and expanded Regex package (Text.Regex.Lazy) be something that could be included in the 6.6.0 libraries? What is the best practice for getting it included? It still supports a wrapped Posix regex backend, but also includes a PCRE wrapper and pure Haskell backends, and works efficiently on both String and ByteString. It runs, has an increasing amount of Haddock documentation, some HUnit tests, and some new QuickCheck tests are almost done.

-- Chris

Simon Marlow wrote:
Hi folks,
The Hackathon, ICFP and Haskell Workshop are fast approaching, and we promised to have 6.6 out before then. This means we're on a pretty tight schedule, and some corners will have to be cut in order to get there. But that may not be a bad thing - without hard deadlines the release can easily drag on.
So we propose the following schedule for the release:
Release candidate: 25 August
Release: 8 September
giving me a couple of days before I have to fly out to Portland in case of serious mishaps in the release.
6.6 will be an alpha-quality release, mainly because we won't have time to fix all the bugs in the database currently scheduled for 6.6. However, we do expect it to pass the vast majority of the testsuite, and for most uses it'll work fine. We do expect to see more than the usual amount of churn between 6.6 and 6.6.1 while we shake things down, though.
Before the release we will be focussing on things that can't be deferred until 6.6.1, and that means API changes (because patchlevels don't modify APIs). But we'll also be redefining the core set of packages that come with GHC, so the API stability will be restricted to just these:
base, haskell98, template-haskell, readline, Cabal, unix, Win32
We will probably still ship binary distributions with more packages (at the option of the distribution builder), but in general other packages should be considered independent of GHC. You'll be able to upgrade them separately from GHC.
I'm aware we still possibly have threading-related problems on MacOS X, Solaris and FreeBSD. We'll do our best to sort these out before the release, but we can't hold up the release for them.
We could really use some help. In particular, I'd like to see test reports for platforms that we don't run nightly builds on. If you have the time to take one of the 6.6 bugs, please go ahead:
If you plan to look into a bug, either assign it to yourself (if you have a developer account on the Trac), or else drop us a note saying so.
I'll be going through the bug list and prioritising in the next day or two. Many of these bugs will be pushed back to 6.6.1.
Cheers, Simon _______________________________________________ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

On Mon, 2006-08-07 at 17:07 +0100, Chris Kuklewicz wrote:
Would a new and expanded Regex package (Text.Regex.Lazy) be something that could be included in the 6.6.0 libraries? What is the best practice for getting it included?
It still supports a wrapped Posix regex backend, but also includes a PCRE wrapper and pure Haskell backends, and works efficiently on both String and ByteString.
It runs, has an increasing amount of Haddock documentation, some HUnit tests, and some new QuickCheck tests are almost done.
Wouldn't it be nice to use Ville Laurikari's TRE package instead of PCRE? [It is also Posix compliant and a drop-in replacement for GNU regex, as well as supporting nice extensions]

-- John Skaller <skaller at users dot sf dot net> Felix, successor to C++: http://felix.sf.net

skaller wrote:
On Mon, 2006-08-07 at 17:07 +0100, Chris Kuklewicz wrote:
Would a new and expanded Regex package (Text.Regex.Lazy) be something that could be included in the 6.6.0 libraries? What is the best practice for getting it included?
It still supports a wrapped Posix regex backend, but also includes a PCRE wrapper and pure Haskell backends, and works efficiently on both String and ByteString.
It runs, has an increasing amount of Haddock documentation, some HUnit tests, and some new QuickCheck tests are almost done.
Wouldn't it be nice to use Ville Laurikari's TRE package instead of PCRE?
[It is also Posix compliant and a drop-in replacement for GNU regex, as well as supporting nice extensions]
It is possible to add support for more backends. The more the merrier, no need to replace anything. I have never heard of TRE before. I could use darcs or darwinports to install libtre. Looking at the API, I see that it is very easy to add as a backend. TRE is LGPL, while PCRE is BSD.

TRE is not a replacement for PCRE. TRE claims to be a replacement for Posix regex except for collating elements. But Posix regex is already in GHC's Text.Regex(.Posix) modules.

So I would say: I welcome contribution of Text.Regex.Lib.WrapTRE, StringTRE and ByteStringTRE, but getting the current library into GHC-6.6 would be more important.

Note that Text.Regex.Lazy has two darcs repositories:

http://evenmere.org/~chrisk/trl/stable/
http://evenmere.org/~chrisk/trl/head/

-- Chris

On Mon, 2006-08-07 at 20:38 +0100, Chris Kuklewicz wrote:
skaller wrote:
Wouldn't it be nice to use Ville Laurikari's TRE package instead of PCRE?
[It is also Posix compliant and a drop-in replacement for GNU regex, as well as supporting nice extensions]
It is possible to add support for more backends. The more the merrier, no need to replace anything. I have never heard of TRE before.
TRE is actually based on mathematics.
I could use darcs or darwinports to install libtre. Looking at the API, I see that it is very easy to add as a backend. TRE is LGPL, while PCRE is BSD.
Appeal to Ville to change the licence :)
TRE is not a replacement for PCRE.
Who wants a replacement? PCRE is junk. Check the papers, cached here:

http://felix.sf.net/papers/regex-submatch.ps
http://felix.sf.net/papers/spire2000-tnfa.ps
http://felix.sf.net/papers/greedy.pdf

-- John Skaller <skaller at users dot sf dot net> Felix, successor to C++: http://felix.sf.net

skaller wrote:
On Mon, 2006-08-07 at 20:38 +0100, Chris Kuklewicz wrote:
skaller wrote:
Wouldn't it be nice to use Ville Laurikari's TRE package instead of PCRE?
[It is also Posix compliant and a drop-in replacement for GNU regex, as well as supporting nice extensions]
It is possible to add support for more backends. The more the merrier, no need to replace anything. I have never heard of TRE before.
TRE is actually based on mathematics.
I have added TRE as a new backend. (I have libtre-0.7.4 on OS X; it is LGPL.) And I have a benchmark, which I will now copy and paste.

On "b(..?c)?d": PCRE is fast, TRE is almost as fast, the DFA (without subexpression capture) is a close 3rd. The Parsec backend is about 4 times slower, and the Posix library scales poorly, being far slower on large data sets.

The code is in darcs at http://evenmere.org/~chrisk/trl/head/ under regex-devel/bench.

Benchmark: find all matches and substring capture for "b(..?c)?d" in a one million character string on disk. Print the count, the first, and the last match (with captures). The benchmark program was, for String:
module Main(main) where
import Text.Regex.XXX
filename = "datafile"
regex = "b(..?c)?d"

main = do
  input <- readFile filename
  let a :: [[String]]
      a = input =~ regex
      b :: Int
      b = length a
  print (b, head a, last a)
and for ByteString:
module Main(main) where
import Text.Regex.XXX
import qualified Data.ByteString as B
default (Int)
filename = "datafile" regex = "b(..?c)?d"
main = do
  input <- B.readFile filename
  let a :: [[B.ByteString]]
      a = input =~ regex
      b :: Int
      b = length a
  print (b, head a, last a)
where XXX was replaced by PCRE, Parsec, DFA, PosixRE or TRE, and compiled with "ghc -O2". The data file is 10^6 characters from permutations of the set "abcdbcdcdd\n".

Using the 10^6 character datafile and String or ByteString. (The DFA uses different semantics, so a dot matches a newline and the matching is different.) The output and user+sys reported by the time command:

BenchPCRE (102363,["bcdcd","cdc"],["bbccd","bcc"]) total is 1.294s
BenchTRE (102363,["bcdcd","cdc"],["bbccd","bcc"]) total is 2.128s
BenchDFA (107811,["bcdcd"],["bbccd"]) total is 2.313s
BenchParsec (102363,["bcdcd","cdc"],["bbccd","bcc"]) total is 8.094s
BenchPosixRE (102363,["bcdcd","cdc"],["bbccd","bcc"]) total is 91.435s

BenchBSPCRE (102363,["bcdcd","cdc"],["bbccd","bcc"]) total is 0.932s
BenchBSTRE (102363,["bcdcd","cdc"],["bbccd","bcc"]) total is 1.297s
BenchBSDFA (107811,["bcdcd"],["bbccd"]) total is 1.437s
BenchBSParsec (102363,["bcdcd","cdc"],["bbccd","bcc"]) total is 8.496s
BenchBSPosixRE (102363,["bcdcd","cdc"],["bbccd","bcc"]) total is 89.780s

For 10^5 characters on String:

PCRE 0.077s
DFA 0.131s
TRE 0.206s
PosixRE 0.445s
Parsec 0.825s
Old Posix 43.760s (Text.Regex using splitRegex)

Old Text.Regex took 43.76 seconds on 10^5 characters to do a task comparable to the one the others did (it ran splitRegex). The new PosixRE wrapping took 0.445 seconds instead. Yes, it is two orders of magnitude faster, and this is because my wrapping only marshals the String to a CString once. Laziness cannot be worth two orders of magnitude of runtime. This is why we needed a new wrapping, which has grown into the new library.
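The marshal-once pattern is easy to illustrate. The following is a sketch only, not the library's actual code: the "matcher" is a trivial single-byte search standing in for a real regexec-style C call, but the shape is the same, marshal the String to a C buffer once, then run the matcher at successive offsets instead of re-packing the remainder for every match.

```haskell
-- Sketch: marshal the input String to a CString once, then scan at
-- successive offsets. A single-character search stands in for a real
-- regexec-style call; all names here are illustrative.
import Foreign.C.String (withCStringLen)
import Foreign.C.Types (CChar)
import Foreign.Ptr (Ptr)
import Foreign.Storable (peekElemOff)
import Data.Char (ord)

-- Find all offsets of a byte in the buffer, marshalling the input once.
matchAllOnce :: Char -> String -> IO [Int]
matchAllOnce c input =
  withCStringLen input $ \(ptr, len) -> go ptr len 0
  where
    target = fromIntegral (ord c)
    go :: Ptr CChar -> Int -> Int -> IO [Int]
    go ptr len i
      | i >= len  = return []
      | otherwise = do
          b <- peekElemOff ptr i
          rest <- go ptr len (i + 1)
          return (if b == target then i : rest else rest)

main :: IO ()
main = do
  offsets <- matchAllOnce 'b' "abcb"
  print offsets
```

A splitRegex-style loop that re-marshals the unconsumed tail does O(n) packing work per match, which is where the two orders of magnitude go on a 10^5 character input.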
On a 10^7 length data set:

PCRE (979261,["bcdcd","cdc"],["bd",""]) time 17.388s
TRE (979261,["bcdcd","cdc"],["bd",""]) time 17.880s
DFA (1063961,["bcdcd"],["bd"]) time 21.617s
Parsec (979261,["bcdcd","cdc"],["bd",""]) time 87.330s

BenchBSPCRE (979261,["bcdcd","cdc"],["bd",""]) time 8.322s
BenchBSTRE (979261,["bcdcd","cdc"],["bd",""]) time 12.644s
BenchBSDFA (1063961,["bcdcd"],["bd"]) time 14.115s
BenchBSParsec (979261,["bcdcd","cdc"],["bd",""]) time 83.395s
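The thread does not show how the benchmark data file was generated, only that it is 10^6 characters drawn from "abcdbcdcdd\n". A deterministic stand-in consistent with that description (the real run presumably used random permutations) might look like:

```haskell
-- Hypothetical data-file generator: the thread only says the input is
-- 10^6 characters from permutations of "abcdbcdcdd\n". This stand-in
-- concatenates the (lazily produced) permutations of that alphabet.
import Data.List (permutations)

alphabet :: String
alphabet = "abcdbcdcdd\n"

-- Take n characters from the concatenated permutations of the alphabet.
makeData :: Int -> String
makeData n = take n (concat (permutations alphabet))

main :: IO ()
main = do
  let datafile = makeData 1000000
  writeFile "datafile"
    datafile  -- same filename the benchmark programs read
  print (length datafile)
```

Because `permutations` is lazy, only as many permutations are forced as the `take` needs, so generating 10^6 characters is cheap.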

On 09 August 2006 15:14, Chris Kuklewicz wrote:
For 10^5 characters on String:

PCRE 0.077s
DFA 0.131s
TRE 0.206s
PosixRE 0.445s
Parsec 0.825s
Old Posix 43.760s (Text.Regex using splitRegex)
Old Text.Regex took 43.76 seconds on 10^5 characters to do a task comparable to the one the others did (it ran splitRegex). The new PosixRE wrapping took 0.445 seconds instead. Yes it is two orders of magnitude faster, and this is because my wrapping only marshals the String to CString once. Laziness cannot be worth 2 orders of magnitude of runtime. This is why we needed a new wrapping, which has grown into the new library.
Right, I see the problem with Text.Regex.splitRegex, it repeatedly packs the String into a CString. But then why this result:
BenchPCRE (102363,["bcdcd","cdc"],["bbccd","bcc"]) total is 1.294s
... etc ...
BenchPosixRE (102363,["bcdcd","cdc"],["bbccd","bcc"]) total is 91.435s
Was this the old Posix, or your new one? If the new one, why is it so slow compared to the others? Cheers, Simon

Simon Marlow wrote:
On 09 August 2006 15:14, Chris Kuklewicz wrote:
For 10^5 characters on String:

PCRE 0.077s
DFA 0.131s
TRE 0.206s
PosixRE 0.445s
Parsec 0.825s
Old Posix 43.760s (Text.Regex using splitRegex)
Old Text.Regex took 43.76 seconds on 10^5 characters to do a task comparable to the one the others did (it ran splitRegex). The new PosixRE wrapping took 0.445 seconds instead. Yes it is two orders of magnitude faster, and this is because my wrapping only marshals the String to CString once. Laziness cannot be worth 2 orders of magnitude of runtime. This is why we needed a new wrapping, which has grown into the new library.
Right, I see the problem with Text.Regex.splitRegex, it repeatedly packs the String into a CString. But then why this result:
BenchPCRE (102363,["bcdcd","cdc"],["bbccd","bcc"]) total is 1.294s
... etc ...
BenchPosixRE (102363,["bcdcd","cdc"],["bbccd","bcc"]) total is 91.435s
Was this the old Posix, or your new one? If the new one, why is it so slow compared to the others?
Cheers, Simon
Your question has prompted me to go back into my PosixRE wrapping code and compare it to the PCRE code. I have made some changes which ought to enhance the performance of the PosixRE code. Let us see the new benchmarks on 10^6 bytes:

PosixRE (102363,["bcdcd","cdc"],["bbccd","bcc"])
real 1m35.429s user 1m17.862s sys 0m1.455s
total is 79.317s

PCRE (102363,["bcdcd","cdc"],["bbccd","bcc"])
real 0m2.570s user 0m1.702s sys 0m0.219s
total is 1.921s

BenchBSPosixRE (102363,["bcdcd","cdc"],["bbccd","bcc"])
real 1m32.267s user 1m16.494s sys 0m1.374s
total is 77.868s

BenchBSPCRE (102363,["bcdcd","cdc"],["bbccd","bcc"])
real 0m1.245s user 0m0.809s sys 0m0.110s
total is 0.919s

So there was only a little improvement to the previous PosixRE speed. If you want to look at the code, it is in the three Wrap.hsc files for regex-posix, regex-tre and regex-pcre, in the wrapMatchAll functions. But it appears to be a library issue, not a Haskell issue.

I will tend to the Haddock cleanup next.

-- Chris

Chris Kuklewicz wrote:
Your question has prompted me to go back into my PosixRE wrapping code and compare it to the PCRE code. I have made some changes which ought to enhance the performance of the PosixRE code. Let us see the new benchmarks on 10^6 bytes:
PosixRE (102363,["bcdcd","cdc"],["bbccd","bcc"])
real 1m35.429s user 1m17.862s sys 0m1.455s
total is 79.317s
PCRE (102363,["bcdcd","cdc"],["bbccd","bcc"])
real 0m2.570s user 0m1.702s sys 0m0.219s
total is 1.921s
So I still don't understand why PCRE should be 40 times faster than PosixRE. Surely this can't be just due to differences in the underlying C library? Cheers, Simon

simonmarhaskell:
Chris Kuklewicz wrote:
Your question has prompted me to go back into my PosixRE wrapping code and compare it to the PCRE code. I have made some changes which ought to enhance the performance of the PosixRE code. Let us see the new benchmarks on 10^6 bytes:
PosixRE (102363,["bcdcd","cdc"],["bbccd","bcc"])
real 1m35.429s user 1m17.862s sys 0m1.455s
total is 79.317s
PCRE (102363,["bcdcd","cdc"],["bbccd","bcc"])
real 0m2.570s user 0m1.702s sys 0m0.219s
total is 1.921s
So I still don't understand why PCRE should be 40 times faster than PosixRE. Surely this can't be just due to differences in the underlying C library?
It could be. The C regex.h is pretty slow.

http://shootout.alioth.debian.org/gp4/benchmark.php?test=regexdna&lang=all

-- Don

Donald Bruce Stewart wrote:
simonmarhaskell:
Chris Kuklewicz wrote:
Your question has prompted me to go back into my PosixRE wrapping code and compare it to the PCRE code. I have made some changes which ought to enhance the performance of the PosixRE code. Let us see the new benchmarks on 10^6 bytes:
PosixRE (102363,["bcdcd","cdc"],["bbccd","bcc"])
real 1m35.429s user 1m17.862s sys 0m1.455s
total is 79.317s
PCRE (102363,["bcdcd","cdc"],["bbccd","bcc"])
real 0m2.570s user 0m1.702s sys 0m0.219s
total is 1.921s

So I still don't understand why PCRE should be 40 times faster than PosixRE. Surely this can't be just due to differences in the underlying C library?
It could be. The C regex.h is pretty slow.
http://shootout.alioth.debian.org/gp4/benchmark.php?test=regexdna&lang=all
-- Don
And I notice c++ (g++) gets away with a 3rd party library from boost:
// This implementation of regexdna does not use the POSIX regex
// included with the GNU libc. Instead it uses the Boost C++ libraries
//
// http://www.boost.org/libs/regex/doc/index.html
//
// (On Debian: apt-get install libboost-regex-dev before compiling,
// and then "g++ -O3 -lboost_regex regexdna.cc -o regexdna")
// Gentoo seems to package boost as, well, 'boost'
Which is a strange precedent. -- Chris

On Thu, 2006-08-10 at 11:32 +0100, Simon Marlow wrote:
So I still don't understand why PCRE should be 40 times faster than PosixRE. Surely this can't be just due to differences in the underlying C library?
Read Ville's papers. They include comparisons of GNU regex and PCRE.

-- John Skaller <skaller at users dot sf dot net> Felix, successor to C++: http://felix.sf.net

Chris Kuklewicz wrote:
Would a new and expanded Regex package (Text.Regex.Lazy) be something that could be included in the 6.6.0 libraries? What is the best practice for getting it included?
Since we're aiming to include fewer libraries under the GHC umbrella, not more, this wouldn't be the right approach. Also, I'm sure you want the ability to release Text.Regex.Lazy independently of GHC, so tying it to the GHC release cycle would be unduly restrictive.

I do want to extract the existing Text.Regex(.Posix) from the base package. So we should think about what structure we want for regex packages. Here's one possible plan:

regex-base shared regex code
regex-posix the POSIX backend
regex-pcre the PCRE backend
...

We should have one "default" implementation that provides regexes over Strings and ByteStrings. I don't really mind which backend that is, as long as it is fast and BSD-licensed.

We could then include a subset of these packages (optionally) in GHC binary distributions, so that users would have access to basic regex functionality out of the box. How does that sound?

Ah, I just realised that GHC itself uses regexes in a couple of places, so I do need to include basic (String) regex functionality in the libraries. So we could either:

- work on regex-base/regex-posix for inclusion in GHC, or
- I could just extract Text.Regex(.Posix) from the base package into a separate package.

Obviously the first option is better, but time is short. Let me know what you think.

Cheers, Simon
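One way the shared regex-base layer could abstract over backends and source types is a multi-parameter type class: each backend (regex-posix, regex-pcre, ...) supplies a compiled-regex type and instances for the source types it supports. This is only a sketch of the idea; the class names and shapes in the real regex-base package may differ. The toy "backend" here matches literal strings over String, standing in for a real compiled regex.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
-- Sketch of a shared backend interface of the kind regex-base could
-- provide. Illustrative only; not the package's actual API.
import Data.List (isPrefixOf, tails)

-- A backend provides a compiled regex type and a match function over
-- some source type (String, ByteString, ...).
class RegexLike regex source where
  matchOnce :: regex -> source -> Maybe (Int, Int)  -- (offset, length)

-- A toy "backend": patterns are literal strings, matched over String.
newtype LiteralRegex = LiteralRegex String

instance RegexLike LiteralRegex String where
  matchOnce (LiteralRegex pat) src =
    case [ off | (off, t) <- zip [0 ..] (tails src), pat `isPrefixOf` t ] of
      (off:_) -> Just (off, length pat)
      []      -> Nothing

main :: IO ()
main = do
  print (matchOnce (LiteralRegex "bcd") "aabcdx" :: Maybe (Int, Int))
  print (matchOnce (LiteralRegex "zzz") "aabcdx" :: Maybe (Int, Int))
```

With this shape, a binary distribution can ship regex-base plus whichever backend packages the distribution builder chooses, and user code written against the class works with any of them.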

Simon Marlow wrote:
Chris Kuklewicz wrote:
Would a new and expanded Regex package (Text.Regex.Lazy) be something that could be included in the 6.6.0 libraries? What is the best practice for getting it included?
Since we're aiming to include fewer libraries under the GHC umbrella, not more, this wouldn't be the right approach. Also, I'm sure you want the ability to release Text.Regex.Lazy independently of GHC, so tying it to the GHC release cycle would be unduly restrictive.
Possibly true.
I do want to extract the existing Text.Regex(.Posix) from the base package.
"base" seems the wrong place.
So we should think about what structure we want for regex packages. Here's one possible plan:
regex-base shared regex code
regex-posix the POSIX backend
regex-pcre the PCRE backend
...
That could work well. It would not involve too much pulling apart. One small quirk is that there is the old Text.Regex API and a new JRegex-style API.
We should have one "default" implementation that provides regexes over Strings and ByteStrings. I don't really mind which backend that is, as long as it is fast and BSD-licensed.
A "default" backend has to be dependably present. That means either keeping the current Posix backend, adding a dependency on PCRE, or using the Haskell/Parsec backend. The problem is that String is very inefficient with Posix or PCRE and ByteString is slightly inefficient with Haskell/Parsec.
We could then include a subset of these packages (optionally) in GHC binary distributions, so that users would have access to basic regex functionality out of the box. How does that sound?
That seems like the best plan.
Ah, I just realised that GHC itself uses regexes in a couple of places, so I do need to include basic (String) regex functionality in the libraries.
I assume that is the Posix Text.Regex syntax. So you need support for this syntax in GHC.
So we could either:
- work on regex-base/regex-posix for inclusion in GHC, or
I could prepare this for you.
- I could just extract Text.Regex(.Posix) from the base package into a separate package.
obviously the first option is better, but time is short. Let me know what you think.
Cheers, Simon
I'll assemble a version organized like that this week. Important question: should I be planning to install alongside the current Text.Regex(.Posix), or planning on replacing them (with an identical API)? -- Chris

Chris Kuklewicz wrote:
That could work well. It would not involve too much pulling apart.
One small quirk is that there is the old Text.Regex API and a new JRegex-style API.
Is it possible to provide both? Perhaps deprecating the current API?
A "default" backend has to be dependably present. That means either keeping the current Posix backend, adding a dependency on PCRE, or using the Haskell/Parsec backend.
I'm not keen on adding a PCRE dependency. We already include an implementation of POSIX regexes in GHC itself (libraries/base/cbits/regex), which tends to get used on Windows where there isn't an implementation of POSIX regexes.
The problem is that String is very inefficient with Posix or PCRE and ByteString is slightly inefficient with Haskell/Parsec.
Do you have any measurements (rough measurements would be fine)? When you say "very inefficient", by what factor is the Parsec implementation faster than using the Posix one for Strings? If we were to use the Parsec implementation, that pulls in another dependency. Not out of the question, but to be avoided if possible.
So we could either:
- work on regex-base/regex-posix for inclusion in GHC, or
I could prepare this for you.
Great, thanks!
I'll assemble a version organized like that this week. Important question: Should I be planning to install alongside the current Text.Regex(.Posix) or planning on replacing them? (With an identical API)?
We want to replace Text.Regex. So ideally you want to do this in a GHC tree, so you can remove the old Text.Regex and replace with yours. If this is too difficult, then you could develop it separately (as Text.Regex.New, or something), and I'll make the relevant changes when I import it. Cheers, Simon

Simon Marlow wrote:
Chris Kuklewicz wrote:
That could work well. It would not involve too much pulling apart.
One small quirk is that there is the old Text.Regex API and a new JRegex-style API.
Is it possible to provide both? Perhaps deprecating the current API?
It is possible to provide the old and new. The old was only defined for the String type and this probably will not be changed (at least at first).
A "default" backend has to be dependably present. That means either keeping the current Posix backend, adding a dependency on PCRE, or using the Haskell/Parsec backend.
I'm not keen on adding a PCRE dependency. We already include an implementation of POSIX regexes in GHC itself (libraries/base/cbits/regex), which tends to get used on Windows where there isn't an implementation of POSIX regexes.
Ah. That is how you are doing it.
The problem is that String is very inefficient with Posix or PCRE and ByteString is slightly inefficient with Haskell/Parsec.
Do you have any measurements (rough measurements would be fine)? When you say "very inefficient", by what factor is the Parsec implementation faster than using the Posix one for Strings?
This whole Text.Regex.Lazy project was born from the computer language shootout; see http://haskell.org/hawiki/RegexDna . The Text.Regex(.Posix) that came with GHC timed out (hours!). The pure Haskell/Parsec version took about 2 minutes. That is the meaning of "very inefficient" for repeated use of Text.Regex(.Posix) on String: more than two orders of magnitude, since it does not cache the CString that it marshals.
If we were to use the Parsec implementation, that pulls in another dependency. Not out of the question, but to be avoided if possible.
The only nonparsec/nonlibrary version is a simple DFA, which is too simple for many uses. To get what people expect from regular expressions you need the posix library, the pcre library, my parsec parser, or someone else's regex implementation in Haskell. Or the parsec version could eventually be rewritten to not depend on parsec by implementing its own parser monad.

To keep a Posix default backend, libraries/base/cbits/regex may need to become part of regex-posix. That would be a learning curve for me, as I have no GHC-on-Windows experience, though I have a computer for it next to me. So I might need help later for that.
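A self-contained parser monad of the kind that could replace the Parsec dependency is small to write. The sketch below is illustrative only (a Maybe-based parser with no error reporting, unlike Parsec); the `bDotD` demo matches a tiny cousin of the benchmark pattern "b(..?c)?d" at the start of the input.

```haskell
-- Minimal parser monad sketch: enough for char/anyChar/alternation,
-- with none of Parsec's error reporting. Illustrative only.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> case p s of
    Just (a, rest) -> Just (f a, rest)
    Nothing        -> Nothing

instance Applicative Parser where
  pure a = Parser $ \s -> Just (a, s)
  Parser pf <*> Parser pa = Parser $ \s -> case pf s of
    Just (f, s') -> case pa s' of
      Just (a, s'') -> Just (f a, s'')
      Nothing       -> Nothing
    Nothing -> Nothing

instance Monad Parser where
  Parser p >>= f = Parser $ \s -> case p s of
    Just (a, s') -> runParser (f a) s'
    Nothing      -> Nothing

char :: Char -> Parser Char
char c = Parser $ \s -> case s of
  (x:xs) | x == c -> Just (x, xs)
  _               -> Nothing

anyChar :: Parser Char
anyChar = Parser $ \s -> case s of
  (x:xs) -> Just (x, xs)
  _      -> Nothing

-- Try one parser; fall back to another on failure.
orElse :: Parser a -> Parser a -> Parser a
orElse (Parser p) (Parser q) = Parser $ \s ->
  case p s of { Nothing -> q s; r -> r }

-- Match "b.d" at the start of the input, a tiny cousin of the
-- benchmark pattern "b(..?c)?d".
bDotD :: Parser String
bDotD = do
  b <- char 'b'
  x <- anyChar
  d <- char 'd'
  return [b, x, d]

main :: IO ()
main = do
  print (runParser bDotD "bcd!")
  print (runParser bDotD "xyz")
  print (runParser (bDotD `orElse` fmap (:[]) anyChar) "xyz")
```

Capture groups and repetition would be layered on top with more combinators; the point is only that no external dependency is needed for the monad itself.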
So we could either:
- work on regex-base/regex-posix for inclusion in GHC, or
I could prepare this for you.
Great, thanks!
The re-organization is in progress (hooray for "darcs mv"). After re-organization will come the doc/Haddock clean up to match. After that comes the unit testing clean up (I have some HUnit and QuickCheck now). Then, time permitting, benchmarks.
I'll assemble a version organized like that this week. Important question: Should I be planning to install alongside the current Text.Regex(.Posix) or planning on replacing them? (With an identical API)?
We want to replace Text.Regex. So ideally you want to do this in a GHC tree, so you can remove the old Text.Regex and replace with yours. If this is too difficult, then you could develop it separately (as Text.Regex.New, or something), and I'll make the relevant changes when I import it.
I will make such a Text.Regex.New that fakes the old API. I'll make it use the posix backend, but that can be changed via an import statement. I suggest removing the old Text.Regex.Posix module. People will be able to make better use of the new API for doing this. -- Chris
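The shim described above would expose the old mkRegex/matchRegex shapes while delegating to a new-style backend. The sketch below is hypothetical: the "backend" is a stand-in that treats the pattern as a literal string with no subexpressions, where the real Text.Regex.New would delegate to the posix backend.

```haskell
-- Sketch of faking the old Text.Regex API on top of a newer backend.
-- The literal-string matcher here is a stand-in for a real backend.
import Data.List (isPrefixOf, tails)

newtype Regex = Regex String

mkRegex :: String -> Regex
mkRegex = Regex

-- Old API shape: Just the list of subexpression matches on success
-- (empty here, since the literal stand-in captures nothing), Nothing
-- on failure.
matchRegex :: Regex -> String -> Maybe [String]
matchRegex (Regex pat) input
  | any (pat `isPrefixOf`) (tails input) = Just []
  | otherwise                            = Nothing

main :: IO ()
main = do
  print (matchRegex (mkRegex "bcd") "aabcdx")
  print (matchRegex (mkRegex "zzz") "aabcdx")
```

Keeping the old signatures intact means existing callers recompile unchanged while the backend underneath is swapped via an import.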
participants (5):
- Chris Kuklewicz
- dons@cse.unsw.edu.au
- Simon Marlow
- Simon Marlow
- skaller