Proposal: Add {to,from}Strict conversion functions between strict and lazy ByteStrings

I propose to add optimized {to,from}Strict conversion functions between strict and lazy ByteStrings to the Data.ByteString.Lazy API.

Discussion deadline: 2 weeks from now (12 November)

= Current State =

The current Data.ByteString.Lazy API doesn't provide direct conversion functions to/from single strict ByteStrings. Currently, there are only `fromChunks` and `toChunks`, which convert to/from a list of strict ByteStrings.

A possible reference implementation of the missing conversion functions is:

fromStrict = BL.fromChunks . (:[])

and

toStrict = B.concat . BL.toChunks

== The Issues ==

The lack of `fromStrict`/`toStrict` in the Data.ByteString.Lazy API has the following issues:

- Convenience: If the single-strict-bytestring conversion is needed often, one tends to define module- or package-local helper functions for convenience/readability to perform the desired conversion. This violates the DRY principle.

- Principle of least surprise: Users new to `Data.ByteString.Lazy` may wonder why there is no direct conversion.

- Symmetry with the `Data.Text.Lazy` API, which does provide such single-strict-text conversion functions (`fromStrict`/`toStrict`).

- Performance: The "naive" `toStrict` definition given above has roughly 2 to 4 times higher overhead than a manually fused version (kindly provided by Bas van Dijk, whom I'd like to thank for the optimized versions of `toStrict` and `fromStrict`) -- see the end of this mail for criterion benchmark code and results.

= Proposed Enhancement =

Enhance the Data.ByteString.Lazy API by adding the following conversion functions (suggestions for improvements are highly welcome):

-- see benchmark code at end of mail for the qualified imports

-- |/O(1)/ Convert a strict ByteString into a lazy ByteString.
fromStrict :: B.ByteString -> BL.ByteString
fromStrict = flip BLI.chunk BLI.Empty

-- |/O(n)/ Convert a lazy ByteString into a strict ByteString.
toStrict :: BL.ByteString -> B.ByteString
toStrict lb = BI.unsafeCreate len $ go lb
  where
    len = BLI.foldlChunks (\l sb -> l + B.length sb) 0 lb

    go  BLI.Empty                   _   = return ()
    go (BLI.Chunk (BI.PS fp s l) r) ptr =
        withForeignPtr fp $ \p -> do
            BI.memcpy ptr (p `plusPtr` s) (fromIntegral l)
            go r (ptr `plusPtr` l)

== Benchmark Code & Results ==

------------------------------------------------------------------------
{-# LANGUAGE OverloadedStrings #-}

import Criterion
import Criterion.Main

import qualified Data.ByteString               as B
import qualified Data.ByteString.Internal      as BI
import qualified Data.ByteString.Lazy          as BL
import qualified Data.ByteString.Lazy.Internal as BLI

import Foreign.ForeignPtr
import Foreign.Ptr

toStrict1 :: BL.ByteString -> B.ByteString
toStrict1 = B.concat . BL.toChunks

toStrict2 :: BL.ByteString -> B.ByteString
toStrict2 lb = BI.unsafeCreate len $ go lb
  where
    len = BLI.foldlChunks (\l sb -> l + B.length sb) 0 lb

    go  BLI.Empty                   _   = return ()
    go (BLI.Chunk (BI.PS fp s l) r) ptr =
        withForeignPtr fp $ \p -> do
            BI.memcpy ptr (p `plusPtr` s) (fromIntegral l)
            go r (ptr `plusPtr` l)

main :: IO ()
main = do
    let lbs1 = "abcdefghij"
        lbs2 = BL.fromChunks (replicate 10   "abcdefghij")
        lbs3 = BL.fromChunks (replicate 1000 "abcdefghij")

    -- force evaluation of lbs{1,2,3} and verify validity
    print $ toStrict1 lbs1 == toStrict2 lbs1
    print $ toStrict1 lbs2 == toStrict2 lbs2
    print $ toStrict1 lbs3 == toStrict2 lbs3

    defaultMain
        [ bgroup "toStrict"
            [ bench "simple #1"    $ whnf toStrict1 lbs1
            , bench "simple #2"    $ whnf toStrict1 lbs2
            , bench "simple #3"    $ whnf toStrict1 lbs3
            , bench "optimized #1" $ whnf toStrict2 lbs1
            , bench "optimized #2" $ whnf toStrict2 lbs2
            , bench "optimized #3" $ whnf toStrict2 lbs3
            ]
        ]

{-
True
True
True
warming up
estimating clock resolution...
mean is 2.302557 us (320001 iterations)
found 2039 outliers among 319999 samples (0.6%)
  1658 (0.5%) high severe
estimating cost of a clock call...
mean is 54.99870 ns (14 iterations)
found 1 outliers among 14 samples (7.1%)
  1 (7.1%) low mild

benchmarking toStrict/simple #1
mean: 28.96077 ns, lb 28.89527 ns, ub 29.01562 ns, ci 0.950
std dev: 305.8466 ps, lb 262.1008 ps, ub 345.6136 ps, ci 0.950

benchmarking toStrict/simple #2
mean: 487.0739 ns, lb 486.7939 ns, ub 487.4713 ns, ci 0.950
std dev: 1.699232 ns, lb 1.262363 ns, ub 2.457099 ns, ci 0.950

benchmarking toStrict/simple #3
mean: 55.06322 us, lb 54.91370 us, ub 55.20236 us, ci 0.950
std dev: 741.6239 ns, lb 656.3273 ns, ub 846.6403 ns, ci 0.950

benchmarking toStrict/optimized #1
mean: 48.67522 ns, lb 48.65188 ns, ub 48.70237 ns, ci 0.950
std dev: 129.3192 ps, lb 111.3761 ps, ub 165.4819 ps, ci 0.950

benchmarking toStrict/optimized #2
mean: 178.6342 ns, lb 178.5480 ns, ub 178.7276 ns, ci 0.950
std dev: 457.4436 ps, lb 409.2746 ps, ub 519.8267 ps, ci 0.950

benchmarking toStrict/optimized #3
mean: 13.01866 us, lb 13.00734 us, ub 13.03549 us, ci 0.950
std dev: 70.09916 ns, lb 52.18012 ns, ub 97.77226 ns, ci 0.950
-}
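[Editor's illustration: a minimal, self-contained sketch of the kind of module-local helpers the Convenience issue above refers to, written only against the public `fromChunks`/`toChunks` API. The module name and helper names are hypothetical, not part of the proposal.]

module BSHelpers (toStrictBS, fromStrictBS) where

import qualified Data.ByteString      as B
import qualified Data.ByteString.Lazy as BL

-- Lazy -> strict: materialise the chunk list and concatenate it (copies data).
toStrictBS :: BL.ByteString -> B.ByteString
toStrictBS = B.concat . BL.toChunks

-- Strict -> lazy: wrap the strict ByteString as a single chunk.
fromStrictBS :: B.ByteString -> BL.ByteString
fromStrictBS = BL.fromChunks . (:[])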

On 29 October 2011 12:05, Herbert Valerio Riedel wrote:
I propose to add optimized {to,from}Strict conversion functions between strict and lazy ByteStrings to the Data.ByteString.Lazy API.
+1

Maybe this is bikeshedding and I'm not sure I like it but we could also rename these functions to:

toStrict -> fromLazy
fromStrict -> toLazy

Because we nowhere mention the word 'strict' in the bytestring API but we do mention the word 'lazy'.

Any idea why "toStrict/simple #1" is faster than "toStrict/optimized #1"?

Regards,

Bas

On Sat, Oct 29, 2011 at 2:18 PM, Bas van Dijk wrote:
On 29 October 2011 12:05, Herbert Valerio Riedel wrote:
I propose to add optimized {to,from}Strict conversion functions between strict and lazy ByteStrings to the Data.ByteString.Lazy API.
+1
Maybe this is bikeshedding and I'm not sure I like it but we could also rename these functions to:
toStrict -> fromLazy
fromStrict -> toLazy
Because we nowhere mention the word 'strict' in the bytestring API but we do mention the word 'lazy'.
In that case, I think they should be in the non-Lazy module? ByteString.Lazy.fromLazy/toLazy sound a bit odd... (at a first impression, you'd think they were identity functions).
Any idea why "toStrict/simple #1" is faster than "toStrict/optimized #1"?
Regards,
Bas
-- Work is punishment for failing to procrastinate effectively.

On 10/29/11 8:23 AM, Gábor Lehel wrote:
Maybe this is bikeshedding and I'm not sure I like it but we could also rename these functions to:
toStrict -> fromLazy
fromStrict -> toLazy
Because we nowhere mention the word 'strict' in the bytestring API but we do mention the word 'lazy'.
In that case, I think they should be in the non-Lazy module? ByteString.Lazy.fromLazy/toLazy sound a bit odd... (at a first impression, you'd think they were identity functions).
I don't think that sounds too strange. We're defining a new thing (lazy ByteStrings) which is a special version of something else more "primitive", so we're also defining how to convert the primitive things into our special things.

In any case, +1 to adding these functions to the package. I don't know how many times I've written them by hand this way...

--
Live well,
~wren

On Sun, Oct 30, 2011 at 12:53 AM, wren ng thornton wrote:
In any case, +1 to adding these functions to the package. I don't know how many times I've written them by hand this way...
+1!! Same here -- I've written these functions, or their inline
equivalents, hundreds of times.
G
--
Gregory Collins

Just as a historical note, they were intentionally left out of the original
bytestring library, as it was felt it would only encourage mixing up lazy
and strict bytestrings, with resulting poor and confusing performance.
It was intentional that you had to compose two functions to do this.
-- Don
On Sun, Oct 30, 2011 at 5:19 AM, Gregory Collins wrote:
On Sun, Oct 30, 2011 at 12:53 AM, wren ng thornton wrote:
In any case, +1 to adding these functions to the package. I don't know how many times I've written them by hand this way...
+1!! Same here -- I've written these functions, or their inline equivalents, hundreds of times.
G -- Gregory Collins

On Sun, Oct 30, 2011 at 09:47, Don Stewart wrote:
Just as a historical note, they were intentionally left out of the original bytestring library, as it was felt it would only encourage mixing up lazy and strict bytestrings, with resulting poor and confusing performance.
I think the fact that the issue has come up demonstrates that this was the wrong solution. In particular, this design has led to a bit too much "every library chooses one and you lose or are forced to convert to a possibly inappropriate representation if you don't agree".

--
brandon s allbery  allbery.b@gmail.com
wandering unix systems administrator (available)  (412) 475-9364 vm/sms
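[Editor's illustration: a hedged sketch of the library-boundary situation described above -- one API hands back a strict ByteString, another consumes a lazy one, and a conversion sits in between. `produceStrict` and `consumeLazy` are hypothetical placeholders, not real library functions.]

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString      as B
import qualified Data.ByteString.Lazy as BL

-- Stand-in for an API that returns strict ByteStrings (e.g. a socket recv).
produceStrict :: B.ByteString
produceStrict = "payload"

-- Stand-in for an API that consumes lazy ByteStrings (e.g. a streaming parser).
consumeLazy :: BL.ByteString -> Int
consumeLazy = fromIntegral . BL.length

main :: IO ()
main = print (consumeLazy (BL.fromChunks [produceStrict]))  -- the conversion at the boundary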

On Sat, 29 Oct 2011, Bas van Dijk wrote:
On 29 October 2011 12:05, Herbert Valerio Riedel wrote:
I propose to add optimized {to,from}Strict conversion functions between strict and lazy ByteStrings to the Data.ByteString.Lazy API.
+1
I remember I also missed these conversions, so add me as a supporter.
Maybe this is bikeshedding and I'm not sure I like it but we could also rename these functions to:
toStrict -> fromLazy
fromStrict -> toLazy
Because we nowhere mention the word 'strict' in the bytestring API but we do mention the word 'lazy'.
On the one hand this is correct, on the other hand the conversion functions will certainly reside in the Lazy module, because lazy bytestrings can be thought of as being built from strict bytestrings. In the Lazy module only the function names toStrict and fromStrict make sense.
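[Editor's illustration: a small sketch of the view Henning describes. A lazy ByteString is essentially a lazy list of strict chunks (the representation in Data.ByteString.Lazy.Internal is roughly `data ByteString = Empty | Chunk !S.ByteString ByteString`), so a single strict ByteString is just the one-chunk case. The function name below is hypothetical.]

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString               as B
import qualified Data.ByteString.Lazy.Internal as BLI

-- A strict ByteString viewed as a one-chunk lazy ByteString; BLI.chunk is used
-- instead of the BLI.Chunk constructor so that an empty input maps to BLI.Empty.
oneChunk :: B.ByteString -> BLI.ByteString
oneChunk b = BLI.chunk b BLI.Empty

main :: IO ()
main = print (BLI.foldlChunks (\n c -> n + B.length c) 0 (oneChunk "hello"))  -- prints 5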

On 29 October 2011 14:27, Henning Thielemann wrote:
On the one hand this is correct, on the other hand the conversion functions will certainly reside in the Lazy module, because lazy bytestrings can be thought of as being built from strict bytestrings. In the Lazy module only the function names toStrict and fromStrict make sense.
You convinced me. Let's stick to the names toStrict and fromStrict exported from the Lazy module.

Bas

On Sat, Oct 29, 2011 at 8:49 AM, Bas van Dijk wrote:
You convinced me. Let's stick to the names toStrict and fromStrict exported from the Lazy module.
This is also consistent with the naming used in the text package. +1 on all counts.

On Sat, 2011-10-29 at 14:18 +0200, Bas van Dijk wrote:
Any idea why "toStrict/simple #1" is faster than "toStrict/optimized #1"?
tbh, I missed #1 being actually slower... Thx for pointing that out!

The reason is simply that B.concat performs zero-copying for 0 and 1 chunks:

-- | /O(n)/ Concatenate a list of ByteStrings.
concat :: [ByteString] -> ByteString
concat []   = empty
concat [ps] = ps
concat xs   = ...

I've added those zero-copy optimizations to the optimized toStrict2, and now all timings are better than for the naive implementation toStrict1:

--------------------------------------------------------------------------
{-# LANGUAGE OverloadedStrings #-}

import Criterion
import Criterion.Main

import qualified Data.ByteString               as B
import qualified Data.ByteString.Internal      as BI
import qualified Data.ByteString.Lazy          as BL
import qualified Data.ByteString.Lazy.Internal as BLI

import Foreign.ForeignPtr
import Foreign.Ptr

toStrict1 :: BL.ByteString -> B.ByteString
toStrict1 = B.concat . BL.toChunks

toStrict2 :: BL.ByteString -> B.ByteString
toStrict2 BLI.Empty               = B.empty
toStrict2 (BLI.Chunk c BLI.Empty) = c
toStrict2 lb                      = BI.unsafeCreate len $ go lb
  where
    len = BLI.foldlChunks (\l sb -> l + B.length sb) 0 lb

    go  BLI.Empty                   _   = return ()
    go (BLI.Chunk (BI.PS fp s l) r) ptr =
        withForeignPtr fp $ \p -> do
            BI.memcpy ptr (p `plusPtr` s) (fromIntegral l)
            go r (ptr `plusPtr` l)

main :: IO ()
main = do
    let lbs0 = ""
        lbs1 = "abcdefghij"
        lbs2 = BL.fromChunks (replicate 2    "abcdefghij")
        lbs3 = BL.fromChunks (replicate 10   "abcdefghij")
        lbs4 = BL.fromChunks (replicate 1000 "abcdefghij")

    print $ toStrict1 lbs0 == toStrict2 lbs0
    print $ toStrict1 lbs1 == toStrict2 lbs1
    print $ toStrict1 lbs2 == toStrict2 lbs2
    print $ toStrict1 lbs3 == toStrict2 lbs3
    print $ toStrict1 lbs4 == toStrict2 lbs4

    defaultMain
        [ bgroup "toStrict"
            [ bench "simple #0"    $ whnf toStrict1 lbs0
            , bench "simple #1"    $ whnf toStrict1 lbs1
            , bench "simple #2"    $ whnf toStrict1 lbs2
            , bench "simple #3"    $ whnf toStrict1 lbs3
            , bench "simple #4"    $ whnf toStrict1 lbs4
            , bench "optimized #0" $ whnf toStrict2 lbs0
            , bench "optimized #1" $ whnf toStrict2 lbs1
            , bench "optimized #2" $ whnf toStrict2 lbs2
            , bench "optimized #3" $ whnf toStrict2 lbs3
            , bench "optimized #4" $ whnf toStrict2 lbs4
            ]
        ]

{-
True
True
True
True
True
warming up
estimating clock resolution...
mean is 2.537877 us (320001 iterations)
found 2658 outliers among 319999 samples (0.8%)
  2292 (0.7%) high severe
estimating cost of a clock call...
mean is 55.38578 ns (15 iterations)

benchmarking toStrict/simple #0
mean: 17.66316 ns, lb 17.62767 ns, ub 17.74077 ns, ci 0.950
std dev: 258.0479 ps, lb 146.2057 ps, ub 417.6999 ps, ci 0.950

benchmarking toStrict/simple #1
mean: 28.59188 ns, lb 28.49765 ns, ub 28.70322 ns, ci 0.950
std dev: 523.0005 ps, lb 415.2360 ps, ub 688.4154 ps, ci 0.950

benchmarking toStrict/simple #2
mean: 144.5600 ns, lb 144.2192 ns, ub 145.2525 ns, ci 0.950
std dev: 2.397734 ns, lb 1.352302 ns, ub 3.905039 ns, ci 0.950

benchmarking toStrict/simple #3
mean: 488.0094 ns, lb 486.6532 ns, ub 490.3121 ns, ci 0.950
std dev: 8.903734 ns, lb 6.046690 ns, ub 13.68234 ns, ci 0.950

benchmarking toStrict/simple #4
mean: 55.65404 us, lb 55.43386 us, ub 55.97341 us, ci 0.950
std dev: 1.342695 us, lb 989.9903 ns, ub 1.836054 us, ci 0.950

benchmarking toStrict/optimized #0
mean: 14.14306 ns, lb 14.10655 ns, ub 14.20415 ns, ci 0.950
std dev: 237.0362 ps, lb 159.7752 ps, ub 347.7329 ps, ci 0.950

benchmarking toStrict/optimized #1
mean: 19.10087 ns, lb 19.05273 ns, ub 19.18831 ns, ci 0.950
std dev: 322.9545 ps, lb 201.9111 ps, ub 503.2965 ps, ci 0.950

benchmarking toStrict/optimized #2
mean: 63.34285 ns, lb 63.21118 ns, ub 63.59386 ns, ci 0.950
std dev: 903.1160 ps, lb 543.0056 ps, ub 1.377267 ns, ci 0.950

benchmarking toStrict/optimized #3
mean: 166.5292 ns, lb 166.2405 ns, ub 167.1715 ns, ci 0.950
std dev: 2.144647 ns, lb 1.259903 ns, ub 3.408946 ns, ci 0.950

benchmarking toStrict/optimized #4
mean: 13.05338 us, lb 13.02102 us, ub 13.11728 us, ci 0.950
std dev: 222.8512 ns, lb 116.9145 ns, ub 343.6604 ns, ci 0.950
-}

+1 as users can't get this performance increase without using the internal API currently.

On 29 October 2011 18:10, Herbert Valerio Riedel wrote:
I've added those zero-copy optimizations to the optimized toStrict2, and now all timings are better than for the naive implementation toStrict1:
Nice!

+1

Oops I already voted.

+1
On Sat, Oct 29, 2011 at 12:44 PM, Bas van Dijk wrote:
On 29 October 2011 18:10, Herbert Valerio Riedel wrote:
I've added those zero-copy optimizations to the optimized toStrict2, and now all timings are better than for the naive implementation toStrict1:
Nice!
+1
Oops I already voted.

+1
On Sat, Oct 29, 2011 at 4:50 PM, Edward Kmett wrote:
+1
On Sat, Oct 29, 2011 at 12:44 PM, Bas van Dijk wrote:
On 29 October 2011 18:10, Herbert Valerio Riedel wrote:
I've added those zero-copy optimizations to the optimized toStrict2, and now all timings are better than for the naive implementation toStrict1:
Nice!
+1
Oops I already voted.
-- Felipe.

On 10/29/2011 11:05 AM, Herbert Valerio Riedel wrote:
I propose to add optimized {to,from}Strict conversion functions between strict and lazy ByteStrings to the Data.ByteString.Lazy API.

+1
-- Vincent

On Sat, 2011-10-29 at 12:05 +0200, Herbert Valerio Riedel wrote:
I propose to add optimized {to,from}Strict conversion functions between strict and lazy ByteStrings to the Data.ByteString.Lazy API.
Discussion deadline: 2 weeks from now (12 November)
I see we're still before the deadline, but it seems like unanimous support. I've added the functions. They'll be included in bytestring-0.10.x. Thanks Herbert and others who chimed in.

While I was at it, I also exported foldrChunks and foldlChunks so we now match the Text API in this area.

As Don pointed out, we deliberately didn't include {to,from}Strict functions to discourage people from converting back and forth, since it's expensive. Since that has not proved popular I've just documented it instead:

-- |/O(n)/ Convert a lazy 'ByteString' into a strict 'ByteString'.
--
-- Note that this is an /expensive/ operation that forces the whole lazy
-- ByteString into memory and then copies all the data. If possible, try to
-- avoid converting back and forth between strict and lazy bytestrings.
--
toStrict :: ByteString -> S.ByteString

BTW, I'm slightly sceptical of the benchmarks: they use tiny chunk sizes, so in practice I don't expect the performance of toStrict to be much different from B.concat . BL.toChunks.

Duncan
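[Editor's illustration: a short sketch of the chunk-wise alternative Duncan alludes to with foldlChunks/foldrChunks -- processing a lazy ByteString chunk by chunk instead of forcing it into one strict buffer. It imports foldlChunks from Data.ByteString.Lazy.Internal, which the proposal code above already uses; whether it is also re-exported elsewhere depends on the bytestring version.]

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString               as B
import qualified Data.ByteString.Lazy          as BL
import qualified Data.ByteString.Lazy.Internal as BLI
import Data.Word (Word8)

-- Count occurrences of a byte without a toStrict-style copy: each strict
-- chunk is inspected in turn and then becomes garbage.
countByte :: Word8 -> BL.ByteString -> Int
countByte w = BLI.foldlChunks (\n c -> n + B.count w c) 0

main :: IO ()
main = print (countByte 0x61 (BL.fromChunks (replicate 1000 "abcdefghij")))  -- prints 1000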
participants (14)

- Bas van Dijk
- Brandon Allbery
- Bryan O'Sullivan
- Don Stewart
- Duncan Coutts
- Edward Kmett
- Felipe Almeida Lessa
- Gregory Collins
- Gábor Lehel
- Henning Thielemann
- Herbert Valerio Riedel
- Johan Tibell
- Vincent Hanquez
- wren ng thornton