Haskell GSoC proposal for richer numerical type classes and supporting algorithms

Hello All,

I would like to know if there is enough community interest in the following GSoC project proposal of mine for me to write up a proper Haskell GSoC application for it (and, accordingly, whether there is a person who'd be up for taking the mentoring role).

Project: an alternate numerical prelude with a typeclass hierarchy that follows the one found in abstract algebra, along with associated generic algorithms for various computations that can be done on data structures satisfying these type classes.

In particular, the motivating idea is the following: yes, it would be useful (for more mathematically inclined Haskellers) to have a numerical prelude whose hierarchy means something, but even for that subset of the Haskell user population, such an alternate prelude is only useful if users get extra functionality out of that more detailed infrastructure (and I believe that some of the infrastructure that becomes feasible would be worthwhile to the larger Haskell community).

The basic plan is as follows:

1) Define a typeclass hierarchy that covers everything from monoids through groups, rings, and fields, and various points in between. After some experimenting, it has become pretty clear to me that all of these need to be defined indirectly with the help of Template Haskell, so that the names of the various operators can be suitably parameterized (so that * and + etc. aren't the only choices people have). This part itself isn't terribly difficult, though there are a lot of important intermediate algebraic structures that also need to be defined. As I currently plan it, I'm very much inclined towards specifying all the required properties of each algebraic structure as testable properties, so that in a certain sense all these type classes could be interpreted as "adding facts" to an inference engine. This part is easy: just a lot of support type classes which a user wouldn't often encounter or deal with unless they're doing real math, in which case understanding them is probably key to correctly writing the desired code anyway.

For those reading this who don't know the abstract algebra terminology:

*a monoid* is a set with an operation "+" which has an identity element; an example would be lists under the append operation.

*a group* is a monoid where the operation is invertible. It could be something as simple as the positive rational numbers under multiplication, or the various translations and rotations that one applies to 3D objects, such as when doing OpenGL programming; in this latter case the "*" or "+" operation is composition of these (invertible) transformations.

*a ring* will have both a "+" and a "*"; the "+" will be a group over the set of objects in the ring, and the "*" a monoid over the objects in the ring, plus some rules about 0 and 1. A simple example would just be the integers with + and * as they usually are.

*a field* would be a ring where all the nonzero elements are invertible with respect to multiplication (i.e. the nonzero elements form a multiplicative group). A standard example would be the rational numbers with the standard + and *. And so forth for many other algebraic objects.

2) Provide algorithmic machinery that makes this all worthwhile. In particular, there are two veins of algorithmic machinery that would be goals of the GSoC project, one of which is certainly feasible, and the other of which I'm still working out the design details for, but feel is very likely.
Respectively:

a) Generic algorithms for various standard algebraic computational problems; for example, a generic Euclidean algorithm that will work on any sort of data structure/object/algebraic thingy that satisfies all the necessary algebraic properties (such as multivariate polynomials!), with specialized versions for the various cases where better algorithms are known and the nature of the representation can be exploited (e.g. binary gcd on the integers). This part of the project would essentially be me going through some computational number theory/algebra texts such as http://www.shoup.net/ntb/ (A Computational Introduction to Number Theory and Algebra) and implementing every interesting/useful algorithm, in both generic and specialized variants. As Haskell's numerical typeclass hierarchy currently stands, it is impossible to implement generic versions of these algorithms in Haskell, and I don't think any widely used language aside from Haskell (except perhaps Scala?) even has the right abstraction facilities to make it feasible. Of course, certain specialized cases would perhaps even benefit from being implemented in a more low-level fashion (e.g. via some external C/C++ library), but that itself is not really important for the core project and would be beyond the scope of a single summer's work.

b) Because the type classes would directly encode what manipulations and equational-reasoning steps are valid, the same information could be used to safely do computer-assisted algebra/mathematics. In particular, the various types which are instances of algebraic type classes (say Field Rationals) could also be indexed by a sort whose members are CAS and Value, so (Field (Rationals Value)), and depending on what the context expects, we either get a computation or a lightweight way of lifting mathematical expressions into a higher-order abstract syntax! Then it would be possible to do interesting things like symbolic differentiation of functions, e.g. polynomial expressions, etc. For this latter deliverable, there are still a few questions and problems I'd need to work out (such as how names are handled and what sort of primitive operations should be used/exposed), but nothing that seems insurmountable to making it work, and in a fairly lightweight way for the end user at that!

Any constructive feedback would be very appreciated. Please note that, irrespective of GSoC summer funding considerations, I'm planning on putting this together; the funding would mean that I wouldn't have a day job taking up a good chunk of my time/energy this coming summer. Also, I do in fact have enough of an algorithms, PL, and math background to undertake this project, though I'll leave that info for the GSoC app if there proves to be an interest in the proposal.

Again, thanks for any feedback!

-Carter Schonwald
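To make both halves of the plan concrete, here is a rough sketch of the intended flavor (every class, operator, and property name below is a hypothetical stand-in, not a settled design): a small slice of the hierarchy with its laws as QuickCheck-testable properties, and a generic Euclidean algorithm over an assumed EuclideanDomain class.

import Test.QuickCheck

-- A tiny slice of the hierarchy; the laws live alongside as properties.
class Monoid' a where
  (<+>) :: a -> a -> a
  zero  :: a

class Monoid' a => Group' a where
  neg :: a -> a

-- The laws, as testable "facts" about any instance:
prop_assoc :: (Monoid' a, Eq a) => a -> a -> a -> Bool
prop_assoc x y z = (x <+> y) <+> z == x <+> (y <+> z)

prop_identity :: (Monoid' a, Eq a) => a -> Bool
prop_identity x = zero <+> x == x && x <+> zero == x

prop_inverse :: (Group' a, Eq a) => a -> Bool
prop_inverse x = x <+> neg x == zero

instance Monoid' [a] where   -- lists under append, as above
  (<+>) = (++)
  zero  = []

-- Part (a): one generic algorithm. Anything supplying division with
-- remainder gets a gcd, be it Integers or multivariate polynomials.
class Eq a => EuclideanDomain a where
  zeroE   :: a
  divModE :: a -> a -> (a, a)

euclid :: EuclideanDomain a => a -> a -> a
euclid a b
  | b == zeroE = a
  | otherwise  = euclid b (snd (a `divModE` b))

instance EuclideanDomain Integer where
  zeroE   = 0
  divModE = divMod

main :: IO ()
main = do
  print (euclid (12 :: Integer) 18)                            -- 6
  quickCheck (prop_assoc :: String -> String -> String -> Bool)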

Hi Carter

The proposal is interesting, but maybe there is not a great community benefit to a 'covers everything' library, considering Henning Thielemann and others' 'numeric prelude' already exists: http://hackage.haskell.org/package/numeric-prelude

As a not especially mathematically inclined Haskell programmer who delves into geometry at least, the current Num class is certainly a problem even for me. I'd certainly welcome a small but more 'sound' set of numeric classes. Jerzy Karczmarczuk has presented such a library in various papers; this one (in French) lists the full code: http://users.info.unicaen.fr/~karczma/arpap/pareseq.pdf At the moment I use the normal Num class (ho-hum) plus Conal Elliott's VectorSpace package to get something similar.

Sectioning the Prelude would be useful (perhaps boring) work to add onto a GSoC project that replaces some of its machinery. Currently, if one wants to avoid the Prelude's Num, one then has to import, say, the monad stuff by hand. Similarly, if one wants to hide the monads, because one has parameterized monads instead, one loses all the Num code. Some suitably defined import modules would be very handy, e.g.:

{-# LANGUAGE NoImplicitPrelude #-}

module XYZ where

import PreludeWithoutNum
...

Best wishes

Stephen
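A minimal sketch of what the providing side could look like (PreludeWithoutNum is just the hypothetical name from the example above, and the hiding list below would certainly need tuning):

module PreludeWithoutNum
  ( module Prelude   -- re-exports everything still left in scope
  ) where

import Prelude hiding
  ( Num (..), Real (..), Integral (..)
  , Fractional (..), Floating (..), RealFrac (..), RealFloat (..)
  , subtract, even, odd, gcd, lcm, (^), (^^)
  , fromIntegral, realToFrac
  )

With that, a module like XYZ above keeps the monad machinery without ever seeing the Prelude's numeric classes.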

Hi Carter,
You might be interested in the 'monoids' package on hackage, which I
constructed for my own research.
http://hackage.haskell.org/package/monoids-0.1.36
This package largely covers the first half of your proposal, and provides
machinery for automatic differentiation of monoids over bimodules, which
starts to cover some of the scope you describe in the second half of your
proposal. Later revisions drastically reduce scope, in part due to the
considerations below.
I'd like to point out a few things about why you might not want to do this.
A rigorous numerical prelude is a tedious thing in Haskell to extend or to
use in practice.
I think many people would agree that Num did not strike the right balance
between generality and power. There are many perfectly good numerical types
that don't fit into the strange 'self-semi-measured ring-like thing with
equality and an available textual representation' that makes up a Haskell
Num.
However, in the absence of the ability to declare class aliases, define
default members for your superclasses, and retroactively define superclasses,
a deeper numeric prelude is an exercise in frustration.
You can view the standard mathematical hierarchy as a lattice defined by the
laws associated with the structure in question. You can start with magma,
add associativity and get semigroup, or just add a unit and get a unital
magma, but mixing the two gives you a monoid, and so on and so forth. Many of
the points on this lattice have names; others are just adjectival
modifications of more primitive concepts.
However, if you ever skip a level, and say, omit 'Quasigroup' on the way to
defining 'Loop', you'll find that the nature of Haskell makes it hard to
retrofit such a level into your hierarchy. After all, any users who had
defined their own 'Loop' instances before, would now have to split their
definition in half. So, on one front, these deep hierarchies are brittle.
Another problem is the drastic increase in boilerplate to instantiate
anything deep within the hierarchy. When every Field is a Magma, SemiGroup,
Monoid, QuasiGroup, Loop, Group, UnitalMagma, MultiplicativeMagma,
MultiplicativeSemiGroup, MultiplicativeMonoid, MultiplicativeQuasiGroup,
MultiplicativeLoop, MultiplicativeGroup, Ringoid, LeftSemiNearRing,
RightSemiNearRing, SemiNearRing, Rng, SemiRing, Ring, and Field with the
attendant instances to form a vector space (and hence module, etc.) over
itself and the canonical modules over the naturals (as a monoid) and
integers (as a group), the amount of boilerplate reaches a level of patent
absurdity. You have to instantiate each of those classes and provide default
definitions for properties that are clearly inferable. Yes, the
MultiplicativeQuasiGroup's notions of left and right division are uniquely
the same, but you still have to write the lines that say so, for every
instance. Template Haskell can help dull the pain, but the result seems
hardly idiomatic.
The amount of code to define a new Field-like object can balloon to well over
a hundred lines, and in the above I didn't even address how to work with
near-field-like concepts like Floats and Doubles, which don't support all of
the laws but which have the same feel.
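To make the boilerplate point concrete, even a four-level slice of such a lattice costs something like this for a single new type (class names here are hypothetical, and the real tower listed above runs to twenty-odd levels, not four):

newtype Z2 = Z2 Int deriving (Eq, Show)

class Magma a where
  op :: a -> a -> a

class Magma a => Semigroup' a            -- adds only the associativity law
class Magma a => UnitalMagma a where
  unit :: a
class (Semigroup' a, UnitalMagma a) => Monoid' a

-- Four levels, four instances; the laws-only levels still demand an
-- (empty) instance each, and this is before the Multiplicative*
-- duplicates, Ring, Field, and the module/vector-space instances.
instance Magma Z2       where op (Z2 a) (Z2 b) = Z2 ((a + b) `mod` 2)
instance Semigroup' Z2
instance UnitalMagma Z2 where unit = Z2 0
instance Monoid' Z2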
Finally, there is the issue of adoption. Incompatibility with the Prelude is
a major concern. Ultimately your code has to talk to other people's code,
and the impedance mismatch can be high.
-Edward Kmett
On Thu, Apr 8, 2010 at 1:23 AM, Carter Schonwald wrote: [...]

On Thu, Apr 8, 2010 at 2:09 PM, Edward Kmett wrote:
Template Haskell can help dull the pain, but the result seems hardly idiomatic.
Well, since this is dealing with types and type classes, much of the required boilerplate could also be straightforwardly derived in full generality using type-level metaprogramming techniques rather than TH, but the outcome of that would likely be even less tasteful, in the sense of "so many UndecidableInstances that you won't be able to scratch your nose without running into the Halting Problem". With a bit of finesse, though, I suspect the result could allow users of the library to avoid both boilerplate and unnerving GHC extensions. Compatibility with Prelude classes could probably also be solved this way. Still, probably not terribly appealing to most folks.
The amount of code to define a new Field-like object can balloon to well over a hundred lines, and in the above I didn't even address how to work with near-field-like concepts like Floats and Doubles, which don't support all of the laws but which have the same feel.
I'm somewhat uncomfortable as it is with equivocating between "true" mathematical objects and hand-wavy approximations that have hardware support. Seriously, floating point so-called "numbers" don't even have reflexive equality! If anything, it seems like there might be value in chopping the numeric types apart into "fast number-crunchy types" and "types with nice algebraic properties", and then enhancing each with relevant functionality. - C.

On Thu, Apr 8, 2010 at 3:25 PM, Casey McCann wrote:
On Thu, Apr 8, 2010 at 2:09 PM, Edward Kmett wrote:
Template Haskell can help dull the pain, but the result seems hardly idiomatic.
Well, since this is dealing with types and type classes, much of the required boilerplate could also be straightforwardly derived in full generality using type-level metaprogramming techniques rather than TH, but the outcome of that would likely be even less tasteful, in the sense of "so many UndecidableInstances that you won't be able to scratch your nose without running into the Halting Problem". With a bit of finesse, though, I suspect the result could allow users of the library to avoid both boilerplate and unnerving GHC extensions. Compatibility with Prelude classes could probably also be solved this way.
Still, probably not terribly appealing to most folks.
Unfortunately, the type-level metaprogramming approach doesn't really yield something that has an idiomatic usage pattern, and it would still rely on extensions, since you'd need Type Families and/or MPTCs/Fundeps/UndecidableInstances/IncoherentInstances. A TH solution needn't be that bad with Template Haskell and quasiquotation:

[$field ''MyField|
  (+) = ...
  (*) = ...
  negate = ...
|]

You just construct the tower of instances needed and bake in a bunch of machinery to precalculate the appropriate defaults and derived members. This has the advantage that if the numerical tower is refactored to add another level, the client-side code doesn't break.
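A sketch of what such a generator could look like underneath (the classes and deriveTower below are hypothetical stand-ins, not the quasiquoter itself):

{-# LANGUAGE TemplateHaskell #-}
module DeriveTower where

import Language.Haskell.TH

class Magma a where
  op :: a -> a -> a

class Magma a => Semigroup' a   -- laws only, no new members

-- Emit the whole instance tower for a type, given its operation.
-- Adding a level to the hierarchy means changing only this function,
-- not the client code that splices it.
deriveTower :: Name -> Q Exp -> Q [Dec]
deriveTower ty opQ =
  sequence
    [ instanceD (cxt []) [t| Magma $(conT ty) |]
        [funD 'op [clause [] (normalB opQ) []]]
    , instanceD (cxt []) [t| Semigroup' $(conT ty) |] []
    ]

A client module would then write something like $(deriveTower ''MyField [| myPlus |]) and pick up every level at once.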
The amount of code to define a new Field-like object can balloon to well over a hundred lines, and in the above I didn't even address how to work with near-field-like concepts like Floats and Doubles, which don't support all of the laws but which have the same feel.
I'm somewhat uncomfortable as it is with equivocating between "true" mathematical objects and hand-wavy approximations that have hardware support. Seriously, floating point so-called "numbers" don't even have reflexive equality! If anything, it seems like there might be value in chopping the numeric types apart into "fast number-crunchy types" and "types with nice algebraic properties", and then enhancing each with relevant functionality.
I agree they should be separated. In my previous implementation of these, I had built up a "Pseudo-"-prefixed numerical tower for when you couldn't rely on the laws to apply, and a newtype wrapper that let you coerce them into something for which you claim the laws locally apply. What I meant was that I hadn't bothered to include the Pseudo-prefixed variants of each of the classes, which bloats the example even further. -Edward Kmett
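A sketch of that shape (the Pseudo*/Lawful names here are hypothetical, not the earlier implementation):

class PseudoMonoid a where
  pappend :: a -> a -> a     -- laws not guaranteed (think Double)

class LawfulMonoid a where
  lappend :: a -> a -> a     -- laws assumed to hold

newtype Lawful a = Lawful { unLawful :: a } deriving (Eq, Show)

-- Wrapping is the explicit, local claim that the pseudo operation
-- really is lawful for the use at hand.
instance PseudoMonoid a => LawfulMonoid (Lawful a) where
  lappend (Lawful x) (Lawful y) = Lawful (pappend x y)

instance PseudoMonoid Double where
  pappend = (+)              -- associative only approximately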

On Apr 8, 2010, at 12:25 PM, Casey McCann wrote:
Seriously, floating point so-called "numbers" don't even have reflexive equality!
They don't? I am pretty sure that a floating point number is always equal to itself, with possibly a strange corner case for things like +/- 0 and NaN. Cheers, Greg

Gregory Crosswhite wrote:
On Apr 8, 2010, at 12:25 PM, Casey McCann wrote:
Seriously, floating point so-called "numbers" don't even have reflexive equality!
They don't? I am pretty sure that a floating point number is always equal to itself, with possibly a strange corner case for things like +/- 0 and NaN.
Exactly. NaN /= NaN. Other than that, I believe that "let x = ... in x == x" is true (because they are the same bitfield by definition), however it is easy to have 'the same number' without it having the same bitfield representation due to loss of precision and the like. To say nothing of failures of other laws leading to overflow, underflow, etc. -- Live well, ~wren

On Thu, Apr 8, 2010 at 7:58 PM, wren ng thornton wrote:
They don't? I am pretty sure that a floating point number is always equal to itself, with possibly a strange corner case for things like +/- 0 and NaN.
Exactly. NaN /= NaN.
Other than that, I believe that "let x = ... in x == x" is true (because they are the same bitfield by definition), however it is easy to have 'the same number' without it having the same bitfield representation due to loss of precision and the like. To say nothing of failures of other laws leading to overflow, underflow, etc.
Indeed. NaN means that equality is not reflexive for floats in general, only a subset of them. Likewise, addition and multiplication are not associative and the distributive law doesn't hold. I think commutativity is retained, though. That's something, right? - C.
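For instance, associativity already fails for ordinary literals:

Prelude> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
False
Prelude> (0.1 + 0.2) + 0.3
0.6000000000000001
Prelude> 0.1 + (0.2 + 0.3)
0.6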

On Apr 8, 2010, at 5:30 PM, Casey McCann wrote:
On Thu, Apr 8, 2010 at 7:58 PM, wren ng thornton wrote:
Exactly. NaN /= NaN [...]
Indeed. NaN means that equality is not reflexive for floats in general, only a subset of them.
First of all, it isn't clear to me that NaN /= NaN, since in ghci the expression "1.0/0.0 == 1.0/0.0" evaluates to True. But even if that were the case, I would call that more of a technicality than meaning that equality is not reflexive for floats, since NaN is roughly the floating-point equivalent of _|_, and by the same argument one could also say that reflexivity does not hold in general for equating values of *any* Haskell type, since (_|_ == _|_) does not return True but rather _|_. (Don't get me wrong, I agree with your point that multiplication and addition are neither associative nor distributive and that other such nasty things occur when dealing with floating-point numbers; it's just that I think that saying they are so messed up that even equality is not reflexive is a little harsher than they actually deserve. ;-) ) Cheers, Greg

On Friday, 9 April 2010 at 02:51:23, Gregory Crosswhite wrote:
On Apr 8, 2010, at 5:30 PM, Casey McCann wrote:
On Thu, Apr 8, 2010 at 7:58 PM, wren ng thornton wrote:
Exactly. NaN /= NaN
[...]
Indeed. NaN means that equality is not reflexive for floats in general, only a subset of them.
First of all, it isn't clear to me that NaN /= NaN,
Specified by IEEE 754, IIRC.
since in ghci the expression "1.0/0.0 == 1.0/0.0" evaluates to True.
Yes, but 1/0 isn't a NaN:

Prelude> isNaN (1.0/0.0)
False
Prelude> isNaN (0.0/0.0)
True
Prelude> 1.0/0.0
Infinity
Prelude> 0.0/0.0
NaN
Prelude> (0.0/0.0) == (0.0/0.0)
False
But even if that were the case, I would call that more of a technicality than meaning that equality is not reflexive for floats, since NaN is roughly the floating-point equivalent of _|_, and by the same argument one could also say that reflexivity does not hold in general for equating values of *any* Haskell type, since (_|_ == _|_) does not return True but rather _|_.
Very roughly. But I agree with your point, though x == x returning False isn't quite the same as x == x returning _|_ (repeat 1 == repeat 1).
(Don't get me wrong, I agree with your point that multiplication and addition are neither associative nor distributive and other such nasty things occur when dealing with floating point numbers, it's just that I think that saying they are so messed up that even equality is not reflexive is a little harsher than they actually deserve. ;-) )
Yup. But using (==) on floating point numbers is dangerous even without NaNs. Warning against that can be beneficial.
Cheers, Greg

On Apr 8, 2010, at 6:53 PM, Daniel Fischer wrote:
On Friday, 9 April 2010 at 02:51:23, Gregory Crosswhite wrote:
Yes, but 1/0 isn't a NaN:
Prelude> isNaN (1.0/0.0)
False
Prelude> isNaN (0.0/0.0)
True
Prelude> 1.0/0.0
Infinity
Prelude> 0.0/0.0
NaN
Prelude> (0.0/0.0) == (0.0/0.0)
False
Curse you for employing the dirty trick of using easily verifiable facts to prove me wrong! :-)
Yup. But using (==) on floating point numbers is dangerous even without NaNs. Warning against that can be beneficial.
Fair enough. On a tangential note, I've considered coding up a package with an "AlmostEq" typeclass that allows one to test for approximate equality. The problem is that different situations call for different tolerances, so there is no standard "approximately equal" operator that would work for everyone; but there might be a tolerance that is "good enough" for most situations where it would be needed (such as using QuickCheck to test that two different floating-point functions that are supposed to return the same answer actually do so), which would make it worthwhile to have a standard package for this around for the sake of convenience. Anyone have any thoughts on this? Cheers, Greg
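For the sake of discussion, a minimal sketch of what such a class might look like (the name, the scaling rule, and the default tolerance are all placeholders):

class AlmostEq a where
  almostEq :: Double -> a -> a -> Bool   -- tolerance, then operands

instance AlmostEq Double where
  -- relative comparison that degrades to absolute near zero
  almostEq eps x y = abs (x - y) <= eps * maximum [1, abs x, abs y]

(~==) :: AlmostEq a => a -> a -> Bool
(~==) = almostEq 1e-9
infix 4 ~==

A QuickCheck property could then read \x -> f x ~== g x.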

--- On Thu, 4/8/10, Gregory Crosswhite wrote:
On a tangential note, I've considered coding up a package with an "AlmostEq" typeclass that allows one to test for approximate equality. The problem is that different situations call for different tolerances, so there is no standard "approximately equal" operator that would work for everyone; but there might be a tolerance that is "good enough" for most situations where it would be needed (such as using QuickCheck to test that two different floating-point functions that are supposed to return the same answer actually do so), which would make it worthwhile to have a standard package for this around for the sake of convenience.
Anyone have any thoughts on this?
It's always worth pointing out Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic" at this point: http://docs.sun.com/source/806-3568/ncg_goldberg.html (not that it really helps you out in this case). Doug

On Thu, Apr 8, 2010 at 7:29 PM, Doug Burke wrote:
--- On Thu, 4/8/10, Gregory Crosswhite wrote:
On a tangential note, I've considered coding up a package with an "AlmostEq" typeclass that allows one to test for approximate equality. The problem is that different situations call for different tolerances, so there is no standard "approximately equal" operator that would work for everyone; but there might be a tolerance that is "good enough" for most situations where it would be needed (such as using QuickCheck to test that two different floating-point functions that are supposed to return the same answer actually do so), which would make it worthwhile to have a standard package for this around for the sake of convenience.
Anyone have any thoughts on this?
I've always wondered whether Haskell would make it easy to track the number of significant digits. The other thought is that you could probably use Oleg's implicit configurations to handle the tolerance in a rather nice way: http://www.cs.rutgers.edu/~ccshan/prepose/prepose.pdf The example in the paper manages the modulus implicitly, and I would imagine the amount of precision could be managed similarly. Jason
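A sketch of how the tolerance could be threaded implicitly (this leans on the reflection package's Given/give as a stand-in for the paper's own encoding):

import Data.Reflection (Given, give, given)

almostEq :: Given Double => Double -> Double -> Bool
almostEq x y = abs (x - y) <= given   -- 'given' is the ambient tolerance

demo :: (Bool, Bool)
demo = ( give 1e-6  (almostEq 0.1 (0.1 + 1e-9))   -- True
       , give 1e-12 (almostEq 0.1 (0.1 + 1e-9))   -- False
       )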

On Thu, Apr 8, 2010 at 8:51 PM, Gregory Crosswhite wrote:
First of all, it isn't clear to me that NaN /= NaN, since in ghci the expression "1.0/0.0 == 1.0/0.0" evaluates to True. But even if that were the case, I would call that more of a technicality than meaning that equality is not reflexive for floats, since NaN is roughly the floating-point equivalent of _|_, and by the same argument one could also say that reflexivity does not hold in general for equating values of *any* Haskell type, since (_|_ == _|_) does not return True but rather _|_.
The difference there is that _|_ generally means the entire program has shuffled off this mortal coil, whilst a (non-signalling) NaN is, by specification, silently and automatically propagated, turning everything it touches into more NaNs. The very sensible purpose of this is to allow computations on floats to be written in a straightforward manner without worrying about intermediate errors, rather than having NaN checks everywhere, or having to worry about exceptions all the time. In that regard, it's more analogous to all floating point operations occurring in an implicit Maybe monad, except with Nothing treated as False by conditionals. "(==) <$> Nothing <*> Nothing" indeed does not evaluate to "Just True", so in a sense equality is also not reflexive for types such as "Maybe Integer". Although I like to make fun of floats as being sorry excuses for numbers, the way they work is actually quite sensible from the perspective of doing low-level computations with fractional numbers. Now, if only certain... other... programming languages were as well-designed, and could silently and safely propagate values like "Not a Reference" or "Not a Pointer". - C.
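With a Prelude where the Applicative operators are in scope, that reads:

Prelude> (==) <$> Nothing <*> (Nothing :: Maybe Integer)
Nothing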
participants (9)
- Carter Schonwald
- Casey McCann
- Daniel Fischer
- Doug Burke
- Edward Kmett
- Gregory Crosswhite
- Jason Dagit
- Stephen Tetley
- wren ng thornton