Thanks for the reference. I agree with the rant word for word.

I use TCache. It is a cache with access and update in the STM monad, and each element can have its own persistence, defined by the programmer. So one element can be the result of a web service request (from AWS, for example), another can come from a database, and a third from anywhere else. All three can participate in the same in-memory STM transaction and, if they are modified, update their respective storages.
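Roughly, the idea looks like this. This is not the real TCache API, just a minimal sketch of the pattern (the Cached type and the persist field are made up for illustration): an in-memory value shared through STM, paired with a programmer-defined write-back action, so elements with different backends can be updated in one transaction and then flushed to their own storages.

  import Control.Concurrent.STM

  -- Not the TCache API; a made-up sketch of the idea.  An element lives in a
  -- TVar so it can take part in STM transactions, and carries its own
  -- programmer-defined write-back action (database, web service, file, ...).
  data Cached a = Cached
    { var     :: TVar a
    , persist :: a -> IO ()   -- backend-specific persistence
    }

  -- Update two elements coming from different backends in one transaction,
  -- then flush each one to its own storage.
  transfer :: Int -> Cached Int -> Cached Int -> IO ()
  transfer n from to = do
    (a, b) <- atomically $ do
      modifyTVar' (var from) (subtract n)
      modifyTVar' (var to)   (+ n)
      (,) <$> readTVar (var from) <*> readTVar (var to)
    persist from a
    persist to   b

TCache handles the caching, the keys, and the write-back for you, but the transactional core is the same idea.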

These are the kinds of things that are not possible in conventional databases. It is easy to create an almost perfect product if you establish the rules of perfection and sit at the center of the development process; that is what the SQL databases did for a long time. The DBs stayed in the protective womb of the back office, with a few queries per second, consistent with themselves and with nothing else. Now things have changed. We need transactions working for us close to fresh application data, at full speed, not in the back office. We need our data spread across different locations; we have no other option. We need to synchronize and integrate more than ever, so we need software and developers that can figure out what the data is about by looking at it, which means the schema must be implicit in the data, and so on.




2013/12/19 Mikael Brockman <mbrock@goula.sh>
Andrew Cowie <andrew@operationaldynamics.com> writes:

> Yeah, if external inspection were necessary that'd definitely be a good
> reason to go that way for sure. The report from Ozgur that just
> serializing out a Map structure was workable is encouraging, though.
> I'll start with that.

Pardon the digression, but I'd just like to appreciate this way of
thinking.  There's a rant by Bob Martin [1] that concludes:

> "We are heading into an interesting time. A time when the prohibition
> against different data storage mechanisms has been lifted, and we are
> free to experiment with many novel new approaches. But as we play with
> our CouchDBs and our Mongos and BigTables, remember this: The database
> is just a detail that you don’t need to figure out right away."

A project I'm working on uses a persistent append-only list, which is
currently implemented like this, almost verbatim:

  async . forever $
    atomically (readTChan queue) >>= writeFile path . Aeson.encode

Files are trivial to back up and generally easy to work with.  Since
it's just JSON, I can grep and mess with it easily with command-line
tools.  And since the writing is done in a separate thread reading from
a queue, I don't need to worry about locking.
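Spelled out with imports, it looks roughly like this (the Event type, the appendFile call, and the one-JSON-document-per-line layout are just how I'd sketch it, not necessarily the final code):

  {-# LANGUAGE OverloadedStrings #-}
  import Control.Concurrent.Async (async)
  import Control.Concurrent.STM   (TChan, atomically, newTChanIO, readTChan)
  import Control.Monad            (forever)
  import qualified Data.Aeson           as Aeson
  import qualified Data.ByteString.Lazy as BL

  data Event = Event { name :: String, payload :: Int }

  instance Aeson.ToJSON Event where
    toJSON (Event n p) = Aeson.object ["name" Aeson..= n, "payload" Aeson..= p]

  -- Start the writer thread and hand back the queue producers write to.
  -- Each queued item is appended to the file as one JSON document per line,
  -- so the file stays greppable and trivial to back up.
  startLog :: FilePath -> IO (TChan Event)
  startLog path = do
    queue <- newTChanIO
    _ <- async . forever $ do
      e <- atomically (readTChan queue)
      BL.appendFile path (Aeson.encode e <> "\n")
    return queue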

I think this will be alright for a good while, and when the project
outgrows it, I'll just migrate to some other solution.  Probably
acid-state, because the version migration stuff seems really useful.
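(The migration support in acid-state comes from safecopy; a version bump looks
something like this, sketched from memory of the docs rather than real code:)

  {-# LANGUAGE TemplateHaskell, TypeFamilies #-}
  import Data.SafeCopy

  -- version 0 of a record that is already on disk
  data Event_v0 = Event_v0 { name0 :: String }
  $(deriveSafeCopy 0 'base ''Event_v0)

  -- version 1 adds a field; old data is migrated on read
  data Event = Event { name :: String, tags :: [String] }
  $(deriveSafeCopy 1 'extension ''Event)

  instance Migrate Event where
    type MigrateFrom Event = Event_v0
    migrate (Event_v0 n) = Event n []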

[1]: Bob Martin's rant "No DB",
     http://blog.8thlight.com/uncle-bob/2012/05/15/NODB.html

--
Mikael Brockman

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



--
Alberto.