With a big enough machine and enough memory it's fine. I used to keep a job queue with a billion rows in MySQL at a gig long ago. You could do it with PostgreSQL pretty easily too. On your personal work machine? I dunno.

Not trying to steer you away from Haskell by any means, but if you can process your data in a SQL database efficiently, that's often pretty optimal in terms of both speed and ease of use, at least until you start doing more sophisticated analysis. I don't have a lot of experience in data analysis myself, but I've known people to do some preliminary slicing/dicing in SQL before moving on to building a custom model for understanding the data.

I guess I was just curious what a sensible approach using Haskell would look like, and I'll play
around with what I know now. If this were for my workplace I'd just put it in a database with
enough horsepower, but it's just my curiosity in my spare time, alas..
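For what it's worth, a common starting point in Haskell for data too big to hold comfortably in memory is to stream the file lazily and fold it into a strict accumulator, so only the aggregate lives in memory rather than the whole dataset. Here's a minimal sketch along those lines; the file name `events.csv` and the "key,value" line layout are made up for illustration:

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- Sketch: stream lines from a large file lazily, fold them into a
-- strict Map so memory use is proportional to the number of distinct
-- keys, not the file size.
import qualified Data.ByteString.Lazy.Char8 as L
import qualified Data.Map.Strict as M
import Data.List (foldl')

-- Count occurrences per key in "key,value"-style lines.
tally :: [L.ByteString] -> M.Map L.ByteString Int
tally = foldl' step M.empty
  where
    step m line =
      -- L.copy detaches the key from the large lazy chunk it came
      -- from, so the chunk can be garbage-collected.
      let key = L.copy (L.takeWhile (/= ',') line)
      in M.insertWith (+) key 1 m

main :: IO ()
main = do
  contents <- L.readFile "events.csv"  -- hypothetical input file
  mapM_ print (M.toList (tally (L.lines contents)))
```

For anything more involved than a single fold, libraries like `conduit` or `streaming` give you composable constant-memory pipelines, which is roughly the Haskell analogue of letting the database do the heavy lifting.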

Thank you for your input.