
Hi all. I'm a Haskell beginner looking for valuable input; I don't expect my program to magically fix itself.

I've implemented a maximum-matching graph algorithm in Haskell. The specification of the algorithm uses some state: a handful of arrays that are indexed and updated throughout its run.

At first I implemented the state using HashMaps. This went well, but the runtime wasn't close to my professor's corresponding C program. I profiled my program and found that updating the maps was taking a lot of time.

Then I had a look at Data.HashTable.ST.Basic. I thought that average constant-time lookup and insert would surely fix the slow state updates, so I reimplemented the algorithm using HashTables and ran the program.

As it turned out, the runtime didn't decrease with the ST implementation; it increased, and heavily. A program instance that took 100 seconds using maps now took 400 seconds. A profiling sample shows that HashTable lookup and various other HashTable-related procedures are the big time consumers.

How can this be possible? Any ideas?

The code is available at https://github.com/lsund/edmonds-matching/ where the 'main' branch is the map implementation and the 'hashtable' branch is the hashtable implementation.

The code is quite messy and, since I'm a beginner, probably inefficient. This is the first time I've implemented anything using the ST monad, so I'm afraid I don't understand it perfectly and have made some serious implementation flaw.

Best regards,
Ludvig
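For context, the mutable-state pattern described above (a hashtable updated in place inside ST) might look roughly like this. This is a hypothetical sketch, not code from the linked repo; it assumes the hashtables package's Data.HashTable.ST.Basic API (new / insert / lookup):

```haskell
import Control.Monad.ST (runST)
import qualified Data.HashTable.ST.Basic as H

-- Hypothetical sketch: state indexed by vertex number, updated in place.
lookupAfterUpdates :: Maybe Int
lookupAfterUpdates = runST $ do
  t <- H.new
  H.insert t (1 :: Int) (10 :: Int)  -- set the state for vertex 1
  H.insert t 1 20                    -- a second insert overwrites in place
  H.lookup t 1                       -- yields Just 20
```

Note that each lookup and insert still hashes the key and may chase pointers through boxed cells, so "O(1) average" does not automatically beat a well-tuned pure map for small keys.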

I don't know why it is slower, but since you're already using ST you could try (unboxed) mutable arrays instead of hashmaps: https://hackage.haskell.org/package/array-0.5.1.0/docs/Data-Array-ST.html

On 27/07/2017 08:44, lud.sund@gmail.com wrote:
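A minimal sketch of the suggestion above, replacing a vertex-indexed table with an unboxed mutable array in ST. This is a hypothetical example, not code from the linked repo; it assumes vertices are numbered 1..n, with 0 meaning "unmatched":

```haskell
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, getElems, newArray, writeArray)

-- mate[v] = partner of vertex v, or 0 if v is currently unmatched.
matchPair :: Int -> [Int]
matchPair n = runST $ do
  mate <- newArray (1, n) 0 :: ST s (STUArray s Int Int)
  writeArray mate 1 2  -- match vertices 1 and 2 with each other
  writeArray mate 2 1
  getElems mate        -- e.g. matchPair 4 == [2,1,0,0]
```

Because STUArray is unboxed, reads and writes are plain indexed memory accesses with no hashing and no per-element boxes, which is much closer to what the C program does with its arrays.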
_______________________________________________
Beginners mailing list
Beginners@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/beginners
participants (2)
- lud.sund@gmail.com
- Sylvain Henry