
#8604: Some stack/vmem limits (ulimit) combinations causing GHC to fail
-------------------------------------+-------------------------------------
        Reporter:  clavin            |                    Owner:
            Type:  bug               |                   Status:  new
        Priority:  normal            |                Milestone:
       Component:  Documentation     |                  Version:  7.6.3
      Resolution:                    |                 Keywords:
Operating System:  Other             |             Architecture:  x86_64 (amd64)
 Type of failure:  Documentation bug |               Difficulty:  Unknown
       Test Case:                    |               Blocked By:
        Blocking:                    |          Related Tickets:
                                     |  Differential Revisions:
-------------------------------------+-------------------------------------

Description changed by thomie:

Old description:
I have encountered a strange occurrence with GHC that was causing several GHC job failures on an SGE cluster. It turned out that there were other SGE users who needed an absurdly large stack size limit (set via 'ulimit -s') in the several gigabytes range. The default stack size limit had to be raised for the entire cluster.
For any job run on a machine where the virtual memory limit was less than or equal to 2X the stack size limit, GHC would crash with the following message:
ghc: failed to create OS thread: Cannot allocate memory
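
For context on where this message comes from: it is the GHC runtime reporting a failed attempt to create an OS thread. My working assumption (not confirmed anywhere in this ticket) is that glibc sizes each new thread's stack from the RLIMIT_STACK soft limit when no explicit size is requested, so every OS thread the runtime starts claims a 'ulimit -s'-sized slice of the 'ulimit -v' (RLIMIT_AS) address-space budget; once the stack limit exceeds roughly half the virtual memory limit, a second such stack no longer fits. The standalone C sketch below is a hypothetical diagnostic of my own, not GHC code: it prints both soft limits and then starts a few threads with default attributes.

/* Hypothetical diagnostic (not GHC source).  With glibc, a thread created
 * without an explicit stack size gets a stack as large as the RLIMIT_STACK
 * soft limit, so each live thread consumes a 'ulimit -s'-sized slice of
 * the 'ulimit -v' (RLIMIT_AS) address-space budget. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <unistd.h>

static void *keep_stack_mapped(void *arg)
{
    sleep(10);                /* stay alive so the stack stays mapped */
    return arg;
}

static void show_limit(const char *name, int resource)
{
    struct rlimit rl;
    if (getrlimit(resource, &rl) == 0)
        printf("%-26s soft limit: %llu bytes\n", name,
               (unsigned long long) rl.rlim_cur);
}

int main(void)
{
    show_limit("RLIMIT_STACK (ulimit -s):", RLIMIT_STACK);
    show_limit("RLIMIT_AS (ulimit -v):", RLIMIT_AS);

    for (int i = 1; i <= 4; i++) {
        pthread_t tid;
        int err = pthread_create(&tid, NULL, keep_stack_mapped, NULL);
        if (err != 0) {
            printf("thread %d: pthread_create failed: %s\n",
                   i, strerror(err));
            return 1;
        }
        printf("thread %d: created\n", i);
    }
    return 0;                 /* process exit tears down the sleeping threads */
}

Built with 'gcc -pthread' and run under the failing limits from the test case below, the later thread creations should fail once the default-sized stacks exhaust the address-space limit; under the relaxed limits they all succeed.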
I am running GHC 7.6.3 on 64-bit Red Hat Enterprise Linux 5.5.
To reproduce the error, I was able to create the following test case:
[ ~]$ ulimit -v unlimited
[ ~]$ ulimit -s 10240
[ ~]$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.6.3
[ ~]$ ulimit -v 200000
[ ~]$ ulimit -s 100000
[ ~]$ ghc --version
ghc: failed to create OS thread: Cannot allocate memory
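
The same limits can also be imposed programmatically before launching GHC, which is essentially what a batch scheduler such as SGE does for its jobs. The C sketch below is a hypothetical launcher of my own (the helper name set_limit_kb is made up), equivalent to running 'ulimit -v 200000; ulimit -s 100000; ghc --version' as in the session above.

/* Hypothetical launcher: applies the same limits as the ulimit commands
 * above (ulimit takes KB; setrlimit takes bytes) and then runs
 * "ghc --version" with those limits inherited. */
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

/* Set both the soft and hard limit of a resource; value given in KB. */
static int set_limit_kb(int resource, rlim_t kbytes)
{
    struct rlimit rl;
    rl.rlim_cur = kbytes * 1024;
    rl.rlim_max = kbytes * 1024;
    return setrlimit(resource, &rl);
}

int main(void)
{
    if (set_limit_kb(RLIMIT_AS, 200000) != 0 ||    /* ulimit -v 200000 */
        set_limit_kb(RLIMIT_STACK, 100000) != 0) { /* ulimit -s 100000 */
        perror("setrlimit");
        return 1;
    }

    execlp("ghc", "ghc", "--version", (char *) NULL);
    perror("execlp ghc");     /* reached only if exec itself fails */
    return 1;
}

If the explanation above is right, ghc started through this launcher should fail in the same way as in the shell session, while loosening either limit lets it run.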
Several other programs work fine under these settings, but GHC consistently has problems. Is this a fundamental issue with how GHC operates or can this be addressed?

New description:

I have encountered a strange occurrence with GHC that was causing several GHC job failures on an SGE cluster. It turned out that there were other SGE users who needed an absurdly large stack size limit (set via 'ulimit -s') in the several gigabytes range. The default stack size limit had to be raised for the entire cluster.

For any job run on a machine where the virtual memory limit was less than or equal to 2X the stack size limit, GHC would crash with the following message:

ghc: failed to create OS thread: Cannot allocate memory

I am running GHC 7.6.3 on 64-bit Red Hat Enterprise Linux 5.5.

To reproduce the error, I was able to create the following test case:

[ ~]$ ulimit -v unlimited
[ ~]$ ulimit -s 10240
[ ~]$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.8.3
[ ~]$ ulimit -v 2000000
[ ~]$ ulimit -s 1000000
[ ~]$ ghc --version
ghc: failed to create OS thread: Cannot allocate memory
[ ~]$ ulimit -s 500000
[ ~]$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.8.3

Several other programs work fine under these settings, but GHC consistently has problems. Is this a fundamental issue with how GHC operates or can this be addressed?

--
Ticket URL: <http://ghc.haskell.org/trac/ghc/ticket/8604#comment:6>
GHC <http://www.haskell.org/ghc/>
The Glasgow Haskell Compiler