
On Wed, 2008-09-17 at 20:29 +0000, Aaron Denney wrote:
On 2008-09-17, Jonathan Cast wrote:
On Wed, 2008-09-17 at 18:40 +0000, Aaron Denney wrote:
On 2008-09-17, Arnar Birgisson wrote:
Hi Manlio and others,
On Wed, Sep 17, 2008 at 14:58, Manlio Perillo wrote:
http://www.heise-online.co.uk/open/Shuttleworth-Python-needs-to-focus-on-fut...
"cloud computing, transactional memory and future multicore processors"
Multicore support is already "supported" in Python, if you use multiprocessing, instead of multithreading.
Well, I'm a huge Python fan myself, but multiprocessing is not really a solution as much as it is a workaround. Python as a language has no problem with multithreading and multicore support and has all the primitives to do conventional shared-state parallelism. However, the most popular /implementation/ of Python sacrifices this for performance; it has nothing to do with the language itself.
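The workaround Arnar alludes to can be sketched concretely: because CPython's GIL serializes threads, CPU-bound work is farmed out to child processes, each with its own interpreter and GIL. A minimal sketch (the function name `cpu_bound` is illustrative, not from the thread):

```python
# Sketch: CPython scales CPU-bound work across cores by using processes
# rather than threads; each worker runs in its own interpreter.
from multiprocessing import Pool

def cpu_bound(n):
    # Burn CPU: sum of squares, computed under a child process's own GIL.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(cpu_bound, [100_000] * 4)
    print(all(r == results[0] for r in results))  # True: four independent workers
```

The same loop written with `threading` would run on one core at a time in CPython, which is exactly why Arnar calls this a workaround rather than a solution.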
Huh. I see multi-threading as a workaround for expensive processes, which can explicitly use shared memory when that makes sense.
That breaks down when you want 1000s of threads.
This really misses the point I was going for. I don't want 1000s of threads. I don't want *any* threads. Processes are nice because you don't have other threads of execution stomping on your memory-space (except when explicitly invited to by arranged shared-memory areas). It's an easier model to control side-effects in. If this is too expensive, and threads in the same situation will work faster, then I might reluctantly use them instead.
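The "explicitly invited" shared memory Aaron describes can be sketched with `multiprocessing.Value`: everything is private by default, and the one shared integer is arranged deliberately and guarded by an explicit lock. A sketch under those assumptions (`bump` is an illustrative name):

```python
# Sketch: share-nothing processes with one explicitly arranged
# shared-memory cell. Nothing else in the workers' address spaces
# is visible across the process boundary.
from multiprocessing import Process, Value

def bump(counter):
    with counter.get_lock():      # explicit lock over the shared int
        counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)       # an int placed in a shared-memory segment
    workers = [Process(target=bump, args=(counter,)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)  # 4
```

The side-effect surface is exactly the objects you chose to share, which is the control Aaron is arguing for.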
I'm not aware of any program, on any system, that spawns a new process on each event it wants to handle concurrently;
inetd
OK. But inetd calls exec on each event, too, so I think it's somewhat orthogonal to this issue. (Multi-processing is great if you want to compose programs; the question is how you parallelize concurrent instances of the same program).
systems that don't use an existing user-space thread library (such as Concurrent Haskell or libthread [1]) emulate user-space threads by keeping a pool of processes and re-using them (e.g., IIUC Apache does this).
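That pool-and-reuse pattern (cf. Apache's prefork model) can be sketched with `concurrent.futures`: a fixed set of worker processes handles many requests, amortizing the cost of process creation. `handle_request` is an illustrative stand-in, not Apache's API:

```python
# Sketch of pooling: a fixed set of worker processes serves many
# requests, so process-creation cost is paid once, not per request.
from concurrent.futures import ProcessPoolExecutor
import os

def handle_request(req):
    # Report which worker served the request; workers are reused.
    return (req, os.getpid())

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as pool:
        served = list(pool.map(handle_request, range(8)))
    # Eight requests, but at most two distinct worker pids.
    print(len({pid for _req, pid in served}) <= 2)  # True
```

The repeated pids are the point: the pool emulates cheap concurrent workers without paying fork-per-event.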
Your response seems to be yet another argument that processes are too expensive to be used the same way as threads.
You mean `is'. And they are. Would you write a web server run out of inetd? (Would its multi-processor support make it out-perform a single-threaded web server in the same language? I doubt it.) HWS, on the other hand, spawns a new Concurrent Haskell thread on every request.
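The HWS structure jcc describes, spawn a fresh thread per request, looks roughly like this in Python (a sketch only: these would be kernel threads, far heavier than Concurrent Haskell's, and the function names are mine):

```python
# Sketch of thread-per-request, HWS-style: the accept loop spawns a
# fresh thread for every connection instead of drawing from a pool.
import socket
import threading

def handle(conn):
    # One request's worth of work: echo the data back upper-cased.
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

def serve_once(port_holder, n_requests):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))    # OS-assigned port
    srv.listen()
    port_holder.append(srv.getsockname()[1])
    for _ in range(n_requests):
        conn, _addr = srv.accept()
        threading.Thread(target=handle, args=(conn,)).start()  # one thread per request
    srv.close()

if __name__ == "__main__":
    port = []
    threading.Thread(target=serve_once, args=(port, 1), daemon=True).start()
    while not port:
        pass                       # wait for the server to publish its port
    with socket.create_connection(("127.0.0.1", port[0])) as c:
        c.sendall(b"hello")
        print(c.recv(1024))        # b'HELLO'
```

With Concurrent Haskell's user-space threads this per-request spawn is cheap; with kernel threads it is the cost that drives people to pools, which is jcc's distinction.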
In my mind, pooling vs. fresh creation bears on the process-vs.-thread question only in its performance aspects.
Say what? This discussion is entirely about performance --- does CPython actually have the ability to scale concurrent programs to multiple processors? The only reason you would ever want to do that is for performance.
The fact that people use thread-pools
I don't think people use thread-pools with Concurrent Haskell, or with libthread.
means that they think that even thread-creation is too expensive.
Kernel threads /are/ expensive. Which is why all the cool kids use user-space threads.
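CPython has no built-in user-space threads in the Concurrent Haskell sense, but its closest standard-library analogue is asyncio's cooperatively scheduled tasks, all multiplexed onto one OS thread. A sketch of why spawning is cheap at that level (the task count is arbitrary):

```python
# Sketch: user-space concurrency in the spirit of lightweight threads.
# asyncio tasks are scheduled cooperatively in a single OS thread, so
# spawning 10,000 of them costs only a little memory, not 10,000
# kernel threads.
import asyncio

async def worker(i):
    await asyncio.sleep(0)   # yield to the scheduler, like a thread blocking
    return i

async def main():
    tasks = [asyncio.create_task(worker(i)) for i in range(10_000)]
    results = await asyncio.gather(*tasks)
    return sum(results)

if __name__ == "__main__":
    print(asyncio.run(main()))  # 49995000
```

The analogy is loose (asyncio tasks cannot be preempted, and Concurrent Haskell's scheduler runs them across cores), but it shows the cost model that makes thread-per-event workable.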
The central aspect in my mind is a default share-everything, or default share-nothing.
I really don't think you understand Concurrent Haskell, then. (Or Concurrent ML, or Stackless Python, or libthread, or any other CSP-based set-up.)

jcc