On Wed, 2008-09-17 at 18:40 +0000, Aaron Denney wrote:
On 2008-09-17, Arnar Birgisson wrote:
Hi Manlio and others,
On Wed, Sep 17, 2008 at 14:58, Manlio Perillo wrote:
http://www.heise-online.co.uk/open/Shuttleworth-Python-needs-to-focus-on-fut...
"cloud computing, transactional memory and future multicore processors"
Multicore is already "supported" in Python, if you use multiprocessing instead of multithreading.
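For instance, here's a minimal sketch of that approach (the task and pool size are illustrative, not from any particular program): the multiprocessing module farms a CPU-bound job out to one worker process per core.

```python
# Minimal sketch: CPU-bound work spread over processes, one per core.
# The task below is deliberately naive; it just has to burn CPU.
from multiprocessing import Pool

def count_primes(limit):
    """Naive CPU-bound task: count the primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    pool = Pool()  # defaults to one worker process per CPU core
    results = pool.map(count_primes, [10000] * 4)  # four jobs, run in parallel
    pool.close()
    pool.join()
    print(results)
```

Because each job runs in its own process, the GIL never comes into play; the cost is that the inputs and results are pickled across process boundaries.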
Well, I'm a huge Python fan myself, but multiprocessing is not really a solution so much as a workaround. Python as a language has no problem with multithreading and multicore support, and it has all the primitives for conventional shared-state parallelism. However, the most popular /implementation/ of Python sacrifices this for performance; that's a property of the implementation, not of the language itself.
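To make the point concrete, here's a small sketch (the numbers are arbitrary): everything below is ordinary shared-state threading, and it is correct; it's only CPython's GIL that keeps the threads from actually running on separate cores.

```python
# Conventional shared-state threading: a shared counter guarded by a
# lock. The language provides all of this; the GIL is what serializes it.
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    """Increment the shared counter n times, under the lock."""
    global counter
    for _ in range(n):
        with lock:  # ordinary mutual exclusion, straight from the stdlib
            counter += 1

threads = [threading.Thread(target=bump, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: correct under contention, but serialized by the GIL
```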
Huh. I see multi-threading as a workaround for expensive processes, which can explicitly use shared memory when that makes sense.
That breaks down when you want thousands of threads. I'm not aware of any program, on any system, that spawns a new process for each event it wants to handle concurrently; systems that don't use an existing user-space thread library (such as Concurrent Haskell or libthread [1]) emulate user-space threads by keeping a pool of worker processes and re-using them (e.g., IIUC Apache does this). Any counter-examples?

jcc

[1] http://swtch.com/plan9port/man/man3/thread.html
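A minimal sketch of that re-use pattern (all names here are made up for illustration): a fixed pool of workers pulls events off a queue, so no process or thread is ever spawned per event.

```python
# Worker-pool sketch: a fixed set of workers drains an event queue.
# Threads stand in here for the pre-forked processes a server would use.
import queue
import threading

def serve(events, num_workers=4):
    q = queue.Queue()
    results = []
    res_lock = threading.Lock()

    def worker():
        while True:
            ev = q.get()
            if ev is None:       # sentinel: shut this worker down
                break
            with res_lock:
                results.append(("handled", ev))
            q.task_done()

    workers = [threading.Thread(target=worker) for _ in range(num_workers)]
    for w in workers:
        w.start()
    for ev in events:
        q.put(ev)
    q.join()                     # block until every event has been handled
    for _ in workers:            # one sentinel per worker
        q.put(None)
    for w in workers:
        w.join()
    return results
```

The workers outlive any single event, which is exactly the amortization the pool buys you: startup cost is paid num_workers times, not once per event.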