Hi there. Being rather new to the realm of Haskell and functional programming, I've been reading about "how much easier it is to parallelize code in a purely functional language" (am I right in saying that?).
My knowledge of parallelization is also very weak, but I've been thinking about this and I have a question. Let's say we take a small piece of code, like the quicksort algorithm:

> qsort [] = []
> qsort (x:xs) = qsort lesser ++ [x] ++ qsort greater
>   where lesser  = filter (< x) xs
>         greater = filter (>= x) xs

(I haven't actually tested this code; it's just for the example.)

Here, computing "lesser" and "greater" are two disjoint computations: neither needs the other's result, and Haskell does not allow one to mutate data in a way that would change the behaviour of the other. So I guess those two could be run in parallel.
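To make the question concrete, here is a sketch (not tested on real hardware, just an illustration) of how those two independent sub-sorts could be annotated by hand with `par` and `pseq`, which GHC ships in `GHC.Conc` in base (and which `Control.Parallel` from the parallel package re-exports). `par a b` merely *hints* that `a` may be worth evaluating in parallel with `b`; it does not force deep evaluation, so a serious version would force the lists further and stop sparking below some depth threshold:

```haskell
import GHC.Conc (par, pseq)

-- Quicksort with a manual parallelism hint: spark the evaluation of one
-- branch while the current thread works on the other. `pseq` pins down
-- the evaluation order so the spark has a chance to run.
qsortPar :: Ord a => [a] -> [a]
qsortPar []     = []
qsortPar (x:xs) =
    sLesser `par` (sGreater `pseq` (sLesser ++ [x] ++ sGreater))
  where
    sLesser  = qsortPar (filter (< x) xs)
    sGreater = qsortPar (filter (>= x) xs)

main :: IO ()
main = print (qsortPar [3, 1, 4, 1, 5, 9, 2, 6])
-- prints [1,1,2,3,4,5,6,9]
```

Note that `par` only sparks evaluation to weak head normal form (the first cons cell here), so this sketch mostly illustrates where the annotations would go rather than delivering a real speedup.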
Would it be fairly simple to automatically determine which parts of the code can be run in parallel and which cannot (at least in non-monadic code)?

So the final question is: if it is possible to automatically identify segments of code that can run in parallel, does [insert compiler name here] compile this code as a single-threaded program, or as a multithreaded one?
I guess on single-processor machines it is more efficient to do it as a single thread, but with those "many-core" processors that are coming, that probably won't be true anymore.

Sorry if this post is blowing off open doors (I guess this doesn't mean anything in English, but it sounds cool)...
Regards

Thomas Girod