<br><br><div><span class="gmail_quote">On 7/15/07, <b class="gmail_sendername">Donald Bruce Stewart</b> <<a href="mailto:dons@cse.unsw.edu.au">dons@cse.unsw.edu.au</a>> wrote:</span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Oh, and I forgot you count up by two now. Here's the Haskell<br>transliteration (again).<br><pre>
 {-# OPTIONS -O2 -optc-O -fbang-patterns #-}

 import Control.Monad.ST
 import Data.Array.ST
 import Data.Array.Base
 import System
 import Control.Monad
 import Data.Bits

 main = print (pureSieve 10000000)

 pureSieve :: Int -> Int
 pureSieve n = runST( sieve n )

 sieve n = do
   a <- newArray (3,n) True :: ST s (STUArray s Int Bool)
   let cutoff = truncate (sqrt (fromIntegral n)) + 1
   go a n cutoff 3 1

 go !a !m cutoff !n !c
   | n >= m    = return c
   | otherwise = do
       e <- unsafeRead a n
       if e
         then if n < cutoff
                then let loop !j
                           | j < m = do
                               x <- unsafeRead a j
                               when x $ unsafeWrite a j False
                               loop (j+n)
                           | otherwise = go a m cutoff (n+2) (c+1)
                     in loop (if n < 46340 then n * n else n `shiftL` 1)
                else go a m cutoff (n+2) (c+1)
         else go a m cutoff (n+2) c
</pre>Marginally faster:<pre>
 $ time ./primes
 664579
 ./primes  0.34s user 0.00s system 89% cpu 0.385 total
</pre>Very cache-dependent though, so widely varying runtimes could be
<br>expected.<br><br>-- Don<br></blockquote></div><br>Hi Donald, quick question. One of the things that is very interesting about Haskell is its potential for automatic threading, i.e. you write a trivial algorithm that looks like it runs in a single thread, and the runtime splits it across multiple cores automatically.
<br><br>Is it fairly safe to say that maps, foldrs, foldls, and their derivatives are safe to parallelize, as long as the combining function is associative? (For example, hand-waving argument, a fold of (+) over [1,5,7,435,46,2] can be split into a fold over [1,5,7] and a fold over [435,46,2], and the two partial results then combined; this wouldn't work for a non-associative operator like (/).)
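<br><br>A minimal sketch of that split-and-combine idea, using `par` and `pseq` from the parallel package (this is not from the thread above; `parSum` and the split point are illustrative assumptions, and without a threaded runtime the sparks simply run sequentially with the same result):

```haskell
import Control.Parallel (par, pseq)

-- Split the list in half, spark evaluation of one half,
-- evaluate the other half here, then combine with (+).
-- This is only correct because (+) is associative.
parSum :: [Int] -> Int
parSum xs
  | length xs < 2 = sum xs
  | otherwise     = left `par` (right `pseq` (left + right))
  where
    (as, bs) = splitAt (length xs `div` 2) xs
    left     = sum as
    right    = sum bs

main :: IO ()
main = print (parSum [1,5,7,435,46,2])
```

Compiled with `ghc -threaded` and run with `+RTS -N2`, the two halves may be summed on different cores; the result is the same either way.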
<br><br>To what extent is the technique you are using in your algorithm parallelizable? (I actually can't tell; it's a genuine question.) In the case that it is parallelizable, to what extent is it trivial for a runtime to discover this? (Again, I don't have enough information to tell.)
<br><br>