ThreadScope Tour/Spark

Sparking too fast

Objectives

Use spark events to diagnose unthrottled sparking.

Note: you will need GHC 7.4 or higher to generate eventlogs used by the spark events viewer. You can use our sample logs if you just want to play with the UI a bit.

Concepts

  • spark pool - each core (HEC) has a finite pool of sparks waiting to be run; if there is no room left in the pool for a new spark, the overflow is simply dropped (see the sketch just below)
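
To make the concept concrete, here is a minimal sketch (not part of the tour examples; the names x and y are just placeholders) of where sparks come from:

     import Control.Parallel (par, pseq)

     -- `par` is what puts a spark in the pool: it sparks its first argument
     -- and immediately returns its second. If the pool is already full when
     -- `par` runs, the new spark is counted as overflow and silently dropped.
     main :: IO ()
     main =
       let x = sum     [1 .. 1000000 :: Int]   -- work another core may pick up
           y = product [1 .. 20 :: Integer]
       in x `par` (y `pseq` print (x, y))

Compiled with -threaded and run with +RTS -N2, the par call tries to put a spark for x into the current core's pool; parlist0 below does this a hundred thousand times in quick succession, which is how the pool ends up overflowing.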

Steps

  1. Download and build the parlist examples

     cd ThreadScope-Tour-examples
     make

    Note the need for GHC 7.4

  2. Run the parlist0 example, enabling collection of full spark events:

     ./parlist0 +RTS -N2 -lf
     threadscope parlist0.eventlog

    Notice here that we pass the f flag to the -l option. This means that in addition to the events collected by default – (s)cheduler, (g)arbage collector, and s(p)ark events – we also want to collect the (f)ull details for par spark events. See the list of event flags for details.

    HINT: -lf not recognised? Your GHC is not recent enough

  3. Have a look at the activity profile. What problems do you see?

    [Image: ThreadScope-parlist0.png (parlist0 activity)]

    Some issues you may notice

    • We are only getting half the activity possible from our CPUs
    • Most of the work is done by HEC 1 after a while
    • There is also some 'stutter' and interruption for garbage collection (orange)

    We now know what some of the problems are, e.g. only one CPU doing any work. Now how do we figure out why we have these problems?

  4. Enable the spark creation and conversion traces

    tick the spark creation/conversion boxes

  5. Scroll down to view these traces (shrink the bottom drawer if you need more room). What do you see?

    HINT: Remember that the spark creation/conversion graphs represent rates (sparks per ms) and not quantity

    [Image: ThreadScope-spark-creation-conversion.png (spark creation/conversion)]

    The creation charts show that a large proportion of attempted sparks overflow: they are not created because the spark pools are full. (It may be useful to have a look at the spark lifecycle if you have not done so already.)

    [Image: Spark-lifecycle.png (spark lifecycle)]
  6. Scroll down a bit further to the spark pool graph. Notice how the spark pool usage for HEC 1 shoots up until the pool is full and then gradually declines as the pool is depleted.

    [Image: ThreadScope-spark-pool.png (spark pool)]

    Compare this with the spark creation chart. The red parts (overflow) roughly coincide with where the spark pool is full. The pool stays full as any spark which is converted away is quickly replaced by new ones coming in (the bits of green). Only when we stop trying to create sparks does the pool begin to drain.

    HINT: Remember that whereas the creation/conversion graphs represent rates (sparks per ms), the spark pool graph tracks quantities (the number of sparks currently sitting in the pool)

  7. We can see that we're getting overflow: we are generating sparks far faster than we can convert them, or, put the other way round, we aren't converting sparks fast enough.

    Have a look at the source code. Do you see anything that suggests excessive sparking?

     -- (imports added for completeness; fib is assumed to be the usual
     -- naive Fibonacci, roughly as defined in the tour example)
     import Control.Parallel (par, pseq)
     import Control.Parallel.Strategies (Strategy, using)

     fib :: Int -> Int
     fib n | n < 2     = n
           | otherwise = fib (n - 1) + fib (n - 2)

     main =
       let xs = [ fib (5 + n `mod` 10) | n <- [0..100000] ]
                  `using` parListWHNF
        in print (sum xs)

     -- HINT: look here!
     parListWHNF :: Strategy [a]
     parListWHNF xs = go xs `pseq` return xs
       where
         go []     = []
         go (y:ys) = y `par` go ys

    Study the parListWHNF function; it sparks the entire list off in parallel, all 100 thousand Fibonacci computations. Maybe we need to generate sparks a little less aggressively (one library-based way of doing this is sketched just after these steps).

  8. Move on to the next section to see what happens when we throttle spark creation with the rolling buffer trick.
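
If you want to experiment before moving on, the sketch below shows one library-based way to throttle sparking: parBuffer from Control.Parallel.Strategies keeps only a rolling window of sparks outstanding. This illustrates the idea rather than reproducing the code the next section uses; the buffer size of 100 and the naive fib are assumptions:

     import Control.Parallel.Strategies (parBuffer, rseq, using)

     -- naive Fibonacci, assumed to match the tour example
     fib :: Int -> Int
     fib n | n < 2     = n
           | otherwise = fib (n - 1) + fib (n - 2)

     main :: IO ()
     main =
       let xs = [ fib (5 + n `mod` 10) | n <- [0..100000] ]
                  -- spark only a rolling window of ~100 elements at a time,
                  -- so the spark pool should no longer overflow
                  `using` parBuffer 100 rseq
        in print (sum xs)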

Hints and Tips

  1. Not enough vertical space? Try shrinking the bottom drawer or just disable 'Views > Information Pane'
  2. Remember that the spark creation/conversion graphs represent rates (sparks per ms) and not quantity
  3. If you want to study just one of the cores, it may help to disable the traces for the other cores so that graphs are brought closer together
    [Image: ThreadScope-focus-on-HEC.png]