nofib benchmarks for measuring the effects of compiler optimizations
simonpj at microsoft.com
Fri Oct 9 03:38:12 EDT 2009
That's a much smaller difference than I'd expect, and I can't explain why.
One reason may be that you didn't recompile the libraries with -O0: some (though far from all) nofib programs spend a lot of their time in library code.
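For what it's worth, building the libraries without optimisation would mean something along these lines in $(GHC_TOP)/mk/build.mk (GhcLibHcOpts is the relevant variable in the GHC build system, though the exact spelling may vary between versions):

    # Compile the GHC libraries themselves without optimisation,
    # so library code is on the same footing as the benchmark code
    GhcLibHcOpts = -O0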
From: cvs-ghc-bounces at haskell.org [mailto:cvs-ghc-bounces at haskell.org] On Behalf Of David Peixotto
Sent: 09 October 2009 05:01
To: cvs-ghc at haskell.org
Subject: nofib benchmarks for measuring the effects of compiler optimizations
I'm interested in looking at the effects of various compiler optimizations on performance in GHC. I ran the nofib benchmarks against the stable branch to get a feel for some simple results. In my measurements I saw a maximum difference of only 2.8% in runtime between -O0 and -O2. Also, it looks like only 5 of the benchmarks had a running time of more than 1 second. Do these look like results you would expect to see?
I am setting the compilation flags in $(GHC_TOP)/mk/build.mk with:
NoFibHcOpts = -H64m -O0 # or -O2
and running the benchmarks with make. Judging by the output generated during the runs, it looks like the flags are being picked up correctly.
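In case it helps, the full run-and-compare cycle I'm using looks roughly like this (nofib-analyse is the comparison tool shipped in the nofib tree; exact paths and targets may differ in your checkout):

    $ cd nofib
    $ make clean && make boot
    $ make 2>&1 | tee log-O0        # built with NoFibHcOpts = -H64m -O0
    # edit $(GHC_TOP)/mk/build.mk to use -O2, then rebuild and rerun
    $ make clean && make boot
    $ make 2>&1 | tee log-O2
    $ nofib-analyse log-O0 log-O2   # side-by-side runtime comparison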
Would you say that the nofib benchmarks are the best available for measuring the effectiveness of compiler optimizations, or is there a better benchmarking suite for that purpose? Thanks!