[Haskell-cafe] How far compilers are allowed to go with optimizations?

ok at cs.otago.ac.nz
Mon Feb 11 02:58:41 CET 2013


> As a software developer, who typically inherits code to work on rather
> than writing new code, I see a potential for aggressive compiler
> optimizations to cause trouble.

I would be grateful if someone could explain the
difference between "aggressive optimisation" and
"obviously sensible compilation" to me in a way
that doesn't boil down to "what I'm used to".

These days, C compilers do things like automatic
vectorization that were regarded as "aggressive
optimisation" back when C was new and Real Programmers
used 'cat' as their editor of choice.  (Been there,
done that, don't fit the T-shirt any more.)
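
For a concrete illustration (my example, not from the original
message): a loop of the following shape is routinely auto-vectorised
by gcc and clang at -O3, something that would have counted as wildly
aggressive optimisation in C's early days.

    #include <stddef.h>

    /* saxpy: y[i] = a*x[i] + y[i].  With the restrict qualifiers
       promising the compiler that the arrays do not overlap, gcc
       and clang typically compile this loop down to SIMD
       instructions at -O3, with no pragmas or intrinsics needed. */
    void saxpy(float *restrict y, const float *restrict x,
               float a, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }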

> Programmer P inherits some application/system to improve upon. One day
> he spots some piece of rather badly written code. So he sets out and
> rewrites that piece, happy with the improvement it brings to clarity
> and likely also to efficiency.
>
> The code goes into production and, disaster. The new "improved"
> version runs 3 times slower than the old, making it practically
> unusable. The new version has to be rolled back, with loss of uptime
> and functionality, and management is not happy with P.

There are three fairly obvious comments here.
(1) The real villain is the original programmer who didn't leave
    comments explaining _why_ the code was written in a strange way.
(2) Clean code is far more likely to trigger an optimisation than
    dirty code is.
Case in point:  I have some 4-way merge code that was written in C with
heavy use of M4 to create inlined, unfolded code that kept the
registers full and produced serious computational goodness.  It gave a
really nice performance boost in the old days.  Modern compilers take
one glance at it, turn up their noses, and switch several kinds of
optimisation off, so that it actually runs slower than naive code.
(It still _could_ be compiled to go blindingly fast; it's just that
compilers say "too many labels, too hard" and stop trying.  See the
sketch after this list.)
(3) A performance regression test would catch this, no?
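
To make comment (2) concrete, here is a hypothetical reconstruction
(mine, not the original M4 output), cut down to a two-way merge for
brevity.  The plain loop below is the form that today's optimisers
handle well; the old trick was to macro-expand it into a web of
inlined, label-riddled copies, which is exactly what now triggers the
"too many labels, too hard" bail-out.

    #include <stddef.h>

    /* The "clean" form of the merge step (two-way here; the original
       code was a 4-way merge).  Modern optimisers handle this plain
       loop well; the M4-generated version replaced it with dozens of
       unfolded copies full of labels, which is what makes current
       compilers give up on it. */
    static void merge2(const int *a, size_t na,
                       const int *b, size_t nb, int *out)
    {
        size_t i = 0, j = 0, k = 0;
        while (i < na && j < nb)
            out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
        while (i < na) out[k++] = a[i++];
        while (j < nb) out[k++] = b[j++];
    }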
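
As for comment (3), such a test need not be elaborate.  A minimal
sketch (the baseline figure and the 20% tolerance are illustrative
placeholders, not from the original post):

    #include <stdio.h>
    #include <time.h>

    /* Time a fixed workload and fail if it runs more than 20% slower
       than a baseline recorded from a known-good build. */
    int main(void)
    {
        const double baseline = 0.50;   /* seconds; placeholder */
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        /* ... run the routine under test on a fixed input here ... */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double elapsed = (t1.tv_sec - t0.tv_sec)
                       + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("elapsed %.3fs (baseline %.2fs)\n", elapsed, baseline);
        return elapsed <= baseline * 1.20 ? 0 : 1;  /* nonzero = fail */
    }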

> It just so happened that the old code triggered some aggressive
> optimization unbeknownst to everyone, **including the original
> developer**, while the new code did not.

So what _was_ the original developer's reason for writing strange
code?



