2 points on language. Continuing

Bjorn Lisper lisper@it.kth.se
Fri, 24 Aug 2001 11:12:21 +0200 (MET DST)


>D. Tweed:
>On Fri, 24 Aug 2001, S.D.Mechveliani wrote:
>
>> I am skeptical too.
>> Anyway, if a compiler `knows' the word  CommutativeRing, 
>> knows that by definition,  CommutativeRing  requires 
>> associativity, commutativity ..., observes the instance  
>> CommutativeRing T  for some type T  in some program, sees further
>>                       
>>                        expr = (2*a + b) - (a+a)  :: T 
>> 
>> and sees the language version (compiler) flag     -fignore-bottom
>> (or something of this sort), then does it have the right to convert
>>                          
>>                        expr --> b :: T
>> ?
(...)
>On a purely pragmatic note ( :-) ) these optimisations can't necessarily
>be safely used in three of the four most common cases, namely Int, Float
>and Double because of the restricted range of intermediates. In
>particular, somewhere in my copy of the source code for MetaFont Donald
>Knuth has a routine containing (from memory):
>
>t <- (p-q) + p; {compute t=2p-q without overflow; let us hope an
>optimising compiler does not `optimise' this to 2*p-q}
>
>There's a thread on the gcc mailing list (`Unsafe FP optimisations' I
>think) that covers some of the even less obvious things that can happen in
>floating point. Clearly, people who write code which is both likely to use
>the entire range of a numeric type and who also do enough reasoning to know
>that their particular sequence of operations does not overflow, even
>though other `equivalent' reorderings would, form a set of measure zero.
>However, I wouldn't think it reasonable to ignore the possibility that the
>program writer knows what he's doing better than the compiler. (In the
>fourth case, Integer, there are no problems that I can see with these
>optimisations.)
>
>So the potential benefits of optimisations based on commutativity and
>associativity seem to be restricted to Integer, non-elementary numeric
>types, and algebraic types.
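Knuth's point is easy to reproduce. A minimal Haskell sketch (the function
names are mine, not from MetaFont), using Double so that intermediate
overflow shows up as Infinity rather than a trap:

```haskell
-- Two ways to compute 2p - q; algebraically equal, numerically not.
twoPMinusQ_safe, twoPMinusQ_naive :: Double -> Double -> Double
twoPMinusQ_safe  p q = (p - q) + p   -- no intermediate larger than the result
twoPMinusQ_naive p q = 2 * p - q     -- the "optimised" form: 2*p may overflow

main :: IO ()
main = do
  let p = 1.0e308
      q = 0.5e308
  print (twoPMinusQ_safe  p q)   -- finite, roughly 1.5e308
  print (twoPMinusQ_naive p q)   -- Infinity: the intermediate 2*p overflowed
```

So a compiler that rewrites `(p - q) + p` into `2*p - q` turns a correct
program into one that silently loses its answer near the top of the range.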

I don't fully agree. There are plenty of cases where transformations based
on these properties are useful and safe in practice, even though it may be
hard or impossible to prove their safety formally. Or the damage caused by
an occasional numeric failure (lack of convergence, overflow) may be more
than compensated by the average gain in speed. Restructuring compilers, in
particular parallelising compilers, sometimes rely heavily on these
algebraic laws for their optimizations.
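For illustration (my sketch, not from the thread): a parallelising compiler
that turns a sequential summation into a balanced tree of partial sums is
relying on exactly these laws. The two evaluation orders agree for Integer,
but only approximately for Double:

```haskell
import Data.List (foldl')

-- The evaluation order the source program specifies: a strict left fold.
sumSeq :: Num a => [a] -> a
sumSeq = foldl' (+) 0

-- The reassociated order a parallelising compiler might choose: partial
-- sums over halves, combined at the end. Valid only if (+) is associative
-- on the type in question.
sumTree :: Num a => [a] -> a
sumTree []  = 0
sumTree [x] = x
sumTree xs  = sumTree l + sumTree r
  where (l, r) = splitAt (length xs `div` 2) xs
```

For Integer the two always agree (both give 5050 on [1..100]); for Double
they can differ in the low-order bits, or even in whether some intermediate
overflows.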

On the other hand, it must of course be possible to turn off such
optimizations when they are deemed unsafe enough to be harmful. So the
solution seems to be compiler flags or code annotations.
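As a concrete instance of the annotation route (a sketch; the module and
rule names are mine), GHC's RULES pragma already lets the programmer,
rather than the compiler, assert such an algebraic law for a specific
rewrite, taking responsibility for its safety:

```haskell
{-# OPTIONS_GHC -O #-}  -- rewrite rules only fire with optimisation on
module Simplify where

-- The programmer asserts the law; GHC applies it without checking it,
-- so the burden of proof (no overflow, no bottom) is on the annotator.
{-# RULES
"2a+b-2a"  forall a b.  (2*a + b) - (a + a) = b
  #-}
```

This keeps the default semantics intact while letting code that can tolerate
(or has reasoned away) the numeric risks opt in locally.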

Björn Lisper