Difference between revisions of "User:Benmachine/Non-strict semantics"

From HaskellWiki

Revision as of 18:01, 14 September 2013

An expression language is said to have non-strict semantics if expressions can have a value even if some of their subexpressions do not. Haskell is one of the few modern languages to have non-strict semantics by default: nearly every other language has strict semantics, in which if any subexpression fails to have a value, the whole expression fails with it.

What?

Any sufficiently capable programming language is non-total, which is to say you can write expressions that do not produce a value: common examples are an exception thrown, an infinite loop, or unproductive recursion, e.g. the following definition in Haskell:

noreturn :: Integer -> Integer
noreturn x = negate (noreturn x)

or the following Python function:

def noreturn(x):
    while True:
        x = -x

    return x # not reached

both fail to produce a value when executed. We say that noreturn x is undefined, and write noreturn x = ⊥.

In Python, the following expression, which checks whether 2 is in some list:

2 in [2,4,noreturn(5)]

also fails to have a value, because in order to construct the list, the interpreter tries to work out noreturn(5), which of course doesn't return a value. This is called innermost-first evaluation: in order to call a function with some arguments, you first have to calculate what all the arguments are, starting from the innermost function call and working outwards. The result is that Python is strict, in the sense that calling any function with an undefined argument produces an undefined value, i.e. f(⊥) = ⊥. If your language uses innermost-first evaluation, it correspondingly must have strict semantics.

In Haskell, an analogous expression:

elem 2 [2, 4, noreturn 5]

in fact has the value True. The program does not have to compute noreturn 5 because it is irrelevant to the overall value of the computation: only the values that are necessary to the result need be computed. This is called outermost-first evaluation because you first look at the outermost function call, elem, to see if it needs to use its arguments, and only if it does do you look at what those arguments are. This means that you can write a function that doesn't look at its argument, so it will return a value even if the argument is ⊥. Such functions are not strict, i.e. they satisfy f(⊥) ≠ ⊥. Practically, this means that Haskell functions need not completely compute their arguments before using them, which is why e.g. take 3 [1..] can produce [1,2,3] even though it is given a conceptually infinite list.
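You can check both of these claims in a small program (using the noreturn definition from above):

```haskell
-- `noreturn` from earlier: an Integer function that never produces a value.
noreturn :: Integer -> Integer
noreturn x = negate (noreturn x)

main :: IO ()
main = do
  -- The undefined element is never inspected: elem stops at the first match.
  print (elem 2 [2, 4, noreturn 5])  -- True
  -- Only the first three elements of the infinite list are ever computed.
  print (take 3 [1 ..])              -- [1,2,3]
```

Running this terminates immediately, even though noreturn 5 and [1..] would each take forever to compute in full.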

Note that outermost-first evaluation is not the only way to have non-strict semantics: a speculative evaluation strategy, that evaluates arguments in parallel with the function in case they are needed later, could also be non-strict, as long as whenever the speculative evaluation failed, the evaluation of the function continued.

Note also that in order for a function to be truly non-strict, it must return something without inspecting its argument at all. You might think that doesn't sound like a very useful function, but remember that it might be e.g. a partial application: the function (||) True, or equivalently \x -> True || x, does not need to inspect its argument, since True || x is always True. There are other examples, too: constructors like Just wrap their argument without inspecting it, and some other functions apply constructors before looking at the argument, and hence still produce a partial result, e.g. inits ⊥ = [] : ⊥.
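These partial results are observable in practice. The sketch below assumes GHC's Data.List.inits, which is written to produce its first element, [], before touching its argument:

```haskell
import Data.List (inits)

main :: IO ()
main = do
  -- (||) True never needs its second argument:
  print (True || undefined)                -- True
  -- inits ⊥ = [] : ⊥, so the head of inits undefined is safely []:
  print (head (inits undefined) :: [Int])  -- []
```

Forcing anything beyond the head of inits undefined would, of course, still be ⊥.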

Why?

The important thing to understand about non-strict semantics is that it is not a performance feature. Non-strict semantics allows your language to only evaluate the things it needs to, but if you write your programs carefully, you'll only compute what is absolutely necessary anyway, so the extra time your program spends working out what should and shouldn't be evaluated is time wasted. For this reason, a very well-optimised strict program will frequently outperform even the fastest non-strict program.

However, the real and major advantage that non-strictness gives you over strict languages is you get to write cleaner and more composable code. In particular, you can separate production and consumption of data: don't know how many prime numbers you're going to need? Just make `primes` a list of all prime numbers, and then which ones actually get generated depends on how you use them in the rest of your code. By contrast, writing code in a strict language that constructs a data structure in response to demand usually will require first-class functions and/or a lot of manual hoop-jumping to make it all behave itself.
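As a concrete sketch of that primes example (the name primes and the deliberately naive trial-division test are illustrative choices, not an efficient sieve): the definition names all primes, and each consumer decides how many actually get computed.

```haskell
-- The conceptually infinite list of all primes, by naive trial division.
primes :: [Integer]
primes = filter isPrime [2 ..]
  where
    isPrime n = all (\d -> n `mod` d /= 0) [2 .. n - 1]

main :: IO ()
main = do
  print (take 5 primes)           -- [2,3,5,7,11]
  print (takeWhile (< 20) primes) -- [2,3,5,7,11,13,17,19]
```

Neither consumer had to be told in advance how many primes to generate; demand alone drives production.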

Consider the following Haskell function definition:

any :: (a -> Bool) -> [a] -> Bool
any p = or . map p

Because or uses non-strictness to stop at the first True in the input, map doesn't even need to know that only the first half of the list might be needed. We can write map in the completely straightforward and obviously correct way, and still have it interact well with or in this way; map produces data, or consumes it, and the two are properly decoupled.
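The decoupling can be demonstrated directly (the definition is renamed any' here only to avoid clashing with the Prelude's any):

```haskell
-- The composed definition from the text:
any' :: (a -> Bool) -> [a] -> Bool
any' p = or . map p

main :: IO ()
main = do
  -- `or` stops at the first True, so `map` never processes the rest,
  -- even though the input list is infinite:
  print (any' even [1 ..])                     -- True
  -- The undefined tail is never reached once 4 > 3 succeeds:
  print (any' (> 3) [1, 2, 3, 4, undefined])  -- True
```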

In a strict language, you'd have to write the recursion out manually:

any p [] = False
any p (x:xs)
  | p x = True
  | otherwise = any p xs

since in strict languages only builtin control structures can decide whether some bit of code gets executed or not, ordinary functions like or can't.

This additional power is what leads people to say that in Haskell you can define your own control structures as normal functions, which allows all sorts of interesting patterns to be abstracted in an incredibly lightweight fashion. Labelled for-loops are a library in Haskell, rather than requiring special syntax and language support.

How do I stop it?

As mentioned above, non-strictness can hurt performance, e.g. if a result is definitely going to be needed later, you might as well evaluate it now, to avoid having to hold on to all the data that goes into it. Fortunately, the Haskell designers were aware of these problems and introduced a loophole or two so that we could force our programs to be strict when necessary: see Performance/Strictness and seq.
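As a minimal illustration of seq, one of those loopholes: the sketch below (the name sumStrict is made up for this example) forces the accumulator of a running sum at each step, so the program doesn't build up a long chain of suspended additions before finally evaluating them.

```haskell
-- seq evaluates its first argument (to weak head normal form) before
-- returning its second, so acc' is computed at every step rather than
-- being deferred until the end of the list.
sumStrict :: [Integer] -> Integer
sumStrict = go 0
  where
    go acc []       = acc
    go acc (x : xs) = let acc' = acc + x in acc' `seq` go acc' xs

main :: IO ()
main = print (sumStrict [1 .. 1000000])  -- 500000500000
```

In practice you would reach for Data.List.foldl' or bang patterns, which package up the same idea; see the pages linked above for the details.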