Haskell Performance Resource
Haskell is a non-strict language, and most implementations use a strategy called laziness to run your program. Basically laziness == non-strictness + sharing.
Laziness can be a useful tool for improving performance (see Performance/Laziness), but more often than not it reduces performance by adding a constant overhead to everything. Because of laziness, the compiler can't simply evaluate a function argument and pass the value to the function; it has to record the expression in the heap in a suspension (or thunk) in case it is evaluated later. Storing and evaluating suspensions is costly, and unnecessary if the expression was going to be evaluated anyway.
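The classic illustration of this overhead is the lazy left fold. The example below is a standard one, not taken from this page: foldl builds a thunk per element, while foldl' forces the accumulator as it goes.

```haskell
import Data.List (foldl')

-- foldl builds a chain of suspensions (((0+1)+2)+... on the heap, which
-- is only collapsed when the final result is demanded.
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- foldl' forces the accumulator at every step, so no thunks accumulate.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0

main :: IO ()
main = print (strictSum [1 .. 1000000])   -- prints 500000500000
```

On a large enough list, lazySum can exhaust the stack when the thunk chain is finally collapsed, while strictSum runs in constant space.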
Optimising compilers like GHC try to reduce the cost of laziness using strictness analysis, which attempts to determine which function arguments are always evaluated by the function, and hence can be evaluated by the caller instead. Sometimes this leads to bigger gains; a strict Int can be passed as an unboxed value, for example. Strictness analysis sometimes does wonderful things; for example it is very good at optimising fac:
  fac :: Int -> Int
  fac n = if n <= 1 then 1 else n * fac (n-1)
Strictness analysis can spot the fact that the argument n is strict, and can be represented unboxed. The resulting function won't use any heap while it is running, as you'd expect.
It's easy to accidentally write functions that aren't strict, though. Often a lazy function can be sitting around eating up your performance, when making it strict wouldn't change the meaning of the program. For example:
  suminit :: [Int] -> Int -> Int -> (Int,[Int])
  suminit xs len acc
    | len == 0  = (acc,xs)
    | otherwise = case xs of
                    []   -> (acc,[])
                    x:xs -> suminit xs (len-1) (acc+x)
This function sums the first len elements of a list, returning the sum and the remaining list. We've already tried to improve performance by using an accumulating parameter. However, the parameter acc isn't strict, because there's no guarantee that the caller will evaluate it. The compiler will use a fully boxed Int to represent acc, although it will probably use an unboxed Int to represent len. The expression (acc+x) will be saved as a suspension, rather than evaluated on the spot. (Incidentally, this is a common pattern that crops up time and again in small recursive functions with a few parameters.)
To fix it, we need to make acc explicitly strict. The way to do this is using seq:
  suminit :: [Int] -> Int -> Int -> (Int,[Int])
  suminit xs len acc
    | len `seq` acc `seq` False = undefined   -- *** add this line
    | len == 0  = (acc,xs)
    | otherwise = case xs of
                    []   -> (acc,[])
                    x:xs -> suminit xs (len-1) (acc+x)
Using a guard like this is a common technique. It's not pretty, but it works. Some other languages (e.g. Clean) have strictness annotations on types, which is a less ugly way to express this; GHC's BangPatterns extension now provides something similar for function arguments.
Incidentally, GHC will also eliminate the tuple returned by this function if the caller immediately deconstructs it.
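A sketch combining those two points; the BangPatterns rewrite of suminit and the caller below are illustrative, not from the original page:

```haskell
{-# LANGUAGE BangPatterns #-}

-- With GHC's BangPatterns extension, the seq guard can be replaced by
-- strictness annotations written directly on the parameters:
suminit :: [Int] -> Int -> Int -> (Int,[Int])
suminit xs !len !acc
  | len == 0  = (acc,xs)
  | otherwise = case xs of
                  []    -> (acc,[])
                  x:xs' -> suminit xs' (len-1) (acc+x)

main :: IO ()
main =
  -- Deconstructing the pair immediately lets GHC return the two
  -- components without ever allocating the tuple.
  case suminit [10,20,30,40,50] 3 0 of
    (s, rest) -> print (s, rest)   -- prints (60,[40,50])
```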
Evaluating expressions strictly
There's a useful variant of the infix application operator ($) that evaluates its argument strictly: ($!). This can often be used to great effect in eliminating unnecessary suspensions that the compiler hasn't spotted. For example, in a function application, writing

  f $! (g x)

instead of

  f (g x)

will be more efficient if (a) you were going to evaluate (g x) anyway, and (b) f isn't visibly strict, or inlined. If f is strict or inlined, then the chances are that ($!) is unnecessary cruft here.
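For intuition, ($!) behaves as if it were defined with seq. The stand-in below (named strictApply to avoid shadowing the Prelude operator) is a sketch of that behaviour:

```haskell
-- A local stand-in for the Prelude's ($!): force the argument to weak
-- head normal form, then apply the function.
strictApply :: (a -> b) -> a -> b
strictApply f x = x `seq` f x

main :: IO ()
main = do
  let g y = y * 2
  -- (2 + 3) is evaluated before g is applied, rather than suspended.
  print (g `strictApply` (2 + 3))   -- prints 10
```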
A good example is the monadic return. If you find yourself writing
  do ...
     ...
     return (fn x)
then consider instead writing
  do ...
     ...
     return $! fn x
It is very rare to actually need laziness in the argument of return here.
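A small sketch of the pattern in context; sumM and its loop are invented for illustration:

```haskell
-- An accumulator loop in an arbitrary monad: return $! forces the
-- running sum at each step instead of building a chain of (+) thunks.
sumM :: Monad m => [Int] -> m Int
sumM = go 0
  where
    go acc []     = return acc
    go acc (x:xs) = do
      acc' <- return $! acc + x
      go acc' xs

main :: IO ()
main = sumM [1 .. 1000000] >>= print   -- prints 500000500000
```

With a plain return, acc would grow into a suspension of a million additions before anything was demanded.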
NB. do not do this if the expression on the right of $! is a variable - that just wastes effort, because it does not eliminate a suspension. The only reason to do this would be if you were eliminating a space leak.
Warning: Using any kind of strictness annotations as above can have unexpected impact on program semantics, in particular when certain optimizations are performed by the compiler. See correctness of short cut fusion.