# [Haskell-cafe] Re: Integers v ints

Jon Fairbairn jon.fairbairn at cl.cam.ac.uk
Fri Apr 2 05:22:17 EDT 2010

Jens Blanck <jens.blanck at gmail.com> writes:

> On 1 April 2010 10:53, Ivan Lazar Miljenovic <ivan.miljenovic at gmail.com> wrote:
>> Jens Blanck <jens.blanck at gmail.com> writes:
>> > I was wondering if someone could give me some references to
>> > when and why the choice was made to default integral
>> > numerical literals to Integer rather than to Int in

Seems to have been in 1998.  I don't have a complete archive of
the discussion, though, and I don't know where to find the rest.
>> My guess is precision: some numeric calculations (even doing
>> a round on some Double values) will be too large for Int
>> values (at least on 32bit). Note that unlike Python, etc.
>> Haskell doesn't allow functions like round to choose between
>> Int and Integer (which is equivalent to the long type in
>> Python, etc.).
>
> Ints have perfect precision as long as you remember that it
> implements modulo arithmetic for some power of 2. I was hoping
> that the reason would be that Integers give more users what
> they expect, namely integers, instead of something where you
> can add two positive numbers and wind up with a negative
> number.
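
The wrap-around behaviour being discussed is easy to demonstrate. A minimal sketch (the names are mine, and it assumes a 64-bit Int, the usual case with modern GHC):

```haskell
-- Sketch of Int's modulo-2^n arithmetic vs Integer's exact arithmetic.
-- Assumes a 64-bit Int (the default on modern GHC platforms).

big :: Int
big = maxBound                 -- 9223372036854775807 on 64-bit platforms

wrapped :: Int
wrapped = big + 1              -- wraps around to minBound: negative!

exact :: Integer
exact = fromIntegral big + 1   -- Integer never wraps

main :: IO ()
main = do
  print (wrapped < 0)          -- True: two positives summed to a negative
  print (exact > 0)            -- True: Integer arithmetic is exact
```

This is exactly the "add two positive numbers and wind up with a negative number" case: perfectly well-defined modulo arithmetic, but not what most people mean by "integers".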

As I interpret the part of the discussion I have on file, there
are two reasons:

(1) as you hoped, because Integers are what people "expect":
reasoning on Integers is more reliable -- you can't do induction
on Int, for example, and people don't generally try to prove
that they've implemented

    f x = the_f_they_originally_wanted x `mod` 2^32

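The induction point can be made concrete. A small sketch (the names are mine) of a property that holds for every Integer, and is provable by induction, but fails for Int:

```haskell
-- succ n > n holds for all Integers (provable by induction on n),
-- but fails for Int at maxBound because of wrap-around.

succGreaterInteger :: Integer -> Bool
succGreaterInteger n = n + 1 > n

succGreaterInt :: Int -> Bool
succGreaterInt n = n + 1 > n

main :: IO ()
main = do
  print (succGreaterInteger (2 ^ 64))  -- True: no largest Integer
  print (succGreaterInt maxBound)      -- False: maxBound + 1 wraps to minBound
```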
(2) good language design. One of the things I've repeated over
the years is that Int doesn't have to be part of the language
(it's just another peculiar type that should be defined in a
library) but Integer does, because without it there's no way to
specify the meaning of an Integral constant[1].
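
The point about constants: the Haskell Report defines an integral literal to mean fromInteger applied to the corresponding Integer value, so Integer has to exist in the language proper for literals to have a specified meaning at all. A sketch (names are mine):

```haskell
-- An integral literal such as 42 is defined to mean
-- fromInteger (42 :: Integer); the same literal can therefore
-- be used at any Num type.

answerInt :: Int
answerInt = 42        -- elaborates to fromInteger (42 :: Integer)

answerDouble :: Double
answerDouble = 42     -- the same literal at a different Num type

main :: IO ()
main = do
  print (answerInt == fromInteger 42)     -- True
  print (answerDouble == fromInteger 42)  -- True
```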

Jón

[1] This isn't quite true; using subtyping one could make
Integral constants into [Digit] and leave the conversion to the