[Haskell-cafe] custom SQL-to-Haskell type conversion in HDBC
Richard O'Keefe
ok at cs.otago.ac.nz
Mon Aug 22 02:29:32 CEST 2011
On 20/08/2011, at 11:41 PM, Erik Hesselink wrote:
>
> This is the way I was taught to do it in physics. See also http://en.m.wikipedia.org/wiki/Significance_arithmetic
There are at least two different "readings" of fixed precision arithmetic.
(1) A number with d digits after the decimal point is a
    *precise* integer times 10**-d.
    Under this reading, scale(x) ± scale(y) => scale(max(x,y))
                        scale(x) × scale(y) => scale(x+y)
                        scale(x) ÷ scale(y) => an exact rational number
    scale(x) < scale(y) is well-defined even when x ≠ y
    scale(x) = scale(y) is well-defined even when x ≠ y
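Reading (1) is easy to make concrete: carry an exact Integer mantissa
together with its scale. A minimal sketch (the names Scaled, add, mul,
divide are mine, not a real library):

```haskell
import Data.Ratio ((%))

-- Reading (1): an exact Integer mantissa m with scale d, denoting m * 10^-d.
data Scaled = Scaled { mantissa :: Integer, scaleOf :: Int }
  deriving (Eq, Show)

-- Widen to a larger scale without losing information.
rescale :: Int -> Scaled -> Scaled
rescale d (Scaled m s) = Scaled (m * 10 ^ (d - s)) d

-- scale(x) ± scale(y) => scale(max(x,y)), exactly.
add :: Scaled -> Scaled -> Scaled
add x y = Scaled (mantissa x' + mantissa y') d
  where d  = max (scaleOf x) (scaleOf y)
        x' = rescale d x
        y' = rescale d y

-- scale(x) × scale(y) => scale(x+y), exactly.
mul :: Scaled -> Scaled -> Scaled
mul (Scaled mx sx) (Scaled my sy) = Scaled (mx * my) (sx + sy)

-- scale(x) ÷ scale(y) => an exact rational number.
divide :: Scaled -> Scaled -> Rational
divide (Scaled mx sx) (Scaled my sy) = (mx * 10 ^ sy) % (my * 10 ^ sx)

main :: IO ()
main = do
  print (add (Scaled 253 2) (Scaled 5 1))    -- 2.53 + 0.5  = 3.03, scale 2
  print (mul (Scaled 802 2) (Scaled 802 2))  -- 8.02 * 8.02 = 64.3204, scale 4
  print (divide (Scaled 253 2) (Scaled 10 0))
```

Note that division is the one operation that escapes the fixed-point
representation, so it returns a Rational rather than a Scaled.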
(2) A number with d digits after the decimal point represents
    *some* number in the range (as written) ± (10**-d)/2.
    Under this reading, scale(x) ± scale(y) => scale(min(x,y))
                        scale(x) × scale(y) => depends on the value of the numbers
    scale(x) < scale(y) is often undefined even when x = y
    scale(x) = scale(y) is often undefined even when x = y
The web page Erik Hesselink pointed to includes the example 8.02*8.02 = 64.3
(NOT 64.32).
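That rounding rule (keep only as many significant figures as the less
precise operand) can be sketched in a few lines; the helper name sig3
is mine, and it simply leans on printf's C-style %g formatting:

```haskell
import Text.Printf (printf)

-- Round a Double to three significant figures, as significance
-- arithmetic demands for the product of two 3-sig-fig values.
sig3 :: Double -> String
sig3 = printf "%.3g"

main :: IO ()
main = do
  print (8.02 * 8.02 :: Double)   -- the full product, about 64.3204
  putStrLn (sig3 (8.02 * 8.02))   -- reported to three figures: 64.3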
Values in data bases often represent sums of money, for which reading (1) is
appropriate. One tenth of $2.53 is $0.253; rounding that to $0.25 would in
some circumstances count as fraud.
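Keeping that tenth exact is straightforward if the amount is carried as
a Rational; a small sketch (the names exactTenth and roundedCents are
illustrative) contrasts the exact result with the rounded-to-cents one:

```haskell
import Data.Ratio ((%))

-- One tenth of $2.53, kept exact: 253/1000 dollars, i.e. $0.253.
exactTenth :: Rational
exactTenth = (253 % 100) / 10

-- What rounding to whole cents would give: 25 cents, silently
-- dropping the remaining $0.003.
roundedCents :: Integer
roundedCents = round (exactTenth * 100)

main :: IO ()
main = do
  print exactTenth    -- 253 % 1000
  print roundedCents  -- 25
```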
Of course, values in data bases often represent physical measurements, for which
reading (2) is appropriate. There is, however, no SQL data type that expresses
this intent.