No subject
Thu Jul 5 12:38:43 CEST 2012
I see.

I read some chapters from Purely Functional Data Structures when I was in
college in order to understand some tree algorithms, but not the whole book.
Do you think that could help me to understand performance problems with
code (poorly) written in Haskell?

From reading your post, I can guess the book is a good start, but what
about Haskell/GHC-specific hacks?
Is there a good written source for gathering the required knowledge?

For example, I remember I struggled a lot with Arrays once, and just days
after digging I found that Arrays are a no-no in Haskell. That was before
finding this mailing list, though.
On Dec 1, 2012 9:18 PM, "wren ng thornton" <wren at freegeek.org> wrote:
>
> On 11/29/12 2:17 PM, Ivan Salazar wrote:
>>
>> The bad side is that direct translations of algorithms are almost always
>> very slow, and the work needed to make them perform well is very mind-bending.
>
>
> Indeed. The thing is, all algorithms make (implicit) assumptions about
> the cost model of the underlying language. The problem comes about because
> the assumptions made in most algorithms are not valid in Haskell. This
> isn't just an issue of laziness; the entire computational model of Haskell
> (e.g., STG) is radically different from imperative languages.
>
> For example, in many cases just passing arguments around (a la the State
> monad) is much faster than using ST and explicit mutation. GHC does a lot
> of clever code reorganization, but mutation breaks countless purity laws
> and so it inhibits the application of these optimizations. Unfortunately,
> the vast majority of algorithms out there assume you're working in a
> language where mutation is essentially free. I'm not talking about any
> significant theoretical issues about using mutation or not; I'm just
> talking about the implicit assumptions that algorithm implementers make. If
> you believe mutation is essentially free then it makes sense to create an
> initial object and then incrementally mutate it this way and that until you
> get to where you want. But, a great deal of the time there's nothing
> actually stopping us from gathering all the information and allocating the
> final result directly. However, if you're not used to thinking in those
> terms, then the conceptual reorganization required to see how to gather all
> the data at once is indeed mind-bending.
>
> --
> Live well,
> ~wren
>
>
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe