[Haskell-cafe] Re: FW: Haskell

Andrew Bagdanov bagdanov at gmail.com
Tue Apr 1 17:50:38 EDT 2008


On Tue, Apr 1, 2008 at 5:37 PM, Chris Smith <cdsmith at twu.net> wrote:
> Just random thoughts here.
>

Same here...

>
>  Andrew Bagdanov wrote:
>  > Well, if I don't have side effects (and don't mind extra, unneeded
>  > evaluations), I can write my conditionals as functions in Scheme too.
>  > Heck, now that I think of it I can even avoid those extra evaluations
>  > and side-effect woes if I require promises for each branch of the
>  > conditional.  No macros required...
>
>  This is essentially doing lazy evaluation in Scheme.  It's certainly
>  possible; just clumsy.  You must explicitly say where to force
>  evaluation; but if you think about it, the run-time system already knows
>  when it needs a value.  This is very analogous to having type inference
>  instead of explicitly declaring a bunch of types as in Java or C++.
>

Boy is it ever clumsy, and I like your analogy too.  But lazy
evaluation semantics typically come with purity, which is also a
fairly heavy burden to foist onto the user...  Certainly not without
benefits, but at times a burden nonetheless...
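
To make the contrast concrete, here's the conditional-as-a-function
trick in Haskell; no delay or force anywhere, because every argument
already behaves like a promise (a minimal sketch, made-up names):

    -- A user-defined conditional is just an ordinary function.
    -- Laziness means only the selected branch is ever evaluated.
    myIf :: Bool -> a -> a -> a
    myIf True  t _ = t
    myIf False _ e = e

    -- Fine at runtime: the error branch is never demanded.
    example :: Int
    example = myIf (1 < 2) 42 (error "never evaluated")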

>
>  > Again, I think this is highly problem
>  > dependent, though I think you win more with lazy evaluation in the long
>  > run.  Do more experienced Haskellers than me have the opposite
>  > experience?  I mean, do you ever find yourself forcing strict evaluation
>  > so frequently that you just wish you could switch on strict evaluation
>  > as a default for a while?
>
>  The first thing I'd say is that Haskell, as a purely functional language
>  that's close enough to the pure lambda calculus, has unique normal
>  forms.  Furthermore, normal order (and therefore lazy) evaluation is
>  guaranteed to be an effective evaluation order for reaching those normal
>  forms.  Therefore, forcing strictness can never be needed to get a
>  correct answer from a program.  (Applicative order evaluation does not
>  have this property.)
>

I thought that in a pure functional language any evaluation order that
terminates was guaranteed to reach the same normal form (Church-Rosser,
if I remember right).  But then it's been a very, very long time since
I studied the lambda calculus...
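
The part I do remember is easy to demonstrate in GHCi: normal order
ignores an argument it never needs, while applicative order (simulated
here with ($!)) evaluates it first and hits bottom:

    -- Normal order (lazy) never evaluates the unused argument:
    ghci> const 1 undefined
    1

    -- ($!) forces the argument before the call, as applicative order
    -- would, and the whole expression is now bottom:
    ghci> const 1 $! undefined
    *** Exception: Prelude.undefined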

>  Therefore, strictness is merely an optimization.  In some cases, it can
>  improve execution time (by a constant factor) and memory usage (by a
>  lot).  In other cases, it can hurt performance by doing calculations that
>  are not needed.  In still more cases, it is an incorrect optimization and
>  can actually break the code by causing certain expressions that should
>  have an actual value to become undefined (evaluate to bottom).  I've
>  certainly seen all three cases.
>
>  There are certainly situations where Haskell uses a lot of strictness
>  annotations.  For example, see most of the shootout entries.  In
>  practice, though, code isn't written like that.  I have rarely used any
>  strictness annotations at all.  Compiling with optimization in GHC is
>  usually good enough.  The occasional bang pattern (often when you intend
>  to run something in the interpreter) works well enough.
>
>  (As an aside, this situation is quite consistent with the general
>  worldview of the Haskell language and community.  Given that strictness
>  is merely an optimization of laziness, the language itself naturally opts
>  for the elegant answer, which is lazy evaluation; and then Simon and
>  friends work a hundred times as hard to make up for it in GHC!)
>

Yeah, I'm actually pretty convinced on the laziness issue.  Lazy
evaluation semantics are a big win in many ways.
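
To make that concrete, here's a small sketch (just the Prelude and
Data.List) of both the win and the occasional cost Chris describes:

    import Data.List (foldl')

    -- The compositional win: producer and consumer compose, and
    -- nothing computes more than is demanded, even over an
    -- infinite list.
    firstEvens :: [Integer]
    firstEvens = take 10 (filter even [1..])

    -- The occasional cost: lazy foldl piles up unevaluated thunks...
    leaky :: Integer
    leaky = foldl (+) 0 [1..10000000]

    -- ...and foldl' forces the accumulator at each step, giving the
    -- same answer in constant space.
    fine :: Integer
    fine = foldl' (+) 0 [1..10000000]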

>
>  > I think this is possibly the weakest reason to choose Haskell over
>  > Scheme.  Lispers like the regularity of the syntax of S-expressions, the
>  > fact that there is just one syntactic form to learn, understand, teach,
>  > and use.
>
>  I am strongly convinced, by the collective experience of a number of
>  fields of human endeavor, that noisy syntax gets in the way of
>  understanding.  Many people would also say that mathematical notation is
>  a bit impenetrable -- capital sigmas in particular seem to scare people
>  -- but I honestly think we'd be a good ways back in the advancement of
>  mathematical thought if we didn't have such a brief and non-obstructive
>  syntax for these things.  Mathematicians are quite irregular.  Sometimes
>  they denote that y depends on x by writing y(x); sometimes by writing y_x
>  (a subscript); and sometimes by writing y and suppressing x entirely in
>  the notation.  These are not arbitrary choices; they are part of how
>  human beings communicate with each other, by emphasizing some things, and
>  suppressing others.  If one is to truly believe that computer programs
>  are for human consumption, then striving for regularity in syntax doesn't
>  seem consistent.
>

All good points, but "noisy" is certainly in the eye of the beholder.
I'd make a distinction between background and foreground noise.  A
simple, regular syntax offers less background noise.  I don't have to
commit lots of syntactic idioms and special cases to memory to read
and write in that language.  Scheme has low background noise, and I'm
responsible for whatever foreground noise I create with my syntactic
extensions.

Haskell, with more inherent syntax, has more background noise but
perhaps limits the amount of foreground noise I can introduce because
it constrains me from the beginning...

OK, this analogy is starting to suck, so I'll move on...  More on the
mathematical notation analogy below.

>  Initially, syntax appears to be on a completely different level from all
>  the deep "semantic" differences; but they are in reality deeply
>  interconnected.  The earlier comment I made about it being clumsy to do
>  lazy programming in Scheme was precisely that the syntax is too noisy.
>  Other places where lazy evaluation helps, in particular compositionality,
>  could all be simulated in Scheme, but you'd have to introduce excessive
>  syntax.  The result of type inference is also a quieter expression of
>  code.  So if concise syntax is not desirable, then one may as well throw
>  out laziness and type inference as well.  Also, sections and currying.
>  Also, do notation.  And so on.
>

Well, yes.  However, I think, and have found through personal
experience, that notation is one of the most personal aspects of
mathematics, and one of the most difficult things to get "right."  Or
even "pretty good."  Good notation that conveys meaning without
distraction and confusion (ah yes, the interconnection between syntax
and semantics rears its head again) is extremely difficult to invent.
Like in mathematics, at the boundaries of creativity, I prefer the
freedom to create my own notation and syntax in order to express my
ideas about computation, and to adapt them to whatever community
standards are relevant (physicists, signal processors,
and image processing folks all have their own notations for
convolution).  Haskell has an elegant and expressive, but already
quite defined and complex, syntax that is not particularly malleable.
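
The main knob Haskell does give you is user-defined operators with
fixity declarations, which is real but far narrower than full
syntactic abstraction.  A hypothetical example:

    -- A made-up infix operator for elementwise addition:
    infixl 6 |+|

    (|+|) :: Num a => [a] -> [a] -> [a]
    (|+|) = zipWith (+)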

Returning to your line of reasoning, why should I have to commit to
a single, already-defined syntax instead of a minimal, regular one
which admits the possibility of extending and modifying the syntax to
fit my needs?  It's a double-edged sword, obviously, because if I
botch my syntax extensions I ruin the regularity and simplicity I
originally had and don't communicate clearly to others or myself.
It's the same risk as in mathematics, and just as hard to get "right."

Do notation and sections are both pieces of Haskell syntax that irk me
at an almost visceral level, but I certainly see the
convenience of both and recognize that my opinion is wholly
subjective.  That Scheme applications are non-currying by default is
one of the things that irks me now about Scheme...
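
For what it's worth, both constructs earn their keep in a few lines
(made-up names, nothing deep).  Sections and currying save the
explicit lambda Scheme would require, and do notation is only sugar
for (>>=):

    -- Currying: a partially applied function is first-class...
    firstFive :: [a] -> [a]
    firstFive = take 5

    -- ...and sections partially apply an operator on either side:
    doubles :: [Int] -> [Int]
    doubles = map (* 2)

    -- do notation desugars to (>>=) and a lambda:
    greet :: IO ()
    greet = do
      name <- getLine
      putStrLn ("Hello, " ++ name)

    greet' :: IO ()
    greet' = getLine >>= \name -> putStrLn ("Hello, " ++ name)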

>
>  > In short, I think the original question must be asked in context.  For
>  > some problems, types are just a natural way to start thinking about
>  > them.  For others dynamic typing, with _judicious_ use of macros to
>  > model key aspects, is the most natural approach.
>
>  I wouldn't completely rule out, though, the impact of the person solving
>  the problem on whether type-based problem solving is a natural and useful
>  way to solve problems.  Indeed, I would guess this probably ends up being
>  a greater factor, in the end, than the problem.  Unfortunately, we (as a
>  general programming community) have caged ourselves into a box with the
>  mantra "use the right tool for the job" such that we find it far too easy
>  to dismiss something as the wrong tool merely because it is unfamiliar.
>  My not being able to use a tool well may be a good reason for me, but
>  it's a lousy reason for me to advise someone else.
>

Yes and no.  I see your point about the person being central to the
problem solving itself, and "the right tool for the right job"
implicitly assumes that there is one "right" solution for a problem.

However, I can pretty much only base my advice to someone about what
tools to use on my own experience with those tools (qualified, of
course, with the upfront admission that it's based on personal
experience), and I would likewise be wary myself of someone giving
advice that wasn't tempered by experience.  Abstract discussions
about the theoretical underpinnings of tools seem to be an equally
lousy basis for such advice.

Cheers,

-Andy
