[Haskell-cafe] Fwd: A Thought: Backus, FP, and Brute Force Learning

Eli Frey eli.lee.frey at gmail.com
Fri Mar 22 10:18:07 CET 2013


---------- Forwarded message ----------
From: Eli Frey <eli.lee.frey at gmail.com>
Date: Wed, Mar 20, 2013 at 4:56 PM
Subject: Re: [Haskell-cafe] A Thought: Backus, FP, and Brute Force Learning
To: OWP <owpmailact at gmail.com>


I have not read Backus's paper, so I might be off the mark here.

Functional code is just as simple (if not more so) to puzzle apart and
understand as imperative code.  You might find that instead of "stepping
through the process" of the code, you end up "walking the call graph" more
often.  FPers tend to break their problem into ever smaller parts before
reassembling them, often building their own vocabulary as they go.  Not to
say this is not done in imperative languages, but it is not so heavily
encouraged and embraced.  One result of this is that you can easily
understand a piece of code in isolation, without considering its place in
some "process".  It sounds as though you are not yet comfortable with this.

So yes, you might have to learn more vocabulary to understand a piece of
functional code.  This is not because the inner workings are obfuscated,
but because there are so many more nodes in the call graph that are given
names.  You can still go and scrutinize each of those new pieces of
vocabulary by itself and understand it without asking the author to come
down from on high with his explanation.

Let's take iteration for example.  In some imperative languages, people
spend an awful lot of time writing iteration in terms of language
primitives.  You see a for loop.  "What is this for loop doing?" you say to
yourself.  So you step through the loop imagining how it behaves as it goes
and you say "Oh, this guy is walking through the array until he finds an
element that matches this predicate."  In a functional style, you would
reuse some iterating function and give it functions to use as it is
iterating.  The method of iteration is still there; it has just nested into
the call graph, and if you've never seen that function name before you've
got to go look it up.  Again, I don't mean to suggest that this isn't
happening in an imperative language, just not to the same degree.
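
For instance, here is a tiny sketch of what I mean (firstBig and the
predicate are just names I'm making up for illustration), using the
standard find from Data.List:

    import Data.List (find)

    -- The for-loop version walks the array by hand:
    --   for (i = 0; i < n; i++)
    --       if (xs[i] > 100) return xs[i];
    --
    -- The functional version reuses the iterating function `find`
    -- and just hands it the predicate to apply as it iterates.
    firstBig :: [Int] -> Maybe Int
    firstBig = find (> 100)

    main :: IO ()
    main = print (firstBig [3, 42, 250, 7])   -- prints: Just 250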

As well, there is a bit of a learning curve in seeing what a function
"does" when there is no state or "doing" to observe in it.  Once you get
used to it, though, I believe you will find it quite nice.  You have
probably heard
FPers extolling the virtues of "declarative" code.  When there is no state
or "process" to describe, you end up describing what a thing is.  I for one
think this greatly increases readability.
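
As a tiny made-up example of that (nothing from Backus, just something I'm
inventing on the spot): in Haskell you can say what sumOfSquares *is*,
rather than spelling out the loop and accumulator that compute it:

    -- Declarative: sumOfSquares of a list *is* the sum of the squares
    -- of its elements.  No loop counter, no accumulator to trace.
    sumOfSquares :: [Int] -> Int
    sumOfSquares = sum . map (^ 2)

    main :: IO ()
    main = print (sumOfSquares [1, 2, 3])   -- prints: 14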

Good Luck!
Eli


On Wed, Mar 20, 2013 at 3:59 PM, OWP <owpmailact at gmail.com> wrote:

> I made an error.  I meant FP to stand for Functional Programming, the
> concept, not the language.
>
> On Wed, Mar 20, 2013 at 6:54 PM, OWP <owpmailact at gmail.com> wrote:
>
>> This thought isn't really related to Haskell specifically; it's more
>> about the FP ideal in general.
>>
>> I'm new to the FP world and to get me started, I began reading a few
>> papers.  One paper is by John Backus called "Can Programming Be Liberated
>> from the von Neumann Style? A Functional Style and Its Algebra of
>> Programs".
>>
>> While I like the premise which notes the limitation of the von Neumann
>> Architecture, his solution to this problem makes me feel queasy when I read
>> it.
>>
>> For me personally, one thing I enjoy about a typical procedural program
>> is that it allows me to "Brute Force Learn".  This means I stare at a
>> particular section of the code for a while until I figure out what it
>> does.  I may not know the reasoning behind it but I can have a pretty
>> decent idea of what it does.  If I'm lucky, later on someone may tell me
>> "oh, that just did a gradient of such and such matrix".  In a way, I feel
>> happy I learned something highly complex without knowing I learned
>> something highly complex.
>>
>> Backus seems to throw that out the window.  He introduces major new terms
>> which require me to break out the math book, which then requires me to
>> break out a few other books to figure out, which in turn base things on
>> archaic symbols, which then requires me to break out pen and paper to
>> mentally expand what in the world it all does.  It makes me feel CISCish,
>> except without a definition book nearby.  It would be nice if I already
>> knew what a "gradient of such and such matrix" is, but what happens if I
>> don't?
>>
>> For the most part, I like the idea that I have the option of Brute Force
>> Learning my way towards something.  I also like the declarative aspect of
>> languages such as SQL, which let me ask the computer for things once I
>> know the meaning of what I'm asking.  I like the ability to play and learn
>> but I also like the ability to declare this or that once I do learn.  From
>> Backus's paper, if his world becomes a reality, it seems like I should know
>> what I'm doing before I even start.  The ability to learn while coding
>> seems to have disappeared.  In a way, if the von Neumann bottleneck wasn't
>> there, I'm not sure programming would be as popular as it is today.
>>
>> Unfortunately, I'm still very new and quite ignorant about Haskell, so I
>> do not know how much of Backus is incorporated in Haskell, but so far, at
>> the start of my FP learning adventure, this is how things seem to me.
>>
>> If I may generously ask, where am I wrong and where am I right with this
>> thought?
>>
>> Thank you for any explanation
>>
>> P.S.  If anyone knows of a better place I can ask this question, please
>> feel free to show me the way.
>>
>
>