OWP | 20 Mar 23:54 2013

A Thought: Backus, FP, and Brute Force Learning

This thought isn't really related to Haskell specifically; it's more about the FP ideal in general. 

I'm new to the FP world and, to get me started, I began reading a few papers.  One paper is by John Backus, called "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs".

While I like the premise, which notes the limitations of the von Neumann architecture, his solution to the problem makes me feel queasy when I read it. 

For me personally, one thing I enjoy about a typical procedural program is that it allows me to "Brute Force Learn".  This means I stare at a particular section of the code for a while until I figure out what it does.  I may not know the reasoning behind it but I can have a pretty decent idea of what it does.  If I'm lucky, later on someone may tell me "oh, that just did a gradient of such and such matrix".  In a way, I feel happy I learned something highly complex without knowing I learned something highly complex. 

Backus seems to throw that out the window.  He introduces major new terms that require me to break out the math book, which then requires me to break out a few other books that base things on archaic symbols, which then requires me to break out pen and paper to mentally expand what in the world it all does.  It makes me feel CISCish, except without a definition book nearby.  It's nice if I already know what a "gradient of such and such matrix" is, but what happens if I don't? 

For the most part, I like having the option of Brute Force Learning my way towards something.  I also like the declarative aspect of languages such as SQL, which lets me ask the computer for things once I know the meaning of what I'm asking.  I like the ability to play and learn, but I also like the ability to declare this or that once I do learn.  From Backus's paper, if his world becomes a reality, it seems like I should know what I'm doing before I even start.  The ability to learn while coding seems to have disappeared.  In a way, if the von Neumann bottleneck weren't there, I'm not sure programming would be as popular as it is today. 

Unfortunately, I'm still very new and quite ignorant about Haskell, so I do not know how much of Backus's work is incorporated into Haskell, but so far, at the start of my FP learning adventure, this is how things seem to me. 

If I may ask: where am I wrong and where am I right in this thought? 

Thank you for any explanation.

P.S.  If anyone knows of a better place I can ask this question, please feel free to show me the way.

OWP | 20 Mar 23:59 2013

Re: A Thought: Backus, FP, and Brute Force Learning

I made an error.  I meant FP to stand for Functional Programming, the concept, not the language. 

Clark Gaebel | 21 Mar 01:15 2013

Re: A Thought: Backus, FP, and Brute Force Learning

Reading papers might not be the best way to get started with Haskell. It'll be a great way to expand your knowledge later, but they're generally not written to give the reader an introduction to functional programming.

I highly recommend Learn You A Haskell [1]. It is extremely well written.

Regards,
  - Clark



Albert Y. C. Lai | 21 Mar 05:36 2013

Re: A Thought: Backus, FP, and Brute Force Learning

On 13-03-20 06:54 PM, OWP wrote:
> For me personally, one thing I enjoy about a typical procedural program
> is that it allows me to "Brute Force Learn".
[...]

1. I believe that you can also stare at functional programs and figure 
out as much as you can with procedural programs.

It only requires knowing the language and the libraries. And you can no 
longer argue that functional languages are more declarative or higher 
level than procedural languages. Once upon a time, that was true, with 
parametricity, algebraic data types, higher-order functions, and list 
comprehensions; now procedural languages have them too, or competitive 
alternatives.

2. I doubt how much you can learn from staring, be it functional 
programs or procedural programs.

I posit that at most you're just reading aloud in your native tongue how 
to execute the program. But then you're just competing with a computer 
at its job. You barely know what requirement the program satisfies, much 
less why the program satisfies that requirement.

(The exception is when the program contains no iteration or recursion, 
or contains only boring iteration or recursion, such as walking over an 
array.)

I do not mean that you lack jargon like "gradient" and "matrix", no. 
You can explain in your own words, if you know the right idea but just 
not the jargon. I am positing this: apart from telling me how to execute 
the program, you cannot explain the purpose of the program, not even in 
your own words, because you do not know.

Here is an example I learned recently. It is ingenious.

Let A, B be arrays of the same length N. Their elements are numbers. 
They use 0-based indexing. They are the input.

int h=0, i=0, j=0;
bool answer;

while (h<N && i<N && j<N) {
     int Aih = A[(i+h) % N], Bjh = B[(j+h) % N];

     if (Aih == Bjh) {
         h = h + 1;
     } else if (Aih > Bjh) {
         i = i + h + 1;
         h = 0;
     } else { /* Aih < Bjh */
         j = j + h + 1;
         h = 0;
     }
}
answer = (h >= N);

answer is the output. What does answer say about the input?

The algorithm is no different in Haskell (it makes me use lowercase a, 
b, n):

answer = go 0 0 0
go h i j =
     if h<n && i<n && j<n then
         case compare (a!((i+h) `mod` n)) (b!((j+h) `mod` n)) of
             EQ -> go (h+1) i j
             GT -> go 0 (i+h+1) j
             LT -> go 0 i (j+h+1)
     else h>=n
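
(If you want to load that fragment into GHCi as-is, here is one way to
package it. This is only a sketch: the wrapper name check and the use
of Data.Array are additions for illustration, assuming 0-indexed Int
arrays of equal length as described above.

import Data.Array (Array, listArray, bounds, (!))

-- Self-contained version of the fragment above; both arrays are
-- assumed to have bounds (0, n-1).
check :: Ord e => Array Int e -> Array Int e -> Bool
check a b = go 0 0 0
  where
    n = snd (bounds a) + 1   -- length, given 0-based indexing
    go h i j
      | h < n && i < n && j < n =
          case compare (a ! ((i+h) `mod` n)) (b ! ((j+h) `mod` n)) of
              EQ -> go (h+1) i j
              GT -> go 0 (i+h+1) j
              LT -> go 0 i (j+h+1)
      | otherwise = h >= n

For example: check (listArray (0,3) [3,1,4,1]) (listArray (0,3) [2,6,5,3]).
The puzzle of what the result means is left intact.)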

3. I completely refuse to believe that you literally do not know what 
you're doing before you start.

If it were true, it would have to be like this: You throw dice 500 
times to generate a 500-character file. If the compiler doesn't like it, 
you throw dice more times to decide what to mutate in that file. 
Eventually the compiler surrenders. Now, and only now, you stare at the 
file for a few minutes, and discover: it implements a doubly linked 
list! It will be useful when you write your own web browser later; it 
can help provide the "back" button and the "forward" button...

No, it has to be the other way round. You have to decide to attempt a 
web browser project or whatever in the first place. You are vague about 
the details, what not to include, what to include, how to do them... but 
you know vaguely it's a web browser. After some time, you have to decide 
which part, however small, you start to code up. Maybe you decide to 
code up a doubly linked list first. Now you type up something. You type 
up something knowing that the short-term goal is a doubly linked list, 
and the long-term goal is some kind of web browser or whatever project 
it is. This much you know. And while you type, every key you type you 
strive to get closer to a doubly linked list in good faith. Maybe 
sometimes you're creative, maybe sometimes you make mistakes, maybe you 
write clever code and I can't understand it, but it is not random 
typing: you know the purpose, you have reasons, you understand, and you 
don't just stare.
OWP | 22 Mar 08:53 2013

Re: A Thought: Backus, FP, and Brute Force Learning

When I said "I stare at a particular section of the code for a while", I meant it as an idiom for deeply studying that particular code alone.  It's just me and the code and whatever debugging tools I have readily available. 

Are you familiar with the difficulty of maintaining legacy platforms written by a team that no longer exists?  In some cases, it might be required to completely rewrite the entire platform in another code base (like Haskell) while maintaining the original behavior (including any and all bugs or undocumented features) of the legacy software.  In those cases, "staring at the code" may be one of the better options, because the code will tell you most of what you need to know to recreate that software on another platform. 

Going back to the original thought, my curiosity has more to do with Backus (he is cited quite a lot) and how much of his theory made it into Haskell and other commonly used functional languages. 


Richard A. O'Keefe | 24 Mar 23:43 2013

Re: A Thought: Backus, FP, and Brute Force Learning

It's "Backus", people.  He was never the god of wine.

I cannot detect any trace of Backus's FP in Haskell at all.
FP is strict. Haskell is not.
FP is typeless.  Haskell is highly typeful.
FP does not name formal parameters.  Haskell often does.
FP has roots in APL.  Haskell doesn't.

I don't see any trace of Backus's FP in ML, Clean, or F# either.

The idea of writing programs by composing lots of small
functions is common to them both, but the idea of
combinators preceded them both.

As for
"Def Innerproduct = (Insert +) o (ApplyToAll x) o Transpose"
the idea is that this ought to be *easier* to understand than
an imperative loop because all of the parts are separated out
instead of being graunched up together.

inner_product :: Num a => ([a],[a]) -> a

inner_product = foldr1 (+) . map (uncurry (*)) . uncurry zip

_is_ expressible in Haskell, although

inner_product :: Num a => [a] -> [a] -> a

inner_product xs ys = sum (zipWith (*) xs ys)

would be more idiomatic.  But this is not because of any influence
from FP, but because Haskell has function composition and higher-order
functions.
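
A side note: the FP definition can also be transcribed almost
combinator-for-combinator.  The following is only a sketch; the names
are invented stand-ins for Backus's Insert, ApplyToAll, and Transpose
(they are not standard library functions), and Transpose is restricted
here to a pair of lists rather than an arbitrary sequence of sequences.

insertOp :: (a -> a -> a) -> [a] -> a
insertOp = foldr1                     -- plays the role of Insert (reduce)

applyToAll :: (a -> b) -> [a] -> [b]
applyToAll = map                      -- ApplyToAll

transposePair :: ([a], [b]) -> [(a, b)]
transposePair = uncurry zip           -- Transpose, for a pair of lists

innerProduct :: Num a => ([a], [a]) -> a
innerProduct = insertOp (+) . applyToAll (uncurry (*)) . transposePair

For example, innerProduct ([1,2,3],[4,5,6]) evaluates to 32, the same
result as inner_product [1,2,3] [4,5,6] above.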
Rustom Mody | 25 Mar 05:10 2013

Re: A Thought: Backus, FP, and Brute Force Learning


On Mon, Mar 25, 2013 at 4:13 AM, Richard A. O'Keefe <ok <at> cs.otago.ac.nz> wrote:
It's "Backus", people.  He was never the god of wine.

I cannot detect any trace of Backus's FP in Haskell at all.
FP is strict. Haskell is not.
FP is typeless.  Haskell is highly typeful.
FP does not name formal parameters.  Haskell often does.
FP has roots in APL.  Haskell doesn't.

I don't see any trace of Backus's FP in ML, Clean, or F# either.

The idea of writing programs by composing lots of small
functions is common to them both, but the idea of
combinators preceded them both.


I really wonder about this whole discussion -- not the details… the perspective.

It's like saying the Ford Model T
http://1.bp.blogspot.com/-KGhxacA_p1k/TV0g_qeHOFI/AAAAAAAACF8/5t8NRxpzUCo/s1600/1912-ford-model-t.jpg
is an unsafe car
http://1.bp.blogspot.com/-7gEOpb7-9IU/TV0g-88-dZI/AAAAAAAACF0/u3dh8CXAAGI/s1600/f1244_it0061.jpg
and should be equipped with airbags.

To the OP:
I believe that FP has much to contribute to programming, whether you identify yourself as an FPer or not.

- Not the fancy type hackery of modern Haskell
- Not the math hackery of Backus
Just basic stuff like http://blog.languager.org/2012/10/functional-programming-lost-booty.html
[Sorry, it's only an outline and is skimpy]

For the most part I agree with the following -- except for the type classes.

 
As for
"Def Innerproduct = (Insert +) o (ApplyToAll x) o Transpose"
the idea is that this ought to be *easier* to understand than
an imperative loop because all of the parts are separated out
instead of being graunched up together.

inner_product :: Num a => ([a],[a]) -> a

inner_product = foldr1 (+) . map (uncurry (*)) . uncurry zip

_is_ expressible in Haskell, although

inner_product :: Num a => [a] -> [a] -> a

inner_product xs ys = sum (zipWith (*) xs ys)

would be more idiomatic. 

Personal note:
I recently taught a course in Erlang, and I did what I've done for 25 years -- start with FP.
In the past it's always more or less worked well:
- in 1988 it was Scheme
- in 1992 it was Gofer, the predecessor of Haskell
- in 2001 it was Scheme.
This time I used Haskell and the results were not so good, primarily because of type classes.
Rustom Mody | 25 Mar 05:14 2013

Re: A Thought: Backus, FP, and Brute Force Learning

On Mon, Mar 25, 2013 at 9:40 AM, Rustom Mody <rustompmody <at> gmail.com> wrote:
- in 1992 it was Gofer, the predecessor of Haskell
- in 2001 it was Scheme.

Sorry, typo: in 2001 it was Python.

Richard A. O'Keefe | 25 Mar 22:47 2013

Re: A Thought: Backus, FP, and Brute Force Learning

I should mention that both functional programming in general
and Backus's FP _have_ been influenced by APL, which, while
imperative, strongly encourages "algebraic" combination of
small functions and had (a fixed set of) higher-order "operators".

As for Brute Force Learning by reading imperative code,
I have to say that you _can_ learn a lot that way, but there is
an abundance of imperative code which is utterly opaque.
Come to think of it, I've just taken two days to write 53 lines
of imperative code which requires four more pages to explain
why it exists and why it looks the way it does.  In a functional
language, it would be 2 fairly obvious lines, and I am _not_ kidding.
