Nehal Patel | 12 Feb 23:47 2013

Why isn't "Program Derivation" a first class citizen?

A few months ago I took the Haskell plunge, and all goes well -- but I really want to understand the
paradigms as fully as possible, and as it stands, I find myself with three or four questions for which I've
yet to find suitable answers.  I've picked one to ask the cafe -- like my other questions, it's somewhat
amorphous and ill-posed -- so many thanks in advance for any thoughts and comments!

----
Why isn't "Program Derivation" a first class citizen?
---

(Hopefully the term "program derivation" is commonly understood?  I mean it in the sense usually used to
describe the main motif of Bird's "The Algebra of Programming."  Others seem to use it as well...) 

For me, it has come to mean solving problems in roughly the following way:
1) Define the important functions and data types in a pedantic way, so that the semantics are as clear as
possible to a human, but possibly "inefficient" (I use quotes because one of my other questions is about
whether it is really possible to reason effectively about program performance in GHC/Haskell…)
2) Work out proofs of various properties of your functions and data types.
3) Use the results from step 2 to provide an alternate implementation with provably the same semantics but
possibly much better performance.
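To make steps 1-3 concrete, here is the classic accumulator derivation for list reversal -- a toy sketch of my own, not anything from a particular text; all names are mine:

```haskell
-- Step 1: a pedantically clear specification, but O(n^2) because of (++)
reverseSpec :: [a] -> [a]
reverseSpec []     = []
reverseSpec (x:xs) = reverseSpec xs ++ [x]

-- Step 2 (on paper): prove by induction that
--   revCat xs ys == reverseSpec xs ++ ys
-- using associativity of (++) and [] as its identity.

-- Step 3: the derived O(n) implementation, justified by the proof above
revCat :: [a] -> [a] -> [a]
revCat []     ys = ys
revCat (x:xs) ys = revCat xs (x:ys)

reverseFast :: [a] -> [a]
reverseFast xs = revCat xs []
```

The point is that `reverseFast` is not obviously a reversal at all -- only the step-2 proof connects it back to the readable specification.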

To me it seems that so much of Haskell's design is dedicated to making steps 1-3 possible, and those steps
represent for me (and, I imagine, many others) the thing that makes Haskell (and its cousins) so beautiful.

And so my question is: in 2013, why isn't this process a full-fledged part of the language? I imagine I
just don't know what I'm talking about, so correct me if I'm wrong, but this is how I understand the workflow
used in practice with program derivation: 1) state the problem pedantically in code, 2) work out a bunch of
proofs with pen and paper, 3) then translate that back into code, perhaps leaving you with
function_v1, function_v2, function_v3, etc. -- that seems fine for 1998, but why is it still being done
this way?

(Continue reading)

Jan Stolarek | 13 Feb 08:37 2013

Re: Why isn't "Program Derivation" a first class citizen?

> To me, it seems that something like this should be possible -- am i being
> naive? does it already exist?  
During the compilation process GHC optimizes the code by performing successive transformations of 
the program. These transformations are known to preserve the meaning of the program -- they are based 
on facts and properties that have already been proved. I recently wrote a blog post on this:

http://lambda.jstolarek.com/2013/01/taking-magic-out-of-ghc-or-tracing-compilation-by-transformation/

Now, you might be asking why GHC transforms the Core representation of a program rather than the 
original Haskell code itself. The answer is simplicity: Core is equivalent to Haskell, but it is 
much easier to work with, since it has only a few language constructs, which makes transformations 
easier to design and prove correct.
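As a hand-written sketch of the kind of meaning-preserving rewriting the simplifier performs (my own toy example written at the Haskell level, not actual Core or GHC output):

```haskell
-- A beta-redex roughly as it might appear after desugaring...
before :: Int -> Int
before x = (\y -> y + 1) (x * 2)

-- ...and the same function after beta-reduction and inlining.
-- Both compute 2x + 1; the transformation preserves meaning.
after :: Int -> Int
after x = x * 2 + 1
```

You can watch GHC perform transformations like this for real by compiling with `ghc -O2 -ddump-simpl`.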

I hope this at least partially answers your question. Whether there are automated ways of transforming 
Haskell programs into more efficient Haskell programs, I don't know.

Janek
wren ng thornton | 13 Feb 09:06 2013

Re: Why isn't "Program Derivation" a first class citizen?

On 2/12/13 5:47 PM, Nehal Patel wrote:
> And so my question is, that in 2013, why isn't this process a full fledged part of the language? I imagine I
> just don't know what I'm talking about, so correct me if I'm wrong, but this is how I understand the workflow
> used in practice with program derivation:  1) state the problem pedantically in code, 2) work out a bunch of
> proofs with pen and paper, 3) and then translate that back into code, perhaps leaving you with
> function_v1, function_v2, function_v3, etc   -- that seems fine for 1998, but why is it still being done
> this way?

I think there's a lot more complexity in (machine-verifiably) proving 
things than you realize. I've done a fair amount of programming in Coq 
and a bit in Twelf and Agda, and one of the things I've learned is that 
the kinds of proof we do on paper are rarely rigorous and are almost 
never spelled out in full detail. That is, after all, not the point of a 
pen & paper proof. Pen & paper proofs are about convincing humans that 
some idea makes sense, and humans are both reasonable and gullible. 
Machine-verified proofs, on the other hand, are all about walking a 
naive computer through every possible contingency and explaining how and 
why things must be the case even in the worst possible world. There's no 
way to tell the compiler "you know what I mean", or "the other cases are 
similar", or "left as an exercise for the reader". And it's not until 
trying to machine-formalize things that we realize how often we say 
things like that on paper. Computers are very bad at generalizing 
patterns and filling in the blanks; but they're very good at 
exhaustively enumerating contingencies. So convincing a computer is 
quite the opposite of convincing a human.

That said, I'm all for getting more theorem proving goodness into 
Haskell. I often lament the fact that there's no way to require proof 
that instances obey the laws of a type class, and that GHC's rewrite 
rules don't require any proof that the rule is semantics-preserving. The 
(Continue reading)

Austin Seipp | 13 Feb 09:40 2013

Re: Why isn't "Program Derivation" a first class citizen?

On Tue, Feb 12, 2013 at 4:47 PM, Nehal Patel <nehal.alum <at> gmail.com> wrote:
>
> A few months ago I took the Haskell plunge, and all goes well...
> ... snip ...
> And so my question is, that in 2013, why isn't this process a full fledged part of the language? I imagine I
> just don't know what I'm talking about, so correct me if I'm wrong, but this is how I understand the workflow
> used in practice with program derivation:  1) state the problem pedantically in code, 2) work out a bunch of
> proofs with pen and paper, 3) and then translate that back into code, perhaps leaving you with
> function_v1, function_v2, function_v3, etc   -- that seems fine for 1998, but why is it still being done
> this way?

There is no one-stop shop for this sort of stuff. There are plenty of
ways to show equivalence properties for certain classes of programs.

I am unsure why you believe people tend to use pen and paper -
machines are actually pretty good at this stuff! For example, I have
been using Cryptol lately, which is a functional language from Galois
for cryptography. One feature it has is equivalence checking: you can
ask if two functions are provably equivalent. This is used in the
examples to prove that a naive Rijndael implementation is equivalent
to an optimized AES implementation.

You might be asking why you can't do this for Haskell. Well, you can -
kind of. There is a library called SBV, which is very similar in spirit
to Cryptol but expressed as a Haskell DSL (and not really for crypto
specifically). All SBV programs are executable as Haskell programs --
but we can also compile them to C, and prove properties about them by
using an SMT solver. This is 'free' and requires no programmer
intervention beyond stating the property. So we can specify faster
implementations, and properties that show they're equivalent. SMT
(Continue reading)
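SBV itself hands the work to an external SMT solver, but the flavour of equivalence checking can be mimicked in plain Haskell by brute force over a small finite domain -- a stand-in of my own for illustration, not SBV's symbolic proof, and all names here are mine:

```haskell
import Data.Bits (shiftL)
import Data.Word (Word8)

-- A "naive" and an "optimized" implementation of doubling...
double1, double2 :: Word8 -> Word8
double1 x = x + x
double2 x = x `shiftL` 1

-- ...shown equivalent by enumerating all 256 possible inputs.
-- SBV's `prove` establishes the analogous fact symbolically,
-- without enumeration, which also scales to wider words.
equivalent :: Bool
equivalent = all (\x -> double1 x == double2 x) [minBound .. maxBound]
```

For `Word8` exhaustive checking is instant; the point of an SMT-backed tool is that the same property can be proved when the input space is far too large to enumerate.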

Rustom Mody | 13 Feb 11:22 2013

Re: Why isn't "Program Derivation" a first class citizen?



On Wed, Feb 13, 2013 at 4:17 AM, Nehal Patel <nehal.alum <at> gmail.com> wrote:
> A few months ago I took the Haskell plunge, and all goes well...
> ... snip ...

> What I'm asking about might sound related to theorem provers -- but if so, I feel like what I'm thinking about is not so much the very difficult problem of automated proofs, or even proof assistants, but the somewhat simpler task of proof verification. Concretely, here's a possibility of how I imagine the workflow could be:
>
> ++ in code, pedantically set up the problem.
> ++ in code, state a theorem and prove it -- this would require a revision to the language (Haskell 201x) and perhaps look like Isabelle's Isar -- a -structured- proof syntax that is easy for humans to construct and understand -- in particular it would be possible to easily reuse previous theorems and leave out various steps.  At compile time, the compiler would check that the proof was correct, or perhaps complain that it can't see how I went from step 3 to step 4, in which case I might have to add in another step (3b) to help the compiler verify the proof.
> ++ afterwards, I would use my new theorems to create semantically identical variants of my original functions (again, this process would be integrated into the language)
>
> While I feel like theorem provers offer some aspects of this workflow (in particular the ability to export Scala or Haskell code when you are done), I feel that in practice they are only useful for modeling fairly technical things like protocols and crypto stuff -- whereas if this existed within Haskell proper it would find much broader use and have greater impact.
>
> I haven't fully fleshed out all the various steps of what exactly it would mean to have program derivation be a first-class citizen, but I'll pause here and follow up if a conversation ensues.
>
> To me, it seems that something like this should be possible -- am i being naive? does it already exist?  have people tried and given up? is it just around the corner?  can you help me make sense of all of this?
>
> thanks! nehal



I am stating these things from somewhat foggy memory (I don't have any lambda-calculus texts handy), so I will welcome corrections from those who know better…

In lambda-calculus, if reduction means beta-reduction, then equality is decidable
Adding eta into the mix makes it undecidable
Haskell is basically a sugared beta-reducer
A prover must -- to be able to go anywhere -- be a beta-eta reducer.
Which is why a theorem-prover must be fundamentally more powerful and correspondingly less decidable than an implemented programming language.

If a theorem-prover were as 'hands-free' as a programming language -- that is, if an implemented programming language could do the proving that a theorem-prover does -- it would contradict the halting problem / Rice's theorem.

--
http://www.the-magus.in
http://blog.languager.org

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe <at> haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
Jon Fairbairn | 13 Feb 12:24 2013

Re: Why isn't "Program Derivation" a first class citizen?

Rustom Mody <rustompmody <at> gmail.com> writes:

> On Wed, Feb 13, 2013 at 4:17 AM, Nehal Patel <nehal.alum <at> gmail.com> wrote:
>> ----
>> Why isn't "Program Derivation" a first class citizen?
>> ---
> I am stating these things from somewhat foggy memory (dont have any
> lambda-calculus texts handy) and so will welcome corrections from those who
> know better…
>
> In lambda-calculus, if reduction means beta-reduction, then equality is

semi

> decidable

> If a theorem-prover were as 'hands-free' as a programming language
> Or if an implemented programming language could do the proving that a
> theorem-prover could do, it would contradict the halting-problem/Rice
> theorem.

Just so, but I’ve long (I suggested something of the sort to a
PhD student in about 1991 but he wasn’t interested) thought
that, since automatic programme transformation isn’t going to
work (for the reason you give), having manually written
programme transformations as part of the code would be a useful
way of coding. RULES pragmas go a little way towards this, but
the idea would be that the language supply various known valid
transformation operators (akin to those used in theorem provers
such as HOL), and programmers would explicitly write transformations
for their functions that the compiler would apply.
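The RULES mechanism mentioned above already lets a programmer state such a rewrite, though nothing verifies it; the canonical example from the GHC user's guide:

```haskell
-- Tell GHC it may fuse two traversals into one. With -O the simplifier
-- applies this rule, but it never checks that the equation is actually
-- semantics-preserving -- that burden is entirely on the programmer.
-- (This particular rule happens to be a theorem about map.)
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}

composedMaps :: [Int] -> [Int]
composedMaps = map (+1) . map (*2)
```

Whether the rule fires can be observed with `-ddump-rule-firings`; either way the program's result is the same only because the rule happens to be valid.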

-- 
Jón Fairbairn                                 Jon.Fairbairn <at> cl.cam.ac.uk

Joachim Breitner | 14 Feb 11:31 2013

Re: Why isn't "Program Derivation" a first class citizen?

Hi,

On Tuesday, 12 Feb 2013 at 17:47 -0500, Nehal Patel wrote:
> To me, it seems that something like this should be possible -- am i
> being naive? does it already exist?  have people tried and given up?
> is it just around the corner?  can you help me make sense of all of
> this?

a related project is HOLCF-Prelude, by Christian Sternagel and others.
What they do is reimplement the Haskell prelude as HOLCF data types
and functions in the theorem prover Isabelle. This allows you to write
functional code as HOLCF functions, prove properties of them (using
functions from the prelude and a library of lemmas about them) and, as
long as the definitions are the same as in your Haskell code, be happy
about it.

It is not a silver bullet, and not exactly what you need. For example,
you still have to manually translate between the HOLCF definition and
the Haskell code. Also, Haskell is not covered completely. Exceptions,
for example, are completely ignored.

But nevertheless it is useful: For example, we have started to verify
some of the transformations suggested by hlint, proved quite a few of
them and even found (very few) wrong ones.
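As a toy illustration of my own (not taken from the hlint verification work) of why such proofs must track laziness: even the innocuous-looking rewrite of `length xs == 0` into `null xs` changes termination behaviour.

```haskell
-- Two "equivalent-looking" emptiness tests.
isEmptyViaLength :: [a] -> Bool
isEmptyViaLength xs = length xs == 0

isEmptyViaNull :: [a] -> Bool
isEmptyViaNull = null

-- They agree on every finite list, but on an infinite list only the
-- null version terminates: null inspects one constructor, while
-- length demands the whole spine.
--   isEmptyViaNull   (repeat 1)  -- terminates
--   isEmptyViaLength (repeat 1)  -- diverges
```

A proof that ignores non-termination would call the two functions equal; a proof in a setting like HOLCF, which models partiality, must treat them as genuinely different.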

And what wren said about the complexity of machine-verified proofs is
very true. Especially when you also cover non-termination and laziness
(as we do) and want to prove properties involving type classes with as
few assumptions about them as possible -- suddenly you have the problem
that (==) can do arbitrary stuff.

You can find the code at https://sourceforge.net/p/holcf-prelude/
Comments and contributions are welcome!

Greetings,
Joachim

-- 
Joachim "nomeata" Breitner
Debian Developer
  nomeata <at> debian.org | ICQ# 74513189 | GPG-Keyid: 4743206C
  JID: nomeata <at> joachim-breitner.de | http://people.debian.org/~nomeata

