Proofs are Programs: A Few Examples of the Curry-Howard Correspondence
Posted by I2cScion@reddit | programming | View on Reddit | 11 comments
dan7315@reddit
While Curry-Howard is an insightful and beautiful result, this article doesn't currently explain it. By Curry-Howard, if I say that type T corresponds to proposition P, what I'm saying is "it's possible to create a value of type T if and only if P is true" (with the caveat that the program that creates type T neither crashes nor infinitely loops).
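As a concrete illustration of that reading (my own sketch, not from the article or the comment): the proposition "P and Q implies P" corresponds to a function type from a pair to its first component, and any terminating implementation of that type is a proof of the proposition.

```typescript
// Curry-Howard sketch: the tuple type [P, Q] corresponds to the
// proposition "P and Q", so a total function [P, Q] => P is a
// proof of "P and Q implies P" -- for ANY propositions P and Q.
function proj1<P, Q>(pq: [P, Q]): P {
  return pq[0];
}
```

The proof works uniformly in P and Q, which is exactly why the function can be written generically.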
Let's take a look at the first example in the article:
The key is addOdds, which takes two odds and returns an even. But I could just as easily write a function that takes two odds and returns an odd. Is this a proof that adding two odds results in an odd? Of course not: there's nothing in the type of addOdds that says anything about addition. There's no point in a proof system that can prove false things. All it proves is that, given two odd numbers, an odd number exists. If you want to prove that adding two odds results in an even, you have to include the concept of addition in your types.

One example of how to actually use Curry-Howard to prove things would be this tutorial in Idris, a language designed for more complex types: https://idris2.readthedocs.io/en/latest/proofs/inductive.html
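A minimal sketch of the counterexample being described (the Odd/Even wrapper types here are hypothetical, not the article's actual code):

```typescript
// Hypothetical wrapper types -- nothing about them mentions addition.
class Odd  { constructor(public n: number) {} }
class Even { constructor(public n: number) {} }

// "Takes two odds, returns an even" -- the type only asserts that
// an even number exists, not that it is the sum of the inputs.
function addOdds(a: Odd, b: Odd): Even {
  return new Even(a.n + b.n);
}

// Equally well-typed, equally "provable": two odds give an odd.
// The type system is satisfied even though nothing was added.
function addOddsWrong(a: Odd, b: Odd): Odd {
  return a;
}
```

Both signatures type-check, which is the commenter's point: neither type says anything about addition, so neither is a proof about it.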
mccalli@reddit
Yeah - I remember being taught formal methods in the early nineties, using Z to specify them. I said at the time that it was just a higher-level programming language and a good way of introducing bugs at the specification level instead of the implementation level, and the professor was absolutely insistent it wasn't.

Lo and behold, two years after I left university, the first Z compiler appeared...
jeenajeena@reddit
This is honestly brilliant: you made very compelling, down-to-earth examples that make this notion digestible to a larger audience. This topic is usually covered in other languages (Rocq, Agda, Lean, or Haskell), which intimidate most readers. Your post is very approachable.
Very very well done, kudos!
I2cScion@reddit (OP)
I'm not the author but I agree!! Great post
youngbull@reddit
There are a couple of (fun!) caveats.
First, for mathematics, programs that crash, violate the type signature, or run forever are not proofs. A lot of proof assistants provide essentially just that: they are programming languages with very powerful types (so that you can express the mathematics you want) in which you can write programs that are guaranteed to be proofs.
Second, for conventional mathematics you need to be able to express proofs by contradiction (which are not direct proofs). You can have that as a function you invoke with a proof that the negation of your claim leads to a contradiction, but you can't execute it as code. If you insist that every proof be executable as code, the flavor of mathematics you get is called constructive (or intuitionistic) mathematics.
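A sketch of that "function you invoke" in Lean: proof by contradiction is just an application of `Classical.byContradiction`, but that constant is backed by an axiom and has no computational content.

```lean
-- Classical.byContradiction : (¬p → False) → p
-- It is justified by the choice axiom, so it carries no
-- computational content: you can apply it, but not "run" it.
example (p : Prop) (h : ¬p → False) : p :=
  Classical.byContradiction h
```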
jeenajeena@reddit
Yes, that's true. The second example could be improved by modeling the notion of "a natural number greater than or equal to 8".
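One way such a notion could be modeled (a sketch in Lean, not the article's code): a subtype packaging a number together with a proof about that number.

```lean
-- "a natural number greater than or equal to 8" as a type:
-- a value of this type is a number PLUS a proof that it is ≥ 8,
-- so merely constructing a value already proves the property.
def AtLeast8 := { n : Nat // n ≥ 8 }

example : AtLeast8 := ⟨10, by decide⟩  -- 10, with a proof that 10 ≥ 8
```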
Madsy9@reddit
I think the biggest difference between conventional mathematics and proof assistants is that the latter use type theory based on intuitionistic logic, which always requires constructed proofs. With hand-written proofs you can prove things like "based on such and such priors, there exists a number with property x, but I can't show you the number". But proof assistants like Lean and Idris can't assume the law of the excluded middle by default, so a valid proof has to construct the actual number. In any event, not being able to write such existence proofs is a small price to pay for what you get.
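In Lean, for instance, the law of the excluded middle is available only as an axiom-backed theorem, which illustrates the point:

```lean
-- Classical.em : ∀ (p : Prop), p ∨ ¬p
-- It rests on the choice axiom: for an arbitrary p there is no
-- algorithm deciding which side of the disjunction holds, so
-- nothing constructive can be extracted from a use of it.
example (p : Prop) : p ∨ ¬p := Classical.em p
```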
moschles@reddit
As a teenager, I suspected this for years. Now I find there is an actual theorem.
devraj7@reddit
They're not; they're just signatures of functions.
The bodies of these functions still need to be written.
Smallpaul@reddit
Nope
I2cScion@reddit (OP)
I’d heard about Curry–Howard before but never really got it until reading this... the idea that proofs and programs are basically the same thing is kind of mind-blowing.

I was initially wondering how far that really goes, though? Like, what about dynamically typed languages, side effects / I/O, or programs that don’t terminate?
After digging a bit, it seems like the clean “programs = proofs” correspondence really applies to pure, total, strongly-typed systems.
But what I found especially interesting is that effects don’t necessarily break the idea: they can be modeled with things like monads, so you can still think of effectful programs as corresponding to proofs, just in a richer kind of logic.
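For instance (my own sketch, not from the thread): an effect like possible failure can be reflected in the return type, so the signature still reads as a precise claim about the program.

```typescript
// A Maybe-style type tracks the "may fail" effect in the signature:
// safeDiv's type is the claim "given two numbers, this either
// fails or produces a number" -- the effect is part of the proposition.
type Maybe<T> = { tag: "just"; value: T } | { tag: "nothing" };

function safeDiv(a: number, b: number): Maybe<number> {
  return b === 0
    ? { tag: "nothing" }
    : { tag: "just", value: Math.trunc(a / b) };
}
```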
Dynamic typing and non-termination (infinite loops or recursion) definitely break it.
Curious if that’s a reasonable way to think about the limits here, or if I’m still missing something.