Small fixes #112

Closed · wants to merge 6 commits
2 changes: 1 addition & 1 deletion axioms_and_computation.md
@@ -341,7 +341,7 @@ In a sense, however, a cast does not change the meaning of an
expression. Rather, it is a mechanism to reason about the expression's
type. Given an appropriate semantics, it then makes sense to reduce
terms in ways that preserve their meaning, ignoring the intermediate
-bookkeeping needed to make the reductions type correct. In that case,
+bookkeeping needed to make the reductions type-correct. In that case,
adding new axioms in ``Prop`` does not matter; by proof irrelevance,
an expression in ``Prop`` carries no information, and can be safely
ignored by the reduction procedures.
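
For context (not part of the PR diff): proof irrelevance is definitional in Lean 4, so the claim that an expression in ``Prop`` carries no information can be checked directly. A minimal sketch:

```lean
-- any two proofs of the same proposition are definitionally equal,
-- so `rfl` closes the goal
example (p : Prop) (h₁ h₂ : p) : h₁ = h₂ := rfl
```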
3 changes: 2 additions & 1 deletion conv.md
@@ -35,7 +35,8 @@ example (a b c : Nat) : a * (b * c) = a * (c * b) := by

The above snippet shows three navigation commands:

-- `lhs` navigates to the left hand side of a relation (here equality), there is also a `rhs` navigating to the right hand side.
+- `lhs` navigates to the left-hand side of a relation (equality, in this case).
+  There is also a `rhs` to navigate to the right-hand side.
- `congr` creates as many targets as there are (nondependent and explicit) arguments to the current head function
(here the head function is multiplication).
- `rfl` closes the target using reflexivity.
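
For context (not part of the PR diff), a sketch of the snippet these bullets describe, reconstructed from the example in the hunk header; the book's exact snippet may differ slightly:

```lean
example (a b c : Nat) : a * (b * c) = a * (c * b) := by
  conv =>
    lhs                -- focus on `a * (b * c)`
    congr              -- one target per argument: `a` and `b * c`
    rfl                -- close the `a` target by reflexivity
    rw [Nat.mul_comm]  -- rewrite `b * c` to `c * b`
```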
151 changes: 79 additions & 72 deletions dependent_type_theory.md

Large diffs are not rendered by default.

8 changes: 4 additions & 4 deletions induction_and_recursion.md
@@ -688,7 +688,7 @@ if ``x`` has no predecessors, it is accessible. Given any type ``α``,
we should be able to assign a value to each accessible element of
``α``, recursively, by assigning values to all its predecessors first.

-The statement that ``r`` is well founded, denoted ``WellFounded r``,
+The statement that ``r`` is well-founded, denoted ``WellFounded r``,
is exactly the statement that every element of the type is
accessible. By the above considerations, if ``r`` is a well-founded
relation on a type ``α``, we should have a principle of well-founded
@@ -707,15 +707,15 @@ noncomputable def f {α : Sort u}

There is a long cast of characters here, but the first block we have
already seen: the type, ``α``, the relation, ``r``, and the
-assumption, ``h``, that ``r`` is well founded. The variable ``C``
+assumption, ``h``, that ``r`` is well-founded. The variable ``C``
represents the motive of the recursive definition: for each element
``x : α``, we would like to construct an element of ``C x``. The
function ``F`` provides the inductive recipe for doing that: it tells
us how to construct an element ``C x``, given elements of ``C y`` for
each predecessor ``y`` of ``x``.

Note that ``WellFounded.fix`` works equally well as an induction
-principle. It says that if ``≺`` is well founded and you want to prove
+principle. It says that if ``≺`` is well-founded and you want to prove
``∀ x, C x``, it suffices to show that for an arbitrary ``x``, if we
have ``∀ y ≺ x, C y``, then we have ``C x``.
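
For context (not part of the PR diff), the declaration the hunk header above refers to, reconstructed from the surrounding prose; the book's exact formatting may differ:

```lean
-- the recursor packaged as a function: given well-foundedness of `r`
-- and the recipe `F`, produce a value of `C x` for every `x`
noncomputable def f {α : Sort u}
    (r : α → α → Prop)
    (h : WellFounded r)
    (C : α → Sort v)
    (F : (x : α) → ((y : α) → r y x → C y) → C x) :
    (x : α) → C x :=
  WellFounded.fix h F
```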

@@ -810,7 +810,7 @@ def natToBin : Nat → List Nat
```

As a final example, we observe that Ackermann's function can be
-defined directly, because it is justified by the well foundedness of
+defined directly, because it is justified by the well-foundedness of
the lexicographic order on the natural numbers. The `termination_by` clause
instructs Lean to use a lexicographic order. This clause is actually mapping
the function arguments to elements of type `Nat × Nat`. Then, Lean uses typeclass
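
For context (the hunk is cut off here; the following is not part of the PR diff), a sketch of the Ackermann definition this paragraph describes; the `termination_by` syntax varies slightly across Lean versions:

```lean
def ack : Nat → Nat → Nat
  | 0,     y     => y + 1
  | x + 1, 0     => ack x 1
  | x + 1, y + 1 => ack x (ack (x + 1) y)
-- map the arguments to `Nat × Nat`; Lean then uses the
-- lexicographic order on pairs to justify termination
termination_by x y => (x, y)
```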
6 changes: 3 additions & 3 deletions interacting_with_lean.md
@@ -17,8 +17,8 @@ Importing Files
---------------

The goal of Lean's front end is to interpret user input, construct
-formal expressions, and check that they are well formed and type
-correct. Lean also supports the use of various editors, which provide
+formal expressions, and check that they are well-formed and type-correct.
+Lean also supports the use of various editors, which provide
continuous checking and feedback. More information can be found on the
Lean [documentation pages](https://lean-lang.org/documentation/).

@@ -542,7 +542,7 @@ The new notation is preferred to the binary notation since the latter,
before chaining, would stop parsing after `1 + 2`. If there are
multiple notations accepting the same longest parse, the choice will
be delayed until elaboration, which will fail unless exactly one
-overload is type correct.
+overload is type-correct.
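
For context (not part of the PR diff), a sketch of overload resolution at elaboration time; the bracket notation and both right-hand sides are invented for illustration:

```lean
-- two overloads with the same surface syntax: the parser keeps both,
-- and elaboration succeeds only where exactly one is type-correct
notation:max "⌞" x "⌟" => x + 1
notation:max "⌞" x "⌟" => x ++ [0]

#check (⌞(2 : Nat)⌟ : Nat)      -- only the `x + 1` overload type-checks
#check (⌞[1, 2]⌟ : List Nat)    -- only the `x ++ [0]` overload type-checks
```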

Coercions
---------
8 changes: 4 additions & 4 deletions introduction.md
@@ -13,13 +13,13 @@ as to their correctness becomes a form of theorem proving. Conversely, the proof
lengthy computation, in which case verifying the truth of the theorem requires verifying that the computation does what
it is supposed to do.

-The gold standard for supporting a mathematical claim is to provide a proof, and twentieth-century developments in logic
-show most if not all conventional proof methods can be reduced to a small set of axioms and rules in any of a number of
+The gold standard for supporting a mathematical claim is to provide a proof. Twentieth-century developments in logic
+show that most if not all conventional proof methods can be reduced to a small set of axioms and rules in any of a number of
foundational systems. With this reduction, there are two ways that a computer can help establish a claim: it can help
find a proof in the first place, and it can help verify that a purported proof is correct.

-*Automated theorem proving* focuses on the "finding" aspect. Resolution theorem provers, tableau theorem provers, fast
-satisfiability solvers, and so on provide means of establishing the validity of formulas in propositional and
+*Automated theorem proving* focuses on the "finding" aspect. Resolution theorem provers, tableau theorem provers
+and satisfiability solvers, for example, provide means of establishing the validity of formulas in propositional and
first-order logic. Other systems provide search procedures and decision procedures for specific languages and domains,
such as linear or nonlinear expressions over the integers or the real numbers. Architectures like SMT ("satisfiability
modulo theories") combine domain-general search methods with domain-specific procedures. Computer algebra systems and
48 changes: 26 additions & 22 deletions propositions_and_proofs.md
@@ -2,7 +2,7 @@ Propositions and Proofs
=======================

By now, you have seen some ways of defining objects and functions in
-Lean. In this chapter, we will begin to explain how to write
+Lean. In this chapter, we begin to explain how to write
mathematical assertions and proofs in the language of dependent type
theory as well.

@@ -13,10 +13,11 @@ One strategy for proving assertions about objects defined in the
language of dependent type theory is to layer an assertion language
and a proof language on top of the definition language. But there is
no reason to multiply languages in this way: dependent type theory is
-flexible and expressive, and there is no reason we cannot represent
+flexible and expressive, and it turns out that there is no reason we cannot represent
assertions and proofs in the same general framework.

-For example, we could introduce a new type, ``Prop``, to represent
+As an example of layering languages on top,

> Review comment (contributor, author): Now I see that this is an example of using the same general framework; the "could/would" formulation below makes it a bit confusing. Saying, more directly, "... it turns out that we can represent assertions and proofs in the same general framework. For example ..." would help.

+we could introduce a new type, ``Prop``, to represent
propositions, and introduce constructors to build new propositions
from others.
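
For context (not part of the PR diff), a minimal sketch of such constructors; the declarations are hypothetical, with the names `Implies` and `Proof` taken from the surrounding prose:

```lean
-- a hypothetical assertion language layered on top of the definition language
axiom Implies : Prop → Prop → Prop
axiom Proof : Prop → Type

variable (p q : Prop)

#check Implies p q           -- Prop
#check Proof (Implies p q)   -- Type
```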

@@ -51,7 +52,7 @@ variable (p q : Prop)

In addition to axioms, however, we would also need rules to build new
proofs from old ones. For example, in many proof systems for
-propositional logic, we have the rule of modus ponens:
+propositional logic, we have the rule of *modus ponens*:

> From a proof of ``Implies p q`` and a proof of ``p``, we obtain a proof of ``q``.
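
For context (not part of the PR diff), the book renders this rule as an axiom along the following lines (reconstructed; the exact form may differ):

```lean
axiom modus_ponens : (p q : Prop) → Proof (Implies p q) → Proof p → Proof q
```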

@@ -77,7 +78,7 @@ We could render this as follows:
axiom implies_intro : (p q : Prop) → (Proof p → Proof q) → Proof (Implies p q)
```

-This approach would provide us with a reasonable way of building assertions and proofs.
+This would give us a reasonable way of building assertions and proofs.
Determining that an expression ``t`` is a correct proof of assertion ``p`` would then
simply be a matter of checking that ``t`` has type ``Proof p``.

@@ -92,7 +93,7 @@ show that we can pass back and forth between ``Implies p q`` and
``p → q``. In other words, implication between propositions ``p`` and ``q``
corresponds to having a function that takes any element of ``p`` to an
element of ``q``. As a result, the introduction of the connective
-``Implies`` is entirely redundant: we can use the usual function space
+``Implies`` is entirely redundant: we can use the function space
constructor ``p → q`` from dependent type theory as our notion of
implication.
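
For context (not part of the PR diff), a one-line illustration that implication behaves exactly like the function space:

```lean
-- a proof of `p → q` is applied to a proof of `p` like an ordinary function
example (p q : Prop) (h : p → q) (hp : p) : q := h hp
```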

@@ -106,7 +107,7 @@ syntactic sugar for ``Sort 0``, the very bottom of the type hierarchy
described in the last chapter. Moreover, ``Type u`` is also just
syntactic sugar for ``Sort (u+1)``. ``Prop`` has some special
features, but like the other type universes, it is closed under the
-arrow constructor: if we have ``p q : Prop``, then ``p → q : Prop``.
+arrow constructor: if ``p q : Prop``, then ``p → q : Prop``.
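
For context (not part of the PR diff), the syntactic-sugar equations and the closure under the arrow can be checked directly:

```lean
#check Sort 0    -- Prop : Type  (`Prop` is `Sort 0`)
#check Sort 1    -- Type : Type 1  (`Type u` is `Sort (u+1)`)

-- `Prop` is closed under the arrow constructor
example (p q : Prop) : Prop := p → q
```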

There are at least two ways of thinking about propositions as
types. To some who take a constructive view of logic and mathematics,
Expand All @@ -115,9 +116,9 @@ proposition ``p`` represents a sort of data type, namely, a
specification of the type of data that constitutes a proof. A proof of
``p`` is then simply an object ``t : p`` of the right type.

-Those not inclined to this ideology can view it, rather, as a simple
+Those not inclined to this philosophy can view it, rather, as a simple
coding trick. To each proposition ``p`` we associate a type that is
-empty if ``p`` is false and has a single element, say ``*``, if ``p``
+empty if ``p`` is false, and has a single element, say ``*``, if ``p``
is true. In the latter case, let us say that (the type associated
with) ``p`` is *inhabited*. It just so happens that the rules for
function application and abstraction can conveniently help us keep
@@ -136,7 +137,7 @@ that even though we can treat proofs ``t : p`` as ordinary objects in
the language of dependent type theory, they carry no information
beyond the fact that ``p`` is true.

-The two ways we have suggested thinking about the
+The two ways we have suggested for thinking about the
propositions-as-types paradigm differ in a fundamental way. From the
constructive point of view, proofs are abstract mathematical objects
that are *denoted* by suitable expressions in dependent type
@@ -150,19 +151,23 @@ proposition in question is true. In other words, the expressions
In the exposition below, we will slip back and forth between these two
ways of talking, at times saying that an expression "constructs" or
"produces" or "returns" a proof of a proposition, and at other times
-simply saying that it "is" such a proof. This is similar to the way
-that computer scientists occasionally blur the distinction between
+simply saying that it "is" such a proof. This is similar to how
+computer scientists occasionally blur the distinction between
syntax and semantics by saying, at times, that a program "computes" a
certain function, and at other times speaking as though the program
"is" the function in question.

-In any case, all that really matters is the bottom line. To formally
+In any case, all that really matters is the bottom line: To formally
express a mathematical assertion in the language of dependent type
theory, we need to exhibit a term ``p : Prop``. To *prove* that
-assertion, we need to exhibit a term ``t : p``. Lean's task, as a
-proof assistant, is to help us to construct such a term, ``t``, and to
+assertion, we need to exhibit a term ``t : p`` that has `p` as its type.
+Lean's task, as a proof assistant, is to help us to construct such a term ``t``, and to
verify that it is well-formed and has the correct type.

+<!-- Add a side-note for proofs-as-programs correspondence?
+From Wikipedia: "a proof is a program, and the formula it proves is the type for the program"
+-->

Working with Propositions as Types
----------------------------------

@@ -191,7 +196,7 @@ true.
Note that the ``theorem`` command is really a version of the
``def`` command: under the propositions and types
correspondence, proving the theorem ``p → q → p`` is really the same
-as defining an element of the associated type. To the kernel type
+as defining an element of the associated type. To the Lean kernel type
checker, there is no difference between the two.
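
For context (not part of the PR diff), a sketch of the correspondence; the kernel accepts the same term under either command:

```lean
variable (p q : Prop)

theorem t1 : p → q → p := fun hp _ => hp

-- the same term introduced with `def`; to the kernel there is no difference
def t1' : p → q → p := fun hp _ => hp
```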

There are a few pragmatic differences between definitions and
@@ -202,7 +207,7 @@ theorem is complete, typically we only need to know that the proof
exists; it doesn't matter what the proof is. In light of that fact,
Lean tags proofs as *irreducible*, which serves as a hint to the
parser (more precisely, the *elaborator*) that there is generally no
-need to unfold it when processing a file. In fact, Lean is generally
+need to unfold them when processing a file. In fact, Lean is generally
able to process and check proofs in parallel, since assessing the
correctness of one proof does not require knowing the details of
another.
@@ -260,10 +265,9 @@ axiom hp : p
theorem t2 : q → p := t1 hp
```

-Here, the ``axiom`` declaration postulates the existence of an
-element of the given type and may compromise logical consistency. For
-example, we can use it to postulate the empty type `False` has an
-element.
+The ``axiom`` declaration postulates the existence of an
+element of the given type, and may compromise logical consistency. For
+example, it can postulate that the empty type `False` has an element:

```lean
axiom unsound : False
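-- (sketch, not part of the PR diff) from an element of `False`,
-- anything at all follows:
theorem ex : 1 = 0 := False.elim unsound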
@@ -296,7 +300,7 @@
theorem t1 : ∀ {p q : Prop}, p → q → p :=
```

If ``p`` and ``q`` have been declared as variables, Lean will
-generalize them for us automatically:
+generalize them automatically:

```lean
variable {p q : Prop}
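-- (sketch, not part of the PR diff) `p` and `q` are generalized
-- automatically when the theorem is compiled:
theorem t1 : p → q → p := fun hp _ => hp

#check @t1    -- t1 : ∀ {p q : Prop}, p → q → p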
```