Assignment B – The Dragon Curve
Due Thu 6/5 11:59pm
This year, Shin-Cheng Mu visited us and taught us about the Heighway Dragon, this beautiful curve:
During his lecture, he gave us two characterizations of the dragon. Each describes it as a function from the natural numbers to lists of turns, where a turn is either a left turn or a right turn. To understand the curve as a list of turns, imagine working your way down the list, following the turns as if they were instructions: an L means walk one pace forward, turn left, and then walk another pace forward; similarly, an R means walk one pace forward, turn right, and then walk another pace forward. The path that you walk is the dragon.
In both characterizations of the curve, the 0th dragon is the empty list of turns. To go from the nth dragon to the n+1st dragon in the first characterization, start with the nth dragon and make a copy of it, connecting the end of the original to the end of the copy, with a left turn in between. Because it is the ends that are joined, following the copy means going through the same turns as the original, but in the opposite order and with each turn swapped (L becomes R and vice versa).
To understand this, have a look at the dragons of sizes 2, 3, and 4.
The size 2 dragon is the turns L, L, R; the size 3 dragon is the turns L, L, R, L, L, R, R. Read the size 3 dragon from the left and the right at the same time. As you work your way inwards from the ends, the turns match up, with the ones on the right being the opposites of the ones on the left. Specifically, reading from the beginning of the size 3 curve you have exactly the size 2 curve, L, L, R, and reading from the end towards the middle you have R, R, L. The one turn in the middle is a left turn. The same thing happens again to build the size 4 dragon from the size 3 dragon, as L, L, R, L, L, R, R becomes L, L, R, L, L, R, R, L, L, L, R, R, L, R, R.
data Turn : Set where
  L : Turn
  R : Turn

inv : Turn -> Turn
inv L = R
inv R = L

dragonU : ℕ -> 𝕃 Turn
dragonU zero = []
dragonU (suc n) =
  dragonU n ++
  ((L :: []) ++
   map inv (reverse (dragonU n)))
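As a quick sanity check, the small dragons from the discussion above can be confirmed definitionally; both equalities below hold by computation alone (the names are my own, and I'm assuming the definitions above and the IAL, including numeric literals for ℕ, are in scope):

```agda
-- These hold by refl because dragonU fully computes on closed inputs.
dragonU-2 : dragonU 2 ≡ L :: L :: R :: []
dragonU-2 = refl

dragonU-3 : dragonU 3 ≡ L :: L :: R :: L :: L :: R :: R :: []
dragonU-3 = refl
```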
The second characterization of the curve that Shin-Cheng gave us is to alternately add Ls and Rs in between the existing turns to go from the nth dragon to the n+1st dragon. So, if we start with the dragon of size 2 again, L, L, R, and then look at the elements at odd indices (counting from zero) of the size 3 dragon, L, L, R, L, L, R, R, we see that they are exactly the size 2 dragon. The other turns start with L and alternate between L and R.
LR> : 𝕃 Turn -> 𝕃 Turn
RL> : 𝕃 Turn -> 𝕃 Turn

LR> [] = L :: []
LR> (t :: ts) = L :: (t :: (RL> ts))

RL> [] = R :: []
RL> (t :: ts) = R :: (t :: (LR> ts))

dragonE : ℕ -> 𝕃 Turn
dragonE zero = []
dragonE (suc n) = LR> (dragonE n)
This code is slightly different from the code Shin-Cheng used. Specifically, he works with a generic interleave function (written ▷) that is happy to receive an infinite list and a finite list, so he passes in this infinite list:
LR = L :: R :: LR
And he also uses the tail of that infinite list, which is this infinite list:
RL = R :: L :: RL
Defining such things is possible in Agda but requires more machinery than we really want to get into. Since he only ever uses the interleave function on these two lists, I’ve specialized his general-purpose interleave function to two specific functions: LR>, which interleaves the LR list into its argument, and RL>, which interleaves the RL list into its argument.
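As with dragonU, the small cases from the discussion can be checked definitionally (a sketch; the names are my own, and I'm assuming the definitions above are in scope):

```agda
-- dragonE agrees with the hand-computed dragons of sizes 2 and 3.
dragonE-2 : dragonE 2 ≡ L :: L :: R :: []
dragonE-2 = refl

dragonE-3 : dragonE 3 ≡ L :: L :: R :: L :: L :: R :: R :: []
dragonE-3 = refl
```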
|
thm : ∀ n -> dragonU n ≡ dragonE n |
thm = ? |
|
The way I succeeded at this was by watching the video of his talk and following his proof, step by step.
Please be aware that this proof can get unwieldy. Be sure to be disciplined when you work on it! Agda will punish you if you go too deep into “video-gaming” mode. In small doses, video gaming mode can be helpful, but be aware you’re doing it and be prepared to back out and think about what the goals mean.
I ran into a few complexities as I was solving the exercise and here is some advice based on that experience. Don’t feel like you have to follow the path I took, however; indeed if you wish to take an approach that’s different from Shin-Cheng’s, that’s also fine. As long as you don’t change any of the code above, any proof that Agda accepts in this hole is a good proof!
Shin-Cheng’s proof is based on one primary lemma and, step by step, he adapts one expression until it matches another to prove that lemma. His first step is to use induction. In the way we have been working in the class, this amounts to using rewrite with the inductive hypothesis.
Unfortunately, there are places on both sides of the equation where the inductive hypothesis applies, and rewriting the ones on the right-hand side causes trouble. So, what I did instead was to run his proof backwards, working from the right-hand side back to the left-hand side and then applying induction as the last step. At that point the only place where it applies is the place where Shin-Cheng used it in his lecture.
This causes Agda trouble, however. Agda somehow cannot tell that the function is always terminating if you simply try to apply the rewrite at the end. To deal with this, I used a with to grab the inductive hypothesis at the start of the proof, but didn’t actually rewrite with it until the end. Something like this:

my-lemma : ℕ -> something ≡ something-else
my-lemma 0 = refl
my-lemma (suc n) with my-lemma n
my-lemma (suc n) | inductive-hypothesis
  rewrite nicefact1
        | anothernicefact
        | inductive-hypothesis
  = refl
Shin-Cheng makes use of the idea of an odd-length list in his proof, as a guard on various lemmas. While it is certainly possible to replicate that with the length and odd functions from the IAL, it is going to be a lot nicer if you define evidence that captures what it means for a list to have odd length. Just as we defined a type in class that captured which lists quicksort obviously terminates on, and another that captures which lists have a particular element, you can write a type definition that captures what it means to be an odd-length list: the list either has exactly one element, or it has two more elements stuck on the front of some other odd-length list. This will be a lot nicer to receive as an input (assumption) in various lemmas.
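One plausible shape for such evidence, mirroring the two cases just described (the names here are my own invention, not from the IAL):

```agda
-- A list has odd length if it is a singleton, or two elements
-- stuck on the front of an odd-length list.
data odd-length {ℓ} {A : Set ℓ} : 𝕃 A -> Set ℓ where
  one      : ∀ {x : A} -> odd-length (x :: [])
  two-more : ∀ {x y : A} {l : 𝕃 A} ->
             odd-length l -> odd-length (x :: y :: l)
```

A lemma that takes odd-length l as an assumption can then pattern match on it, which gives you an induction principle that steps through the list two elements at a time.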
When Shin-Cheng is proving things, he sometimes uses facts that are, technically, not true but, as he points out in the video, various simple cases are immediate so can be ignored (and then everything he does is legit). Of course, Agda won’t let you ignore these cases! To compensate for this, you will find that you’re working with a suc n in places where he has an n, and you’ll be able to prove those simple cases in simple ways.
Accordingly, the definitional equalities that Agda applies, which are usually so helpful, can make it confusing to follow along exactly with Shin-Cheng’s proofs. A good trick here is to write down the term that Shin-Cheng has in the video and then use C-c C-n to check whether it and what you actually have in the goal or context normalize to the same expression, keeping in mind that you might have a suc n where he has an n in some spots.
That is, if you think your goal is supposed to be something similar to what Shin-Cheng has in the video, type what you see in the video into the hole. Then check whether it normalizes to the same expression you actually see in the goal.
The lack of parentheses around multiple uses of ++ caused me confusion. So I went into the IAL and removed this line (in retrospect, it would probably have been better to remove just the _++_ part of the line):
infixr 6 _::_ _++_
That line tells Agda that the expression xs ++ ys ++ zs is treated as if it were written with these parens xs ++ (ys ++ zs), i.e. you do the right-hand append first and then pass that in as the second argument to the left-hand append.
If you remove it, various things in the IAL will no longer be accepted by Agda, but you can fix them pretty easily by adding the parentheses in yourself; there aren’t too many in the files that I ended up using.
At one point, Shin-Cheng offers an identity of the interleave function, namely:
(map f xs) ▷ (map f ys)
≡
map f (xs ▷ ys)
But he uses that only when f is the inv function, which means that mapping it converts RL into LR (and vice versa), so we can specialize the identity to this:
LR> (map inv l) ≡ map inv (RL> l)
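If you take this route, one way the specialized identity might be proved is by mutual induction, mirroring the mutual definitions of LR> and RL>. A sketch (the lemma names are mine):

```agda
LR>-inv : ∀ (l : 𝕃 Turn) -> LR> (map inv l) ≡ map inv (RL> l)
RL>-inv : ∀ (l : 𝕃 Turn) -> RL> (map inv l) ≡ map inv (LR> l)

-- In each cons case, both sides compute to the same head turn
-- followed by inv t, so rewriting with the other lemma on the
-- tail finishes the proof.
LR>-inv [] = refl
LR>-inv (t :: ts) rewrite RL>-inv ts = refl

RL>-inv [] = refl
RL>-inv (t :: ts) rewrite LR>-inv ts = refl
```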
The idea is that mapping inv over the interleaved list is the same as changing LR> into RL> (and vice versa).
You won’t be surprised that the proofs about reverse were the most challenging. At some point I just gave up trying to sort out what the invariants for reverse-helper were, and proved that reverse is equivalent to this function:
rev : ∀ {l} {A : Set l} -> 𝕃 A -> 𝕃 A
rev [] = []
rev (x :: l) = rev l ++ (x :: [])
and then I just wrote little “wrapper” lemmas that converted a proof about rev to one about reverse.
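In case it helps, here is the shape such a wrapper might take, assuming the IAL’s reverse is defined via an accumulator-passing reverse-helper with the accumulator as the first argument (the lemma names are mine, and the bodies are left as holes):

```agda
-- Generalizing over the accumulator h makes the induction go through;
-- the cons case needs the induction hypothesis plus associativity of ++.
reverse-helper-rev : ∀ {ℓ} {A : Set ℓ} (h l : 𝕃 A) ->
                     reverse-helper h l ≡ rev l ++ h
reverse-helper-rev h [] = refl
reverse-helper-rev h (x :: l) = ?

-- The wrapper itself: instantiate h with [] and clean up rev l ++ [].
reverse-rev : ∀ {ℓ} {A : Set ℓ} (l : 𝕃 A) -> reverse l ≡ rev l
reverse-rev l = ?
```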
Good luck!