Add backlog based on comments from Andrea, implement some of them

Frederik Hanghøj Iversen 2018-05-23 17:34:50 +02:00
parent 879f5bab52
commit 251fcf1966
9 changed files with 360 additions and 107 deletions


@ -3,5 +3,74 @@ Talk about structure of library:
What can I say about reusability?
Misc
====
Meeting with Andrea May 18th
============================
App. 2 in HoTT gives typing rule for pathJ including a computational
rule for it.
If you have this computational rule definitionally, then you wouldn't
need to use `pathJprop`.
In discussion-section I mention HITs. I should remove this or come up
with a more elaborate example of something you could do, e.g.
something with pushouts in the category of sets.
The type Prop is a type where terms are *judgmentally* equal not just
propositionally so.
Maybe mention that Andreas Källberg is working on proving the
initiality conjecture.
Intensional Type Theory (ITT): Judgmental equality is decidable
Extensional Type Theory (ETT): Reflection is enough to make judgmental
equality undecidable.
Reflection : a ≡ b → a = b
ITT does not have reflection.
HoTT ~ ITT + axiomatized univalence
Agda ~ ITT + K-rule
Coq ~ ITT (no K-rule)
Cubical Agda ~ ITT + Path + Glue
Prop is impredicative in Coq (whatever that means)
Prop ≠ hProp
Comments about abstract
-----
Pattern matching for paths (?)
Intro
-----
Main feature of judgmental equality is the conversion rule.
Conor explained: K + eliminators ≡ pat. matching
Explain judgmental equality independently of type-checking
Soundness for equality means that if `x = y` then `x` and `y` must be
equal according to the theory/model.
Decidability of `=` is a necessary condition for typechecking to be
decidable.
Canonicity is a nice-to-have, though without canonicity terms can get
stuck. If we postulate results about judgmental equality, e.g. funext,
then we can construct a term of type natural number that is not a
numeral. Therefore stating canonicity with natural numbers:
∀ t . ⊢ t : N , ∃ n : N . ⊢ t = sⁿ 0 : N
is a sufficient condition for a well-behaved equality.
Eta-equality for RawFunctor means that the associative law for
functors holds definitionally.
The computational property of funExt is only relevant in two places in
my whole formalization. Univalence and gradLemma do not influence any
proofs.


@ -0,0 +1,74 @@
\chapter{Abstract functional extensionality}
\label{app:abstract-funext}
In two places in my formalization the computational behaviour of
functional extensionality is used. The reduction behaviour can be
disabled by marking functional extensionality as abstract. Below, the
fully normalized goal and context are shown with functional
extensionality marked abstract. The excerpts are from the module
%
\begin{center}
\sourcelink{Cat.Category.Monad.Voevodsky}
\end{center}
%
where this is also written as a comment next to the proofs. When
functional extensionality is not abstract, the goal and the current
value are the same. It is of course necessary to show the fully
normalized goal and context, as otherwise the reduction behaviour is
not forced.
\subsubsection*{First goal}
Goal:
\begin{verbatim}
PathP (λ _ → §2-3.§2 omap (λ {z} → pure))
(§2-fromMonad
(.Cat.Category.Monad.toKleisli
(.Cat.Category.Monad.toMonoidal (§2-3.§2.toMonad m))))
(§2-fromMonad (§2-3.§2.toMonad m))
\end{verbatim}
Have:
\begin{verbatim}
PathP
(λ i →
§2-3.§2 K.IsMonad.omap
(K.RawMonad.pure
(K.Monad.raw
(funExt (λ m₁ → K.Monad≡ (.Cat.Category.Monad.toKleisliRawEq m₁))
i (§2-3.§2.toMonad m)))))
(§2-fromMonad
(.Cat.Category.Monad.toKleisli
(.Cat.Category.Monad.toMonoidal (§2-3.§2.toMonad m))))
(§2-fromMonad (§2-3.§2.toMonad m))
\end{verbatim}
\subsubsection*{Second goal}
Goal:
\begin{verbatim}
PathP (λ _ → §2-3.§1 omap (λ {X} → pure))
(§1-fromMonad
(.Cat.Category.Monad.toMonoidal
(.Cat.Category.Monad.toKleisli (§2-3.§1.toMonad m))))
(§1-fromMonad (§2-3.§1.toMonad m))
\end{verbatim}
Have:
\begin{verbatim}
PathP
(λ i →
§2-3.§1
(RawFunctor.omap
(Functor.raw
(M.RawMonad.R
(M.Monad.raw
(funExt
(λ m₁ → M.Monad≡ (.Cat.Category.Monad.toMonoidalRawEq m₁)) i
(§2-3.§1.toMonad m))))))
(λ {X} →
fst
(M.RawMonad.pureNT
(M.Monad.raw
(funExt
(λ m₁ → M.Monad≡ (.Cat.Category.Monad.toMonoidalRawEq m₁)) i
(§2-3.§1.toMonad m))))
X))
(§1-fromMonad
(.Cat.Category.Monad.toMonoidal
(.Cat.Category.Monad.toKleisli (§2-3.§1.toMonad m))))
(§1-fromMonad (§2-3.§1.toMonad m))
\end{verbatim}


@ -1,55 +1,99 @@
\chapter{Perspectives}
\section{Discussion}
In the previous chapter the practical aspects of proving things in
Cubical Agda were highlighted. I also demonstrated the usefulness of
separating ``laws'' from ``data''. One of the reasons for this is that
dependencies within types can lead to very complicated goals. One
technique for alleviating this was to prove that certain types are
mere propositions.
\subsection{Computational properties}
Another aspect (\TODO{That I actually did not highlight very well in
the previous chapter}) is the computational nature of paths. The new
contribution of cubical Agda is that it has a constructive proof of
functional extensionality\index{functional extensionality} and
univalence\index{univalence}. In particular this means that the type
checker can reduce terms defined with these theorems. An interesting
question for this development is therefore how much this influenced
it: having a functional extensionality that ``computes'' should
simplify some proofs.
\TODO{Some equation\ldots}
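To make this concrete, the following self-contained sketch shows why
functional extensionality ``computes'' in cubical Agda. It is not part
of the library; for self-containedness it imports from the
\texttt{agda/cubical} library rather than the prelude developed in
this project, but the definition is the same idea.
\begin{verbatim}
{-# OPTIONS --cubical #-}
open import Cubical.Foundations.Prelude using (_≡_; refl)

-- A path between functions is a function into paths, read pointwise.
funExt' : {A B : Set} {f g : A → B}
        → ((x : A) → f x ≡ g x) → f ≡ g
funExt' p i x = p x i

-- The proof reduces: for the pointwise-constant family of paths the
-- resulting path is definitionally `refl`, so the type checker
-- accepts the following by computation alone.
computes : {A B : Set} {f : A → B}
         → funExt' {f = f} {g = f} (λ x → refl) ≡ refl
computes = refl
\end{verbatim}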
I have tested this hypothesis by using a feature of Agda where one
can mark certain bindings as \emph{abstract}. This means that the
type checker will not try to reduce such a term further during type
checking. I tried making univalence and functional extensionality
abstract. It turns out that the conversion behaviour of univalence is
not used anywhere. For functional extensionality there are two places
in the whole development where the reduction behaviour is used to
simplify proofs, namely in showing that the maps between the two
formulations of monads are inverses of each other. See the notes in
this module:
%
\begin{center}
\sourcelink{Cat.Category.Monad.Voevodsky}
\end{center}
%
I have also put this in a source listing in
appendix~\ref{app:abstract-funext}. I will not reproduce it in full
here as the type is quite involved. The method used to find the
places where the computational behaviour of these proofs is needed
has the caveat of only working for code that directly or transitively
uses these two proofs. Fortunately the code is structured in such a
way that this should be the case. Nonetheless it is quite surprising
that this computational behaviour is not used more widely in the
formalization.
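For reference, the following sketch (plain Agda, with a toy definition
in place of functional extensionality) shows the effect of an
\texttt{abstract} block: bindings declared inside it do not unfold on
the outside, which is precisely what makes the dependence on reduction
behaviour detectable.
\begin{verbatim}
open import Agda.Builtin.Nat
open import Agda.Builtin.Equality

abstract
  two : Nat
  two = suc (suc zero)

  -- Inside the abstract block `two` still unfolds.
  unfolds : two ≡ suc (suc zero)
  unfolds = refl

-- Outside the block `two` is opaque: the same proof by `refl` would
-- be rejected here, and only the exported proof can be used.
stillTwo : two ≡ suc (suc zero)
stillTwo = unfolds
\end{verbatim}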
\TODO{Mention that with postulates we cannot do this}
Barring this, however, the computational behaviour of paths can still
be useful. For instance, a programmer may want to reuse functions
that operate on monoidal monads with a monad that they have specified
in the Kleisli form; the Kleisli formulation will typically be more
familiar to programmers, e.g.\ those coming from a background in
Haskell, whereas the theory usually talks about monoidal monads. To
make this idea concrete, say we are given some function $f \tp
\Kleisli \to T$. A path $p \tp \Monoidal \equiv \Kleisli$ induces a
map $\coe\ p \tp \Monoidal \to \Kleisli$, and we can compose $f$ with
this map to obtain $f \comp \coe\ p \tp \Monoidal \to T$. Of course,
since that map was constructed from an isomorphism these maps already
exist and could be used directly, so this is arguably only
interesting when one wants to prove properties of such functions.
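A minimal sketch of this composition is given below. Ordinary
propositional equality and a transport function stand in for paths
and $\coe$, and all names are hypothetical.
\begin{verbatim}
open import Agda.Builtin.Equality

-- Coercion along an equality of types; stands in for `coe` on paths.
coe : {A B : Set} → A ≡ B → A → B
coe refl x = x

module _ (Monoidal Kleisli T : Set)
         (p : Monoidal ≡ Kleisli)
         (f : Kleisli → T) where

  -- Reuse `f`, which consumes the Kleisli formulation, on the
  -- monoidal formulation by first coercing along the path `p`.
  viaPath : Monoidal → T
  viaPath m = f (coe p m)
\end{verbatim}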
\subsection{Reusability of proofs}
The previous example illustrates how univalence unifies two otherwise
disparate areas: the category-theoretic study of monads, and monads as
in functional programming. Univalence thus allows one to reuse proofs.
You could say that univalence gives the developer two proofs for the
price of one. As an illustration of this I proved that monads are
groupoids. I initially proved this for the Kleisli
formulation\footnote{Actually doing this directly turned out to be
tricky as well, so I defined an equivalent formulation which was not
formulated with a record, but purely with $\sum$-types.}. Since the
two formulations are equal under univalence, substitution directly
gives us that this also holds for the monoidal formulation. This of
course generalizes to any family $P \tp \MCU \to \MCU$ where $P$ is inhabited
at either formulation (i.e.\ either $P\ \Monoidal$ or $P\ \Kleisli$
holds).
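The general pattern can be sketched as follows, again with ordinary
propositional equality standing in for paths and with hypothetical
names.
\begin{verbatim}
open import Agda.Builtin.Equality

subst : {A : Set₁} (P : A → Set) {x y : A} → x ≡ y → P x → P y
subst P refl px = px

module _ (Monoidal Kleisli : Set)
         (path : Kleisli ≡ Monoidal)
         (P : Set → Set) where

  -- A property established for the Kleisli formulation transfers to
  -- the monoidal formulation by substitution along the path.
  transfer : P Kleisli → P Monoidal
  transfer = subst P path
\end{verbatim}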
The introduction (section \S\ref{sec:context}) mentioned that an
often employed technique for enabling extensional equalities is to
use the
setoid-interpretation. Nowhere in this formalization has this been
necessary, $\Path$ has been used globally in the project as
propositional equality. One interesting place where this becomes
apparent is in interfacing with the Agda standard library. Multiple
definitions in the Agda standard library have been designed with the
setoid-interpretation in mind. E.g. the notion of ``unique
existential'' is indexed by a relation that should play the role of
propositional equality. Likewise for equivalence relations: they are
indexed not only by the actual equivalence relation but also by
another relation that serves as propositional equality.
%% Unfortunately we cannot use the definition of equivalences found in
%% the standard library to do equational reasoning directly. The
%% reason for this is that the equivalence relation defined there must
%% be a homogenous relation, but paths are heterogeneous relations.
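The following sketch paraphrases the shape of such a definition (it is
not the verbatim code from the standard library): the uniqueness part
is stated up to a relation chosen by the caller, which under the
setoid interpretation plays the role of propositional equality.
\begin{verbatim}
open import Agda.Builtin.Sigma

_×_ : Set → Set → Set
A × B = Σ A (λ _ → B)

-- "Unique existence" relative to a caller-supplied relation _≈_.
∃!≈ : {A : Set} (_≈_ : A → A → Set) (P : A → Set) → Set
∃!≈ {A} _≈_ P = Σ A (λ x → P x × (∀ y → P y → x ≈ y))
\end{verbatim}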
In the formalization at present a significant amount of energy has
been put towards proving things that would not have been needed in
classical Agda. The proofs that some given type is a proposition were
provided as a strategy to simplify some otherwise very complicated
proofs (e.g.\ \ref{eq:proof-prop-IsPreCategory}
and \ref{eq:productPath}). Often these proofs would not be this
complicated. If the J-rule holds definitionally the proof-assistant
can help simplify these goals considerably. The lack of the J-rule has
a significant impact on the complexity of these kinds of proofs.
\TODO{Universe levels.}
@ -60,15 +104,16 @@ Jesper Cockx' work extending the universe-level-laws for Agda and the
\subsection{Compiling Cubical Agda}
\label{sec:compiling-cubical-agda}
Compilation of programs written in cubical Agda is currently not
supported. One issue here is that the backends do not provide an
implementation of the cubical primitives (such as the path type).
This means that even though the path-type gives us a computational
interpretation of functional extensionality, univalence, transport,
etc., we do not have a way of actually using this to compile our
programs that use these primitives. It would be interesting to see
practical applications of this. The path between monads that this
library exposes could provide one particularly interesting case-study.
\subsection{Higher inductive types}
This library has not explored the usefulness of higher inductive types
in the context of Category Theory.


@ -0,0 +1,9 @@
App. 2 in HoTT gives typing rule for pathJ including a computational
rule for it.
If you have this computational rule definitionally, then you wouldn't
need to use `pathJprop`.
In discussion-section I mention HITs. I should remove this or come up
with a more elaborate example of something you could do, e.g.
something with pushouts in the category of sets.


@ -1,42 +1,64 @@
\chapter{Introduction}
This thesis is a case-study in the application of cubical Agda in the
context of category theory. At the center of this is the notion of
\nomenindex{equality}. In type-theory there are two pervasive notions
of equality: \nomenindex{judgmental equality} and
\nomenindex{propositional equality}. Judgmental equality is a property
of the type system, it is a property that is automatically checked by
a type checker. As such there are some properties judgmental
equalities must crucially have. It must be \nomenindex{decidable},
\nomenindex{sound}, enjoy \nomenindex{canonicity} and be a
\nomen{congruence relation}. Being decidable simply means that that an
algorithm exists to decide whether two terms are equal. For any
practical implementation the decidability must also be effectively
computable. Soundness means that things judged to be equal actually
\emph{are} considered equal. It must be a congruence relation because
otherwise the relation certainly does not adhere to our notion of
equality. One would be able to conclude things like: $x \nequiv y
\rightarrow f\ x \equiv f\ y$. Canonicity will be explained later in
this introduction after we've seen an example of judgmental- and
propositional equality at play for a simple example.\TODO{How to
motivate canonicity for equality}.
of the type system. Propositional equality, on the other hand, is
usually defined \emph{within} the system. When introducing
definitions this report will use the notation $\defeq$. Judgmental
equalities are written $=$, and for propositional equalities the
notation $\equiv$ is used.
There are some properties that judgmental equality must satisfy: it
must be \nomenindex{sound}, enjoy \nomenindex{canonicity} and be a
\nomen{congruence relation}. Soundness means that things judged to be
equal are equal with respect to the model of the theory (the meta
theory). It must be a congruence relation because otherwise the
relation certainly does not adhere to our notion of equality; one
would be able to conclude things like $x \equiv y \rightarrow f\ x
\nequiv f\ y$. Canonicity means that any well-typed term evaluates to
a \emph{canonical} form. For example, for a closed term $e \tp \bN$
it will be the case that $e$ reduces to $n$ applications of
$\mathit{suc}$ to $0$ for some $n$; $e = \mathit{suc}^n\ 0$. Without
canonicity, terms in the language can get ``stuck'' -- meaning that
they do not reduce to a canonical form.
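For instance, the following tiny Agda example type checks precisely
because the closed term reduces to a numeral.
\begin{verbatim}
open import Agda.Builtin.Nat
open import Agda.Builtin.Equality

e : Nat
e = 2 + 2

-- `e` evaluates to the canonical form `suc (suc (suc (suc zero)))`,
-- so the equation is accepted by computation alone.
canonical : e ≡ 4
canonical = refl
\end{verbatim}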
To work as a programming language it is necessary for judgmental
equality to be \nomenindex{decidable}. Being decidable simply means
that an algorithm exists to decide whether two terms are equal. For
any practical implementation the decidability must also be
effectively computable.
For propositional equality the decidability requirement is relaxed.
It is not in general possible to decide the correctness of logical
propositions (cf.\ Hilbert's \emph{Entscheidungsproblem}). Proofs of
propositional equalities are instead provided by the developer.
There are two flavors of type theory: \emph{intensional} and
\emph{extensional} type theory. Identity types in extensional type
theory are required to be \nomen{propositions}{proposition}, that is,
types with at most one inhabitant. In extensional type theory the
principle of reflection
%
$$a ≡ b → a = b$$
%
is enough to make type checking undecidable. This report focuses on
Agda, which at a glance can be thought of as a version of intensional
type theory. Pattern matching in regular Agda lets one prove
\nomenindex{axiom K}. Axiom K states that any two identity proofs are
propositionally identical.
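For example, the following proof of K goes through in plain Agda by
pattern matching; it is rejected when the K rule is disabled (e.g.\
under the \texttt{--without-K} or \texttt{--cubical} flags).
\begin{verbatim}
open import Agda.Builtin.Equality

-- Axiom K: every proof of `x ≡ x` can be treated as `refl`.
-- Matching `p` against `refl` is exactly what uses the K rule.
K : {A : Set} {x : A} (P : x ≡ x → Set) → P refl → (p : x ≡ x) → P p
K P pr refl = pr
\end{verbatim}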
The usual notion of propositional equality in \nomenindex{Intensional
Type Theory} (ITT) is quite restrictive. In the next section a few
motivating examples will highlight this. There exist techniques to
circumvent these problems, as we shall see. This thesis will explore
an extension to Agda that redefines the notion of propositional
equality and as such is an alternative to these other techniques.
The extension is called cubical Agda. Cubical Agda drops axiom K,
which is incompatible with \nomenindex{univalence}, and instead makes
\nomenindex{functional extensionality} and univalence provable. What
makes this extension particularly
interesting is that it gives a \emph{constructive} interpretation of
univalence. What all this means will be elaborated in the following
sections.
%
\section{Motivating examples}
%
@ -192,22 +214,16 @@ There are alternative approaches to working in a cubical setting where
one can still have univalence and functional extensionality. One
option is to postulate these as axioms. This approach, however, has
other shortcomings, e.g. you lose \nomenindex{canonicity}
(\TODO{Pageno!} \cite{huber-2016}).
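As a small illustration (plain Agda, hypothetical names), postulating
functional extensionality immediately yields a closed natural number
that is not a numeral.
\begin{verbatim}
open import Agda.Builtin.Nat
open import Agda.Builtin.Equality

subst : {A : Set} (P : A → Set) {x y : A} → x ≡ y → P x → P y
subst P refl px = px

postulate
  funext : {A B : Set} {f g : A → B}
         → ((x : A) → f x ≡ g x) → f ≡ g

idNat : Nat → Nat
idNat x = x

-- A closed term of type Nat. Since `funext` is a postulate the
-- `subst` below never reduces, so `stuck` is not judgmentally equal
-- to any numeral: canonicity fails.
stuck : Nat
stuck = subst (λ _ → Nat)
              (funext {f = idNat} {g = idNat} (λ x → refl))
              zero
\end{verbatim}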
Another approach is to use the \emph{setoid interpretation} of type
theory (\cite{hofmann-1995,huber-2016}). With this approach one works
with \nomenindex{extensional sets} $(X, \sim)$, that is a type $X \tp
\MCU$ and an equivalence relation $\sim\ \tp X \to X \to \MCU$ on that
type. Under the setoid interpretation the equivalence relation serves
as a sort of ``local'' propositional equality. Since the developer
gets to pick this relation it is not guaranteed to be a congruence
relation a priori. So this must be verified manually by the developer.
Furthermore, functions between different setoids must be shown to be
setoid homomorphisms, that is, they must preserve the relation.
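A minimal sketch of the objects involved is given below; these are not
the standard library's definitions, only an illustration of the extra
proof obligations.
\begin{verbatim}
record Setoid : Set₁ where
  field
    Carrier    : Set
    _∼_        : Carrier → Carrier → Set
    reflexive  : ∀ {x} → x ∼ x
    symmetric  : ∀ {x y} → x ∼ y → y ∼ x
    transitive : ∀ {x y z} → x ∼ y → y ∼ z → x ∼ z

-- A function between setoids together with a proof that it preserves
-- the chosen relations; this is the setoid-homomorphism obligation
-- mentioned above.
record Hom (X Y : Setoid) : Set where
  private
    module XS = Setoid X
    module YS = Setoid Y
  field
    fun       : XS.Carrier → YS.Carrier
    preserves : ∀ {x y} → XS._∼_ x y → YS._∼_ (fun x) (fun y)
\end{verbatim}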


@ -124,3 +124,6 @@
\newcommand{\doclink}{\hrefsymb{\sourcebasepath}{\texttt{\sourcebasepath}}}
\newcommand{\clll}{\mathrel{\bC.\mathord{\lll}}}
\newcommand{\dlll}{\mathrel{\bD.\mathord{\lll}}}
\newcommand\coe{\varindex{coe}}
\newcommand\Monoidal{\varindex{Monoidal}}
\newcommand\Kleisli{\varindex{Kleisli}}


@ -62,9 +62,10 @@
\nocite{coquand-2013}
\bibliography{refs}
%% \begin{appendices}
%% \setcounter{page}{1}
%% \pagenumbering{roman}
\begin{appendices}
\setcounter{page}{1}
\pagenumbering{roman}
\input{appendix/abstract-funext.tex}
%% \input{sources.tex}
%% \input{planning.tex}
%% \input{halftime.tex}


@ -198,7 +198,7 @@ module _ {a b : Level} (ℂ : Category a b) where
Req = Functor≡ rawEq
pureTEq : Monoidal.RawMonad.pureT (toMonoidalRaw (toKleisli m)) ≡ pureT
pureTEq = funExt (λ X → refl)
pureTEq = refl
pureNTEq : (λ i → NaturalTransformation Functors.identity (Req i))
[ Monoidal.RawMonad.pureNT (toMonoidalRaw (toKleisli m)) ≡ pureNT ]


@ -1,7 +1,7 @@
{-
This module provides construction 2.3 in [voe]
-}
{-# OPTIONS --cubical --allow-unsolved-metas #-}
{-# OPTIONS --cubical #-}
module Cat.Category.Monad.Voevodsky where
open import Cat.Prelude
@ -152,7 +152,26 @@ module voe {a b : Level} (ℂ : Category a b) where
§2-fromMonad
((Monoidal→Kleisli ∘ Kleisli→Monoidal)
(§2-3.§2.toMonad m))
≡⟨ (cong-d (\ φ → §2-fromMonad (φ (§2-3.§2.toMonad m))) re-ve) ⟩
-- Below is the fully normalized goal and context with
-- `funExt` made abstract.
--
-- Goal: PathP (λ _ → §2-3.§2 omap (λ {z} → pure))
-- (§2-fromMonad
-- (.Cat.Category.Monad.toKleisli
-- (.Cat.Category.Monad.toMonoidal (§2-3.§2.toMonad m))))
-- (§2-fromMonad (§2-3.§2.toMonad m))
-- Have: PathP
-- (λ i →
-- §2-3.§2 K.IsMonad.omap
-- (K.RawMonad.pure
-- (K.Monad.raw
-- (funExt (λ m₁ → K.Monad≡ (.Cat.Category.Monad.toKleisliRawEq m₁))
-- i (§2-3.§2.toMonad m)))))
-- (§2-fromMonad
-- (.Cat.Category.Monad.toKleisli
-- (.Cat.Category.Monad.toMonoidal (§2-3.§2.toMonad m))))
-- (§2-fromMonad (§2-3.§2.toMonad m))
≡⟨ ( cong-d {x = Monoidal→Kleisli ∘ Kleisli→Monoidal} {y = idFun K.Monad} (\ φ → §2-fromMonad (φ (§2-3.§2.toMonad m))) re-ve) ⟩
(§2-fromMonad ∘ §2-3.§2.toMonad) m
≡⟨ lemma ⟩
m
@ -174,24 +193,41 @@ module voe {a b : Level} ( : Category a b) where
§1-fromMonad
((Kleisli→Monoidal ∘ Monoidal→Kleisli)
(§2-3.§1.toMonad m))
-- Below is the fully normalized `agda2-goal-and-context`
-- with `funExt` made abstract.
--
-- Goal: PathP (λ _ → §2-3.§1 omap (λ {X} → pure))
-- (§1-fromMonad
-- (.Cat.Category.Monad.toMonoidal
-- (.Cat.Category.Monad.toKleisli (§2-3.§1.toMonad m))))
-- (§1-fromMonad (§2-3.§1.toMonad m))
-- Have: PathP
-- (λ i →
-- §2-3.§1
-- (RawFunctor.omap
-- (Functor.raw
-- (M.RawMonad.R
-- (M.Monad.raw
-- (funExt
-- (λ m₁ → M.Monad≡ (.Cat.Category.Monad.toMonoidalRawEq m₁)) i
-- (§2-3.§1.toMonad m))))))
-- (λ {X} →
-- fst
-- (M.RawMonad.pureNT
-- (M.Monad.raw
-- (funExt
-- (λ m₁ → M.Monad≡ (.Cat.Category.Monad.toMonoidalRawEq m₁)) i
-- (§2-3.§1.toMonad m))))
-- X))
-- (§1-fromMonad
-- (.Cat.Category.Monad.toMonoidal
-- (.Cat.Category.Monad.toKleisli (§2-3.§1.toMonad m))))
-- (§1-fromMonad (§2-3.§1.toMonad m))
≡⟨ (cong-d (\ φ → §1-fromMonad (φ (§2-3.§1.toMonad m))) ve-re) ⟩
§1-fromMonad (§2-3.§1.toMonad m)
≡⟨ lemmaz ⟩
m ∎
where
-- having eta equality on causes roughly the same work as checking this proof of foo,
-- which is quite expensive because it ends up reducing complex terms.
-- rhs = §1-fromMonad (Kleisli→Monoidal ((Monoidal→Kleisli (§2-3.§1.toMonad m))))
-- foo : §1-fromMonad (Kleisli→Monoidal (§2-3.§2.toMonad (§2-fromMonad (Monoidal→Kleisli (§2-3.§1.toMonad m)))))
-- ≡ §1-fromMonad (Kleisli→Monoidal ((Monoidal→Kleisli (§2-3.§1.toMonad m))))
-- §2-3.§1.fmap (foo i) = §2-3.§1.fmap rhs
-- §2-3.§1.join (foo i) = §2-3.§1.join rhs
-- §2-3.§1.RisFunctor (foo i) = §2-3.§1.RisFunctor rhs
-- §2-3.§1.pureN (foo i) = §2-3.§1.pureN rhs
-- §2-3.§1.joinN (foo i) = §2-3.§1.joinN rhs
-- §2-3.§1.isMonad (foo i) = §2-3.§1.isMonad rhs
lemmaz : §1-fromMonad (§2-3.§1.toMonad m) ≡ m
§2-3.§1.fmap (lemmaz i) = §2-3.§1.fmap m
§2-3.§1.join (lemmaz i) = §2-3.§1.join m