Merge branch 'dev'

This commit is contained in:
Frederik Hanghøj Iversen 2018-05-29 15:26:38 +02:00
commit 98cad057d5
40 changed files with 3505 additions and 1779 deletions

.gitignore (vendored, new file, 1 addition)

@@ -0,0 +1 @@
html/


@ -1,6 +1,13 @@
Change log Change log
========= =========
Version 1.6.0
-------------
This version mainly contains changes to the report.
This is the version I submit for my MSc..
Version 1.5.0 Version 1.5.0
------------- -------------
Prove postulates in `Cat.Wishlist`: Prove postulates in `Cat.Wishlist`:


@@ -1,5 +1,11 @@
build: src/**.agda
	agda --library-file ./libraries src/Cat.agda
clean:
	find src -name "*.agdai" -type f -delete
html:
	agda --html src/Cat.agda
upload: html
	scp -r html/ remote11.chalmers.se:www/cat/doc/

doc/.gitignore (vendored, 2 additions)

@@ -10,3 +10,5 @@
*.idx
*.ilg
*.ind
*.nav
*.snm


@@ -3,5 +3,74 @@ Talk about structure of library:
What can I say about reusability?

Meeting with Andrea May 18th
============================
App. 2 in HoTT gives typing rule for pathJ including a computational
rule for it.
If you have this computational rule definitionally, then you wouldn't
need to use `pathJprop`.
In discussion-section I mention HITs. I should remove this or come up
with a more elaborate example of something you could do, e.g.
something with pushouts in the category of sets.
The type Prop is a type where terms are *judgmentally* equal, not just
propositionally so.
Maybe mention that Andreas Källberg is working on proving the
initiality conjecture.
Intensional Type Theory (ITT): Judgmental equality is decidable
Extensional Type Theory (ETT): Reflection is enough to make judgmental
equality undecidable.
Reflection : a ≡ b → a = b
ITT does not have reflection.
HoTT ~ ITT + axiomatized univalence
Agda ~ ITT + K-rule
Coq ~ ITT (no K-rule)
Cubical Agda ~ ITT + Path + Glue
Prop is impredicative in Coq (whatever that means)
Prop ≠ hProp
Comments about abstract
-----------------------
Pattern matching for paths (?)
Intro
-----
Main feature of judgmental equality is the conversion rule.
Conor explained: K + eliminators ≡ pat. matching
Explain judgmental equality independently of type-checking
Soundness for equality means that if `x = y` then `x` and `y` must be
equal according to the theory/model.
Decidability of `=` is a necessary condition for typechecking to be
decidable.
Canonicity is a nice-to-have, though without canonicity terms can get
stuck: if we postulate results about judgmental equality, e.g. funext,
then we can construct a term of type natural number that is not a
numeral. Therefore canonicity for the natural numbers:
∀ t . ⊢ t : N , ∃ n : N . ⊢ t = sⁿ 0 : N
is a sufficient condition for a well-behaved equality.
Eta-equality for RawFunctor means that the associative law for
functors holds definitionally.
The computational property of funExt is only relevant in two places in
my whole formalization. Univalence and gradLemma do not influence any
proofs.


@@ -1,21 +1,24 @@
\chapter*{Abstract}
The usual notion of propositional equality in intensional type theory
is restrictive. For instance it admits neither functional
extensionality nor univalence. This poses a severe limitation on both
what is \emph{provable} and the \emph{re-usability} of proofs. Recent
developments have, however, resulted in cubical type theory, which
permits a constructive proof of these two important notions. The
programming language Agda has been extended with capabilities for
working in such a cubical setting. This thesis will explore the
usefulness of this extension in the context of category theory.

The thesis will motivate the need for univalence and explain why
propositional equality in cubical Agda is more expressive than in
standard Agda. Alternative approaches to cubical Agda will be
presented and their pros and cons will be explained. As an example of
the application of univalence, two formulations of monads will be
presented: namely monads in the monoidal form and monads in the
Kleisli form. Under the univalent interpretation it will be shown
that these are equal.

Finally the thesis will explain the challenges that a developer will
face when working with cubical Agda and give some techniques to
overcome these difficulties. It will also suggest how further work
can help alleviate some of these challenges.

doc/acknowledgement.tex (new file, 1 addition)

@@ -0,0 +1 @@
\chapter*{Acknowledgements}

doc/acknowledgements.tex (new file, 13 additions)

@@ -0,0 +1,13 @@
\chapter*{Acknowledgements}
I would like to thank my supervisor Thierry Coquand for giving me a
chance to work on this interesting topic. I would also like to thank
Andrea Vezzosi for some very long and very insightful meetings during
the project. It is fascinating and almost uncanny how quickly Andrea
can conjure up various proofs. I also want to recognize the support
of Knud Højgaards Fond, who graciously sponsored me with a 20.000 DKK
scholarship which helped towards sponsoring the two years I have spent
studying abroad. I would also like to give warm thanks to my fellow
students Pierre Kraft and Nachiappan Villiappan, who have made the
time spent working on the thesis much more enjoyable. Lastly I would
like to give special thanks to Valentina Méndez, who has been a great
moral support throughout the whole process.


@@ -0,0 +1,74 @@
\chapter{Non-reducing functional extensionality}
\label{app:abstract-funext}
In two places in my formalization the computational behaviour of
functional extensionality was used. The reduction behaviour can be
disabled by marking functional extensionality as abstract. Below the
fully normalized goal and context with functional extensionality
marked as abstract are shown. The excerpts are from the module
%
\begin{center}
\sourcelink{Cat.Category.Monad.Voevodsky}
\end{center}
%
where this is also written as a comment next to the proofs. When
functional extensionality is not abstract the goal and current value
are the same. It is of course necessary to show the fully normalized
goal and context, as otherwise the reduction behaviour is not forced.
\subsubsection*{First goal}
Goal:
\begin{verbatim}
PathP (λ _ → §2-3.§2 omap (λ {z} → pure))
(§2-fromMonad
(.Cat.Category.Monad.toKleisli
(.Cat.Category.Monad.toMonoidal (§2-3.§2.toMonad m))))
(§2-fromMonad (§2-3.§2.toMonad m))
\end{verbatim}
Have:
\begin{verbatim}
PathP
(λ i →
§2-3.§2 K.IsMonad.omap
(K.RawMonad.pure
(K.Monad.raw
(funExt (λ m₁ → K.Monad≡ (.Cat.Category.Monad.toKleisliRawEq m₁))
i (§2-3.§2.toMonad m)))))
(§2-fromMonad
(.Cat.Category.Monad.toKleisli
(.Cat.Category.Monad.toMonoidal (§2-3.§2.toMonad m))))
(§2-fromMonad (§2-3.§2.toMonad m))
\end{verbatim}
\subsubsection*{Second goal}
Goal:
\begin{verbatim}
PathP (λ _ → §2-3.§1 omap (λ {X} → pure))
(§1-fromMonad
(.Cat.Category.Monad.toMonoidal
(.Cat.Category.Monad.toKleisli (§2-3.§1.toMonad m))))
(§1-fromMonad (§2-3.§1.toMonad m))
\end{verbatim}
Have:
\begin{verbatim}
PathP
(λ i →
§2-3.§1
(RawFunctor.omap
(Functor.raw
(M.RawMonad.R
(M.Monad.raw
(funExt
(λ m₁ → M.Monad≡ (.Cat.Category.Monad.toMonoidalRawEq m₁)) i
(§2-3.§1.toMonad m))))))
{X}
fst
(M.RawMonad.pureNT
(M.Monad.raw
(funExt
(λ m₁ → M.Monad≡ (.Cat.Category.Monad.toMonoidalRawEq m₁)) i
(§2-3.§1.toMonad m))))
X))
(§1-fromMonad
(.Cat.Category.Monad.toMonoidal
(.Cat.Category.Monad.toKleisli (§2-3.§1.toMonad m))))
(§1-fromMonad (§2-3.§1.toMonad m))
\end{verbatim}

BIN doc/assets/isomorphism.pdf (new file; binary file not shown)

(image diff: before and after both 266 KiB)


@@ -46,7 +46,8 @@
{\Huge\@title}\\[.5cm]
{\Large A formalization of category theory in Cubical Agda}\\[6cm]
\begin{center}
\includegraphics[width=\linewidth,keepaspectratio]{assets/isomorphism.pdf}
%% \includepdf{isomorphism.pdf}
\end{center}
% Cover text
\vfill


@@ -1,45 +1,53 @@
\chapter{Conclusion}
This thesis highlighted some issues with the standard inductive
definition of propositional equality used in Agda. Functional
extensionality and univalence are examples of two propositions not
admissible in Intensional Type Theory (ITT). This has a big impact on
what is provable and the reusability of proofs. This issue is
overcome with an extension to Agda's type system called Cubical Agda.
With Cubical Agda both functional extensionality and univalence are
admissible. Cubical Agda is more expressive, but certain issues arise
that are not present in standard Agda. For one thing Agda enjoys
Uniqueness of Identity Proofs (UIP), though a flag exists to turn this
off. This feature is not present in Cubical Agda. Rather than having
unique identity proofs, Cubical Agda gives rise to a hierarchy of
types with increasing \nomen{homotopical structure}{homotopy levels}.
It turns out to be useful to build the formalization with this
hierarchy in mind as it can simplify proofs considerably. Another
issue one must overcome in Cubical Agda is when a type has a field
whose type depends on a previous field. In this case paths between
such types will be heterogeneous paths. In practice it turns out to
be considerably more difficult to work with heterogeneous paths than
with homogeneous paths. The thesis demonstrated the application of
some techniques to overcome these difficulties, such as based path
induction.

This thesis formalizes some of the core concepts from category theory,
including categories, functors, products, exponentials, Cartesian
closed categories, natural transformations, the Yoneda embedding,
monads and more. Category theory is an interesting case study for the
application of cubical Agda for two reasons in particular: because
category theory is the study of the abstract algebra of functions,
functional extensionality is particularly relevant. Another reason is
that in category theory it is commonplace to identify isomorphic
structures. Univalence allows for making this notion precise. This
thesis also demonstrated another technique that is common in category
theory; namely to define categories to prove properties of other
structures. Specifically a category was defined to demonstrate that
any two product objects in a category are isomorphic. Furthermore the
thesis showed two formulations of monads and proved that they indeed
are equivalent: namely monads in the monoidal and Kleisli forms. The
monoidal formulation is more typical of category-theoretic
presentations and the Kleisli formulation will be more familiar to
functional programmers. It would have been very difficult to make a
similar proof with setoids, and the proof would be very difficult to
read. In the formulation we also saw how paths can be used to extract
functions. A path between two types induces an isomorphism between
the two types. This e.g.\ permits developers to write a monad
instance for a given type using the Kleisli formulation. By
transporting along the path between the monoidal and Kleisli
formulations one can reuse all the operations and results shown for
monoidal monads in the context of Kleisli monads.
%%
%% problem with inductive type
%% overcome with cubical

@@ -1,36 +1,36 @@
\chapter{Cubical Agda}
\section{Propositional equality}
Judgmental equality in Agda is a feature of the type system. It is
something that can be checked automatically by the type checker: in
the example from the introduction $n + 0$ can be judged to be equal to
$n$ simply by expanding the definition of $+$.

On the other hand, propositional equality is something defined within
the language itself. Propositional equality cannot be derived
automatically. The normal definition of propositional equality is an
inductive data type. Cubical Agda discards this type in favor of some
new primitives.

Most of the source code related to this section is implemented in
\cite{cubical-demo}; it can be browsed as hyperlinked and
syntax-highlighted HTML online. The links can be found at the
beginning of section \S\ref{ch:implementation}.
\subsection{The equality type}
The usual notion of propositional equality says that given a type $A
\tp \MCU$ and two points hereof $a_0, a_1 \tp A$ we can form the type:
%
\begin{align}
a_0 \equiv a_1 \tp \MCU
\end{align}
%
In Agda this is defined as an inductive data type with the single
constructor $\refl$ that for any $a \tp A$ gives:
%
\begin{align}
\refl \tp a \equiv a
\end{align}
%
There also exists a related notion of \emph{heterogeneous} equality
which allows for equating points of different types. In this case
given two types $A, B \tp \MCU$ and two points $a \tp A$, $b \tp B$ we
can construct the type:
@@ -39,8 +39,8 @@ for equating points of different types. In this case given two types $A, B \tp
a \cong b \tp \MCU
\end{align}
%
This likewise has the single constructor $\refl$ that for any $a \tp
A$ gives:
%
\begin{align}
\refl \tp a \cong a
@@ -53,107 +53,106 @@ heterogeneous paths respectively.
Judgmental equality in Cubical Agda is encapsulated with the type:
%
\begin{equation}
\Path \tp (P \tp \I → \MCU) → P\ 0 → P\ 1 → \MCU
\end{equation}
%
The special type $\I$ is called the index set. The index set can be
thought of simply as the interval on the real numbers from $0$ to $1$
(both inclusive). The family $P$ over $\I$ will be referred to as the
\nomenindex{path space} given some path $p \tp \Path\ P\ a\ b$. By
that token $P\ 0$ corresponds to the type at the left endpoint of $p$.
Likewise $P\ 1$ is the type at the right endpoint. The type is called
$\Path$ because the idea has roots in homotopy theory. The intuition
is that $\Path$ describes paths in $\MCU$, i.e.\ paths between types.
For a path $p$ the expression $p\ i$ can be thought of as a
\emph{point} on this path. The index $i$ describes how far along the
path one has moved. An inhabitant of $\Path\ P\ a_0\ a_1$ is a
(dependent) function from the index set to the path space:
%
$$
p \tp \prod_{i \tp \I} P\ i
$$
%
which must be judgmentally equal to $a_0$ at the left endpoint and
equal to $a_1$ at the other end. I.e.:
%
\begin{align*}
p\ 0 & = a_0 \\
p\ 1 & = a_1
\end{align*}
%
The notion of \nomenindex{homogeneous equalities} is recovered when
$P$ does not depend on its argument. That is, for $A \tp \MCU$ and
$a_0, a_1 \tp A$ the homogeneous equality between $a_0$ and $a_1$ is
the type:
%
$$
a_0 \equiv a_1 \defeq \Path\ (\lambda\; i \to A)\ a_0\ a_1
$$
%
I will generally prefer to use the notation $a \equiv b$ when talking
about non-dependent paths and use the notation $\Path\ (\lambda\; i
\to P\ i)\ a\ b$ when the path space is of particular interest.

With this definition we can also recover reflexivity. That is, for
any $A \tp \MCU$ and $a \tp A$:
%
\begin{equation}
\begin{aligned}
\refl & \tp a \equiv a \\
\refl & \defeq \lambda\; i \to a
\end{aligned}
\end{equation}
%
Here the path space is $P \defeq \lambda\; i \to A$ and it satisfies
$P\ i = A$ definitionally. So to inhabit it is to give a path $\I \to
A$ which is judgmentally $a$ at either endpoint. This is satisfied by
the constant path; i.e.\ the path that is constantly $a$ at any index
$i \tp \I$.
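The constant path above can be written out directly in cubical Agda. This is a sketch assuming the module layout of an agda/cubical-style prelude (`Cubical.Core.Everything` exposing the interval and the path type); the prelude used by the thesis itself may expose different names.

```agda
{-# OPTIONS --cubical #-}
module PathSketch where

-- Assumption: this module provides the interval `I` and the
-- (non-dependent) path type `_≡_`; names vary between versions.
open import Cubical.Core.Everything

-- The constant path: at every index `i` it is `a`, so it is
-- judgmentally `a` at both endpoints.
myRefl : {A : Set} {a : A} → a ≡ a
myRefl {a = a} = λ i → a
```

The body is literally the λ-abstraction from the text: a function out of the index set that ignores its argument.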
It is also surprisingly easy to show functional extensionality.
Functional extensionality is the proposition that, given a type $A
\tp \MCU$, a family of types $B \tp A \to \MCU$ and functions $f, g
\tp \prod_{a \tp A} B\ a$, we have:
%
\begin{equation}
\label{eq:funExt}
\funExt \tp \left(\prod_{a \tp A} f\ a \equiv g\ a \right) \to f \equiv g
\end{equation}
%
%% p = λ\; i a → p a i
So given $η \tp \prod_{a \tp A} f\ a \equiv g\ a$ we must give a path
$f \equiv g$. That is a function $\I \to \prod_{a \tp A} B\ a$. So
let $i \tp \I$ be given. We must now give an expression $\phi \tp
\prod_{a \tp A} B\ a$ satisfying $\phi\ 0 \equiv f\ a$ and $\phi\ 1
\equiv g\ a$. This necessitates that the expression must be a lambda
abstraction, so let $a \tp A$ be given. Now we can apply $a$ to $η$
and get the path $η\ a \tp f\ a \equiv g\ a$. And this exactly
satisfies the conditions for $\phi$. In conclusion \ref{eq:funExt} is
inhabited by the term:
%
\begin{equation*}
\funExt\ η \defeq λ\; i\ a → η\ a\ i
\end{equation*}
%
With $\funExt$ in place we can now construct a path between
$\var{zeroLeft}$ and $\var{zeroRight}$, the functions defined in the
introduction \S\ref{sec:functional-extensionality}:
%
\begin{align*}
p & \tp \var{zeroLeft} \equiv \var{zeroRight} \\
p & \defeq \funExt\ \var{zrn}
\end{align*}
%
Here $\var{zrn}$ is the proof from \ref{eq:zrn}.
%
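The derivation of functional extensionality can be rendered as a few lines of cubical Agda. A sketch, again assuming an agda/cubical-style prelude; the name `funExt′` is chosen here to avoid clashing with a library definition.

```agda
{-# OPTIONS --cubical #-}
module FunExtSketch where

-- Assumption: provides the interval and the path type `_≡_`.
open import Cubical.Core.Everything

-- funExt as derived in the text: flip the order of the index `i`
-- and the argument `a`. At i = 0 the result is λ a → f a, i.e. f
-- (by eta), and at i = 1 it is g.
funExt′ : {A : Set} {B : A → Set} {f g : (a : A) → B a}
        → ((a : A) → f a ≡ g a) → f ≡ g
funExt′ η = λ i a → η a i
```

The swap of binders is the whole proof, which is why the text calls it surprisingly easy.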
\section{Homotopy levels}
In ITT all equality proofs are identical (in a closed context). This
means that, in some sense, any two inhabitants of $a \equiv b$ are
``equally good''. They do not have any interesting structure. This
is referred to as Uniqueness of Identity Proofs (UIP). Unfortunately
it is not possible to have a type theory with both univalence and
UIP. Instead in cubical Agda we have a hierarchy of types with an
increasing amount of homotopic structure. At the bottom of this
hierarchy is the set of contractible types:
%
\begin{equation}
\begin{aligned}
@@ -165,11 +164,12 @@ have the set of contractible types:
\end{equation}
%
The first component of $\isContr\ A$ is called ``the center of
contraction''. Under the propositions-as-types interpretation of
type theory $\isContr\ A$ can be thought of as ``the true proposition
$A$''. And indeed $\top$ is contractible:
%
\begin{equation*}
(\var{tt} , \lambda\; x \to \refl) \tp \isContr\ \top
\end{equation*}
%
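The contractibility of the unit type can be spelled out in cubical Agda. A self-contained sketch under the assumption that `Cubical.Core.Everything` (or an equivalent prelude) exports `Σ`, `_,_` and `_≡_`; `⊤` is defined locally to keep the sketch standalone.

```agda
{-# OPTIONS --cubical #-}
module ContrSketch where

-- Assumption: exports Σ-types and the path type `_≡_`.
open import Cubical.Core.Everything

data ⊤ : Set where
  tt : ⊤

-- `isContr` as in the text: a point (the center of contraction)
-- together with a path from it to every other point.
isContr : Set → Set
isContr A = Σ A (λ c → (x : A) → c ≡ x)

-- ⊤ is contractible: `tt` is the center and, after matching on the
-- other point, the constant path connects the two.
⊤-isContr : isContr ⊤
⊤-isContr = tt , λ { tt → λ i → tt }
```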
It is a theorem that if a type is contractible, then it is isomorphic to the
@@ -184,23 +184,23 @@ The next step in the hierarchy is the set of mere propositions:
\end{aligned}
\end{equation}
%
One can think of $\isProp\ A$ as the set of true and false
propositions. And indeed both $\top$ and $\bot$ are propositions:
%
\begin{align*}
(λ\; \var{tt}\ \var{tt} → \refl) & \tp \isProp\ \top \\
(λ\; \varnothing\ \varnothing) & \tp \isProp\ \bot
\end{align*}
%
The term $\varnothing$ is used here to denote an impossible pattern.
It is a theorem that if a mere proposition $A$ is inhabited, then it
is also contractible. If it is not inhabited it is equivalent to the
empty type (or false proposition).

I will refer to a type $A \tp \MCU$ as a \emph{mere proposition} if I
want to stress that we have $\isProp\ A$.
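The two example propositions can be checked in cubical Agda. A sketch with the same prelude assumption as before; `⊤` and `⊥` are defined locally so the fragment stands alone.

```agda
{-# OPTIONS --cubical #-}
module PropSketch where

-- Assumption: exports the path type `_≡_`.
open import Cubical.Core.Everything

data ⊤ : Set where
  tt : ⊤

data ⊥ : Set where

-- A mere proposition: any two inhabitants are equal.
isProp : Set → Set
isProp A = (x y : A) → x ≡ y

-- Both arguments can only be `tt`; the constant path equates them.
⊤-isProp : isProp ⊤
⊤-isProp tt tt = λ i → tt

-- The absurd (impossible) pattern: ⊥ has no inhabitants.
⊥-isProp : isProp ⊥
⊥-isProp ()
```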
The next step in the hierarchy is the set of homotopical sets:
%
\begin{equation}
\begin{aligned}
@@ -209,13 +209,14 @@ The next step in the hierarchy is the set of homotopical sets:
\end{aligned}
\end{equation}
%
I will not give an example of a set at this point. It turns out that
proving e.g.\ $\isSet\ \bN$ directly is not so straightforward (see
\cite[\S3.1.4]{hott-2013}). Hedberg's theorem states that any type
with decidable equality is a set. There will be examples of sets
later in this report. At this point it should be noted that the term
``set'' is somewhat conflated; there is the notion of sets from set
theory, and in Agda types are denoted \texttt{Set}. I will use the
term consistently to refer to a type $A$ as a set exactly if $\isSet\
A$ holds.

As the reader may have guessed, the next step in the hierarchy is the
type:
%
@@ -228,8 +229,9 @@ As the reader may have guessed the next step in the hierarchy is the type:
%
And so it continues. In fact we can generalize this family of types
by indexing them with a natural number. For historical reasons,
though, the bottom of the hierarchy, the contractible types, is said
to be a \nomen{-2-type}{homotopy levels}, propositions are
\nomen{-1-types}{homotopy levels}, (homotopical) sets are
\nomen{0-types}{homotopy levels} and so on\ldots
Just as with paths, homotopical sets are not at the center of focus for this
thesis. But I mention here some properties that will be relevant for this
@ -238,90 +240,171 @@ exposition:
Proposition: Homotopy levels are cumulative. That is, if $A \tp \MCU$ has
homotopy level $n$ then so does it have $n + 1$.

For any level $n$ it is the case that to be of level $n$ is a mere proposition.
%
\section{A few lemmas}
Rather than getting into the nitty-gritty details of Agda I venture to
take a more ``combinator-based'' approach. That is, I will use
theorems about paths that have already been formalized. Specifically
the results come from the Agda library \texttt{cubical}
(\cite{cubical-demo}). I have used a handful of results from this
library as well as contributed a few lemmas myself%
\footnote{The module \texttt{Cat.Prelude} lists the upstream
  dependencies. As well my contribution to \texttt{cubical} can be
  found in the git logs which are available at
  \hrefsymb{https://github.com/Saizan/cubical-demo}{\texttt{https://github.com/Saizan/cubical-demo}}.
}.

These theorems are all purely related to homotopy type theory and as
such not specific to the formalization of Category Theory. I will
present a few of these theorems here as they will be used throughout
chapter \ref{ch:implementation}. They should also give the reader some
intuition about the path type.
\subsection{Path induction}
\label{sec:pathJ}
The induction principle for paths intuitively gives us a way to reason
about a type family indexed by a path by only considering if said path
is $\refl$ (the \nomen{base case}{path induction}). For \emph{based
  path induction}, that equality is \emph{based} at some element $a
\tp A$.

\begin{samepage}
Let a type $A \tp \MCU$ and an element of the type $a \tp A$ be
given. $a$ is said to be the base of the induction.\linebreak[3] Given
a family of types:
%
$$
D \tp \prod_{b \tp A} \prod_{p \tp a ≡ b} \MCU
$$
%
And an inhabitant of $D$ at $\refl$:
%
$$
d \tp D\ a\ \refl
$$
%
We have the function:
%
\begin{equation}
\pathJ\ D\ d \tp \prod_{b \tp A} \prod_{p \tp a ≡ b} D\ b\ p
\end{equation}
\end{samepage}%
A simple application of $\pathJ$ is for proving that $\var{sym}$ is an
involution. Namely for any set $A \tp \MCU$, points $a, b \tp A$ and a path
between them $p \tp a \equiv b$:
%
\begin{equation}
\label{eq:sym-invol}
\var{sym}\ (\var{sym}\ p) ≡ p
\end{equation}
%
The proof will be by induction on $p$ and will be based at $a$. That
is $D$ will be the family:
%
\begin{align*}
D & \tp \prod_{b' \tp A} \prod_{p \tp a ≡ b'} \MCU \\
D\ b'\ p' & \defeq \var{sym}\ (\var{sym}\ p') ≡ p'
\end{align*}
%
The base case will then be:
%
\begin{align*}
d & \tp \var{sym}\ (\var{sym}\ \refl) ≡ \refl \\
d & \defeq \refl
\end{align*}
%
The reason $\refl$ proves this is that $\var{sym}\ \refl = \refl$ holds
definitionally. In summary \ref{eq:sym-invol} is inhabited by the term:
%
\begin{align*}
\pathJ\ D\ d\ b\ p
\tp
\var{sym}\ (\var{sym}\ p) ≡ p
\end{align*}
%
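The summary above can also be sketched as Agda code. This is a sketch
only: the names $\var{pathJ}$, $\var{sym}$ and $\var{refl}$ are
assumed to be those of the \texttt{cubical} library and may differ
between versions.

```agda
-- Sketch of the proof above; `pathJ`, `sym` and `refl` are assumed
-- to be in scope from the ambient (cubical) library.
module _ {A : Set} {a : A} where
  symInvolution : {b : A} (p : a ≡ b) → sym (sym p) ≡ p
  symInvolution {b} p = pathJ D d b p
    where
    -- The family we do induction over; Agda cannot infer it for us.
    D : (b′ : A) → a ≡ b′ → Set
    D b′ p′ = sym (sym p′) ≡ p′
    -- The base case holds definitionally: sym refl = refl.
    d : D a refl
    d = refl
```

Note how the family $D$ has to be spelled out explicitly, a point
returned to in the discussion chapter.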
Another application of $\pathJ$ is for proving associativity of $\trans$. That
is, given a type $A \tp \MCU$, elements of $A$, $a, b, c, d \tp A$ and paths
between them $p \tp a \equiv b$, $q \tp b \equiv c$ and $r \tp c \equiv d$ we
have the following:
%
\begin{equation}
\label{eq:cum-trans}
\trans\ p\ (\trans\ q\ r) ≡ \trans\ (\trans\ p\ q)\ r
\end{equation}
%
In this case the induction will be based at $c$ (the left-endpoint of $r$) and
over the family:
%
\begin{align*}
T & \tp \prod_{d' \tp A} \prod_{r' \tp c ≡ d'} \MCU \\
T\ d'\ r' & \defeq \trans\ p\ (\trans\ q\ r') ≡ \trans\ (\trans\ p\ q)\ r'
\end{align*}
%
So the base case is proven with $t$ which is defined as:
%
\begin{align*}
\trans\ p\ (\trans\ q\ \refl) & ≡ \trans\ p\ q \\
& ≡ \trans\ (\trans\ p\ q)\ \refl
\end{align*}
%
Here we have used the proposition $\trans\ p\ \refl \equiv p$ without proof. In
conclusion \ref{eq:cum-trans} is inhabited by the term:
%
\begin{align*}
\pathJ\ T\ t\ d\ r
\end{align*}
%
We shall see another application of path induction in \ref{eq:pathJ-example}.
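The same proof can be sketched in Agda. Here $\var{transId} \tp
\trans\ p\ \refl \equiv p$ names the proposition used without proof
above; it and the other identifiers are assumptions about the ambient
library.

```agda
-- Sketch only; `pathJ`, `trans`, `cong`, `sym` and a lemma
-- `transId : ∀ p → trans p refl ≡ p` are assumed to be in scope.
module _ {A : Set} {a b c : A} (p : a ≡ b) (q : b ≡ c) where
  transAssoc : {d : A} (r : c ≡ d)
             → trans p (trans q r) ≡ trans (trans p q) r
  transAssoc {d} r = pathJ T t d r
    where
    T : (d′ : A) → c ≡ d′ → Set
    T d′ r′ = trans p (trans q r′) ≡ trans (trans p q) r′
    -- Base case: rewrite both sides with `transId`.
    t : T c refl
    t = trans (cong (trans p) (transId q)) (sym (transId (trans p q)))
```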
\subsection{Paths over propositions}
\label{sec:lemPropF}
Another very useful combinator is $\lemPropF$: Given a type $A \tp
\MCU$ and a type family on $A$; $D \tp A \to \MCU$. Let $\var{propD}
\tp \prod_{x \tp A} \isProp\ (D\ x)$ be the proof that $D$ is a mere
proposition for all elements of $A$. Furthermore say we have a path
between some two elements in $A$; $p \tp a_0 \equiv a_1$ then we can
build a heterogeneous path between any two elements of $d_0 \tp
D\ a_0$ and $d_1 \tp D\ a_1$.
%
$$
\lemPropF\ \var{propD}\ p \tp \Path\ (\lambda\; i \mto D\ (p\ i))\ d_0\ d_1
$$
%
Note that $d_0$ and $d_1$, though points of the same family, have
different types. This is quite a mouthful. So let me try to show how
this is a very general and useful result.
Often when proving equalities between elements of some dependent types
$\lemPropF$ can be used to boil this complexity down to showing that
the dependent parts of the type are mere propositions. For instance
say we have a type:
%
$$
T \defeq \sum_{a \tp A} D\ a
$$
%
For some proposition $D \tp A \to \MCU$. That is we have $\var{propD}
\tp \prod_{a \tp A} \isProp\ (D\ a)$. If we want to prove $t_0 \equiv
t_1$ for two elements $t_0, t_1 \tp T$ then this will be a pair of
paths:
%
\begin{align*}
  p \tp & \fst\ t_0 \equiv \fst\ t_1 \\
  & \Path\ (\lambda\; i \to D\ (p\ i))\ (\snd\ t_0)\ (\snd\ t_1)
\end{align*}
%
Here $\lemPropF$ directly allows us to prove the latter of these given
that we have already provided $p$.
%
$$
\lemPropF\ \var{propD}\ p
  \tp \Path\ (\lambda\; i \to D\ (p\ i))\ (\snd\ t_0)\ (\snd\ t_1)
$$
%
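As an Agda sketch this usage looks as follows; the name and argument
order of $\lemPropF$ follow the \texttt{cubical} library and are
assumptions here.

```agda
-- Sketch: given a proof that D is a family of propositions, a path in
-- the base type lifts to a heterogeneous path in the family.
-- `lemPropF`, `isProp` and `PathP` are assumed to be in scope.
module _ {A : Set} {D : A → Set}
         (propD : (x : A) → isProp (D x)) where
  secondComponent : {a₀ a₁ : A} (p : a₀ ≡ a₁)
                    {d₀ : D a₀} {d₁ : D a₁}
                  → PathP (λ i → D (p i)) d₀ d₁
  secondComponent p = lemPropF propD p
```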
\subsection{Functions over propositions}
@ -335,9 +418,9 @@ $$
\subsection{Pairs over propositions}
\label{sec:propSig}
%
$\sum$-types preserve propositionality whenever the first component is
a proposition, and the second component is a proposition for all
points of the left type.
%
$$
\mathit{propSig} \tp \isProp\ A \to \left(\prod_{a \tp A} \isProp\ (P\ a)\right) \to \isProp\ \left(\sum_{a \tp A} P\ a\right)


@ -1,74 +1,139 @@
\chapter{Perspectives}
\section{Discussion}
In the previous chapter the practical aspects of proving things in
Cubical Agda were highlighted. I also demonstrated the usefulness of
separating ``laws'' from ``data''. One of the reasons for this is that
dependencies within types can lead to very complicated goals. One
technique for alleviating this was to prove that certain types are
mere propositions.
\subsection{Computational properties}
The new contribution of cubical Agda is that it has a constructive
proof of functional extensionality\index{functional extensionality}
and univalence\index{univalence}. This means in particular that the
type checker can reduce terms defined with these theorems. So one
interesting question is how much this influenced the development. In
particular, having a functional extensionality that ``computes''
should simplify some proofs.

I have tested this by using a feature of Agda where one can mark
certain bindings as being \emph{abstract}. This means that the
type-checker will not try to reduce that term further during type
checking. I tried making univalence and functional extensionality
abstract. It turns out that the conversion behaviour of univalence is
not used anywhere. For functional extensionality there are two places
in the whole solution where the reduction behaviour is used to
simplify some proofs. This is in showing that the maps between the
two formulations of monads are inverses. See the notes in this
module:
%
\begin{center}
  \sourcelink{Cat.Category.Monad.Voevodsky}
\end{center}
%
I will not reproduce it in full here as the type is quite involved.
Instead I have put this in a source listing in \ref{app:abstract-funext}.
The method used to find in what places the computational behaviour of
these proofs is needed has the caveat of only working for places that
directly or transitively use these two proofs. Fortunately though
the code is structured in such a way that this is the case. So in
conclusion the way I have structured these proofs means that the
computational behaviour of functional extensionality and univalence
has not been so relevant.

Barring this the computational behaviour of paths can still be useful.
E.g.\ if a programmer wants to reuse functions that operate on
monoidal monads to work with a monad in the Kleisli form that the
programmer has specified. To make this idea concrete, say we are
given some function $f \tp \Kleisli \to T$. A path $p
\tp \Monoidal \equiv \Kleisli$ induces a map $\coe\ p \tp \Monoidal
\to \Kleisli$. We can compose $f$ with this map to get $f \comp
\coe\ p \tp \Monoidal \to T$. Of course, since that map was
constructed with an isomorphism these maps already exist and could be
used directly. So this is arguably only interesting when one also
wants to prove properties of applying such functions.
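The composition described here can be sketched in Agda; the names
\texttt{Monoidal}, \texttt{Kleisli}, \texttt{T} and the path
\texttt{p} are hypothetical stand-ins for the actual definitions in
the library, and \texttt{coe} is assumed to be the coercion along a
path from the ambient cubical library.

```agda
-- Sketch only: `Monoidal`, `Kleisli`, `T` and `p` are hypothetical
-- stand-ins; `coe` turns a path between types into a function.
module _ (Monoidal Kleisli T : Set)
         (p : Monoidal ≡ Kleisli)
         (f : Kleisli → T) where

  monoidalToKleisli : Monoidal → Kleisli
  monoidalToKleisli = coe p

  -- A function on the Kleisli formulation reused on the monoidal one.
  g : Monoidal → T
  g m = f (monoidalToKleisli m)
```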
\subsection{Reusability of proofs}
The previous example illustrates how univalence unifies two otherwise
disparate areas: The category-theoretic study of monads; and monads as
in functional programming. Univalence thus allows one to reuse proofs.
You could say that univalence gives the developer two proofs for the
price of one. As an illustration of this I proved that monads are
groupoids. I initially proved this for the Kleisli
formulation\footnote{Actually doing this directly turned out to be
  tricky as well, so I defined an equivalent formulation which was not
  formulated with a record, but purely with $\sum$-types.}. Since the
two formulations are equal under univalence, substitution directly
gives us that this also holds for the monoidal formulation. This of
course generalizes to any family $P \tp 𝒰 \to 𝒰$ where $P$ is inhabited
at either formulation (i.e.\ either $P\ \Monoidal$ or $P\ \Kleisli$
holds).
The introduction (section \S\ref{sec:context}) mentioned that a
typical way of getting access to functional extensionality is to work
with setoids. Nowhere in this formalization has this been necessary,
$\Path$ has been used globally in the project for propositional
equality. One interesting place where this becomes apparent is in
interfacing with the Agda standard library. Multiple definitions in
the Agda standard library have been designed with the
setoid-interpretation in mind. E.g.\ the notion of \emph{unique
  existential} is indexed by a relation that should play the role of
propositional equality. Equivalence relations are likewise indexed,
not only by the actual equivalence relation but also by another
relation that serves as propositional equality.
%% Unfortunately we cannot use the definition of equivalences found in
%% the standard library to do equational reasoning directly. The
%% reason for this is that the equivalence relation defined there must
%% be a homogenous relation, but paths are heterogeneous relations.
In the formalization at present a significant amount of energy has
been put towards proving things that would not have been needed in
classical Agda. The proofs that some given type is a proposition were
provided as a strategy to simplify some otherwise very complicated
proofs (e.g.\ \ref{eq:proof-prop-IsPreCategory}
and \ref{eq:productPath}). Often these proofs would not be this
complicated. If the J-rule holds definitionally the proof-assistant
can help simplify these goals considerably. The lack of the J-rule has
a significant impact on the complexity of these kinds of proofs.
\subsection{Motifs}
An oft-used technique in this development is using based path
induction to prove certain properties. One particular challenge that
arises when doing so is that Agda is not able to automatically infer
the family that one wants to do induction over. For instance in the
proof $\var{sym}\ (\var{sym}\ p) ≡ p$ from \ref{eq:sym-invol} the
family that we chose to do induction over was $D\ b'\ p' \defeq
\var{sym}\ (\var{sym}\ p') ≡ p'$. However, if one interactively tries
to give this hole, all the information that Agda can provide is that
one must provide an element of $𝒰$. Agda could be more helpful in this
context, perhaps even infer this family in some situations. In this
very simple example this is of course not a big problem, but there are
examples in the source code where this gets more involved.
\section{Future work}
\subsection{Agda \texttt{Prop}}
Jesper Cockx is working on extending the universe-level laws of Agda
and adding the \texttt{Prop} type: a universe of propositions whose
terms are judgmentally equal, not just propositionally so. It would be
interesting to investigate how this could simplify the present
formalization.
\subsection{Compiling Cubical Agda}
\label{sec:compiling-cubical-agda}
Compilation of programs written in Cubical Agda is currently not
supported. One issue here is that the backends do not provide an
implementation for the cubical primitives (such as the path-type).
This means that even though the path-type gives us a computational
interpretation of functional extensionality, univalence, transport,
etc., we do not have a way of actually using this to compile our
programs that use these primitives. It would be interesting to see
practical applications of this.
\subsection{Proving laws of programs}
Another interesting thing would be to use the Kleisli formulation of
monads to prove properties of functional programs. The existence of
univalence will make it possible to re-use proofs stated in terms of
the monoidal formulation in this setting.
%% \subsection{Higher inductive types}
%% This library has not explored the usefulness of higher inductive types
%% in the context of Category Theory.
\subsection{Initiality conjecture}
A fellow student at Chalmers, Andreas Källberg, is currently working
on proving the initiality conjecture. He will be using this library
to do so.


@ -0,0 +1,9 @@
App. 2 in HoTT gives typing rule for pathJ including a computational
rule for it.
If you have this computational rule definitionally, then you wouldn't
need to use `pathJprop`.
In discussion-section I mention HITs. I should remove this or come up
with a more elaborate example of something you could do, e.g.
something with pushouts in the category of sets.


@ -3,7 +3,7 @@ I've written this as an appendix because 1) the aim of the thesis changed
drastically from the planning report/proposal 2) partly I'm not sure how to
drastically from the planning report/proposal 2) partly I'm not sure how to drastically from the planning report/proposal 2) partly I'm not sure how to
structure my thesis. structure my thesis.
My work so far has very much focused on the formalization, i.e.\ coding. It's
unclear to me at this point what I should have in the final report. Here I will
describe what I have managed to formalize so far and what outstanding challenges
I'm facing.
@ -57,8 +57,8 @@ composition and identity, laws; preservation of identity and composition) plus
the extra condition that it is univalent - namely that you can get an equality
of two objects from an isomorphism.
I make no distinction between a pre category and a real category (as in the
[HoTT]-sense). A pre category in my implementation would be a category sans the
witness to univalence.

I also prove that being a category is a proposition. This gives rise to an

File diff suppressed because it is too large


@ -1,27 +1,88 @@
\chapter{Introduction}
This thesis is a case-study in the application of cubical Agda in the
context of category theory. At the center of this is the notion of
\nomenindex{equality}. In type-theory there are two pervasive notions
of equality: \nomenindex{judgmental equality} and
\nomenindex{propositional equality}. Judgmental equality is a property
of the type system. Propositional equality on the other hand is usually
defined \emph{within} the system. When introducing definitions this
report will use the symbol $\defeq$. Judgmental equalities will be
denoted with $=$ and for propositional equalities the notation
$\equiv$ is used.
The rules of judgmental equality are related with $β$- and
$η$-reduction which gives a notion of computation in a given type
theory.
%
There are some properties that one usually want judgmental equality to
satisfy. It must be \nomenindex{sound}, enjoy \nomenindex{canonicity}
and be a \nomenindex{congruence relation}. Soundness means that things
judged to be equal are equal with respects to the \nomenindex{model}
of the theory or the \emph{meta theory}. It must be a congruence
relation because otherwise the relation certainly does not adhere to
our notion of equality. One would be able to conclude things like: $x
\equiv y \rightarrow f\ x \nequiv f\ y$. Canonicity means that any
well typed term evaluates to a \emph{canonical} form. For example for
a closed term $e \tp \bN$ it will be the case that $e$ reduces to $n$
applications of $\mathit{suc}$ to $0$ for some $n$; $e =
\mathit{suc}^n\ 0$. Without canonicity terms in the language can get
``stuck'' meaning that they do not reduce to a canonical form.
To work as a programming language it is necessary for judgmental
equality to be \nomenindex{decidable}. Being decidable simply means
that an algorithm exists to decide whether two terms are equal.
For any practical implementation the decidability must also be
effectively computable.
For propositional equality the decidability requirement is relaxed. It
is not in general possible to decide the correctness of logical
propositions (cf.\ Hilbert's \emph{Entscheidungsproblem}).
There are two flavors of type-theory. \emph{Intensional-} and
\emph{extensional-} type theory (ITT and ETT respectively). Identity
types in extensional type theory are required to be
\nomen{propositions}{proposition}. That is, a type with at most one
inhabitant. In extensional type theory the principle of reflection
%
$$a ≡ b → a = b$$
%
is enough to make type checking undecidable. This report focuses on
Agda which at a glance can be thought of as a version of intensional
type theory. Pattern-matching in regular Agda lets one prove
\nomenindex{Uniqueness of Identity Proofs} (UIP). UIP states that any
two identity proofs are propositionally identical.
The usual notion of propositional equality in ITT is quite
restrictive. In the next section a few motivating examples will
highlight this. There exist techniques to circumvent these problems,
as we shall see. This thesis will explore an extension to Agda that
redefines the notion of propositional equality and as such is an
alternative to these other techniques. The extension is called cubical
Agda. Cubical Agda drops UIP as this does not permit
\nomenindex{functional extensionality} and
\nomenindex{univalence}. What makes this extension particularly
interesting is that it gives a \emph{constructive} interpretation of
univalence. What all this means will be elaborated in the following
sections.
%
\section{Motivating examples}
%
In the following two sections I present two examples that illustrate
some limitations inherent in ITT and -- by extension -- Agda.
%
\subsection{Functional extensionality}
\label{sec:functional-extensionality}%
Consider the functions:
%
\begin{align*}%
  \var{zeroLeft}  & \defeq \lambda\; (n \tp \bN) \to (0 + n \tp \bN) \\
  \var{zeroRight} & \defeq \lambda\; (n \tp \bN) \to (n + 0 \tp \bN)
\end{align*}%
%
The term $n + 0$ is \nomenindex{definitionally} equal to $n$, which we
write as $n + 0 = n$. This is also called \nomenindex{judgmental
  equality}. We call it definitional equality because the
\emph{equality} arises from the \emph{definition} of $+$ which is:
%
\begin{align*}
  + & \tp \bN \to \bN \to \bN \\
@ -29,166 +90,181 @@ which is:
  n + (\suc{m}) & \defeq \suc{(n + m)}
\end{align*}
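In Agda syntax this definition of addition reads as follows; here
\texttt{ℕ}, \texttt{zero} and \texttt{suc} are assumed to be the usual
natural numbers.

```agda
-- The definition of addition used above, in Agda syntax;
-- recursion is on the second argument.
_+_ : ℕ → ℕ → ℕ
n + zero  = n
n + suc m = suc (n + m)
```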
% %
Note that $0 + n$ is \emph{not} definitionally equal to $n$. $0 + n$ is in Note that $0 + n$ is \emph{not} definitionally equal to $n$. This is
normal form. I.e.; there is no rule for $+$ whose left-hand-side matches this because $0 + n$ is in normal form. I.e.\ there is no rule for $+$
expression. We \emph{do}, however, have that they are \nomen{propositionally} whose left hand side matches this expression. We do however have that
equal, which we write as $n + 0 \equiv n$. Propositional equality means that they are \nomen{propositionally}{propositional equality} equal, which
there is a proof that exhibits this relation. Since equality is a transitive we write as $n \equiv n + 0$. Propositional equality means that there
relation we have that $n + 0 \equiv 0 + n$. is a proof that exhibits this relation. We can do induction over $n$
to prove this:
Unfortunately we don't have $f \equiv g$.\footnote{Actually showing this is
outside the scope of this text. Essentially it would involve giving a model
for our type theory that validates all our axioms but where $f \equiv g$ is
not true.} There is no way to construct a proof asserting the obvious
equivalence of $f$ and $g$ -- even though we can prove them equal for all
%
\begin{align}
\label{eq:zrn}
\begin{split}
\var{zrn}\ & \tp ∀ n → n ≡ \var{zeroRight}\ n \\
\var{zrn}\ \var{zero} & \defeq \var{refl} \\
\var{zrn}\ (\var{suc}\ n) & \defeq \var{cong}\ \var{suc}\ (\var{zrn}\ n)
\end{split}
\end{align}
%
This shows that zero is a right neutral element, hence the name
$\var{zrn}$. Since equality is a transitive relation we have that
$\forall n \to \var{zeroLeft}\ n \equiv \var{zeroRight}\ n$.
Unfortunately we do not have $\var{zeroLeft} \equiv \var{zeroRight}$:
there is no way to construct a proof asserting the obvious equivalence
of $\var{zeroLeft}$ and $\var{zeroRight}$. Actually showing this is
outside the scope of this text. Essentially it would involve giving a
model for our type theory that validates all our axioms but where
$\var{zeroLeft} \equiv \var{zeroRight}$ does not hold. We cannot show
that they are equal even though we can prove them equal at all points.
For functions this is exactly the notion of equality that we are
interested in: functions are considered equal when they are equal for
all inputs. This is called \nomenindex{pointwise equality}, where the
\emph{points} of a function refer to its arguments.
%
\subsection{Equality of isomorphic types}
%
Let $\top$ denote the unit type -- a type with a single constructor.
In the propositions-as-types interpretation of type theory $\top$ is
the proposition that is always true. The type $A \x \top$ has exactly
one element for each $a \tp A$, so in a sense the two types have the
same shape (Greek: \nomenindex{isomorphic}). The second element of the
pair does not add any ``interesting information''. It can be useful to
identify such types. In fact, it is quite commonplace in mathematics.
Say we look at a set $\{x \mid \phi\ x \land \psi\ x\}$ and somehow
conclude that $\psi\ x \equiv \top$ for all $x$. A mathematician would
immediately conclude $\{x \mid \phi\ x \land \psi\ x\} \equiv \{x \mid
\phi\ x\}$ without thinking twice. Unfortunately such an
identification cannot be performed in ITT.
More specifically what we are interested in is a way of identifying
\nomenindex{equivalent} types. I will return to the definition of
equivalence later, in \S\ref{sec:equiv}, but for now it is sufficient
to think of an equivalence as a one-to-one correspondence. We write
$A \simeq B$ to assert that $A$ and $B$ are equivalent types. The
principle of univalence says that:
%
$$\mathit{univalence} \tp (A \simeq B) \simeq (A \equiv B)$$
%
In particular this allows us to construct an equality from an
equivalence
%
$$\mathit{ua} \tp (A \simeq B) \to (A \equiv B)$$
%
and vice versa.
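For finite types an equivalence in this sense is simply a one-to-one
correspondence. A minimal Python sketch (names are mine, purely
illustrative) of such a correspondence, with both round-trip laws
checked:

```python
# A one-to-one correspondence between two finite "types", packaged as
# a map together with its inverse -- a crude stand-in for A ≃ B.
A = [0, 1, 2]
B = ['a', 'b', 'c']

to   = dict(zip(A, B))                 # one direction
back = {b: a for a, b in to.items()}   # its inverse

def is_equivalence(to, back, A, B):
    """Both round trips are the identity."""
    return (all(back[to[a]] == a for a in A) and
            all(to[back[b]] == b for b in B))

assert is_equivalence(to, back, A, B)
```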
\section{Formalizing Category Theory}
%
The above examples serve to illustrate a limitation of ITT. One case
where these limitations are particularly prohibitive is in the study
of category theory. At a glance category theory can be described as
``the mathematical study of (abstract) algebras of functions''
(\cite{awodey-2006}). By that token functional extensionality is
particularly useful for formulating category theory. In category
theory it is also commonplace to identify isomorphic structures, and
univalence gives us a way to make this notion precise. In fact we can
formulate this requirement within our formulation of categories by
requiring the \emph{categories} themselves to be univalent, as we
shall see in \S\ref{sec:univalence}.
\section{Context}
\label{sec:context}
%
The idea of formalizing category theory in proof assistants is not
new. There are a multitude of such formalizations available online.
Notably:
%
\begin{itemize}
\item
  A formalization in Agda using the setoid approach:
  \url{https://github.com/copumpkin/categories}
\item
  A formalization in Agda with univalence and functional
  extensionality as postulates:
  \url{https://github.com/pcapriotti/agda-categories}
\item
  A formalization in Coq in the homotopic setting:
  \url{https://github.com/HoTT/HoTT/tree/master/theories/Categories}
\item
  A formalization in \emph{CubicalTT} -- a language designed for
  cubical type theory. It formalizes many different things, but only a
  few concepts from category theory:
  \url{https://github.com/mortberg/cubicaltt}
\end{itemize}
%
The contribution of this thesis is to explore how working in a cubical
setting makes it possible to prove more things and to reuse proofs,
and to compare some aspects of this formalization with the existing
ones.
There are alternative approaches to working in a cubical setting where
one can still have univalence and functional extensionality. One
option is to postulate these as axioms. This approach, however, has
other shortcomings; e.g.\ you lose \nomenindex{canonicity}
(\cite[p.~3]{huber-2016}). Canonicity means that any well-typed term
evaluates to a \emph{canonical} form. For example, for a closed term
$e \tp \bN$ it will be the case that $e$ reduces to $n$ applications
of $\mathit{suc}$ to $0$ for some $n$; $e = \mathit{suc}^n\ 0$.
Without canonicity, terms in the language can get ``stuck'', meaning
that they do not reduce to a canonical form.
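Canonicity can be illustrated concretely. The following Python sketch
(a toy evaluator; all names are mine, purely illustrative) maps a
closed natural-number term to its canonical form $\mathit{suc}^n\ 0$:

```python
# Canonical forms of closed terms of type ℕ: suc applied n times to 0.
def suc(t):
    return ('suc', t)

ZERO = 'zero'

def canonical(n):
    """Return the canonical form suc^n 0."""
    t = ZERO
    for _ in range(n):
        t = suc(t)
    return t

# The closed term 2 + 1 evaluates to the canonical form suc (suc (suc 0)).
assert canonical(2 + 1) == suc(suc(suc(ZERO)))
```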
Another approach is to use the \emph{setoid interpretation} of type
theory (\cite{hofmann-1995,huber-2016}). With this approach one works
with \nomenindex{extensional sets} $(X, \sim)$; that is, a type $X \tp
\MCU$ together with an equivalence relation $\sim\ \tp X \to X \to
\MCU$ on that type. Under the setoid interpretation the equivalence
relation serves as a sort of ``local'' propositional equality. This
approach has drawbacks: it does not satisfy all propositional
equalities of type theory a~priori. Since the developer gets to pick
this relation it is not a~priori a congruence relation, so this must
be verified manually. Furthermore, functions between different setoids
must be shown to be setoid homomorphisms; that is, they must preserve
the relation. Equational proofs $a \sim_{X} b$ are in some sense
`local' to the extensional set $(X , \sim)$. To e.g.\ prove that $x ≡
y → f\ x ≡ f\ y$ for some function $f \tp A → B$ between two
extensional sets $A$ and $B$, it must be shown that $f$ is a setoid
homomorphism. This makes it very cumbersome to work with in practice
(\cite[p.~4]{huber-2016}).
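The proof obligations of the setoid approach can be made concrete. The
Python sketch below (the carrier, relation and functions are my own
toy choices) checks whether a function between two setoids preserves
the relation, i.e.\ is a setoid homomorphism:

```python
# A setoid: a carrier together with an equivalence relation serving as
# a "local" propositional equality. Here: integers under congruence
# modulo 3 (a purely illustrative choice).
carrier = range(-9, 10)
sim = lambda x, y: (x - y) % 3 == 0

def is_setoid_hom(f, sim_src, sim_dst, elems):
    """The manual proof obligation: f preserves the relation."""
    return all(sim_dst(f(x), f(y))
               for x in elems for y in elems if sim_src(x, y))

# Doubling preserves congruence mod 3 ...
assert is_setoid_hom(lambda x: 2 * x, sim, sim, carrier)
# ... whereas integer halving does not (e.g. 0 ~ 3 but 0 !~ 1).
assert not is_setoid_hom(lambda x: x // 2, sim, sim, carrier)
```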
\section{Conventions}
In the remainder of this paper I will use the term \nomenindex{Type}
to describe -- well -- types, thereby departing from the notation in
Agda where the keyword \texttt{Set} refers to types.
\nomenindex{Set}, on the other hand, shall refer to the homotopical
notion of a set. I will also leave all universe levels implicit. This
of course does not mean that a statement such as $\MCU \tp \MCU$ means
that we have type-in-type, but rather that the arguments to the
universes are implicit.

I use the term \nomenindex{arrow} to refer to morphisms in a category,
whereas the terms \nomenindex{morphism}, \nomenindex{map} and
\nomenindex{function} shall be reserved for talking about type
theoretic functions, i.e.\ functions in Agda.

As already noted, $\defeq$ will be used for introducing definitions,
$=$ will be used for judgmental equality and $\equiv$ will be used for
propositional equality.
All this is summarized in the following table:
%
\begin{samepage}
\begin{center}
\begin{tabular}{ c c c }
Name & Agda & Notation \\
\hline
\varindex{Type} & \texttt{Set} & $\Type$ \\
\varindex{Set} & \texttt{Σ Set IsSet} & $\Set$ \\
Function, morphism, map & \texttt{A → B} & $A → B$ \\
Dependent- ditto & \texttt{(a : A) → B} & $∏_{a \tp A} B$ \\
\varindex{Arrow} & \texttt{Arrow A B} & $\Arrow\ A\ B$ \\
\varindex{Object} & \texttt{C.Object} & $ℂ.Object$ \\
Definition & \texttt{=} & $\defeq$ \\
Judgmental equality & \null & $=$ \\
Propositional equality & \null & $\equiv$
\end{tabular}
\end{center}
\end{samepage}


@@ -9,15 +9,15 @@
%% Alternatively:
%% \newcommand{\defeq}{}
\newcommand{\bN}{\mathbb{N}}
\newcommand{\bC}{ℂ}
\newcommand{\bD}{𝔻}
\newcommand{\bX}{𝕏}
% \newcommand{\to}{\rightarrow}
%% \newcommand{\mto}{\mapsto}
\newcommand{\mto}{\rightarrow}
\newcommand{\UU}{\ensuremath{\mathcal{U}}\xspace}
\let\type\UU
\newcommand{\MCU}{\UU}
\newcommand{\todo}[1]{\textit{#1}}
\newcommand{\comp}{\circ}
\newcommand{\x}{\times}
@@ -33,7 +33,9 @@
}
\makeatother
\newcommand{\var}[1]{\ensuremath{\mathit{#1}}}
\newcommand{\varindex}[1]{\ensuremath{\var{#1}}\index{$\var{#1}$}}
\newcommand{\nomen}[2]{\emph{#1}\index{#2}}
\newcommand{\nomenindex}[1]{\nomen{#1}{#1}}
\newcommand{\Hom}{\varindex{Hom}}
\newcommand{\fmap}{\varindex{fmap}}
@@ -71,13 +73,17 @@
\newcommand\isequiv{\varindex{isequiv}}
\newcommand\qinv{\varindex{qinv}}
\newcommand\fiber{\varindex{fiber}}
\newcommand\shufflef{\varindex{shuffle}}
\newcommand\Univalent{\varindex{Univalent}}
\newcommand\refl{\varindex{refl}}
\newcommand\isoToId{\varindex{isoToId}}
\newcommand\Isomorphism{\varindex{Isomorphism}}
\newcommand\rrr{\ggg}
%% \newcommand\fish{\mathbin{}}
%% \newcommand\fish{\mathbin{}}
\newcommand\fish{\mathbin{}}
%% \newcommand\fish{\mathbin{}}
%% \newcommand\fish{\mathrel{\wideoverbar{\rrr}}}
\newcommand\fst{\varindex{fst}}
\newcommand\snd{\varindex{snd}}
\newcommand\Path{\varindex{Path}}
@@ -86,9 +92,39 @@
\newcommand*{\QED}{\hfill\ensuremath{\square}}%
\newcommand\uexists{\exists!}
\newcommand\Arrow{\varindex{Arrow}}
\newcommand\embellish[1]{\widehat{#1}}
\newcommand\nattrans[1]{\embellish{#1}}
\newcommand\functor[1]{\embellish{#1}}
\newcommand\NTsym{\varindex{NT}}
\newcommand\NT[2]{\NTsym\ #1\ #2}
\newcommand\Endo[1]{\varindex{Endo}\ #1}
\newcommand\EndoR{\functor{\mathcal{R}}}
\newcommand\omapR{\mathcal{R}}
\newcommand\omapF{\mathcal{F}}
\newcommand\omapG{\mathcal{G}}
\newcommand\FunF{\functor{\omapF}}
\newcommand\FunG{\functor{\omapG}}
\newcommand\funExt{\varindex{funExt}}
\newcommand{\suc}[1]{\varindex{suc}\ #1}
\newcommand{\trans}{\varindex{trans}}
\newcommand{\toKleisli}{\varindex{toKleisli}}
\newcommand{\toMonoidal}{\varindex{toMonoidal}}
\newcommand\pairA{\mathcal{A}}
\newcommand\pairB{\mathcal{B}}
\newcommand{\joinNT}{\functor{\varindex{join}}}
\newcommand{\pureNT}{\functor{\varindex{pure}}}
\newcommand{\hrefsymb}[2]{\href{#1}{#2 \ExternalLink}}
\newcommand{\sourcebasepath}{http://web.student.chalmers.se/\textasciitilde hanghj/cat/doc/html/}
\newcommand{\docbasepath}{https://github.com/fredefox/cat/}
\newcommand{\sourcelink}[1]{\hrefsymb
  {\sourcebasepath#1.html}
  {\texttt{#1}}
}
\newcommand{\gitlink}{\hrefsymb{\docbasepath}{\texttt{\docbasepath}}}
\newcommand{\doclink}{\hrefsymb{\sourcebasepath}{\texttt{\sourcebasepath}}}
\newcommand{\clll}{\mathrel{\bC.\mathord{\lll}}}
\newcommand{\dlll}{\mathrel{\bD.\mathord{\lll}}}
\newcommand\coe{\varindex{coe}}
\newcommand\Monoidal{\varindex{Monoidal}}
\newcommand\Kleisli{\varindex{Kleisli}}
\newcommand\I{\mathds{I}}


@@ -1,4 +1,5 @@
\documentclass[a4paper]{report}
%% \documentclass[tightpage]{preview}
%% \documentclass[compact,a4paper]{article}
\input{packages.tex}
@@ -49,6 +50,7 @@
\myfrontmatter
\maketitle
\input{abstract.tex}
\input{acknowledgements.tex}
\tableofcontents
\mainmatter
%
@@ -65,7 +67,8 @@
\begin{appendices}
\setcounter{page}{1}
\pagenumbering{roman}
\input{appendix/abstract-funext.tex}
%% \input{sources.tex}
%% \input{planning.tex}
%% \input{halftime.tex}
\end{appendices}


@@ -3,21 +3,27 @@
\usepackage{natbib}
\bibliographystyle{plain}
\usepackage{xcolor}
%% \mode<report>{
\usepackage[
  %% hidelinks,
  pdfusetitle,
  pdfsubject={category theory},
  pdfkeywords={type theory, homotopy theory, category theory, agda}]
  {hyperref}
%% }
%% \definecolor{darkorange}{HTML}{ff8c00}
%% \hypersetup{allbordercolors={darkorange}}
\hypersetup{hidelinks}
\usepackage{graphicx}
%% \usepackage[active,tightpage]{preview}
\usepackage{parskip}
\usepackage{multicol}
\usepackage{amssymb,amsmath,amsthm,stmaryrd,mathrsfs,wasysym}
\usepackage[toc,page]{appendix}
\usepackage{xspace}
\usepackage[paper=a4paper,top=3cm,bottom=3cm]{geometry}
\usepackage{makeidx}
\makeindex
% \setlength{\parskip}{10pt}
@@ -78,15 +84,15 @@
\newunicodechar{}{\textfallback{}}
\newunicodechar{}{\textfallback{}}
\newunicodechar{}{\textfallback{}}
%% \newunicodechar{}{\textfallback{}}
\newunicodechar{}{\textfallback{}}
\newunicodechar{}{\textfallback{}}
\newunicodechar{}{\textfallback{}}
\newunicodechar{}{\textfallback{}}
\newunicodechar{}{\textfallback{}}
\newunicodechar{}{\textfallback{}}
%% \newunicodechar{𝒜}{\textfallback{𝒜}}
%% \newunicodechar{ℬ}{\textfallback{ℬ}}
%% \newunicodechar{}{\textfallback{}}
\makeatletter
\newcommand*{\rom}[1]{\expandafter\@slowroman\romannumeral #1@}
@@ -113,3 +119,26 @@
}
\makeatother
\usepackage{xspace}
\usepackage{tikz}
\newcommand{\ExternalLink}{%
\tikz[x=1.2ex, y=1.2ex, baseline=-0.05ex]{%
\begin{scope}[x=1ex, y=1ex]
\clip (-0.1,-0.1)
--++ (-0, 1.2)
--++ (0.6, 0)
--++ (0, -0.6)
--++ (0.6, 0)
--++ (0, -1);
\path[draw,
line width = 0.5,
rounded corners=0.5]
(0,0) rectangle (1,1);
\end{scope}
\path[draw, line width = 0.5] (0.5, 0.5)
-- (1, 1);
\path[draw, line width = 0.5] (0.6, 1)
-- (1, 1) -- (1, 0.6);
}
}
\usepackage{dsfont}

doc/presentation.tex Normal file
@@ -0,0 +1,653 @@
\documentclass[a4paper,handout]{beamer}
\usetheme{metropolis}
\beamertemplatenavigationsymbolsempty
%% \usecolortheme[named=seagull]{structure}
\input{packages.tex}
\input{macros.tex}
\title[Univalent Categories]{Univalent Categories\\ \footnotesize A formalization of category theory in Cubical Agda}
\newcommand{\myname}{Frederik Hangh{\o}j Iversen}
\author[\myname]{
\myname\\
\footnotesize Supervisors: Thierry Coquand, Andrea Vezzosi\\
Examiner: Andreas Abel
}
\institute{Chalmers University of Technology}
\begin{document}
\frame{\titlepage}
\begin{frame}
\frametitle{Motivating example}
\framesubtitle{Functional extensionality}
Consider the functions
\begin{align*}
\var{zeroLeft} & ≜ \lambda (n \tp \bN) \mto (0 + n \tp \bN) \\
\var{zeroRight} & ≜ \lambda (n \tp \bN) \mto (n + 0 \tp \bN)
\end{align*}
\pause
We have
%
$$
∏_{n \tp \bN} \var{zeroLeft}\ n ≡ \var{zeroRight}\ n
$$
%
\pause
But not
%
$$
\var{zeroLeft} ≡ \var{zeroRight}
$$
%
\pause
We need
%
$$
\funExt \tp ∏_{a \tp A} f\ a ≡ g\ a → f ≡ g
$$
\end{frame}
\begin{frame}
\frametitle{Motivating example}
\framesubtitle{Univalence}
Consider the set
$\{x \mid \phi\ x \land \psi\ x\}$
\pause
If we show $∀ x . \psi\ x ≡ \top$
then we want to conclude
$\{x \mid \phi\ x \land \psi\ x\} ≡ \{x \mid \phi\ x\}$
\pause
We need univalence:
$$(A ≃ B) ≃ (A ≡ B)$$
\pause
%
We will return to $≃$, but for now think of it as an
isomorphism, so it induces maps:
\begin{align*}
\var{toPath} & \tp (A ≃ B) → (A ≡ B) \\
\var{toEquiv} & \tp (A ≡ B) → (A ≃ B)
\end{align*}
\end{frame}
\begin{frame}
\frametitle{Paths}
\framesubtitle{Definition}
Heterogeneous paths
\begin{equation*}
\Path \tp (P \tp I → \MCU) → P\ 0 → P\ 1 → \MCU
\end{equation*}
\pause
For $P \tp I → \MCU$, $A \tp \MCU$ and $a_0, a_1 \tp A$
inhabitants of $\Path\ P\ a_0\ a_1$ are like functions
%
$$
p \tp ∏_{i \tp I} P\ i
$$
%
which satisfy $p\ 0 = a_0$ and $p\ 1 = a_1$
\pause
Homogenous paths
$$
a_0 ≡ a_1 ≜ \Path\ (\var{const}\ A)\ a_0\ a_1
$$
\end{frame}
\begin{frame}
\frametitle{Paths}
\framesubtitle{Functional extensionality}
$$
\funExt \tp ∏_{a \tp A} f\ a ≡ g\ a → f ≡ g
$$
\pause
$$
\funExt\ p ≜ λ i\ a → p\ a\ i
$$
\pause
$$
\funExt\ (\var{const}\ \refl)
\tp
\var{zeroLeft} ≡ \var{zeroRight}
$$
\end{frame}
\begin{frame}
\frametitle{Paths}
\framesubtitle{Homotopy levels}
\begin{align*}
& \isContr && \tp \MCU → \MCU \\
& \isContr\ A && ≜ ∑_{c \tp A} ∏_{a \tp A} a ≡ c
\end{align*}
\pause
\begin{align*}
& \isProp && \tp \MCU → \MCU \\
& \isProp\ A && ≜ ∏_{a_0, a_1 \tp A} a_0 ≡ a_1
\end{align*}
\pause
\begin{align*}
& \isSet && \tp \MCU → \MCU \\
& \isSet\ A && ≜ ∏_{a_0, a_1 \tp A} \isProp\ (a_0 ≡ a_1)
\end{align*}
\begin{align*}
& \isGroupoid && \tp \MCU → \MCU \\
& \isGroupoid\ A && ≜ ∏_{a_0, a_1 \tp A} \isSet\ (a_0 ≡ a_1)
\end{align*}
\end{frame}
\begin{frame}
\frametitle{Paths}
\framesubtitle{A few lemmas}
Let $D$ be a type-family:
$$
D \tp ∏_{b \tp A} ∏_{p \tp a ≡ b} \MCU
$$
%
\pause
And $d$ and in inhabitant of $D$ at $\refl$:
%
$$
d \tp D\ a\ \refl
$$
%
\pause
We then have the function:
%
$$
\pathJ\ D\ d \tp ∏_{b \tp A} ∏_{p \tp a ≡ b} D\ b\ p
$$
\end{frame}
\begin{frame}
\frametitle{Paths}
\framesubtitle{A few lemmas}
Given
\begin{align*}
A & \tp \MCU \\
P & \tp A → \MCU \\
\var{propP} & \tp ∏_{x \tp A} \isProp\ (P\ x) \\
p & \tp a_0 ≡ a_1 \\
p_0 & \tp P\ a_0 \\
p_1 & \tp P\ a_1
\end{align*}
%
We have
$$
\lemPropF\ \var{propP}\ p
\tp
\Path\ (\lambda\; i \mto P\ (p\ i))\ p_0\ p_1
$$
%
\end{frame}
\begin{frame}
\frametitle{Paths}
\framesubtitle{A few lemmas}
$∏$ preserves $\isProp$:
$$
\mathit{propPi}
\tp
\left(∏_{a \tp A} \isProp\ (P\ a)\right)
→ \isProp\ \left(∏_{a \tp A} P\ a\right)
$$
\pause
$∑$ preserves $\isProp$:
$$
\mathit{propSig} \tp \isProp\ A → \left(∏_{a \tp A} \isProp\ (P\ a)\right) → \isProp\ \left(∑_{a \tp A} P\ a\right)
$$
\end{frame}
\begin{frame}
\frametitle{Pre categories}
\framesubtitle{Definition}
Data:
\begin{align*}
\Object & \tp \Type \\
\Arrow & \tp \Object → \Object → \Type \\
\identity & \tp \Arrow\ A\ A \\
\lll & \tp \Arrow\ B\ C → \Arrow\ A\ B → \Arrow\ A\ C
\end{align*}
%
\pause
Laws:
%
$$
h \lll (g \lll f) ≡ (h \lll g) \lll f
$$
$$
(\identity \lll f ≡ f)
×
(f \lll \identity ≡ f)
$$
\pause
1-categories:
$$
\isSet\ (\Arrow\ A\ B)
$$
\end{frame}
\begin{frame}
\frametitle{Pre categories}
\framesubtitle{Propositionality}
$$
\isProp\ \left( (\identity \comp f ≡ f) × (f \comp \identity ≡ f) \right)
$$
\pause
\begin{align*}
\isProp\ \IsPreCategory
\end{align*}
\pause
\begin{align*}
\var{isAssociative} & \tp \var{IsAssociative}\\
\isIdentity & \tp \var{IsIdentity}\\
\var{arrowsAreSets} & \tp \var{ArrowsAreSets}
\end{align*}
\pause
\begin{align*}
& \var{propIsAssociative} && a.\var{isAssociative}\
&& b.\var{isAssociative} && i \\
& \propIsIdentity && a.\isIdentity\
&& b.\isIdentity && i \\
& \var{propArrowsAreSets} && a.\var{arrowsAreSets}\
&& b.\var{arrowsAreSets} && i
\end{align*}
\end{frame}
\begin{frame}
\frametitle{Categories}
\framesubtitle{Univalence}
\begin{align*}
\var{IsIdentity} & ≜
∏_{A\ B \tp \Object} ∏_{f \tp \Arrow\ A\ B} \phi\ f
%% \\
%% & \mathrel{\ } \identity \lll f ≡ f × f \lll \identity ≡ f
\end{align*}
where
$$
\phi\ f ≜
(\identity \lll f ≡ f)
×
(f \lll \identity ≡ f)
$$
\pause
Let $\approxeq$ denote isomorphism of objects. We can then construct
the identity isomorphism in any category:
$$
\identity , \identity , \var{isIdentity} \tp A \approxeq A
$$
\pause
Likewise since paths are substitutive we can promote a path to an isomorphism:
$$
\idToIso \tp A ≡ B → A ≊ B
$$
\pause
For a category to be univalent we require this to be an equivalence:
%
$$
\isEquiv\ (A ≡ B)\ (A \approxeq B)\ \idToIso
$$
%
\end{frame}
\begin{frame}
\frametitle{Categories}
\framesubtitle{Univalence, cont'd}
$$\isEquiv\ (A ≡ B)\ (A \approxeq B)\ \idToIso$$
\pause%
$$(A ≡ B) ≃ (A \approxeq B)$$
\pause%
$$(A ≡ B) ≅ (A \approxeq B)$$
\pause%
Name the above maps:
$$\idToIso \tp A ≡ B → A ≊ B$$
%
$$\isoToId \tp (A \approxeq B) → (A ≡ B)$$
\end{frame}
\begin{frame}
\frametitle{Categories}
\framesubtitle{Propositionality}
$$
\isProp\ \IsCategory = ∏_{a, b \tp \IsCategory} a ≡ b
$$
\pause
So, for
$$
a\ b \tp \IsCategory
$$
the proof obligation is the pair:
%
\begin{align*}
p & \tp a.\isPreCategory ≡ b.\isPreCategory \\
& \mathrel{\ } \Path\ (\lambda\; i → (p\ i).Univalent)\ a.\isPreCategory\ b.\isPreCategory
\end{align*}
\end{frame}
\begin{frame}
\frametitle{Categories}
\framesubtitle{Propositionality, cont'd}
First path given by:
$$
p ≜
\var{propIsPreCategory}\ a\ b
\tp
a.\isPreCategory ≡ b.\isPreCategory
$$
\pause
Use $\lemPropF$ for the latter.
\pause
%
Univalence is indexed by an identity proof. So $A ≜
IsIdentity\ identity$ and $B ≜ \var{Univalent}$.
\pause
%
$$
\lemPropF\ \var{propUnivalent}\ p
$$
\end{frame}
\begin{frame}
\frametitle{Categories}
\framesubtitle{A theorem}
%
Let the isomorphism $(ι, \inv{ι}) \tp A \approxeq B$.
%
\pause
%
The isomorphism induces the path
%
$$
p ≜ \idToIso\ (\iota, \inv{\iota}) \tp A ≡ B
$$
%
\pause
and consequently a path:
%
$$
p_{\var{dom}} ≜ \congruence\ (λ x → \Arrow\ x\ X)\ p
\tp
\Arrow\ A\ X ≡ \Arrow\ B\ X
$$
%
\pause
The proposition is:
%
\begin{align}
\label{eq:coeDom}
\tag{$\var{coeDom}$}
∏_{f \tp A → X}
\var{coe}\ p_{\var{dom}}\ f ≡ f \lll \inv{\iota}
\end{align}
\end{frame}
\begin{frame}
\frametitle{Categories}
\framesubtitle{A theorem, proof}
\begin{align*}
\var{coe}\ p_{\var{dom}}\ f
& ≡ f \lll \inv{(\idToIso\ p)} && \text{By path-induction} \\
& ≡ f \lll \inv{\iota}
&& \text{$\idToIso$ and $\isoToId$ are inverses}\\
\end{align*}
\pause
%
Induction will be based at $A$. Let $\widetilde{B}$ and $\widetilde{p}
\tp A ≡ \widetilde{B}$ be given.
%
\pause
%
Define the family:
%
$$
D\ \widetilde{B}\ \widetilde{p} ≜
\var{coe}\ \widetilde{p}_{\var{dom}}\ f
≡ f \lll \inv{(\idToIso\ \widetilde{p})}
$$
\pause
%
The base-case becomes:
$$
d \tp D\ A\ \refl =
\var{coe}\ \refl_{\var{dom}}\ f ≡ f \lll \inv{(\idToIso\ \refl)}
$$
\end{frame}
\begin{frame}
\frametitle{Categories}
\framesubtitle{A theorem, proof, cont'd}
$$
d \tp
\var{coe}\ \refl_{\var{dom}}\ f ≡ f \lll \inv{(\idToIso\ \refl)}
$$
\pause
\begin{align*}
\var{coe}\ \refl^*\ f
& ≡ f
&& \text{$\refl$ is a neutral element for $\var{coe}$}\\
& ≡ f \lll \identity \\
& ≡ f \lll \var{subst}\ \refl\ \identity
&& \text{$\refl$ is a neutral element for $\var{subst}$}\\
& ≡ f \lll \inv{(\idToIso\ \refl)}
&& \text{By definition of $\idToIso$}\\
\end{align*}
\pause
In conclusion, the theorem is inhabited by:
$$
\label{eq:pathJ-example}
\pathJ\ D\ d\ B\ p
$$
\end{frame}
\begin{frame}
\frametitle{Span category} \framesubtitle{Definition} Given a base
category $\bC$ and two objects in this category $\pairA$ and $\pairB$
we can construct the \nomenindex{span category}:
%
\pause
Objects:
$$
∑_{X \tp Object} \Arrow\ X\ \pairA × \Arrow\ X\ \pairB
$$
\pause
%
Arrows between objects $A ,\ a_{\pairA} ,\ a_{\pairB}$ and
$B ,\ b_{\pairA} ,\ b_{\pairB}$:
%
$$
∑_{f \tp \Arrow\ A\ B}
b_{\pairA} \lll f ≡ a_{\pairA} ×
b_{\pairB} \lll f ≡ a_{\pairB}
$$
\end{frame}
\begin{frame}
\frametitle{Span category}
\framesubtitle{Univalence}
\begin{align*}
\label{eq:univ-0}
(X , x_{𝒜} , x_{ℬ}) ≡ (Y , y_{𝒜} , y_{ℬ})
\end{align*}
\begin{align*}
\label{eq:univ-1}
\begin{split}
p \tp & X ≡ Y \\
& \Path\ (λ i → \Arrow\ (p\ i)\ 𝒜)\ x_{𝒜}\ y_{𝒜} \\
& \Path\ (λ i → \Arrow\ (p\ i)\ ℬ)\ x_{ℬ}\ y_{ℬ}
\end{split}
\end{align*}
\begin{align*}
\begin{split}
\var{iso} \tp & X \approxeq Y \\
& \Path\ (λ i → \Arrow\ (\widetilde{p}\ i)\ 𝒜)\ x_{𝒜}\ y_{𝒜} \\
& \Path\ (λ i → \Arrow\ (\widetilde{p}\ i)\ ℬ)\ x_{ℬ}\ y_{ℬ}
\end{split}
\end{align*}
\begin{align*}
(X , x_{𝒜} , x_{ℬ}) ≊ (Y , y_{𝒜} , y_{ℬ})
\end{align*}
\end{frame}
\begin{frame}
\frametitle{Span category}
\framesubtitle{Univalence, proof}
%
\begin{align*}
%% (f, \inv{f}, \var{inv}_f, \var{inv}_{\inv{f}})
%% \tp
(X, x_{𝒜}, x_{ℬ}) \approxeq (Y, y_{𝒜}, y_{ℬ})
\to
\begin{split}
\var{iso} \tp & X \approxeq Y \\
& \Path\ (λ i → \Arrow\ (\widetilde{p}\ i)\ 𝒜)\ x_{𝒜}\ y_{𝒜} \\
& \Path\ (λ i → \Arrow\ (\widetilde{p}\ i)\ ℬ)\ x_{ℬ}\ y_{ℬ}
\end{split}
\end{align*}
\pause
%
Let $(f, \inv{f}, \var{inv}_f, \var{inv}_{\inv{f}})$ be an inhabitant
of the antecedent.\pause
Projecting out the first component gives us the isomorphism
%
$$
(\fst\ f, \fst\ \inv{f}
, \congruence\ \fst\ \var{inv}_f
, \congruence\ \fst\ \var{inv}_{\inv{f}}
)
\tp X \approxeq Y
$$
\pause
%
This gives rise to the following paths:
%
\begin{align*}
\begin{split}
\widetilde{p} & \tp X ≡ Y \\
\widetilde{p}_{𝒜} & \tp \Arrow\ X\ 𝒜 ≡ \Arrow\ Y\ 𝒜 \\
\end{split}
\end{align*}
%
\end{frame}
\begin{frame}
\frametitle{Span category}
\framesubtitle{Univalence, proof, cont'd}
It remains to construct:
%
\begin{align*}
\begin{split}
\label{eq:product-paths}
& \Path\ (λ i → \widetilde{p}_{𝒜}\ i)\ x_{𝒜}\ y_{𝒜}
\end{split}
\end{align*}
\pause
%
This is achieved with the following lemma:
%
\begin{align*}
∏_{q \tp A ≡ B} \var{coe}\ q\ x_{𝒜} ≡ y_{𝒜}
→ \Path\ (λ i → q\ i)\ x_{𝒜}\ y_{𝒜}
\end{align*}
%
Which is used without proof.\pause
So the construction reduces to:
%
\begin{align*}
\var{coe}\ \widetilde{p}_{𝒜}\ x_{𝒜} ≡ y_{𝒜}
\end{align*}%
\pause%
This is proven with:
%
\begin{align*}
\var{coe}\ \widetilde{p}_{𝒜}\ x_{𝒜}
& ≡ x_{𝒜} \lll \fst\ \inv{f} && \text{\ref{eq:coeDom}} \\
& ≡ y_{𝒜} && \text{Property of span category}
\end{align*}
\end{frame}
\begin{frame}
\frametitle{Propositionality of products}
We have
%
$$
\isProp\ \var{Terminal}
$$\pause
%
We can show:
\begin{align*}
\var{Terminal} ≃ \var{Product}\ ℂ\ 𝒜\ ℬ
\end{align*}
\pause
And since equivalences preserve homotopy levels we get:
%
$$
\isProp\ \left(\var{Product}\ \bC\ 𝒜\ ℬ\right)
$$
\end{frame}
\begin{frame}
\frametitle{Monads}
\framesubtitle{Monoidal form}
%
\begin{align*}
\EndoR & \tp \Endo{ℂ} \\
\pureNT
& \tp \NT{\EndoR^0}{\EndoR} \\
\joinNT
& \tp \NT{\EndoR^2}{\EndoR}
\end{align*}
\pause
%
Let $\fmap$ be the map on arrows of $\EndoR$. Likewise
$\pure$ and $\join$ are the maps of the natural transformations
$\pureNT$ and $\joinNT$ respectively.
%
\begin{align*}
\join \lll \fmap\ \join
&\join \lll \join \\
\join \lll \pure\ &\identity \\
\join \lll \fmap\ \pure &\identity
\end{align*}
\end{frame}
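The monoidal (join-based) presentation above can be illustrated concretely. The following Python sketch (our own illustration with the list monad, not part of the Agda development) checks the three laws pointwise on sample values, with `fmap`, `pure` and `join` playing the roles of the functor action and the two natural transformations:

```python
# Monoidal ("join-based") presentation of a monad, illustrated with the
# list monad.  The laws are checked on samples, not proved.

def fmap(f, xs):
    """Functor action of the list functor."""
    return [f(x) for x in xs]

def pure(x):
    """The unit: embed a value as a singleton list."""
    return [x]

def join(xss):
    """The multiplication: flatten one layer of nesting."""
    return [x for xs in xss for x in xs]

# join ∘ fmap join ≡ join ∘ join
xsss = [[[1, 2], [3]], [[4]]]
assert join(fmap(join, xsss)) == join(join(xsss))

# join ∘ pure ≡ identity  and  join ∘ fmap pure ≡ identity
xs = [1, 2, 3]
assert join(pure(xs)) == xs
assert join(fmap(pure, xs)) == xs
```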
\begin{frame}
\frametitle{Monads}
\framesubtitle{Kleisli form}
%
\begin{align*}
\omapR & \tp \Object \to \Object \\
\pure & \tp %_{X \tp Object}
\Arrow\ X\ (\omapR\ X) \\
\bind & \tp
\Arrow\ X\ (\omapR\ Y)
\to
\Arrow\ (\omapR\ X)\ (\omapR\ Y)
\end{align*}\pause
%
\begin{align*}
\fish & \tp
\Arrow\ A\ (\omapR\ B)
\to \Arrow\ B\ (\omapR\ C)
\to \Arrow\ A\ (\omapR\ C) \\
f \fish g & ≜ f \rrr (\bind\ g)
\end{align*}
\pause
%
\begin{align*}
\label{eq:monad-kleisli-laws-0}
\bind\ \pure &\identity_{\omapR\ X} \\
\label{eq:monad-kleisli-laws-1}
\pure \fish f & ≡ f \\
\label{eq:monad-kleisli-laws-2}
(\bind\ f) \rrr (\bind\ g) &\bind\ (f \fish g)
\end{align*}
\end{frame}
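The Kleisli presentation can likewise be sketched for the list monad. This Python fragment (our illustration; the names `bind`, `fish`, `duplicate` and `upto` are ours) checks the three Kleisli laws on sample inputs:

```python
# Kleisli presentation: bind and the "fish" (Kleisli composition) operator
# for the list monad, with the three laws checked pointwise.

def pure(x):
    return [x]

def bind(f, xs):
    """bind : (X → R Y) → (R X → R Y)"""
    return [y for x in xs for y in f(x)]

def fish(f, g):
    """f >=> g: run f, then bind g over the result."""
    return lambda x: bind(g, f(x))

duplicate = lambda x: [x, x]
upto = lambda x: list(range(x))

# bind pure ≡ identity
assert bind(pure, [1, 2, 3]) == [1, 2, 3]
# pure >=> f ≡ f
assert fish(pure, upto)(3) == upto(3)
# (bind f) >>> (bind g) ≡ bind (f >=> g)
xs = [2, 3]
assert bind(upto, bind(duplicate, xs)) == bind(fish(duplicate, upto), xs)
```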
\begin{frame}
\frametitle{Monads}
\framesubtitle{Equivalence}
In the monoidal formulation we can define $\bind$:
%
$$
\bind\ f ≜ \join \lll \fmap\ f
$$
\pause
%
And likewise in the Kleisli formulation we can define $\join$:
%
$$
\join\bind\ \identity
$$
\pause
The laws are logically equivalent. So we get:
%
$$
\var{Monoidal}\var{Kleisli}
$$
%
\end{frame}
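The interdefinability on this slide is easy to sanity-check for a concrete monad. In this Python sketch (an illustration, not the Agda proof) `bind` is derived from `fmap`/`join`, and `join` is recovered as `bind` of the identity:

```python
# The two presentations are interdefinable, as on the slide:
#   bind f ≜ join ∘ fmap f        and        join ≜ bind identity

def fmap(f, xs):
    return [f(x) for x in xs]

def join(xss):
    return [x for xs in xss for x in xs]

def bind_from_join(f, xs):
    # bind f ≜ join ∘ fmap f
    return join(fmap(f, xs))

def join_from_bind(xss):
    # join ≜ bind identity
    return bind_from_join(lambda x: x, xss)

assert bind_from_join(lambda x: [x, -x], [1, 2]) == [1, -1, 2, -2]
assert join_from_bind([[1, 2], [3]]) == [1, 2, 3]
```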
\end{document}

95
doc/title.tex Normal file
View file

@ -0,0 +1,95 @@
%% FRONTMATTER
\frontmatter
%% \newgeometry{top=3cm, bottom=3cm,left=2.25 cm, right=2.25cm}
\begingroup
\thispagestyle{empty}
{\Huge\thetitle}\\[.5cm]
{\Large A formalization of category theory in Cubical Agda}\\[6cm]
\begin{center}
\includegraphics[width=\linewidth,keepaspectratio]{isomorphism.png}
\end{center}
% Cover text
\vfill
%% \renewcommand{\familydefault}{\sfdefault} \normalfont % Set cover page font
{\Large\theauthor}\\[.5cm]
Master's thesis in Computer Science
\endgroup
%% \end{titlepage}
% BACK OF COVER PAGE (BLANK PAGE)
\newpage
%% \newgeometry{a4paper} % Temporarily change margins
%% \restoregeometry
\thispagestyle{empty}
\null
%% \begin{titlepage}
% TITLE PAGE
\newpage
\thispagestyle{empty}
\begin{center}
\textsc{\LARGE Master's thesis \the\year}\\[4cm] % Report number is currently not in use
\textbf{\LARGE \thetitle} \\[1cm]
{\large \subtitle}\\[1cm]
{\large \theauthor}
\vfill
\centering
\includegraphics[width=0.2\pdfpagewidth]{logo_eng.pdf}
\vspace{5mm}
\textsc{Department of Computer Science and Engineering}\\
\textsc{{\researchgroup}}\\
%Name of research group (if applicable)\\
\textsc{\institution} \\
\textsc{Gothenburg, Sweden \the\year}\\
\end{center}
% IMPRINT PAGE (BACK OF TITLE PAGE)
\newpage
\thispagestyle{plain}
\textit{\thetitle}\\
\subtitle\\
\copyright\ \the\year ~ \textsc{\theauthor}
\vspace{4.5cm}
\setlength{\parskip}{0.5cm}
\textbf{Author:}\\
\theauthor\\
\href{mailto:\authoremail}{\texttt{<\authoremail>}}
\textbf{Supervisor:}\\
\supervisor\\
\href{mailto:\supervisoremail}{\texttt{<\supervisoremail>}}\\
\supervisordepartment
\textbf{Co-supervisor:}\\
\cosupervisor\\
\href{mailto:\cosupervisoremail}{\texttt{<\cosupervisoremail>}}\\
\cosupervisordepartment
\textbf{Examiner:}\\
\examiner\\
\href{mailto:\examineremail}{\texttt{<\examineremail>}}\\
\examinerdepartment
\vfill
Master's Thesis \the\year\\ % Report number currently not in use
\department\\
%Division of Division name\\
%Name of research group (if applicable)\\
\institution\\
SE-412 96 Gothenburg\\
Telephone +46 31 772 1000 \setlength{\parskip}{0.5cm}\\
% Caption for cover page figure if used, possibly with reference to further information in the report
%% Cover: Wind visualization constructed in Matlab showing a surface of constant wind speed along with streamlines of the flow. \setlength{\parskip}{0.5cm}
%Printed by [Name of printing company]\\
Gothenburg, Sweden \the\year
%% \restoregeometry
%% \end{titlepage}
\tableofcontents

@ -1 +1 @@
Subproject commit 55ad461aa4fc6cf22e97812b7ff8128b3c7a902c Subproject commit 209626953d56294e9bd3d8892eda43b844b0edf9

View file

@ -10,10 +10,11 @@ open import Cat.Category.NaturalTransformation
open import Cat.Category.Yoneda open import Cat.Category.Yoneda
open import Cat.Category.Monoid open import Cat.Category.Monoid
open import Cat.Category.Monad open import Cat.Category.Monad
open Cat.Category.Monad.Monoidal open import Cat.Category.Monad.Monoidal
open Cat.Category.Monad.Kleisli open import Cat.Category.Monad.Kleisli
open import Cat.Category.Monad.Voevodsky open import Cat.Category.Monad.Voevodsky
open import Cat.Categories.Opposite
open import Cat.Categories.Sets open import Cat.Categories.Sets
open import Cat.Categories.Cat open import Cat.Categories.Cat
open import Cat.Categories.Rel open import Cat.Categories.Rel

View file

@ -5,6 +5,7 @@ open import Cat.Prelude
open import Cat.Category open import Cat.Category
open import Cat.Category.Functor open import Cat.Category.Functor
open import Cat.Categories.Fam open import Cat.Categories.Fam
open import Cat.Categories.Opposite
module _ {a b : Level} where module _ {a b : Level} where
record CwF : Set (lsuc (a b)) where record CwF : Set (lsuc (a b)) where

View file

@ -1,13 +1,14 @@
{-# OPTIONS --allow-unsolved-metas --cubical --caching #-} {-# OPTIONS --allow-unsolved-metas --cubical --caching #-}
module Cat.Categories.Fun where module Cat.Categories.Fun where
open import Cat.Prelude open import Cat.Prelude
open import Cat.Equivalence open import Cat.Equivalence
open import Cat.Category open import Cat.Category
open import Cat.Category.Functor open import Cat.Category.Functor
import Cat.Category.NaturalTransformation import Cat.Category.NaturalTransformation
as NaturalTransformation as NaturalTransformation
open import Cat.Categories.Opposite
module Fun {c c' d d' : Level} ( : Category c c') (𝔻 : Category d d') where module Fun {c c' d d' : Level} ( : Category c c') (𝔻 : Category d d') where
open NaturalTransformation 𝔻 public hiding (module Properties) open NaturalTransformation 𝔻 public hiding (module Properties)
@ -52,14 +53,14 @@ module Fun {c c' d d' : Level} ( : Category c c') (𝔻 : C
lem : coe (pp {C}) 𝔻.identity f→g lem : coe (pp {C}) 𝔻.identity f→g
lem = trans (𝔻.9-1-9-right {b = Functor.omap F C} 𝔻.identity p*) 𝔻.rightIdentity lem = trans (𝔻.9-1-9-right {b = Functor.omap F C} 𝔻.identity p*) 𝔻.rightIdentity
idToNatTrans : NaturalTransformation F G -- idToNatTrans : NaturalTransformation F G
idToNatTrans = (λ C coe pp 𝔻.identity) , λ f begin -- idToNatTrans = (λ C → coe pp 𝔻.identity) , λ f → begin
coe pp 𝔻.identity 𝔻.<<< F.fmap f ≡⟨ cong (𝔻._<<< F.fmap f) lem -- coe pp 𝔻.identity 𝔻.<<< F.fmap f ≡⟨ cong (𝔻._<<< F.fmap f) lem ⟩
-- Just need to show that f→g is a natural transformation -- -- Just need to show that f→g is a natural transformation
-- I know that it has an inverse; g→f -- -- I know that it has an inverse; g→f
f→g 𝔻.<<< F.fmap f ≡⟨ {!lem!} -- f→g 𝔻.<<< F.fmap f ≡⟨ {!lem!} ⟩
G.fmap f 𝔻.<<< f→g ≡⟨ cong (G.fmap f 𝔻.<<<_) (sym lem) -- G.fmap f 𝔻.<<< f→g ≡⟨ cong (G.fmap f 𝔻.<<<_) (sym lem) ⟩
G.fmap f 𝔻.<<< coe pp 𝔻.identity -- G.fmap f 𝔻.<<< coe pp 𝔻.identity ∎
module _ {A B : Functor 𝔻} where module _ {A B : Functor 𝔻} where
module A = Functor A module A = Functor A
@ -92,70 +93,70 @@ module Fun {c c' d d' : Level} ( : Category c c') (𝔻 : C
U : (F : .Object 𝔻.Object) Set _ U : (F : .Object 𝔻.Object) Set _
U F = {A B : .Object} [ A , B ] 𝔻 [ F A , F B ] U F = {A B : .Object} [ A , B ] 𝔻 [ F A , F B ]
module _ -- module _
(omap : .Object 𝔻.Object) -- (omap : .Object → 𝔻.Object)
(p : A.omap omap) -- (p : A.omap ≡ omap)
where
D : Set _
D = ( fmap : U omap)
( let
raw-B : RawFunctor 𝔻
raw-B = record { omap = omap ; fmap = fmap }
)
(isF-B' : IsFunctor 𝔻 raw-B)
( let
B' : Functor 𝔻
B' = record { raw = raw-B ; isFunctor = isF-B' }
)
(iso' : A B') PathP (λ i U (p i)) A.fmap fmap
-- D : Set _
-- D = PathP (λ i → U (p i)) A.fmap fmap
-- eeq : (λ f → A.fmap f) ≡ fmap
-- eeq = funExtImp (λ A → funExtImp (λ B → funExt (λ f → isofmap {A} {B} f)))
-- where -- where
-- module _ {X : .Object} {Y : .Object} (f : [ X , Y ]) where -- D : Set _
-- isofmap : A.fmap f ≡ fmap f -- D = ( fmap : U omap)
-- isofmap = {!ap!} -- → ( let
d : D A.omap refl -- raw-B : RawFunctor 𝔻
d = res -- raw-B = record { omap = omap ; fmap = fmap }
where -- )
module _ -- → (isF-B' : IsFunctor 𝔻 raw-B)
( fmap : U A.omap ) -- → ( let
( let -- B' : Functor 𝔻
raw-B : RawFunctor 𝔻 -- B' = record { raw = raw-B ; isFunctor = isF-B' }
raw-B = record { omap = A.omap ; fmap = fmap } -- )
) -- → (iso' : A ≊ B') → PathP (λ i → U (p i)) A.fmap fmap
( isF-B' : IsFunctor 𝔻 raw-B ) -- -- D : Set _
( let -- -- D = PathP (λ i → U (p i)) A.fmap fmap
B' : Functor 𝔻 -- -- eeq : (λ f → A.fmap f) ≡ fmap
B' = record { raw = raw-B ; isFunctor = isF-B' } -- -- eeq = funExtImp (λ A → funExtImp (λ B → funExt (λ f → isofmap {A} {B} f)))
) -- -- where
( iso' : A B' ) -- -- module _ {X : .Object} {Y : .Object} (f : [ X , Y ]) where
where -- -- isofmap : A.fmap f ≡ fmap f
module _ {X Y : .Object} (f : [ X , Y ]) where -- -- isofmap = {!ap!}
step : {!!} 𝔻.≊ {!!} -- d : D A.omap refl
step = {!!} -- d = res
resres : A.fmap {X} {Y} f fmap {X} {Y} f -- where
resres = {!!} -- module _
res : PathP (λ i U A.omap) A.fmap fmap -- ( fmap : U A.omap )
res i {X} {Y} f = resres f i -- ( let
-- raw-B : RawFunctor 𝔻
-- raw-B = record { omap = A.omap ; fmap = fmap }
-- )
-- ( isF-B' : IsFunctor 𝔻 raw-B )
-- ( let
-- B' : Functor 𝔻
-- B' = record { raw = raw-B ; isFunctor = isF-B' }
-- )
-- ( iso' : A ≊ B' )
-- where
-- module _ {X Y : .Object} (f : [ X , Y ]) where
-- step : {!!} 𝔻.≊ {!!}
-- step = {!!}
-- resres : A.fmap {X} {Y} f ≡ fmap {X} {Y} f
-- resres = {!!}
-- res : PathP (λ i → U A.omap) A.fmap fmap
-- res i {X} {Y} f = resres f i
fmapEq : PathP (λ i U (omapEq i)) A.fmap B.fmap -- fmapEq : PathP (λ i → U (omapEq i)) A.fmap B.fmap
fmapEq = pathJ D d B.omap omapEq B.fmap B.isFunctor iso -- fmapEq = pathJ D d B.omap omapEq B.fmap B.isFunctor iso
rawEq : A.raw B.raw -- rawEq : A.raw ≡ B.raw
rawEq i = record { omap = omapEq i ; fmap = fmapEq i } -- rawEq i = record { omap = omapEq i ; fmap = fmapEq i }
private
f : (A B) (A B)
f p = idToNatTrans p , idToNatTrans (sym p) , NaturalTransformation≡ A A (funExt (λ C {!!})) , {!!}
g : (A B) (A B)
g = Functor≡ rawEq
inv : AreInverses f g
inv = {!funExt λ p → ?!} , {!!}
-- private
-- f : (A ≡ B) → (A ≊ B)
-- f p = idToNatTrans p , idToNatTrans (sym p) , NaturalTransformation≡ A A (funExt (λ C → {!!})) , {!!}
-- g : (A ≊ B) → (A ≡ B)
-- g = Functor≡ ∘ rawEq
-- inv : AreInverses f g
-- inv = {!funExt λ p → ?!} , {!!}
postulate
iso : (A B) (A B) iso : (A B) (A B)
iso = f , g , inv -- iso = f , g , inv
univ : (A B) (A B) univ : (A B) (A B)
univ = fromIsomorphism _ _ iso univ = fromIsomorphism _ _ iso

View file

@ -0,0 +1,96 @@
{-# OPTIONS --cubical #-}
module Cat.Categories.Opposite where

open import Cat.Prelude
open import Cat.Equivalence
open import Cat.Category

-- | The opposite category
--
-- The opposite category is the category where the direction of the arrows are
-- flipped.
module _ {a b : Level} where
  module _ ( : Category a b) where
    private
      module _ where
        module  = Category 

        opRaw : RawCategory a b
        RawCategory.Object opRaw = .Object
        RawCategory.Arrow opRaw = flip .Arrow
        RawCategory.identity opRaw = .identity
        RawCategory._<<<_ opRaw = ._>>>_

        open RawCategory opRaw

        isPreCategory : IsPreCategory opRaw
        IsPreCategory.isAssociative isPreCategory = sym .isAssociative
        IsPreCategory.isIdentity isPreCategory = swap .isIdentity
        IsPreCategory.arrowsAreSets isPreCategory = .arrowsAreSets

        open IsPreCategory isPreCategory

        module _ {A B : .Object} where
          open Σ (toIso _ _ (.univalent {A} {B}))
            renaming (fst to idToIso* ; snd to inv*)
          open AreInverses {f = .idToIso A B} {idToIso*} inv*

          shuffle : A  B  A .≊ B
          shuffle (f , g , inv) = g , f , inv

          shuffle~ : A .≊ B  A  B
          shuffle~ (f , g , inv) = g , f , inv

          lem : (p : A  B)  idToIso A B p  shuffle~ (.idToIso A B p)
          lem p = isoEq refl

          isoToId* : A  B  A  B
          isoToId* = idToIso*  shuffle

          inv : AreInverses (idToIso A B) isoToId*
          inv =
            ( funExt (λ x  begin
              (isoToId*  idToIso A B) x
                ≡⟨⟩
              (idToIso*  shuffle  idToIso A B) x
                ≡⟨ cong (λ φ  φ x) (cong (λ φ  idToIso*  shuffle  φ) (funExt lem)) 
              (idToIso*  shuffle  shuffle~  .idToIso A B) x
                ≡⟨⟩
              (idToIso*  .idToIso A B) x
                ≡⟨ (λ i  verso-recto i x) 
              x )
            , funExt (λ x  begin
              (idToIso A B  idToIso*  shuffle) x
                ≡⟨ cong (λ φ  φ x) (cong (λ φ  φ  idToIso*  shuffle) (funExt lem)) 
              (shuffle~  .idToIso A B  idToIso*  shuffle) x
                ≡⟨ cong (λ φ  φ x) (cong (λ φ  shuffle~  φ  shuffle) recto-verso) 
              (shuffle~  shuffle) x
                ≡⟨⟩
              x )
            )

        isCategory : IsCategory opRaw
        IsCategory.isPreCategory isCategory = isPreCategory
        IsCategory.univalent isCategory
          = univalenceFromIsomorphism (isoToId* , inv)

    opposite : Category a b
    Category.raw opposite = opRaw
    Category.isCategory opposite = isCategory

  -- As demonstrated here a side-effect of having no-eta-equality on constructors
  -- means that we need to pick things apart to show that things are indeed
  -- definitionally equal. I.e; a thing that would normally be provable in one
  -- line now takes 13!! Admittedly it's a simple proof.
  module _ { : Category a b} where
    open Category 
    private
      -- Since they really are definitionally equal we just need to pick apart
      -- the data-type.
      rawInv : Category.raw (opposite (opposite ))  raw
      RawCategory.Object (rawInv _) = Object
      RawCategory.Arrow (rawInv _) = Arrow
      RawCategory.identity (rawInv _) = identity
      RawCategory._<<<_ (rawInv _) = _<<<_

    oppositeIsInvolution : opposite (opposite )  
    oppositeIsInvolution = Category≡ rawInv
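The construction above flips arrows and reverses composition, and taking the opposite twice recovers the original category. A minimal Python sketch of the same idea (an illustration only; `compose_op` and friends are our names, and Python functions stand in for arrows):

```python
# The opposite category: composition with the arguments flipped.
# Flipping twice gives back the original composition.

def compose(g, f):
    """Composition g <<< f in the base category of Python functions."""
    return lambda x: g(f(x))

def op(compose_):
    """Composition of the opposite category: arguments swapped."""
    return lambda g, f: compose_(f, g)

compose_op = op(compose)
compose_op_op = op(compose_op)

inc = lambda x: x + 1
double = lambda x: 2 * x

# In the opposite category, g <<< f means "first g, then f".
assert compose_op(inc, double)(3) == double(inc(3)) == 8
# The double opposite agrees with the original composition.
assert compose_op_op(inc, double)(3) == compose(inc, double)(3) == 7
```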

View file

@ -2,6 +2,7 @@
module Cat.Categories.Rel where module Cat.Categories.Rel where
open import Cat.Prelude hiding (Rel) open import Cat.Prelude hiding (Rel)
open import Cat.Equivalence
open import Cat.Category open import Cat.Category
@ -61,15 +62,9 @@ module _ {A B : Set} {S : Subset (A × B)} (ab : A × B) where
lem0 = (λ a'' a≡a'' a''b∈S (forwards backwards) (a'' , a≡a'' , a''b∈S) (a'' , a≡a'' , a''b∈S)) lem0 = (λ a'' a≡a'' a''b∈S (forwards backwards) (a'' , a≡a'' , a''b∈S) (a'' , a≡a'' , a''b∈S))
lem1 = (λ z₁ cong (\ z a , refl , z) (pathJprop (\ y _ y) z₁)) lem1 = (λ z₁ cong (\ z a , refl , z) (pathJprop (\ y _ y) z₁))
isequiv : isEquiv
(Σ[ a' A ] (a , a') Diag A × (a' , b) S)
((a , b) S)
backwards
isequiv y = gradLemma backwards forwards fwd-bwd bwd-fwd y
equi : (Σ[ a' A ] (a , a') Diag A × (a' , b) S) equi : (Σ[ a' A ] (a , a') Diag A × (a' , b) S)
(a , b) S (a , b) S
equi = backwards , isequiv equi = fromIsomorphism _ _ (backwards , forwards , funExt bwd-fwd , funExt fwd-bwd)
ident-r : (Σ[ a' A ] (a , a') Diag A × (a' , b) S) ident-r : (Σ[ a' A ] (a , a') Diag A × (a' , b) S)
(a , b) S (a , b) S
@ -95,15 +90,9 @@ module _ {A B : Set} {S : Subset (A × B)} (ab : A × B) where
lem0 = (λ b'' b≡b'' (ab''∈S : (a , b'') S) (forwards backwards) (b'' , ab''∈S , sym b≡b'') (b'' , ab''∈S , sym b≡b'')) lem0 = (λ b'' b≡b'' (ab''∈S : (a , b'') S) (forwards backwards) (b'' , ab''∈S , sym b≡b'') (b'' , ab''∈S , sym b≡b''))
lem1 = (λ ab''∈S cong (λ φ b , φ , refl) (pathJprop (λ y _ y) ab''∈S)) lem1 = (λ ab''∈S cong (λ φ b , φ , refl) (pathJprop (λ y _ y) ab''∈S))
isequiv : isEquiv
(Σ[ b' B ] (a , b') S × (b' , b) Diag B)
((a , b) S)
backwards
isequiv ab∈S = gradLemma backwards forwards bwd-fwd fwd-bwd ab∈S
equi : (Σ[ b' B ] (a , b') S × (b' , b) Diag B) equi : (Σ[ b' B ] (a , b') S × (b' , b) Diag B)
ab S ab S
equi = backwards , isequiv equi = fromIsomorphism _ _ (backwards , (forwards , funExt fwd-bwd , funExt bwd-fwd))
ident-l : (Σ[ b' B ] (a , b') S × (b' , b) Diag B) ident-l : (Σ[ b' B ] (a , b') S × (b' , b) Diag B)
ab S ab S
@ -133,15 +122,9 @@ module _ {A B C D : Set} {S : Subset (A × B)} {R : Subset (B × C)} {Q : Subset
bwd-fwd : (x : Q⊕⟨R⊕S⟩) (bwd fwd) x x bwd-fwd : (x : Q⊕⟨R⊕S⟩) (bwd fwd) x x
bwd-fwd x = refl bwd-fwd x = refl
isequiv : isEquiv
(Σ[ c C ] (Σ[ b B ] (a , b) S × (b , c) R) × (c , d) Q)
(Σ[ b B ] (a , b) S × (Σ[ c C ] (b , c) R × (c , d) Q))
fwd
isequiv = gradLemma fwd bwd fwd-bwd bwd-fwd
equi : (Σ[ c C ] (Σ[ b B ] (a , b) S × (b , c) R) × (c , d) Q) equi : (Σ[ c C ] (Σ[ b B ] (a , b) S × (b , c) R) × (c , d) Q)
(Σ[ b B ] (a , b) S × (Σ[ c C ] (b , c) R × (c , d) Q)) (Σ[ b B ] (a , b) S × (Σ[ c C ] (b , c) R × (c , d) Q))
equi = fwd , isequiv equi = fromIsomorphism _ _ (fwd , bwd , funExt bwd-fwd , funExt fwd-bwd)
-- isAssociativec : Q + (R + S) ≡ (Q + R) + S -- isAssociativec : Q + (R + S) ≡ (Q + R) + S
is-isAssociative : (Σ[ c C ] (Σ[ b B ] (a , b) S × (b , c) R) × (c , d) Q) is-isAssociative : (Σ[ c C ] (Σ[ b B ] (a , b) S × (b , c) R) × (c , d) Q)

View file

@ -3,11 +3,11 @@
module Cat.Categories.Sets where module Cat.Categories.Sets where
open import Cat.Prelude as P open import Cat.Prelude as P
open import Cat.Equivalence
open import Cat.Category open import Cat.Category
open import Cat.Category.Functor open import Cat.Category.Functor
open import Cat.Category.Product open import Cat.Category.Product
open import Cat.Equivalence open import Cat.Categories.Opposite
_⊙_ : {a b c : Level} {A : Set a} {B : Set b} {C : Set c} (A B) (B C) A C _⊙_ : {a b c : Level} {A : Set a} {B : Set b} {C : Set c} (A B) (B C) A C
eqA eqB = Equivalence.compose eqA eqB eqA eqB = Equivalence.compose eqA eqB

View file

@ -0,0 +1,170 @@
{-# OPTIONS --cubical --caching #-}
module Cat.Categories.Span where

open import Cat.Prelude as P hiding (_×_ ; fst ; snd)
open import Cat.Equivalence
open import Cat.Category

module _ {a b : Level} ( : Category a b)
  (let module  = Category ) (𝒜  : .Object) where

  open P

  private
    module _ where
      raw : RawCategory _ _
      raw = record
        { Object = Σ[ X  .Object ] .Arrow X 𝒜 × .Arrow X 
        ; Arrow = λ{ (A , a0 , a1) (B , b0 , b1)
           Σ[ f  .Arrow A B ]
             [ b0  f ]  a0
            ×  [ b1  f ]  a1
          }
        ; identity = λ{ {X , f , g}  .identity {X} , .rightIdentity , .rightIdentity}
        ; _<<<_ = λ { {_ , a0 , a1} {_ , b0 , b1} {_ , c0 , c1} (f , f0 , f1) (g , g0 , g1)
           (f .<<< g)
          , (begin
             [ c0   [ f  g ] ] ≡⟨ .isAssociative 
             [  [ c0  f ]  g ] ≡⟨ cong (λ φ   [ φ  g ]) f0 
             [ b0  g ] ≡⟨ g0 
            a0 
            )
          , (begin
             [ c1   [ f  g ] ] ≡⟨ .isAssociative 
             [  [ c1  f ]  g ] ≡⟨ cong (λ φ   [ φ  g ]) f1 
             [ b1  g ] ≡⟨ g1 
            a1 
            )
          }
        }

    module _ where
      open RawCategory raw

      propEqs : {X' : Object}{Y' : Object} (let X , xa , xb = X') (let Y , ya , yb = Y')
         (xy : .Arrow X Y)  isProp ( [ ya  xy ]  xa ×  [ yb  xy ]  xb)
      propEqs xs = propSig (.arrowsAreSets _ _) (\ _  .arrowsAreSets _ _)

      arrowEq : {X Y : Object} {f g : Arrow X Y}  fst f  fst g  f  g
      arrowEq {X} {Y} {f} {g} p = λ i  p i , lemPropF propEqs p {snd f} {snd g} i

      isAssociative : IsAssociative
      isAssociative {f = f , f0 , f1} {g , g0 , g1} {h , h0 , h1} = arrowEq .isAssociative

      isIdentity : IsIdentity identity
      isIdentity {AA@(A , a0 , a1)} {BB@(B , b0 , b1)} {f , f0 , f1} = arrowEq .leftIdentity , arrowEq .rightIdentity

      arrowsAreSets : ArrowsAreSets
      arrowsAreSets {X , x0 , x1} {Y , y0 , y1}
        = sigPresSet .arrowsAreSets λ a  propSet (propEqs _)

      isPreCat : IsPreCategory raw
      IsPreCategory.isAssociative isPreCat = isAssociative
      IsPreCategory.isIdentity isPreCat = isIdentity
      IsPreCategory.arrowsAreSets isPreCat = arrowsAreSets

    open IsPreCategory isPreCat

    module _ {𝕏 𝕐 : Object} where
      open Σ 𝕏 renaming (fst to X ; snd to x)
      open Σ x renaming (fst to xa ; snd to xb)
      open Σ 𝕐 renaming (fst to Y ; snd to y)
      open Σ y renaming (fst to ya ; snd to yb)
      open import Cat.Equivalence using (composeIso) renaming (_≅_ to _≅_)
      -- The proof will be a sequence of isomorphisms between the
      -- following 4 types:
      T0 = ((X , xa , xb)  (Y , ya , yb))
      T1 = (Σ[ p  (X  Y) ] (PathP (λ i  .Arrow (p i) 𝒜) xa ya) × (PathP (λ i  .Arrow (p i) ) xb yb))
      T2 = Σ (X .≊ Y) (λ iso
         let p = .isoToId iso
          in
            ( PathP (λ i  .Arrow (p i) 𝒜) xa ya)
            × PathP (λ i  .Arrow (p i) ) xb yb
        )
      T3 = ((X , xa , xb)  (Y , ya , yb))

      step0 : T0  T1
      step0
        = (λ p  cong fst p , cong-d (fst  snd) p , cong-d (snd  snd) p)
        -- , (λ x → λ i → fst x i , (fst (snd x) i) , (snd (snd x) i))
        , (λ{ (p , q , r)  Σ≡ p λ i  q i , r i})
        , funExt (λ{ p  refl})
        , funExt (λ{ (p , q , r)  refl})

      step1 : T1  T2
      step1
        = symIso
          (isoSigFst
            {A = (X .≊ Y)}
            {B = (X  Y)}
            (.groupoidObject _ _)
            {Q = \ p  (PathP (λ i  .Arrow (p i) 𝒜) xa ya) × (PathP (λ i  .Arrow (p i) ) xb yb)}
            .isoToId
            (symIso (_ , .asTypeIso {X} {Y}) .snd)
          )

      step2 : T2  T3
      step2
        = ( λ{ (iso@(f , f~ , inv-f) , p , q)
             ( f , sym (.domain-twist-sym iso p) , sym (.domain-twist-sym iso q))
            , ( f~ , sym (.domain-twist iso p) , sym (.domain-twist iso q))
            , arrowEq (fst inv-f)
            , arrowEq (snd inv-f)
            }
          )
        , (λ{ (f , f~ , inv-f , inv-f~)
           let
              iso : X .≊ Y
              iso = fst f , fst f~ , cong fst inv-f , cong fst inv-f~

              p : X  Y
              p = .isoToId iso

              pA : .Arrow X 𝒜  .Arrow Y 𝒜
              pA = cong (λ x  .Arrow x 𝒜) p

              pB : .Arrow X   .Arrow Y 
              pB = cong (λ x  .Arrow x ) p

              k0 = begin
                coe pB xb ≡⟨ .coe-dom iso 
                xb .<<< fst f~ ≡⟨ snd (snd f~) 
                yb 

              k1 = begin
                coe pA xa ≡⟨ .coe-dom iso 
                xa .<<< fst f~ ≡⟨ fst (snd f~) 
                ya 

            in iso , coe-lem-inv k1 , coe-lem-inv k0})
        , funExt (λ x  lemSig
            (λ x  propSig prop0 (λ _  prop1))
            _ _
            (Σ≡ refl (.propIsomorphism _ _ _)))
        , funExt (λ{ (f , _)  lemSig propIsomorphism _ _ (Σ≡ refl (propEqs _ _ _))})
        where
          prop0 : {x}  isProp (PathP (λ i  .Arrow (.isoToId x i) 𝒜) xa ya)
          prop0 {x} = pathJ (λ y p   x  isProp (PathP (λ i  .Arrow (p i) 𝒜) xa x)) (λ x  .arrowsAreSets _ _) Y (.isoToId x) ya

          prop1 : {x}  isProp (PathP (λ i  .Arrow (.isoToId x i) ) xb yb)
          prop1 {x} = pathJ (λ y p   x  isProp (PathP (λ i  .Arrow (p i) ) xb x)) (λ x  .arrowsAreSets _ _) Y (.isoToId x) yb

      -- One thing to watch out for here is that the isomorphisms going forwards
      -- must compose to give idToIso
      iso
        : ((X , xa , xb)  (Y , ya , yb))
         ((X , xa , xb)  (Y , ya , yb))
      iso = step0  step1  step2
        where
        infixl 5 _⊙_
        _⊙_ = composeIso

      equiv1
        : ((X , xa , xb)  (Y , ya , yb))
         ((X , xa , xb)  (Y , ya , yb))
      equiv1 = _ , fromIso _ _ (snd iso)

    univalent : Univalent
    univalent = univalenceFrom≃ equiv1

    isCat : IsCategory raw
    IsCategory.isPreCategory isCat = isPreCat
    IsCategory.univalent isCat = univalent

  span : Category _ _
  span = record
    { raw = raw
    ; isCategory = isCat
    }
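The objects and arrows of the span category can be sketched in Python (our illustration; Python cannot state the commuting equations as types, so they are checked pointwise on samples, and all names here are ours):

```python
# A span object over A and B: a carrier X with two maps X → A and X → B.
# An arrow (X, x0, x1) → (Y, y0, y1) is f : X → Y with y0 ∘ f = x0 and
# y1 ∘ f = x1, checked here on sample inputs.

def is_span_arrow(samples, x0, x1, y0, y1, f):
    return all(y0(f(s)) == x0(s) and y1(f(s)) == x1(s) for s in samples)

# A terminal object of this category is a product of A and B in the base
# category: take pairs with the two projections ...
fst = lambda p: p[0]
snd = lambda p: p[1]

# ... and any span (X, x0, x1) maps into it via the pairing ⟨x0, x1⟩.
x0 = lambda n: 2 * n         # X = int, A = int
x1 = lambda n: n % 2 == 0    # B = bool
pairing = lambda n: (x0(n), x1(n))

assert is_span_arrow(range(10), x0, x1, fst, snd, pairing)
```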

View file

@ -44,7 +44,7 @@ open Cat.Equivalence
-- about these. The laws defined are the types the propositions - not the -- about these. The laws defined are the types the propositions - not the
-- witnesses to them! -- witnesses to them!
record RawCategory (a b : Level) : Set (lsuc (a b)) where record RawCategory (a b : Level) : Set (lsuc (a b)) where
no-eta-equality -- no-eta-equality
field field
Object : Set a Object : Set a
Arrow : Object Object Set b Arrow : Object Object Set b
@ -55,14 +55,7 @@ record RawCategory (a b : Level) : Set (lsuc (a ⊔ b)) where
-- infixl 8 _>>>_ -- infixl 8 _>>>_
infixl 10 _<<<_ _>>>_ infixl 10 _<<<_ _>>>_
-- | Operations on data -- | Reverse arrow composition
domain : {a b : Object} Arrow a b Object
domain {a} _ = a
codomain : {a b : Object} Arrow a b Object
codomain {b = b} _ = b
_>>>_ : {A B C : Object} (Arrow A B) (Arrow B C) Arrow A C _>>>_ : {A B C : Object} (Arrow A B) (Arrow B C) Arrow A C
f >>> g = g <<< f f >>> g = g <<< f
@ -631,95 +624,3 @@ module _ {a b : Level} ( : Category a b) where
_[_∘_] : {A B C : Object} (g : Arrow B C) (f : Arrow A B) Arrow A C _[_∘_] : {A B C : Object} (g : Arrow B C) (f : Arrow A B) Arrow A C
_[_∘_] = _<<<_ _[_∘_] = _<<<_
-- | The opposite category
--
-- The opposite category is the category where the direction of the arrows are
-- flipped.
module Opposite {a b : Level} where
module _ ( : Category a b) where
private
module _ where
module = Category
opRaw : RawCategory a b
RawCategory.Object opRaw = .Object
RawCategory.Arrow opRaw = flip .Arrow
RawCategory.identity opRaw = .identity
RawCategory._<<<_ opRaw = ._>>>_
open RawCategory opRaw
isPreCategory : IsPreCategory opRaw
IsPreCategory.isAssociative isPreCategory = sym .isAssociative
IsPreCategory.isIdentity isPreCategory = swap .isIdentity
IsPreCategory.arrowsAreSets isPreCategory = .arrowsAreSets
open IsPreCategory isPreCategory
module _ {A B : .Object} where
open Σ (toIso _ _ (.univalent {A} {B}))
renaming (fst to idToIso* ; snd to inv*)
open AreInverses {f = .idToIso A B} {idToIso*} inv*
shuffle : A B A .≊ B
shuffle (f , g , inv) = g , f , inv
shuffle~ : A .≊ B A B
shuffle~ (f , g , inv) = g , f , inv
lem : (p : A B) idToIso A B p shuffle~ (.idToIso A B p)
lem p = isoEq refl
isoToId* : A B A B
isoToId* = idToIso* shuffle
inv : AreInverses (idToIso A B) isoToId*
inv =
( funExt (λ x begin
(isoToId* idToIso A B) x
≡⟨⟩
(idToIso* shuffle idToIso A B) x
≡⟨ cong (λ φ φ x) (cong (λ φ idToIso* shuffle φ) (funExt lem))
(idToIso* shuffle shuffle~ .idToIso A B) x
≡⟨⟩
(idToIso* .idToIso A B) x
≡⟨ (λ i verso-recto i x)
x )
, funExt (λ x begin
(idToIso A B idToIso* shuffle) x
≡⟨ cong (λ φ φ x) (cong (λ φ φ idToIso* shuffle) (funExt lem))
(shuffle~ .idToIso A B idToIso* shuffle) x
≡⟨ cong (λ φ φ x) (cong (λ φ shuffle~ φ shuffle) recto-verso)
(shuffle~ shuffle) x
≡⟨⟩
x )
)
isCategory : IsCategory opRaw
IsCategory.isPreCategory isCategory = isPreCategory
IsCategory.univalent isCategory
= univalenceFromIsomorphism (isoToId* , inv)
opposite : Category a b
Category.raw opposite = opRaw
Category.isCategory opposite = isCategory
-- As demonstrated here a side-effect of having no-eta-equality on constructors
-- means that we need to pick things apart to show that things are indeed
-- definitionally equal. I.e; a thing that would normally be provable in one
-- line now takes 13!! Admittedly it's a simple proof.
module _ { : Category a b} where
open Category
private
-- Since they really are definitionally equal we just need to pick apart
-- the data-type.
rawInv : Category.raw (opposite (opposite )) raw
RawCategory.Object (rawInv _) = Object
RawCategory.Arrow (rawInv _) = Arrow
RawCategory.identity (rawInv _) = identity
RawCategory._<<<_ (rawInv _) = _<<<_
oppositeIsInvolution : opposite (opposite )
oppositeIsInvolution = Category≡ rawInv
open Opposite public

View file

@ -28,186 +28,213 @@ import Cat.Category.Monad.Monoidal
import Cat.Category.Monad.Kleisli import Cat.Category.Monad.Kleisli
open import Cat.Categories.Fun open import Cat.Categories.Fun
module Monoidal = Cat.Category.Monad.Monoidal
module Kleisli = Cat.Category.Monad.Kleisli
-- | The monoidal- and kleisli presentation of monads are equivalent. -- | The monoidal- and kleisli presentation of monads are equivalent.
module _ {a b : Level} ( : Category a b) where module _ {a b : Level} ( : Category a b) where
open Cat.Category.NaturalTransformation using (NaturalTransformation ; propIsNatural) open Cat.Category.NaturalTransformation using (NaturalTransformation ; propIsNatural)
private private
module = Category module = Category
open using (Object ; Arrow ; identity ; _<<<_ ; _>>>_) open using (Object ; Arrow ; identity ; _<<<_ ; _>>>_)
module M = Monoidal
module K = Kleisli
module _ (m : M.RawMonad) where module Monoidal = Cat.Category.Monad.Monoidal
open M.RawMonad m module Kleisli = Cat.Category.Monad.Kleisli
forthRaw : K.RawMonad module _ (m : Monoidal.RawMonad) where
K.RawMonad.omap forthRaw = Romap open Monoidal.RawMonad m
K.RawMonad.pure forthRaw = pureT _
K.RawMonad.bind forthRaw = bind
module _ {raw : M.RawMonad} (m : M.IsMonad raw) where toKleisliRaw : Kleisli.RawMonad
private Kleisli.RawMonad.omap toKleisliRaw = Romap
module MI = M.IsMonad m Kleisli.RawMonad.pure toKleisliRaw = pure
forthIsMonad : K.IsMonad (forthRaw raw) Kleisli.RawMonad.bind toKleisliRaw = bind
K.IsMonad.isIdentity forthIsMonad = snd MI.isInverse
K.IsMonad.isNatural forthIsMonad = MI.isNatural
K.IsMonad.isDistributive forthIsMonad = MI.isDistributive
forth : M.Monad K.Monad module _ {raw : Monoidal.RawMonad} (m : Monoidal.IsMonad raw) where
Kleisli.Monad.raw (forth m) = forthRaw (M.Monad.raw m) open Monoidal.IsMonad m
Kleisli.Monad.isMonad (forth m) = forthIsMonad (M.Monad.isMonad m)
module _ (m : K.Monad) where open Kleisli.RawMonad (toKleisliRaw raw) using (_>=>_)
open K.Monad m toKleisliIsMonad : Kleisli.IsMonad (toKleisliRaw raw)
Kleisli.IsMonad.isIdentity toKleisliIsMonad = begin
bind pure ≡⟨⟩
join <<< (fmap pure) ≡⟨ snd isInverse
identity
Kleisli.IsMonad.isNatural toKleisliIsMonad f = begin
pure >=> f ≡⟨⟩
pure >>> bind f ≡⟨⟩
bind f <<< pure ≡⟨⟩
(join <<< fmap f) <<< pure ≡⟨ isNatural f
f
Kleisli.IsMonad.isDistributive toKleisliIsMonad f g = begin
bind g >>> bind f ≡⟨⟩
(join <<< fmap f) <<< (join <<< fmap g) ≡⟨ isDistributive f g
join <<< fmap (join <<< fmap f <<< g) ≡⟨⟩
bind (g >=> f)
-- Kleisli.IsMonad.isDistributive toKleisliIsMonad = isDistributive
backRaw : M.RawMonad toKleisli : Monoidal.Monad Kleisli.Monad
M.RawMonad.R backRaw = R Kleisli.Monad.raw (toKleisli m) = toKleisliRaw (Monoidal.Monad.raw m)
M.RawMonad.pureNT backRaw = pureNT Kleisli.Monad.isMonad (toKleisli m) = toKleisliIsMonad (Monoidal.Monad.isMonad m)
M.RawMonad.joinNT backRaw = joinNT
private module _ (m : Kleisli.Monad) where
open M.RawMonad backRaw renaming open Kleisli.Monad m
toMonoidalRaw : Monoidal.RawMonad
Monoidal.RawMonad.R toMonoidalRaw = R
Monoidal.RawMonad.pureNT toMonoidalRaw = pureNT
Monoidal.RawMonad.joinNT toMonoidalRaw = joinNT
open Monoidal.RawMonad toMonoidalRaw renaming
( join to join* ( join to join*
; pure to pure* ; pure to pure*
; bind to bind* ; bind to bind*
; fmap to fmap* ; fmap to fmap*
) ) using ()
module R = Functor (M.RawMonad.R backRaw)
backIsMonad : M.IsMonad backRaw toMonoidalIsMonad : Monoidal.IsMonad toMonoidalRaw
M.IsMonad.isAssociative backIsMonad = begin Monoidal.IsMonad.isAssociative toMonoidalIsMonad = begin
join* <<< R.fmap join* ≡⟨⟩ join* <<< fmap join* ≡⟨⟩
join <<< fmap join ≡⟨ isNaturalForeign join <<< fmap join ≡⟨ isNaturalForeign
join <<< join join <<< join
M.IsMonad.isInverse backIsMonad {X} = inv-l , inv-r Monoidal.IsMonad.isInverse toMonoidalIsMonad {X} = inv-l , inv-r
where where
inv-l = begin inv-l = begin
join <<< pure ≡⟨ fst isInverse join <<< pure ≡⟨ fst isInverse
identity identity
inv-r = begin inv-r = begin
joinT X <<< R.fmap (pureT X) ≡⟨⟩ join* <<< fmap* pure* ≡⟨⟩
join <<< fmap pure ≡⟨ snd isInverse join <<< fmap pure ≡⟨ snd isInverse
identity identity
back : K.Monad M.Monad toMonoidal : Kleisli.Monad Monoidal.Monad
Monoidal.Monad.raw (back m) = backRaw m Monoidal.Monad.raw (toMonoidal m) = toMonoidalRaw m
Monoidal.Monad.isMonad (back m) = backIsMonad m Monoidal.Monad.isMonad (toMonoidal m) = toMonoidalIsMonad m
module _ (m : K.Monad) where module _ (m : Kleisli.Monad) where
private private
open K.Monad m open Kleisli.Monad m
bindEq : {X Y} bindEq : {X Y}
K.RawMonad.bind (forthRaw (backRaw m)) {X} {Y} Kleisli.RawMonad.bind (toKleisliRaw (toMonoidalRaw m)) {X} {Y}
K.RawMonad.bind (K.Monad.raw m) bind
bindEq {X} {Y} = begin bindEq {X} {Y} = funExt lem
K.RawMonad.bind (forthRaw (backRaw m)) ≡⟨⟩
(λ f join <<< fmap f) ≡⟨⟩
(λ f bind (f >>> pure) >>> bind identity) ≡⟨ funExt lem
(λ f bind f) ≡⟨⟩
bind
where where
lem : (f : Arrow X (omap Y)) lem : (f : Arrow X (omap Y))
bind (f >>> pure) >>> bind identity bind (f >>> pure) >>> bind identity
bind f bind f
lem f = begin lem f = begin
join <<< fmap f
≡⟨⟩
bind (f >>> pure) >>> bind identity bind (f >>> pure) >>> bind identity
≡⟨ isDistributive _ _ ≡⟨ isDistributive _ _
bind ((f >>> pure) >=> identity)
≡⟨⟩
bind ((f >>> pure) >>> bind identity) bind ((f >>> pure) >>> bind identity)
≡⟨ cong bind .isAssociative ≡⟨ cong bind .isAssociative
bind (f >>> (pure >>> bind identity)) bind (f >>> (pure >>> bind identity))
≡⟨⟩
bind (f >>> (pure >=> identity))
≡⟨ cong (λ φ bind (f >>> φ)) (isNatural _) ≡⟨ cong (λ φ bind (f >>> φ)) (isNatural _)
bind (f >>> identity) bind (f >>> identity)
≡⟨ cong bind .leftIdentity ≡⟨ cong bind .leftIdentity
bind f bind f
forthRawEq : forthRaw (backRaw m) K.Monad.raw m toKleisliRawEq : toKleisliRaw (toMonoidalRaw m) Kleisli.Monad.raw m
K.RawMonad.omap (forthRawEq _) = omap Kleisli.RawMonad.omap (toKleisliRawEq i) = (begin
K.RawMonad.pure (forthRawEq _) = pure Kleisli.RawMonad.omap (toKleisliRaw (toMonoidalRaw m)) ≡⟨⟩
K.RawMonad.bind (forthRawEq i) = bindEq i Monoidal.RawMonad.Romap (toMonoidalRaw m) ≡⟨⟩
omap
) i
Kleisli.RawMonad.pure (toKleisliRawEq i) = (begin
Kleisli.RawMonad.pure (toKleisliRaw (toMonoidalRaw m)) ≡⟨⟩
Monoidal.RawMonad.pure (toMonoidalRaw m) ≡⟨⟩
pure
) i
Kleisli.RawMonad.bind (toKleisliRawEq i) = bindEq i
fortheq : (m : K.Monad) forth (back m) m toKleislieq : (m : Kleisli.Monad) toKleisli (toMonoidal m) m
fortheq m = K.Monad≡ (forthRawEq m) toKleislieq m = Kleisli.Monad≡ (toKleisliRawEq m)
module _ (m : M.Monad) where module _ (m : Monoidal.Monad) where
private private
open M.Monad m open Monoidal.Monad m
module KM = K.Monad (forth m) -- module KM = Kleisli.Monad (toKleisli m)
open Kleisli.Monad (toKleisli m) renaming
( bind to bind* ; omap to omap* ; join to join*
; fmap to fmap* ; pure to pure* ; R to R*)
using ()
module R = Functor R module R = Functor R
omapEq : KM.omap Romap omapEq : omap* Romap
omapEq = refl omapEq = refl
bindEq : {X Y} {f : Arrow X (Romap Y)} KM.bind f bind f bindEq : {X Y} {f : Arrow X (Romap Y)} bind* f bind f
bindEq {X} {Y} {f} = begin bindEq {X} {Y} {f} = begin
KM.bind f ≡⟨⟩ bind* f ≡⟨⟩
joinT Y <<< fmap f ≡⟨⟩ join <<< fmap f ≡⟨⟩
bind f bind f
joinEq : {X} KM.join joinT X joinEq : {X} join* joinT X
joinEq {X} = begin joinEq {X} = begin
KM.join ≡⟨⟩ join* ≡⟨⟩
KM.bind identity ≡⟨⟩ bind* identity ≡⟨⟩
bind identity ≡⟨⟩ bind identity ≡⟨⟩
joinT X <<< fmap identity ≡⟨ cong (λ φ _ <<< φ) R.isIdentity join <<< fmap identity ≡⟨ cong (λ φ _ <<< φ) R.isIdentity
joinT X <<< identity ≡⟨ .rightIdentity join <<< identity ≡⟨ .rightIdentity
joinT X join
fmapEq : {A B} KM.fmap {A} {B} fmap fmapEq : {A B} fmap* {A} {B} fmap
fmapEq {A} {B} = funExt (λ f begin fmapEq {A} {B} = funExt (λ f begin
KM.fmap f ≡⟨⟩ fmap* f ≡⟨⟩
KM.bind (f >>> KM.pure) ≡⟨⟩ bind* (f >>> pure*) ≡⟨⟩
bind (f >>> pureT _) ≡⟨⟩ bind (f >>> pure) ≡⟨⟩
fmap (f >>> pureT B) >>> joinT B ≡⟨⟩ fmap (f >>> pure) >>> join ≡⟨⟩
fmap (f >>> pureT B) >>> joinT B ≡⟨ cong (λ φ φ >>> joinT B) R.isDistributive fmap (f >>> pure) >>> join ≡⟨ cong (λ φ φ >>> joinT B) R.isDistributive
fmap f >>> fmap (pureT B) >>> joinT B ≡⟨ .isAssociative fmap f >>> fmap pure >>> join ≡⟨ .isAssociative
joinT B <<< fmap (pureT B) <<< fmap f ≡⟨ cong (λ φ φ <<< fmap f) (snd isInverse) join <<< fmap pure <<< fmap f ≡⟨ cong (λ φ φ <<< fmap f) (snd isInverse)
identity <<< fmap f ≡⟨ .leftIdentity identity <<< fmap f ≡⟨ .leftIdentity
fmap f fmap f
) )
rawEq : Functor.raw KM.R Functor.raw R rawEq : Functor.raw R* Functor.raw R
RawFunctor.omap (rawEq i) = omapEq i RawFunctor.omap (rawEq i) = omapEq i
RawFunctor.fmap (rawEq i) = fmapEq i RawFunctor.fmap (rawEq i) = fmapEq i
Req : M.RawMonad.R (backRaw (forth m)) R Req : Monoidal.RawMonad.R (toMonoidalRaw (toKleisli m)) R
Req = Functor≡ rawEq Req = Functor≡ rawEq
pureTEq : M.RawMonad.pureT (backRaw (forth m)) pureT pureTEq : Monoidal.RawMonad.pureT (toMonoidalRaw (toKleisli m)) pureT
pureTEq = funExt (λ X refl) pureTEq = refl
pureNTEq : (λ i NaturalTransformation Functors.identity (Req i)) pureNTEq : (λ i NaturalTransformation Functors.identity (Req i))
[ M.RawMonad.pureNT (backRaw (forth m)) pureNT ] [ Monoidal.RawMonad.pureNT (toMonoidalRaw (toKleisli m)) pureNT ]
pureNTEq = lemSigP (λ i propIsNatural Functors.identity (Req i)) _ _ pureTEq pureNTEq = lemSigP (λ i propIsNatural Functors.identity (Req i)) _ _ pureTEq
joinTEq : M.RawMonad.joinT (backRaw (forth m)) joinT joinTEq : Monoidal.RawMonad.joinT (toMonoidalRaw (toKleisli m)) joinT
joinTEq = funExt (λ X begin joinTEq = funExt (λ X begin
M.RawMonad.joinT (backRaw (forth m)) X ≡⟨⟩ Monoidal.RawMonad.joinT (toMonoidalRaw (toKleisli m)) X ≡⟨⟩
KM.join ≡⟨⟩ join* ≡⟨⟩
joinT X <<< fmap identity ≡⟨ cong (λ φ joinT X <<< φ) R.isIdentity join <<< fmap identity ≡⟨ cong (λ φ join <<< φ) R.isIdentity
joinT X <<< identity ≡⟨ .rightIdentity join <<< identity ≡⟨ .rightIdentity
joinT X ) join )
joinNTEq : (λ i NaturalTransformation F[ Req i Req i ] (Req i)) joinNTEq : (λ i NaturalTransformation F[ Req i Req i ] (Req i))
[ M.RawMonad.joinNT (backRaw (forth m)) joinNT ] [ Monoidal.RawMonad.joinNT (toMonoidalRaw (toKleisli m)) joinNT ]
joinNTEq = lemSigP (λ i propIsNatural F[ Req i Req i ] (Req i)) _ _ joinTEq joinNTEq = lemSigP (λ i propIsNatural F[ Req i Req i ] (Req i)) _ _ joinTEq
backRawEq : backRaw (forth m) M.Monad.raw m toMonoidalRawEq : toMonoidalRaw (toKleisli m) Monoidal.Monad.raw m
M.RawMonad.R (backRawEq i) = Req i Monoidal.RawMonad.R (toMonoidalRawEq i) = Req i
M.RawMonad.pureNT (backRawEq i) = pureNTEq i Monoidal.RawMonad.pureNT (toMonoidalRawEq i) = pureNTEq i
M.RawMonad.joinNT (backRawEq i) = joinNTEq i Monoidal.RawMonad.joinNT (toMonoidalRawEq i) = joinNTEq i
backeq : (m : M.Monad) back (forth m) m toMonoidaleq : (m : Monoidal.Monad) toMonoidal (toKleisli m) m
backeq m = M.Monad≡ (backRawEq m) toMonoidaleq m = Monoidal.Monad≡ (toMonoidalRawEq m)
eqv : isEquiv M.Monad K.Monad forth
eqv = gradLemma forth back fortheq backeq
open import Cat.Equivalence open import Cat.Equivalence
Monoidal≊Kleisli : M.Monad K.Monad Monoidal≊Kleisli : Monoidal.Monad Kleisli.Monad
Monoidal≊Kleisli = forth , back , funExt backeq , funExt fortheq Monoidal≊Kleisli = toKleisli , toMonoidal , funExt toMonoidaleq , funExt toKleislieq
Monoidal≡Kleisli : M.Monad K.Monad Monoidal≡Kleisli : Monoidal.Monad Kleisli.Monad
Monoidal≡Kleisli = isoToPath Monoidal≊Kleisli Monoidal≡Kleisli = isoToPath Monoidal≊Kleisli
grpdKleisli : isGrpd Kleisli.Monad
grpdKleisli = Kleisli.grpdMonad
grpdMonoidal : isGrpd Monoidal.Monad
grpdMonoidal = subst {P = isGrpd}
(sym Monoidal≡Kleisli) grpdKleisli
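The equivalence established above — `toKleisli` and `toMonoidal` are mutually inverse — is the formal counterpart of the familiar fact that a monad can be presented either by `fmap`/`pure`/`join` or by `pure`/`bind`. A minimal Python sketch of the two presentations and the translations between them, instantiated at the list monad (all names here are illustrative, not from the library):

```python
# Monoidal presentation of the list monad: fmap, pure, join.
def fmap(f, xs):
    return [f(x) for x in xs]

def pure(x):
    return [x]

def join(xss):
    return [x for xs in xss for x in xs]

# Kleisli presentation derived from it: bind f = join . fmap f
def bind(f, xs):
    return join(fmap(f, xs))

# And back again: fmap f = bind (pure . f) and join = bind id --
# the round trip the proofs above show is the identity.
def fmap_from_bind(f, xs):
    return bind(lambda x: pure(f(x)), xs)

def join_from_bind(xss):
    return bind(lambda ys: ys, xss)
```

For example, `bind(lambda x: [x, -x], [1, 2])` yields `[1, -1, 2, -2]`, and the derived `fmap_from_bind`/`join_from_bind` agree with the originals.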
@@ -5,6 +5,7 @@ The Kleisli formulation of monads
open import Agda.Primitive open import Agda.Primitive
open import Cat.Prelude open import Cat.Prelude
open import Cat.Equivalence
open import Cat.Category open import Cat.Category
open import Cat.Category.Functor as F open import Cat.Category.Functor as F
@@ -230,6 +231,7 @@ record IsMonad (raw : RawMonad) : Set ℓ where
m m
record Monad : Set where record Monad : Set where
no-eta-equality
field field
raw : RawMonad raw : RawMonad
isMonad : IsMonad raw isMonad : IsMonad raw
@@ -264,3 +266,82 @@ module _ {m n : Monad} (eq : Monad.raw m ≡ Monad.raw n) where
Monad≡ : m n Monad≡ : m n
Monad.raw (Monad≡ i) = eq i Monad.raw (Monad≡ i) = eq i
Monad.isMonad (Monad≡ i) = eqIsMonad i Monad.isMonad (Monad≡ i) = eqIsMonad i
module _ where
private
module _ (x y : RawMonad) (p q : x y) (a b : p q) where
eq0-helper : isGrpd (Object Object)
eq0-helper = grpdPi (λ a .groupoidObject)
eq0 : cong (cong RawMonad.omap) a cong (cong RawMonad.omap) b
eq0 = eq0-helper
(RawMonad.omap x) (RawMonad.omap y)
(cong RawMonad.omap p) (cong RawMonad.omap q)
(cong (cong RawMonad.omap) a) (cong (cong RawMonad.omap) b)
eq1-helper : (omap : Object Object) isGrpd ({X : Object} [ X , omap X ])
eq1-helper f = grpdPiImpl (setGrpd .arrowsAreSets)
postulate
eq1 : PathP (λ i PathP
(λ j
PathP (λ k {X : Object} [ X , eq0 i j k X ])
(RawMonad.pure x) (RawMonad.pure y))
(λ i RawMonad.pure (p i)) (λ i RawMonad.pure (q i)))
(cong-d (cong-d RawMonad.pure) a) (cong-d (cong-d RawMonad.pure) b)
RawMonad' : Set _
RawMonad' = Σ (Object Object) (λ omap
({X : Object} [ X , omap X ])
× ({X Y : Object} [ X , omap Y ] [ omap X , omap Y ])
)
grpdRawMonad' : isGrpd RawMonad'
grpdRawMonad' = grpdSig (grpdPi (λ _ .groupoidObject)) λ _ grpdSig (grpdPiImpl (setGrpd .arrowsAreSets)) (λ _ grpdPiImpl (grpdPiImpl (grpdPi (λ _ setGrpd .arrowsAreSets))))
toRawMonad : RawMonad' RawMonad
RawMonad.omap (toRawMonad (a , b , c)) = a
RawMonad.pure (toRawMonad (a , b , c)) = b
RawMonad.bind (toRawMonad (a , b , c)) = c
IsMonad' : RawMonad' Set _
IsMonad' raw = M.IsIdentity × M.IsNatural × M.IsDistributive
where
module M = RawMonad (toRawMonad raw)
grpdIsMonad' : (m : RawMonad') isGrpd (IsMonad' m)
grpdIsMonad' m = grpdSig (propGrpd (propIsIdentity (toRawMonad m)))
λ _ grpdSig (propGrpd (propIsNatural (toRawMonad m)))
λ _ propGrpd (propIsDistributive (toRawMonad m))
Monad' = Σ RawMonad' IsMonad'
grpdMonad' = grpdSig grpdRawMonad' grpdIsMonad'
toMonad : Monad' Monad
Monad.raw (toMonad x) = toRawMonad (fst x)
isIdentity (Monad.isMonad (toMonad x)) = fst (snd x)
isNatural (Monad.isMonad (toMonad x)) = fst (snd (snd x))
isDistributive (Monad.isMonad (toMonad x)) = snd (snd (snd x))
fromMonad : Monad Monad'
fromMonad m = (M.omap , M.pure , M.bind)
, M.isIdentity , M.isNatural , M.isDistributive
where
module M = Monad m
e : Monad' Monad
e = fromIsomorphism _ _ (toMonad , fromMonad , (funExt λ _ refl) , funExt eta-refl)
where
-- Monads don't have eta-equality
eta-refl : (x : Monad) toMonad (fromMonad x) x
eta-refl =
(λ x λ
{ i .Monad.raw Monad.raw x
; i .Monad.isMonad Monad.isMonad x}
)
grpdMonad : isGrpd Monad
grpdMonad = equivPreservesNType
{n = (S (S (S ⟨-2⟩)))}
e grpdMonad'
where
open import Cubical.NType
@@ -18,7 +18,7 @@ private
open Category using (Object ; Arrow ; identity ; _<<<_) open Category using (Object ; Arrow ; identity ; _<<<_)
open import Cat.Category.NaturalTransformation open import Cat.Category.NaturalTransformation
using (NaturalTransformation ; Transformation ; Natural) using (NaturalTransformation ; Transformation ; Natural ; NaturalTransformation≡)
record RawMonad : Set where record RawMonad : Set where
field field
@@ -78,15 +78,39 @@ record IsMonad (raw : RawMonad) : Set ℓ where
isNatural : IsNatural isNatural : IsNatural
isNatural {X} {Y} f = begin isNatural {X} {Y} f = begin
joinT Y <<< R.fmap f <<< pureT X ≡⟨ sym .isAssociative join <<< fmap f <<< pure ≡⟨ sym .isAssociative
joinT Y <<< (R.fmap f <<< pureT X) ≡⟨ cong (λ φ joinT Y <<< φ) (sym (pureN f)) join <<< (fmap f <<< pure) ≡⟨ cong (λ φ join <<< φ) (sym (pureN f))
joinT Y <<< (pureT (R.omap Y) <<< f) ≡⟨ .isAssociative join <<< (pure <<< f) ≡⟨ .isAssociative
joinT Y <<< pureT (R.omap Y) <<< f ≡⟨ cong (λ φ φ <<< f) (fst isInverse) join <<< pure <<< f ≡⟨ cong (λ φ φ <<< f) (fst isInverse)
identity <<< f ≡⟨ .leftIdentity identity <<< f ≡⟨ .leftIdentity
f f
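The `isNatural` law proved in this hunk — `join <<< fmap f <<< pure ≡ f` — is the categorical form of the monadic left-identity law. A quick sanity check on the list monad in Python (helper names are illustrative):

```python
def fmap(f, xs):
    return [f(x) for x in xs]

def pure(x):
    return [x]

def join(xss):
    return [x for xs in xss for x in xs]

# isNatural, read right to left: first pure, then fmap f, then join.
def natural_lhs(f, x):
    return join(fmap(f, pure(x)))
# The law states natural_lhs(f, x) == f(x) for every Kleisli arrow f.
```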
isDistributive : IsDistributive isDistributive : IsDistributive
isDistributive {X} {Y} {Z} g f = sym aux isDistributive {X} {Y} {Z} g f = begin
join <<< fmap g <<< (join <<< fmap f)
≡⟨ Category.isAssociative
join <<< fmap g <<< join <<< fmap f
≡⟨ cong (_<<< fmap f) (sym .isAssociative)
(join <<< (fmap g <<< join)) <<< fmap f
≡⟨ cong (λ φ φ <<< fmap f) (cong (_<<<_ join) (sym (joinN g)))
(join <<< (join <<< R².fmap g)) <<< fmap f
≡⟨ cong (_<<< fmap f) .isAssociative
((join <<< join) <<< R².fmap g) <<< fmap f
≡⟨⟩
join <<< join <<< R².fmap g <<< fmap f
≡⟨ sym .isAssociative
(join <<< join) <<< (R².fmap g <<< fmap f)
≡⟨ cong (λ φ φ <<< (R².fmap g <<< fmap f)) (sym isAssociative)
(join <<< fmap join) <<< (R².fmap g <<< fmap f)
≡⟨ sym .isAssociative
join <<< (fmap join <<< (R².fmap g <<< fmap f))
≡⟨ cong (_<<<_ join) .isAssociative
join <<< (fmap join <<< R².fmap g <<< fmap f)
≡⟨⟩
join <<< (fmap join <<< fmap (fmap g) <<< fmap f)
≡⟨ cong (λ φ join <<< φ) (sym distrib3)
join <<< fmap (join <<< fmap g <<< f)
where where
module R² = Functor F[ R R ] module R² = Functor F[ R R ]
distrib3 : {A B C D} {a : Arrow C D} {b : Arrow B C} {c : Arrow A B} distrib3 : {A B C D} {a : Arrow C D} {b : Arrow B C} {c : Arrow A B}
@@ -96,33 +120,9 @@ record IsMonad (raw : RawMonad) : Set ℓ where
R.fmap (a <<< b <<< c) ≡⟨ R.isDistributive R.fmap (a <<< b <<< c) ≡⟨ R.isDistributive
R.fmap (a <<< b) <<< R.fmap c ≡⟨ cong (_<<< _) R.isDistributive R.fmap (a <<< b) <<< R.fmap c ≡⟨ cong (_<<< _) R.isDistributive
R.fmap a <<< R.fmap b <<< R.fmap c R.fmap a <<< R.fmap b <<< R.fmap c
aux = begin
joinT Z <<< R.fmap (joinT Z <<< R.fmap g <<< f)
≡⟨ cong (λ φ joinT Z <<< φ) distrib3
joinT Z <<< (R.fmap (joinT Z) <<< R.fmap (R.fmap g) <<< R.fmap f)
≡⟨⟩
joinT Z <<< (R.fmap (joinT Z) <<< R².fmap g <<< R.fmap f)
≡⟨ cong (_<<<_ (joinT Z)) (sym .isAssociative)
joinT Z <<< (R.fmap (joinT Z) <<< (R².fmap g <<< R.fmap f))
≡⟨ .isAssociative
(joinT Z <<< R.fmap (joinT Z)) <<< (R².fmap g <<< R.fmap f)
≡⟨ cong (λ φ φ <<< (R².fmap g <<< R.fmap f)) isAssociative
(joinT Z <<< joinT (R.omap Z)) <<< (R².fmap g <<< R.fmap f)
≡⟨ .isAssociative
joinT Z <<< joinT (R.omap Z) <<< R².fmap g <<< R.fmap f
≡⟨⟩
((joinT Z <<< joinT (R.omap Z)) <<< R².fmap g) <<< R.fmap f
≡⟨ cong (_<<< R.fmap f) (sym .isAssociative)
(joinT Z <<< (joinT (R.omap Z) <<< R².fmap g)) <<< R.fmap f
≡⟨ cong (λ φ φ <<< R.fmap f) (cong (_<<<_ (joinT Z)) (joinN g))
(joinT Z <<< (R.fmap g <<< joinT Y)) <<< R.fmap f
≡⟨ cong (_<<< R.fmap f) .isAssociative
joinT Z <<< R.fmap g <<< joinT Y <<< R.fmap f
≡⟨ sym (Category.isAssociative )
joinT Z <<< R.fmap g <<< (joinT Y <<< R.fmap f)
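The `isDistributive` law rewritten in this hunk — `join <<< fmap g <<< (join <<< fmap f) ≡ join <<< fmap (join <<< fmap g <<< f)` — corresponds to associativity of Kleisli composition. A spot check on the list monad (Python, illustrative names):

```python
def fmap(f, xs):
    return [f(x) for x in xs]

def join(xss):
    return [x for xs in xss for x in xs]

# Left-hand side: run f, flatten, run g, flatten.
def lhs(f, g, xs):
    return join(fmap(g, join(fmap(f, xs))))

# Right-hand side: fuse g into f first, then fmap and flatten once.
def rhs(f, g, xs):
    return join(fmap(lambda x: join(fmap(g, f(x))), xs))
```

Both sides agree on any inputs; e.g. with `f = lambda x: [x, x + 1]` and `g = lambda y: [y, -y]` the two computations produce the same flattened list.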
record Monad : Set where record Monad : Set where
no-eta-equality
field field
raw : RawMonad raw : RawMonad
isMonad : IsMonad raw isMonad : IsMonad raw
@@ -1,15 +1,18 @@
{- {-
This module provides construction 2.3 in [voe] This module provides construction 2.3 in [voe]
-} -}
{-# OPTIONS --cubical --caching #-} {-# OPTIONS --cubical #-}
module Cat.Category.Monad.Voevodsky where module Cat.Category.Monad.Voevodsky where
open import Cat.Prelude open import Cat.Prelude
open import Cat.Equivalence
open import Cat.Category open import Cat.Category
open import Cat.Category.Functor as F open import Cat.Category.Functor as F
import Cat.Category.NaturalTransformation import Cat.Category.NaturalTransformation
open import Cat.Category.Monad open import Cat.Category.Monad
import Cat.Category.Monad.Monoidal as Monoidal
import Cat.Category.Monad.Kleisli as Kleisli
open import Cat.Categories.Fun open import Cat.Categories.Fun
open import Cat.Equivalence open import Cat.Equivalence
@@ -24,6 +27,7 @@ module voe {ℓa ℓb : Level} (ℂ : Category ℓa ℓb) where
module §2-3 (omap : Object Object) (pure : {X : Object} Arrow X (omap X)) where module §2-3 (omap : Object Object) (pure : {X : Object} Arrow X (omap X)) where
record §1 : Set where record §1 : Set where
no-eta-equality
open M open M
field field
@@ -74,12 +78,11 @@ module voe {ℓa ℓb : Level} (ℂ : Category ℓa ℓb) where
isMonad : IsMonad rawMnd isMonad : IsMonad rawMnd
toMonad : Monad toMonad : Monad
toMonad = record toMonad .Monad.raw = rawMnd
{ raw = rawMnd toMonad .Monad.isMonad = isMonad
; isMonad = isMonad
}
record §2 : Set where record §2 : Set where
no-eta-equality
open K open K
field field
@@ -96,28 +99,24 @@ module voe {ℓa ℓb : Level} (ℂ : Category ℓa ℓb) where
isMonad : IsMonad rawMnd isMonad : IsMonad rawMnd
toMonad : Monad toMonad : Monad
toMonad = record toMonad .Monad.raw = rawMnd
{ raw = rawMnd toMonad .Monad.isMonad = isMonad
; isMonad = isMonad
}
§1-fromMonad : (m : M.Monad) §2-3.§1 (M.Monad.Romap m) (λ {X} M.Monad.pureT m X) module _ (m : M.Monad) where
§1-fromMonad m = record
{ fmap = Functor.fmap R
; RisFunctor = Functor.isFunctor R
; pureN = pureN
; join = λ {X} joinT X
; joinN = joinN
; isMonad = M.Monad.isMonad m
}
where
open M.Monad m open M.Monad m
§1-fromMonad : §2-3.§1 (M.Monad.Romap m) (λ {X} M.Monad.pureT m X)
§1-fromMonad .§2-3.§1.fmap = Functor.fmap R
§1-fromMonad .§2-3.§1.RisFunctor = Functor.isFunctor R
§1-fromMonad .§2-3.§1.pureN = pureN
§1-fromMonad .§2-3.§1.join {X} = joinT X
§1-fromMonad .§2-3.§1.joinN = joinN
§1-fromMonad .§2-3.§1.isMonad = M.Monad.isMonad m
§2-fromMonad : (m : K.Monad) §2-3.§2 (K.Monad.omap m) (K.Monad.pure m) §2-fromMonad : (m : K.Monad) §2-3.§2 (K.Monad.omap m) (K.Monad.pure m)
§2-fromMonad m = record §2-fromMonad m .§2-3.§2.bind = K.Monad.bind m
{ bind = K.Monad.bind m §2-fromMonad m .§2-3.§2.isMonad = K.Monad.isMonad m
; isMonad = K.Monad.isMonad m
}
-- | In the following we seek to transform the equivalence `Monoidal≃Kleisli` -- | In the following we seek to transform the equivalence `Monoidal≃Kleisli`
-- | to talk about voevodsky's construction. -- | to talk about voevodsky's construction.
@@ -145,67 +144,103 @@ module voe {ℓa ℓb : Level} (ℂ : Category ℓa ℓb) where
forthEq : m (forth back) m m forthEq : m (forth back) m m
forthEq m = begin forthEq m = begin
(forth back) m ≡⟨⟩ §2-fromMonad
-- In full gory detail: (Monoidal→Kleisli
( §2-fromMonad (§2-3.§1.toMonad
Monoidal→Kleisli (§1-fromMonad (Kleisli→Monoidal (§2-3.§2.toMonad m)))))
§2-3.§1.toMonad ≡⟨ cong-d (§2-fromMonad Monoidal→Kleisli) (lemmaz (Kleisli→Monoidal (§2-3.§2.toMonad m)))
§1-fromMonad §2-fromMonad
Kleisli→Monoidal ((Monoidal→Kleisli Kleisli→Monoidal)
§2-3.§2.toMonad (§2-3.§2.toMonad m))
) m ≡⟨⟩ -- fromMonad and toMonad are inverses -- Below is the fully normalized goal and context with
( §2-fromMonad -- `funExt` made abstract.
Monoidal→Kleisli --
Kleisli→Monoidal -- Goal: PathP (λ _ → §2-3.§2 omap (λ {z} → pure))
§2-3.§2.toMonad -- (§2-fromMonad
) m ≡⟨ cong (λ φ φ m) t -- (.Cat.Category.Monad.toKleisli
-- Monoidal→Kleisli and Kleisli→Monoidal are inverses -- (.Cat.Category.Monad.toMonoidal (§2-3.§2.toMonad m))))
-- I should be able to prove this using congruence and `lem` below. -- (§2-fromMonad (§2-3.§2.toMonad m))
( §2-fromMonad -- Have: PathP
§2-3.§2.toMonad -- (λ i →
) m ≡⟨⟩ -- §2-3.§2 K.IsMonad.omap
( §2-fromMonad -- (K.RawMonad.pure
§2-3.§2.toMonad -- (K.Monad.raw
) m ≡⟨⟩ -- fromMonad and toMonad are inverses -- (funExt (λ m₁ → K.Monad≡ (.Cat.Category.Monad.toKleisliRawEq m₁))
-- i (§2-3.§2.toMonad m)))))
-- (§2-fromMonad
-- (.Cat.Category.Monad.toKleisli
-- (.Cat.Category.Monad.toMonoidal (§2-3.§2.toMonad m))))
-- (§2-fromMonad (§2-3.§2.toMonad m))
≡⟨ ( cong-d {x = Monoidal→Kleisli Kleisli→Monoidal} {y = idFun K.Monad} (\ φ §2-fromMonad (φ (§2-3.§2.toMonad m))) re-ve)
(§2-fromMonad §2-3.§2.toMonad) m
≡⟨ lemma
m m
where where
t' : ((Monoidal→Kleisli Kleisli→Monoidal) §2-3.§2.toMonad {omap} {pure}) lemma : (§2-fromMonad §2-3.§2.toMonad) m m
§2-3.§2.toMonad §2-3.§2.bind (lemma i) = §2-3.§2.bind m
t' = cong (\ φ φ §2-3.§2.toMonad) re-ve §2-3.§2.isMonad (lemma i) = §2-3.§2.isMonad m
t : (§2-fromMonad (Monoidal→Kleisli Kleisli→Monoidal) §2-3.§2.toMonad {omap} {pure}) lemmaz : m §2-3.§1.toMonad (§1-fromMonad m) m
(§2-fromMonad §2-3.§2.toMonad) M.Monad.raw (lemmaz m i) = M.Monad.raw m
t = cong-d (\ f §2-fromMonad f) t' M.Monad.isMonad (lemmaz m i) = M.Monad.isMonad m
u : (§2-fromMonad (Monoidal→Kleisli Kleisli→Monoidal) §2-3.§2.toMonad) m
(§2-fromMonad §2-3.§2.toMonad) m
u = cong (\ φ φ m) t
backEq : m (back forth) m m backEq : m (back forth) m m
backEq m = begin backEq m = begin
(back forth) m ≡⟨⟩ §1-fromMonad
( §1-fromMonad (Kleisli→Monoidal
Kleisli→Monoidal (§2-3.§2.toMonad
§2-3.§2.toMonad (§2-fromMonad (Monoidal→Kleisli (§2-3.§1.toMonad m)))))
§2-fromMonad ≡⟨ cong-d (§1-fromMonad Kleisli→Monoidal) (lemma (Monoidal→Kleisli (§2-3.§1.toMonad m)))
Monoidal→Kleisli §1-fromMonad
§2-3.§1.toMonad ((Kleisli→Monoidal Monoidal→Kleisli)
) m ≡⟨⟩ -- fromMonad and toMonad are inverses (§2-3.§1.toMonad m))
( §1-fromMonad -- Below is the fully normalized `agda2-goal-and-context`
Kleisli→Monoidal -- with `funExt` made abstract.
Monoidal→Kleisli --
§2-3.§1.toMonad -- Goal: PathP (λ _ → §2-3.§1 omap (λ {X} → pure))
) m ≡⟨ cong (λ φ φ m) t -- Monoidal→Kleisli and Kleisli→Monoidal are inverses -- (§1-fromMonad
( §1-fromMonad -- (.Cat.Category.Monad.toMonoidal
§2-3.§1.toMonad -- (.Cat.Category.Monad.toKleisli (§2-3.§1.toMonad m))))
) m ≡⟨⟩ -- fromMonad and toMonad are inverses -- (§1-fromMonad (§2-3.§1.toMonad m))
-- Have: PathP
-- (λ i →
-- §2-3.§1
-- (RawFunctor.omap
-- (Functor.raw
-- (M.RawMonad.R
-- (M.Monad.raw
-- (funExt
-- (λ m₁ → M.Monad≡ (.Cat.Category.Monad.toMonoidalRawEq m₁)) i
-- (§2-3.§1.toMonad m))))))
-- (λ {X} →
-- fst
-- (M.RawMonad.pureNT
-- (M.Monad.raw
-- (funExt
-- (λ m₁ → M.Monad≡ (.Cat.Category.Monad.toMonoidalRawEq m₁)) i
-- (§2-3.§1.toMonad m))))
-- X))
-- (§1-fromMonad
-- (.Cat.Category.Monad.toMonoidal
-- (.Cat.Category.Monad.toKleisli (§2-3.§1.toMonad m))))
-- (§1-fromMonad (§2-3.§1.toMonad m))
≡⟨ (cong-d (\ φ §1-fromMonad (φ (§2-3.§1.toMonad m))) ve-re)
§1-fromMonad (§2-3.§1.toMonad m)
≡⟨ lemmaz
m m
where where
t : §1-fromMonad Kleisli→Monoidal Monoidal→Kleisli §2-3.§1.toMonad lemmaz : §1-fromMonad (§2-3.§1.toMonad m) m
§1-fromMonad §2-3.§1.toMonad §2-3.§1.fmap (lemmaz i) = §2-3.§1.fmap m
-- Why does `re-ve` not satisfy this goal? §2-3.§1.join (lemmaz i) = §2-3.§1.join m
t i m = §1-fromMonad (ve-re i (§2-3.§1.toMonad m)) §2-3.§1.RisFunctor (lemmaz i) = §2-3.§1.RisFunctor m
§2-3.§1.pureN (lemmaz i) = §2-3.§1.pureN m
voe-isEquiv : isEquiv (§2-3.§1 omap pure) (§2-3.§2 omap pure) forth §2-3.§1.joinN (lemmaz i) = §2-3.§1.joinN m
voe-isEquiv = gradLemma forth back forthEq backEq §2-3.§1.isMonad (lemmaz i) = §2-3.§1.isMonad m
lemma : m §2-3.§2.toMonad (§2-fromMonad m) m
K.Monad.raw (lemma m i) = K.Monad.raw m
K.Monad.isMonad (lemma m i) = K.Monad.isMonad m
equiv-2-3 : §2-3.§1 omap pure §2-3.§2 omap pure equiv-2-3 : §2-3.§1 omap pure §2-3.§2 omap pure
equiv-2-3 = forth , voe-isEquiv equiv-2-3 = fromIsomorphism _ _
( forth , back
, funExt backEq , funExt forthEq
)
@@ -11,7 +11,7 @@ module _ {ℓa ℓb : Level} (ℂ : Category ℓa ℓb) where
module _ (A B : Object) where module _ (A B : Object) where
record RawProduct : Set (a b) where record RawProduct : Set (a b) where
no-eta-equality -- no-eta-equality
field field
object : Object object : Object
fst : [ object , A ] fst : [ object , A ]
@@ -55,16 +55,18 @@ module _ {ℓa ℓb : Level} (ℂ : Category ℓa ℓb) where
× [ g snd ] × [ g snd ]
] ]
module _ {a b : Level} { : Category a b} {A B : Category.Object } where module _ {a b : Level} { : Category a b}
(let module = Category ) {𝒜 : .Object} where
private private
open Category module _ (raw : RawProduct 𝒜 ) where
module _ (raw : RawProduct A B) where private
module _ (x y : IsProduct A B raw) where open Category hiding (raw)
module _ (x y : IsProduct 𝒜 raw) where
private private
module x = IsProduct x module x = IsProduct x
module y = IsProduct y module y = IsProduct y
module _ {X : Object} (f : [ X , A ]) (g : [ X , B ]) where module _ {X : Object} (f : [ X , 𝒜 ]) (g : [ X , ]) where
module _ (f×g : Arrow X y.object) where module _ (f×g : Arrow X y.object) where
help : isProp ({y} ( [ y.fst y ] f) P.× ( [ y.snd y ] g) f×g y) help : isProp ({y} ( [ y.fst y ] f) P.× ( [ y.snd y ] g) f×g y)
help = propPiImpl (λ _ propPi (λ _ arrowsAreSets _ _)) help = propPiImpl (λ _ propPi (λ _ arrowsAreSets _ _))
@@ -77,188 +79,22 @@ module _ {ℓa ℓb : Level} {ℂ : Category ℓa ℓb} {A B : Category.Object
propIsProduct' : x y propIsProduct' : x y
propIsProduct' i = record { ump = λ f g prodAux f g i } propIsProduct' i = record { ump = λ f g prodAux f g i }
propIsProduct : isProp (IsProduct A B raw) propIsProduct : isProp (IsProduct 𝒜 raw)
propIsProduct = propIsProduct' propIsProduct = propIsProduct'
Product≡ : {x y : Product A B} (Product.raw x Product.raw y) x y Product≡ : {x y : Product 𝒜 } (Product.raw x Product.raw y) x y
Product≡ {x} {y} p i = record { raw = p i ; isProduct = q i } Product≡ {x} {y} p i = record { raw = p i ; isProduct = q i }
where where
q : (λ i IsProduct A B (p i)) [ Product.isProduct x Product.isProduct y ] q : (λ i IsProduct 𝒜 (p i)) [ Product.isProduct x Product.isProduct y ]
q = lemPropF propIsProduct p q = lemPropF propIsProduct p
module Try0 {a b : Level} { : Category a b}
(let module = Category ) {𝒜 : .Object} where
open P open P
open import Cat.Categories.Span
module _ where open Category (span 𝒜 )
raw : RawCategory _ _
raw = record
{ Object = Σ[ X .Object ] .Arrow X 𝒜 × .Arrow X
; Arrow = λ{ (A , a0 , a1) (B , b0 , b1)
Σ[ f .Arrow A B ]
[ b0 f ] a0
× [ b1 f ] a1
}
; identity = λ{ {X , f , g} .identity {X} , .rightIdentity , .rightIdentity}
; _<<<_ = λ { {_ , a0 , a1} {_ , b0 , b1} {_ , c0 , c1} (f , f0 , f1) (g , g0 , g1)
(f .<<< g)
, (begin
[ c0 [ f g ] ] ≡⟨ .isAssociative
[ [ c0 f ] g ] ≡⟨ cong (λ φ [ φ g ]) f0
[ b0 g ] ≡⟨ g0
a0
)
, (begin
[ c1 [ f g ] ] ≡⟨ .isAssociative
[ [ c1 f ] g ] ≡⟨ cong (λ φ [ φ g ]) f1
[ b1 g ] ≡⟨ g1
a1
)
}
}
module _ where
open RawCategory raw
propEqs : {X' : Object}{Y' : Object} (let X , xa , xb = X') (let Y , ya , yb = Y')
(xy : .Arrow X Y) isProp ( [ ya xy ] xa × [ yb xy ] xb)
propEqs xs = propSig (.arrowsAreSets _ _) (\ _ .arrowsAreSets _ _)
arrowEq : {X Y : Object} {f g : Arrow X Y} fst f fst g f g
arrowEq {X} {Y} {f} {g} p = λ i p i , lemPropF propEqs p {snd f} {snd g} i
private
isAssociative : IsAssociative
isAssociative {f = f , f0 , f1} {g , g0 , g1} {h , h0 , h1} = arrowEq .isAssociative
isIdentity : IsIdentity identity
isIdentity {AA@(A , a0 , a1)} {BB@(B , b0 , b1)} {f , f0 , f1} = arrowEq .leftIdentity , arrowEq .rightIdentity
arrowsAreSets : ArrowsAreSets
arrowsAreSets {X , x0 , x1} {Y , y0 , y1}
= sigPresSet .arrowsAreSets λ a propSet (propEqs _)
isPreCat : IsPreCategory raw
IsPreCategory.isAssociative isPreCat = isAssociative
IsPreCategory.isIdentity isPreCat = isIdentity
IsPreCategory.arrowsAreSets isPreCat = arrowsAreSets
open IsPreCategory isPreCat
module _ {𝕏 𝕐 : Object} where
open Σ 𝕏 renaming (fst to X ; snd to x)
open Σ x renaming (fst to xa ; snd to xb)
open Σ 𝕐 renaming (fst to Y ; snd to y)
open Σ y renaming (fst to ya ; snd to yb)
open import Cat.Equivalence using (composeIso) renaming (_≅_ to _≅_)
step0
: ((X , xa , xb) (Y , ya , yb))
(Σ[ p (X Y) ] (PathP (λ i .Arrow (p i) 𝒜) xa ya) × (PathP (λ i .Arrow (p i) ) xb yb))
step0
= (λ p cong fst p , cong-d (fst snd) p , cong-d (snd snd) p)
-- , (λ x → λ i → fst x i , (fst (snd x) i) , (snd (snd x) i))
, (λ{ (p , q , r) Σ≡ p λ i q i , r i})
, funExt (λ{ p refl})
, funExt (λ{ (p , q , r) refl})
step1
: (Σ[ p (X Y) ] (PathP (λ i .Arrow (p i) 𝒜) xa ya) × (PathP (λ i .Arrow (p i) ) xb yb))
Σ (X .≊ Y) (λ iso
let p = .isoToId iso
in
( PathP (λ i .Arrow (p i) 𝒜) xa ya)
× PathP (λ i .Arrow (p i) ) xb yb
)
step1
= symIso
(isoSigFst
{A = (X .≊ Y)}
{B = (X Y)}
(.groupoidObject _ _)
{Q = \ p (PathP (λ i .Arrow (p i) 𝒜) xa ya) × (PathP (λ i .Arrow (p i) ) xb yb)}
.isoToId
(symIso (_ , .asTypeIso {X} {Y}) .snd)
)
step2
: Σ (X .≊ Y) (λ iso
let p = .isoToId iso
in
( PathP (λ i .Arrow (p i) 𝒜) xa ya)
× PathP (λ i .Arrow (p i) ) xb yb
)
((X , xa , xb) (Y , ya , yb))
step2
= ( λ{ (iso@(f , f~ , inv-f) , p , q)
( f , sym (.domain-twist-sym iso p) , sym (.domain-twist-sym iso q))
, ( f~ , sym (.domain-twist iso p) , sym (.domain-twist iso q))
, arrowEq (fst inv-f)
, arrowEq (snd inv-f)
}
)
, (λ{ (f , f~ , inv-f , inv-f~)
let
iso : X .≊ Y
iso = fst f , fst f~ , cong fst inv-f , cong fst inv-f~
p : X Y
p = .isoToId iso
pA : .Arrow X 𝒜 .Arrow Y 𝒜
pA = cong (λ x .Arrow x 𝒜) p
pB : .Arrow X .Arrow Y
pB = cong (λ x .Arrow x ) p
k0 = begin
coe pB xb ≡⟨ .coe-dom iso
xb .<<< fst f~ ≡⟨ snd (snd f~)
yb
k1 = begin
coe pA xa ≡⟨ .coe-dom iso
xa .<<< fst f~ ≡⟨ fst (snd f~)
ya
in iso , coe-lem-inv k1 , coe-lem-inv k0})
, funExt (λ x lemSig
(λ x propSig prop0 (λ _ prop1))
_ _
(Σ≡ refl (.propIsomorphism _ _ _)))
, funExt (λ{ (f , _) lemSig propIsomorphism _ _ (Σ≡ refl (propEqs _ _ _))})
where
prop0 : {x} isProp (PathP (λ i .Arrow (.isoToId x i) 𝒜) xa ya)
prop0 {x} = pathJ (λ y p x isProp (PathP (λ i .Arrow (p i) 𝒜) xa x)) (λ x .arrowsAreSets _ _) Y (.isoToId x) ya
prop1 : {x} isProp (PathP (λ i .Arrow (.isoToId x i) ) xb yb)
prop1 {x} = pathJ (λ y p x isProp (PathP (λ i .Arrow (p i) ) xb x)) (λ x .arrowsAreSets _ _) Y (.isoToId x) yb
-- One thing to watch out for here is that the isomorphisms going forwards
-- must compose to give idToIso
iso
: ((X , xa , xb) (Y , ya , yb))
((X , xa , xb) (Y , ya , yb))
iso = step0 step1 step2
where
infixl 5 _⊙_
_⊙_ = composeIso
equiv1
: ((X , xa , xb) (Y , ya , yb))
((X , xa , xb) (Y , ya , yb))
equiv1 = _ , fromIso _ _ (snd iso)
univalent : Univalent
univalent = univalenceFrom≃ equiv1
isCat : IsCategory raw
IsCategory.isPreCategory isCat = isPreCat
IsCategory.univalent isCat = univalent
cat : Category _ _
cat = record
{ raw = raw
; isCategory = isCat
}
open Category cat
lemma : Terminal Product 𝒜 lemma : Terminal Product 𝒜
lemma = fromIsomorphism Terminal (Product 𝒜 ) (f , g , inv) lemma = fromIsomorphism Terminal (Product 𝒜 ) (f , g , inv)
-- C-x 8 RET MATHEMATICAL BOLD SCRIPT CAPITAL A
-- 𝒜
where where
f : Terminal Product 𝒜 f : Terminal Product 𝒜
f ((X , x0 , x1) , uniq) = p f ((X , x0 , x1) , uniq) = p
@@ -348,10 +184,7 @@ module _ {ℓa ℓb : Level} {ℂ : Category ℓa ℓb} {A B : Category.Object
module y = HasProducts y module y = HasProducts y
productEq : x.product y.product productEq : x.product y.product
productEq = funExt λ A funExt λ B Try0.propProduct _ _ productEq = funExt λ A funExt λ B propProduct _ _
propHasProducts : isProp (HasProducts ) propHasProducts : isProp (HasProducts )
propHasProducts x y i = record { product = productEq x y i } propHasProducts x y i = record { product = productEq x y i }
fmap≡ : {A : Set} {a0 a1 : A} {B : Set} (f : A B) Path a0 a1 Path (f a0) (f a1)
fmap≡ = cong
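The product development above characterizes a product by its universal property: any pair of arrows `f : X → A` and `g : X → B` factors uniquely through the product (the `ump` field). In the category of sets this is ordinary pairing; a small Python sketch under that reading (names are illustrative):

```python
# Product of sets: tuples with the two projections.
def fst(p):
    return p[0]

def snd(p):
    return p[1]

# The mediating arrow <f, g> : X -> A x B of the universal property.
def pair(f, g):
    return lambda x: (f(x), g(x))
# Uniqueness: any h with fst . h == f and snd . h == g must equal pair(f, g),
# since h(x) is a tuple determined by its two components.
```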
@@ -9,9 +9,9 @@ open import Cat.Category.Functor
open import Cat.Category.NaturalTransformation open import Cat.Category.NaturalTransformation
renaming (module Properties to F) renaming (module Properties to F)
using () using ()
open import Cat.Categories.Opposite
open import Cat.Categories.Fun using (module Fun)
open import Cat.Categories.Sets hiding (presheaf) open import Cat.Categories.Sets hiding (presheaf)
open import Cat.Categories.Fun using (module Fun)
-- There is no (small) category of categories. So we won't use _⇑_ from -- There is no (small) category of categories. So we won't use _⇑_ from
-- `HasExponential` -- `HasExponential`
@@ -105,3 +105,38 @@ module _ {ℓ : Level} {A : Set ℓ} where
ntypeCumulative : {n m} n ≤′ m HasLevel n ⟩₋₂ A HasLevel m ⟩₋₂ A ntypeCumulative : {n m} n ≤′ m HasLevel n ⟩₋₂ A HasLevel m ⟩₋₂ A
ntypeCumulative {m} ≤′-refl lvl = lvl ntypeCumulative {m} ≤′-refl lvl = lvl
ntypeCumulative {n} {suc m} (≤′-step le) lvl = HasLevel+1 m ⟩₋₂ (ntypeCumulative le lvl) ntypeCumulative {n} {suc m} (≤′-step le) lvl = HasLevel+1 m ⟩₋₂ (ntypeCumulative le lvl)
grpdPi : {b : Level} {B : A Set b}
((a : A) isGrpd (B a)) isGrpd ((a : A) (B a))
grpdPi = piPresNType (S (S (S ⟨-2⟩)))
grpdPiImpl : {b : Level} {B : A Set b}
({a : A} isGrpd (B a)) isGrpd ({a : A} (B a))
grpdPiImpl {B = B} g = equivPreservesNType {A = Expl} {B = Impl} {n = one} e (grpdPi (λ a g))
where
one = (S (S (S ⟨-2⟩)))
t : ({a : A} HasLevel one (B a))
t = g
Impl = {a : A} B a
Expl = (a : A) B a
expl : Impl Expl
expl f a = f {a}
impl : Expl Impl
impl f {a} = f a
e : Expl Impl
e = impl , (gradLemma impl expl (λ f refl) (λ f refl))
setGrpd : isSet A isGrpd A
setGrpd = ntypeCumulative
{suc (suc zero)} {suc (suc (suc zero))}
(≤′-step ≤′-refl)
propGrpd : isProp A isGrpd A
propGrpd = ntypeCumulative
{suc zero} {suc (suc (suc zero))}
(≤′-step (≤′-step ≤′-refl))
module _ {a b : Level} {A : Set a} {B : A Set b} where
open TLevel
grpdSig : isGrpd A ( a isGrpd (B a)) isGrpd (Σ A B)
grpdSig = sigPresNType {n = S (S (S ⟨-2⟩))}
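The lemmas above (`ntypeCumulative`, `setGrpd`, `propGrpd`) instantiate the cumulativity of h-levels. In the HoTT book's numbering, where groupoids sit at level 1, the definitions and the cumulativity principle these lemmas rely on can be written as:

```latex
\begin{aligned}
\mathsf{is\text{-}(-2)\text{-}type}(A) &:\equiv \mathsf{isContr}(A)\\
\mathsf{is\text{-}(n{+}1)\text{-}type}(A) &:\equiv \textstyle\prod_{x,y:A} \mathsf{is\text{-}n\text{-}type}(x =_A y)\\
\mathsf{is\text{-}n\text{-}type}(A) &\to \mathsf{is\text{-}(n{+}1)\text{-}type}(A)
\end{aligned}
```

Propositions (level −1) and sets (level 0) are thus in particular groupoids (level 1), which is exactly what `propGrpd` and `setGrpd` record.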