Various changes proposed by Andreas

This commit is contained in:
Frederik Hanghøj Iversen 2018-05-15 16:08:29 +02:00
parent 4d73514ab5
commit aced19e990
10 changed files with 343 additions and 172 deletions

1
.gitignore vendored Normal file
View file

@ -0,0 +1 @@
html/

View file

@ -3,3 +3,9 @@ build: src/**.agda
clean:
	find src -name "*.agdai" -type f -delete
html:
agda --html src/Cat.agda
upload: html
scp -r html/ remote11.chalmers.se:www/cat/doc/

View file

@ -1,21 +1,23 @@
\chapter*{Abstract}
The usual notion of propositional equality in intensional type theory
is restrictive. For instance it does not admit functional
extensionality or univalence. This poses a severe limitation on both
what is \emph{provable} and the \emph{reusability} of proofs. Recent
developments have, however, resulted in cubical type theory, which
permits a constructive proof of these two important notions. The
programming language Agda has been extended with capabilities for
working in such a cubical setting. This thesis will explore the
usefulness of this extension in the context of category theory.

The thesis will motivate and explain why propositional equality in
Cubical Agda is more expressive than in standard Agda. Alternative
approaches to Cubical Agda will be presented and their pros and cons
will be explained. It will emphasize why it is useful to have a
constructive interpretation of univalence. As an example of this, two
formulations of monads will be presented: namely monads in the
monoidal form and monads in the Kleisli form.

Finally the thesis will explain the challenges that a developer will
face when working with Cubical Agda and give some techniques to
overcome these difficulties. It will also try to suggest how further
work can help alleviate some of these challenges.

View file

@ -9,7 +9,7 @@ admissible. Cubical Agda is more expressive, but there are certain issues that
arise that are not present in standard Agda. For one thing ITT and standard Agda
enjoy Uniqueness of Identity Proofs (UIP). This is not the case in Cubical
Agda. Instead there exists a hierarchy of types with increasing
\nomen{homotopical structure}{homotopy levels}. It turns out to be useful to build the
formalization with this hierarchy in mind as it can simplify proofs
considerably. Another issue one must overcome in Cubical Agda is when a type has
a field whose type depends on a previous field. In this case paths between such

View file

@ -1,13 +1,13 @@
\chapter{Cubical Agda}
\section{Propositional equality}
Judgmental equality in Agda is a feature of the type system. It is something that
can be checked automatically by the type checker: In the example from the
introduction $n + 0$ can be judged to be equal to $n$ simply by expanding the
definition of $+$.

On the other hand, propositional equality is something defined within the
language itself. Propositional equality cannot be derived automatically. The
normal definition of propositional equality is an inductive data type. Cubical Agda
discards this type in favor of a new primitive that has certain computational
properties exclusive to it.
@ -22,15 +22,13 @@ two points of $A$; $a_0, a_1 \tp A$ we can form the type:
a_0 \equiv a_1 \tp \MCU
\end{align}
%
In Agda this is defined as an inductive data type with a single constructor
for any $a \tp A$:
%
\begin{align}
\refl \tp a \equiv a
\end{align}
%
There also exists a related notion of \emph{heterogeneous} equality which allows
for equating points of different types. In this case given two types $A, B \tp
\MCU$ and two points $a \tp A$, $b \tp B$ we can construct the type:
@ -39,7 +37,7 @@ for equating points of different types. In this case given two types $A, B \tp
a \cong b \tp \MCU
\end{align}
%
This is likewise defined as an inductive data type with a single constructor
for any $a \tp A$:
%
\begin{align}
@ -56,17 +54,17 @@ Judgmental equality in Cubical Agda is encapsulated with the type:
\Path \tp (P \tp I → \MCU) → P\ 0 → P\ 1 → \MCU
\end{equation}
%
$I$ is a special data type (\TODO{that also has special computational properties
AFAIK}) called the index set. $I$ can be thought of simply as the interval on
the real numbers from $0$ to $1$. $P$ is a family of types over the index set
$I$. I will sometimes refer to $P$ as the \nomenindex{path space} of some path $p \tp
\Path\ P\ a\ b$. By this token $P\ 0$ then corresponds to the type at the
left endpoint and $P\ 1$ to the type at the right endpoint. The type is called
$\Path$ because it is connected with paths in homotopy theory. The intuition
behind this is that $\Path$ describes paths in $\MCU$ -- i.e. between types. For
a path $p$ and a point $p\ i$ the index $i$ describes how far along the path
one has moved. An inhabitant of $\Path\ P\ a_0\ a_1$ is a (dependent) function,
$p$, from the index set to the path space:
%
$$
p \tp \prod_{i \tp I} P\ i
@ -80,16 +78,17 @@ endpoints. I.e.:
p\ 1 & = a_1
\end{align*}
%
The notion of \nomenindex{homogeneous equality} is recovered when $P$ does not
depend on its argument. That is, for $A \tp \MCU$ and $a_0, a_1 \tp A$ the
homogeneous equality between $a_0$ and $a_1$ is the type:
%
$$
a_0 \equiv a_1 \defeq \Path\ (\lambda i \to A)\ a_0\ a_1
$$
%
I will generally prefer to use the notation
$a \equiv b$ when talking about non-dependent paths and use the notation
$\Path\ (\lambda i \to P\ i)\ a\ b$ when the path space is of particular
interest.

With this definition we can also recover reflexivity. That is, for any $A \tp
@ -102,7 +101,7 @@ With this definition we can also recover reflexivity. That is, for any $A \tp
\end{aligned}
\end{equation}
%
Here the path space is $P \defeq \lambda i \to A$ and it satisfies $P\ i = A$
definitionally. So to inhabit it is to give a path $I \to A$ which is
judgmentally $a$ at either endpoint. This is satisfied by the constant path;
i.e. the path that stays at $a$ at any index $i$.
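%
As a small illustration, the constant path can be written out explicitly in the
notation used above (this is only a sketch of the idea; the concrete definition
in the formalization may differ syntactically):
%
$$
\refl \defeq \lambda\; i \mto a
$$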
@ -143,11 +142,12 @@ With this we can now prove the desired equality $f \equiv g$ from section
%
Paths have some other important properties, but they are not the focus of
this thesis. \TODO{Refer the reader somewhere for more info.}
%
\section{Homotopy levels}
In ITT all equality proofs are identical (in a closed context). This means that,
in some sense, any two inhabitants of $a \equiv b$ are ``equally good'' -- they
do not have any interesting structure. This is referred to as Uniqueness of
Identity Proofs (UIP). Unfortunately it is not possible to have a type theory
with both univalence and UIP. Instead we have a hierarchy of types with an
increasing amount of homotopical structure. At the bottom of this hierarchy we
have the set of contractible types:
@ -162,7 +162,7 @@ have the set of contractible types:
\end{equation}
%
The first component of $\isContr\ A$ is called ``the center of contraction''.
Under the propositions-as-types interpretation of type theory $\isContr\ A$ can
be thought of as ``the true proposition $A$''. And indeed $\top$ is
contractible:
\begin{equation*}
@ -181,7 +181,7 @@ The next step in the hierarchy is the set of mere propositions:
\end{aligned}
\end{equation}
%
One can think of $\isProp\ A$ as the set of true and false propositions. And
indeed both $\top$ and $\bot$ are propositions:
%
\begin{align*}
@ -189,9 +189,9 @@ indeed both $\top$ and $\bot$ are propositions:
λ\varnothing\ \varnothing & \tp \isProp\ ⊥
\end{align*}
%
The term $\varnothing$ is used here to denote an impossible pattern. It is a
theorem that if a mere proposition $A$ is inhabited, then it is also contractible.
If it is not inhabited it is equivalent to the empty type (or false
proposition).\TODO{Cite!!}

I will refer to a type $A \tp \MCU$ as a \emph{mere} proposition if I want to
@ -225,8 +225,9 @@ As the reader may have guessed the next step in the hierarchy is the type:
%
And so it continues. In fact we can generalize this family of types by indexing
them with a natural number. For historical reasons, though, the bottom of the
hierarchy, the contractible types, is said to be a \nomen{-2-type}{homotopy
levels}, propositions are \nomen{-1-types}{homotopy levels}, (homotopical)
sets are \nomen{0-types}{homotopy levels} and so on\ldots

Just as with paths, homotopical sets are not at the center of focus for this
thesis. But I mention here some properties that will be relevant for this
@ -253,8 +254,8 @@ of these theorems here, as they will be used later in chapter
\subsection{Path induction}
\label{sec:pathJ}
The induction principle for paths intuitively gives us a way to reason about a
type family indexed by a path by only considering if said path is $\refl$ (the
\nomen{base case}{path induction}). For \emph{based path induction}, that equality is \emph{based}
at some element $a \tp A$.

Let a type $A \tp \MCU$ and an element of the type $a \tp A$ be given. $a$ is
said to be the base of the induction. Given a family of types:
@ -292,7 +293,7 @@ D & \tp \prod_{b' \tp A} \prod_{p \tp a ≡ b'} \MCU \\
D\ b'\ p' & \defeq \var{sym}\ (\var{sym}\ p') ≡ p'
\end{align*}
%
The base case will then be:
%
\begin{align*}
d & \tp \var{sym}\ (\var{sym}\ \refl) ≡ \refl \\
@ -326,7 +327,7 @@ over the family:
T\ d'\ r' & \defeq \trans\ p\ (\trans\ q\ r') ≡ \trans\ (\trans\ p\ q)\ r'
\end{align*}
%
So the base case is proven with $t$, which is defined as:
%
\begin{align*}
\trans\ p\ (\trans\ q\ \refl) &
@ -342,7 +343,7 @@ conclusion \ref{eq:cum-trans} is inhabited by the term:
\pathJ\ T\ t\ d\ r
\end{align*}
%
We shall see another application of path induction in \ref{eq:pathJ-example}.

\subsection{Paths over propositions}
\label{sec:lemPropF}
@ -357,7 +358,7 @@ a heterogeneous path between any two elements of $p_0 \tp P\ a_0$ and $p_1 \tp
P\ a_1$:
%
$$
\lemPropF\ \var{propP}\ p \tp \Path\ (\lambda\; i \mto P\ (p\ i))\ p_0\ p_1
$$
%
This is quite a mouthful. So let me try to show how this is a very general and

View file

@ -57,8 +57,8 @@ composition and identity, laws; preservation of identity and composition) plus
the extra condition that it is univalent -- namely that you can get an equality
of two objects from an isomorphism.

I make no distinction between a pre-category and a real category (as in the
[HoTT]-sense). A pre-category in my implementation would be a category sans the
witness to univalence.

I also prove that being a category is a proposition. This gives rise to an

View file

@ -2,33 +2,44 @@
\label{ch:implementation}
This implementation formalizes the following concepts:
%
\newcommand{\sourcebasepath}{http://web.student.chalmers.se/~hanghj/cat/doc/html/}
\newcommand{\sourcelink}[1]{\href{\sourcebasepath#1.html}{\texttt{#1}}}
\begin{center}
\begin{tabular}{ l l }
Name & Link \\
\hline
Categories & \sourcelink{Cat.Category} \\
Functors & \sourcelink{Cat.Category.Functor} \\
Products & \sourcelink{Cat.Category.Products} \\
Exponentials & \sourcelink{Cat.Category.Exponentials} \\
Cartesian closed categories & \sourcelink{Cat.Category.CartesianClosed} \\
Natural transformations & \sourcelink{Cat.Category.NaturalTransformation} \\
Yoneda embedding & \sourcelink{Cat.Category.Yoneda} \\
Monads & \sourcelink{Cat.Category.Monads} \\
%% Categories & \null \\
%%
Opposite category &
\href{\sourcebasepath Cat.Category.html#22744}{\texttt{Cat.Category.Opposite}} \\
Category of sets & \sourcelink{Cat.Categories.Sets} \\
Span category &
\href{\sourcebasepath Cat.Category.Product.html#2919}{\texttt{Cat.Category.Product.Span}} \\
%%
\end{tabular}
\end{center}
%
Furthermore the following items have been partly formalized:
%
\begin{center}
\begin{tabular}{ l l }
Name & Link \\
\hline
Category of categories & \sourcelink{Cat.Categories.Cat} \\
Category of relations & \sourcelink{Cat.Categories.Rel} \\
Category of functors & \sourcelink{Cat.Categories.Fun} \\
Free category & \sourcelink{Cat.Categories.Free} \\
Monoids & \sourcelink{Cat.Category.Monoid} \\
\end{tabular}
\end{center}
%
As well as a range of results about these; e.g. I have shown that the
category of sets has products. In the following I aim to demonstrate some of the
@ -70,22 +81,22 @@ constituents of a category and can be found in typical mathematical expositions
on the topic. We, however, impose one further requirement on what it means to be
a category, namely that the type of arrows forms a set.

Such categories are called \nomen{1-categories}{1-category}. It is possible to relax
this requirement. This would lead to the notion of higher categories (\cite[p.
307]{hott-2013}). For the purpose of this project, however, this report will
restrict itself to 1-categories\index{1-category}. Generalizing this work to
higher categories would be a very natural extension.

Raw categories satisfying all of the above requirements are called
\nomenindex{pre-categories}. As a further requirement to be a proper category we
require it to be univalent. Before we can define this, I must introduce two more
definitions: If we let $p$ be a witness to the identity law, which formally is:
%
\begin{equation}
\label{eq:identity}
\var{IsIdentity} \defeq
\prod_{A\ B \tp \Object} \prod_{f \tp \Arrow\ A\ B}
\id \lll f \equiv f \x f \lll \id \equiv f
\end{equation}
%
Then we can construct the identity isomorphism $\idIso \tp \identity,
@ -190,8 +201,8 @@ This example illustrates nicely how we can use these combinators to reason about
the other homotopy levels. These combinators are however not applicable in
situations where we want to reason about other types -- e.g. types we have
defined ourselves. For instance, after we have proven that all the projections
of pre-categories are propositions, we would like to bundle this up to show
that the type of pre-categories is also a proposition. Formally:
%
\begin{equation}
\label{eq:propIsPreCategory}
@ -264,7 +275,7 @@ a priori that equality proofs are unique.
The situation is a bit more complicated when we have a dependent type. For
instance, when we want to show that $\IsCategory$ is a mere proposition.
$\IsCategory$ is a record with two fields, a witness to being a pre-category and
the univalence condition. Recall that the univalence condition is indexed by the
identity proof. So to follow the same recipe as above, let $a\ b \tp
\IsCategory$ be given; to show them equal, we now need to give two paths. One homogeneous:
@ -423,7 +434,7 @@ equalities and isomorphisms (on arrows). It is worthwhile to dwell on this for a
few seconds. This type looks very similar to univalence for types and is
therefore perhaps a bit more intuitive to grasp the implications of. Of course
univalence for types (which is a proposition -- i.e. provable) does not imply
univalence of all pre-categories since morphisms in a category are not regular
functions -- instead they can be thought of as a generalization thereof. The
univalence criterion is therefore simply a way of restricting arrows
to behave similarly to maps.
@ -541,7 +552,7 @@ over-bar. So e.g. $\idToIso$ is a function in the underlying category and the
corresponding thing is denoted $\wideoverbar{\idToIso}$ in the opposite
category.

Showing that this forms a pre-category is rather straightforward.
%
$$
h \rrr (g \rrr f) \equiv h \rrr g \rrr f
@ -826,13 +837,12 @@ that there exists a unique arrow $\pi \tp \Arrow\ X\ (A \x B)$ satisfying
%
$\pi$ is called the product (arrow) of $f$ and $g$.

\subsection{Span category}
\newcommand\pairA{\mathcal{A}}
\newcommand\pairB{\mathcal{B}}
Given a base category $\bC$ and two objects in this category $\pairA$ and
$\pairB$ we can construct the \nomenindex{span category}.

The type of objects in this category will be an object in the underlying
category, $X$, and two arrows (also from the underlying category)
@ -964,7 +974,7 @@ $$
This we get from \ref{eq:productEqPrinc} and the fact that homotopical structure
is cumulative.

This finishes the proof that this is a valid pre-category.

\subsubsection{Univalence}
To prove that this is a proper category it must be shown that it is univalent.
@ -1151,9 +1161,10 @@ gory details.
%
\subsection{Propositionality of products}
%
Now that we have constructed the span category\index{span category} I will
demonstrate how to use this to prove that products are propositional. I will
do this by showing that terminal objects in this category are equivalent to
products:
%
\begin{align}
\var{Terminal} ≃ \var{Product}\ ℂ\ \mathcal{A}\ \mathcal{B}
@ -1178,7 +1189,7 @@ indeed a product. That is, for an object $Y$ and two arrows $y_𝒜 \tp
%
Since $X, x_𝒜, x_ℬ$ is a terminal object there is a \emph{unique} arrow from
this object to any other object, so also $Y, y_𝒜, y_ℬ$ in particular (which is
also an object in the span category). This arrow will play the role of $f$ and
it immediately satisfies \ref{eq:pairCondRev}. Any other arrow satisfying these
conditions will be equal since $f$ is unique.
@ -1262,14 +1273,15 @@ Denote the arrow-map of $\EndoR$ as $\fmap$, then this data must satisfy the
following laws:
%
\begin{align}
\label{eq:monad-monoidal-laws-0}
\join \lll \fmap\ \join
  & ≡ \join \lll \join \\
\label{eq:monad-monoidal-laws-1}
\join \lll \pure & ≡ \identity \\
\label{eq:monad-monoidal-laws-2}
\join \lll \fmap\ \pure & ≡ \identity
\end{align}
\newcommand\monoidallaws{\ref{eq:monad-monoidal-laws-0}, \ref{eq:monad-monoidal-laws-1} and \ref{eq:monad-monoidal-laws-2}}%
%
The implicit arguments to the arrows above have been left out and the objects
they range over are universally quantified.
@ -1279,13 +1291,13 @@ they range over are universally quantified.
The Kleisli formulation consists of the following data:
%
\begin{align}
\begin{split}
\label{eq:monad-kleisli-data}
\EndoR & \tp \Object → \Object \\
\pure & \tp % \prod_{X \tp Object}
\Arrow\ X\ (\EndoR\ X) \\
\bind & \tp % \prod_{X\;Y \tp Object}
\Arrow\ X\ (\EndoR\ Y) → \Arrow\ (\EndoR\ X)\ (\EndoR\ Y)
\end{split}
\end{align}
%
@ -1298,17 +1310,14 @@ is a regular maps on objects and a pair of arrows.
This data must satisfy:
%
\begin{align}
\label{eq:monad-kleisli-laws-0}
\bind\ \pure & ≡ \identity_{\EndoR\ X} \\
\label{eq:monad-kleisli-laws-1}
\pure \fish f & ≡ f \\
\label{eq:monad-kleisli-laws-2}
(\bind\ f) \rrr (\bind\ g) & ≡ \bind\ (f \fish g)
\end{align}
\newcommand\kleislilaws{\ref{eq:monad-kleisli-laws-0}, \ref{eq:monad-kleisli-laws-1} and \ref{eq:monad-kleisli-laws-2}}%
%
Here likewise the arrows $f \tp \Arrow\ X\ (\EndoR\ Y)$ and $g \tp
\Arrow\ Y\ (\EndoR\ Z)$ are universally quantified (as well as the objects they
@ -1317,10 +1326,6 @@ f \rrr (\bind\ g)$ . (\TODO{Better way to typeset $\fish$?})
\subsection{Equivalence of formulations}
%
The notation I have chosen here in the report
overloads e.g. $\pure$ to both refer to a natural transformation and an arrow.
This is of course not a coincidence as the arrow in the Kleisli formulation
@ -1329,15 +1334,141 @@ called $\pure$.
In the monoidal formulation we can define $\bind$:
%
\newcommand\joinX{\wideoverbar{\join}}%
\newcommand\bindX{\wideoverbar{\bind}}%
\newcommand\EndoRX{\wideoverbar{\EndoR}}%
\newcommand\pureX{\wideoverbar{\pure}}%
\newcommand\fmapX{\wideoverbar{\fmap}}%
\begin{align}
\bind\ f \defeq \joinX \lll \fmapX\ f
\end{align}
%
And likewise in the Kleisli formulation we can define $\join$:
%
\begin{align}
\join \defeq \bindX\ \identity
\end{align}
%
Here $\joinX$ corresponds to the arrow from the natural
transformation $\join$. $\bindX$ on the other hand corresponds to a
natural transformation constructed from $\bind$. It now remains to show that
this construction indeed gives rise to a monad. This will be done in two steps.
First we will assume that we have a monad in the monoidal form, $(\EndoR, \pure,
\join)$, and then show that $(\EndoR, \pure, \bind)$ is indeed a monad in the
Kleisli form. In the second part we will show the other direction.
\subsubsection{Monoidal to Kleisli}
Let $(\EndoR, \pure, \join)$ be given as in \ref{eq:monad-monoidal-data}
satisfying the laws \monoidallaws. For the data of the Kleisli
formulation we pick:
%
\begin{align}
\begin{split}
\EndoR & \defeq \EndoRX \\
\pure & \defeq \pureX \\
\bind\ f & \defeq \joinX \lll \fmapX\ f
\end{split}
\end{align}
%
$\EndoRX$ is the object map of the endo-functor $\EndoR$,
$\pureX$ and $\joinX$ are the arrows from the natural
transformations $\pure$ and $\join$ respectively. $\fmapX$ is the
arrow map of the endo-functor $\EndoR$. It now just remains to verify
the laws \kleislilaws. For \ref{eq:monad-kleisli-laws-0}:
%
\begin{align*}
\bind\ \pure & ≡
\join \lll (\fmap\ \pure) && \text{By definition} \\
& ≡ \identity && \text{By \ref{eq:monad-monoidal-laws-2}}
\end{align*}
%
For \ref{eq:monad-kleisli-laws-1}:
%
\begin{align*}
\pure \fish f
& \equiv %%%
\pure \ggg \bind\ f && \text{By definition} \\ & ≡
\bind\ f \lll \pure && \text{By definition} \\ & ≡
\joinX \lll \fmapX\ f \lll \pureX && \text{By definition} \\ & ≡
\joinX \lll \pureX \lll f && \text{$\pure$ is a natural transformation} \\ & ≡
\identity \lll f && \text{By \ref{eq:monad-monoidal-laws-1}} \\ & ≡
f && \text{Left identity}
\end{align*}
%
For \ref{eq:monad-kleisli-laws-2}:
\begin{align*}
(\bind\ f) \rrr (\bind\ g) & ≡
\bind\ g \lll \bind\ f
\\ & ≡
%% %%%%
\joinX \lll \fmapX\ g \lll \joinX \lll \fmapX\ f
&& \text{\dots} \\ & ≡
\joinX \lll \joinX \lll \fmapX^2\ g \lll \fmapX\ f
&& \text{$\join$ is a natural transformation} \\ & ≡
\joinX\ \lll \fmapX\ \joinX \lll \fmapX^2\ g \lll \fmapX\ f
&& \text{By \ref{eq:monad-monoidal-laws-0}} \\ & ≡
\joinX\ \lll \fmapX\ \joinX\ \lll \fmapX\ (\fmapX\ g) \lll \fmapX\ f
&& \text{} \\ & ≡
\joinX \lll \fmapX\ (\joinX \lll \fmapX\ g \lll f)
&& \text{Distributive law for functors} \\ & \equiv
%%%%
\bind\ (f \fish g)
\end{align*}
\subsubsection{Kleisli to Monoidal}
For the other direction we are given $(\EndoR, \pure, \bind)$ as in
\ref{eq:monad-kleisli-data} satisfying the laws \kleislilaws. For the data of
the monoidal formulation we pick:
%
\begin{align}
\begin{split}
\EndoR & \defeq \EndoRX \\
\pure & \defeq \pureX \\
\join & \defeq \bind\ \identity
\end{split}
\end{align}
%
Where $\EndoRX$ is the functor with object map $\EndoR$ and arrow map
$\fmapX\ f \defeq \bind\ (\pure \lll f)$, and $\pureX$ is the natural
transformation whose underlying arrow is $\pure$. We must now show the laws
\monoidallaws, but we must also verify that our choice of $\EndoRX$ actually is
a functor. I will omit this here. Instead we shall see how these two mappings
are indeed inverses.
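%
To give a feel for the kind of verification that is omitted, here is a sketch of
how \ref{eq:monad-monoidal-laws-1} would follow from the Kleisli laws with the
above choice of data (this is only an outline, not the proof as it appears in
the formalization):
%
\begin{align*}
\join \lll \pure & ≡ \pure \rrr \join \\
& ≡ \pure \rrr \bind\ \identity && \text{By definition of $\join$} \\
& ≡ \pure \fish \identity && \text{By definition of $\fish$} \\
& ≡ \identity && \text{By \ref{eq:monad-kleisli-laws-1}}
\end{align*}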
\subsubsection{Equivalence}
To prove that the two formulations are equivalent we must demonstrate that the
two mappings sketched above are indeed inverses of each other. If we name the
first mapping $\toKleisli$ and its proposed inverse $\toMonoidal$,
then we must show:
%
\begin{align}
\label{eq:monad-forwards}
\toKleisli \comp \toMonoidal & ≡ \identity \\
\label{eq:monad-backwards}
\toMonoidal \comp \toKleisli & ≡ \identity
\end{align}
%
For \ref{eq:monad-forwards} let $(\EndoR, \pure, \bind)$ be a monad in the
Kleisli form. In my formalization a proof that being a monad is a proposition
can be found. With this result in place we get an equality principle for
Kleisli monads that says that to equate two such monads it suffices to equate
their data parts. So it suffices to equate the data parts of the two sides of
\ref{eq:monad-forwards}. Such a proof is a triple equating the three projections
of \ref{eq:monad-kleisli-data}. The first two hold definitionally -- essentially
one just wraps and unwraps the morphism in a functor. For the last equation a
little more work is required:
%
\begin{align*}
\join \lll \fmap\ f & ≡
\fmap\ f \rrr \join \\ & ≡
\bind\ (f \rrr \pure) \rrr \bind\ \identity
&& \text{By definition of $\fmap$ and $\join$} \\ & ≡
\bind\ (f \rrr \pure \fish \identity)
&& \text{By \ref{eq:monad-kleisli-laws-2}} \\ & ≡
\bind\ (f \rrr \identity)
&& \text{By \ref{eq:monad-kleisli-laws-1}} \\ & ≡
\bind\ f
\end{align*}
%
For the other direction we can likewise verify that the maps $\EndoR$, $\bind$,
$\join$, and $\fmap$ are equal. The equality principle for functors gives us
that this is enough to show that the functor $\EndoR$ we construct is
identical. Similarly for the natural transformations we have that the naturality
condition is a proposition, so the paths between the maps are sufficient.
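%
Together, \ref{eq:monad-forwards} and \ref{eq:monad-backwards} exhibit the two
formulations as equivalent types. In a univalent setting this in particular
means that the two types of monads can be identified, roughly:
%
$$
\var{Monoidal} \simeq \var{Kleisli}
\qquad\text{and hence}\qquad
\var{Monoidal} \equiv \var{Kleisli}
$$
%
(The names $\var{Monoidal}$ and $\var{Kleisli}$ are schematic here; they stand
for the data of \ref{eq:monad-monoidal-data} and \ref{eq:monad-kleisli-data}
together with their respective laws.)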

View file

@ -9,19 +9,21 @@ limitations inherent in ITT and -- by extension -- Agda.
Consider the functions:
%
\begin{multicols}{2}
\noindent%
\begin{equation*}%
f \defeq \lambda\ (n \tp \bN) \to (0 + n \tp \bN)
\end{equation*}%
\begin{equation*}%
g \defeq \lambda\ (n \tp \bN) \to (n + 0 \tp \bN)
\end{equation*}%
\end{multicols}%
%
The term $n + 0$ is
\nomenindex{definitionally} equal to $n$, which we
write as $n + 0 = n$. This is also called
\nomenindex{judgmental equality}.
We call it definitional equality because the \emph{equality} arises
from the \emph{definition} of $+$, which is:
%
\begin{align*}
+ & \tp \bN \to \bN \to \bN \\
@ -29,11 +31,12 @@ which is:
n + (\suc{m}) & \defeq \suc{(n + m)}
\end{align*}
%
Note that $0 + n$ is \emph{not} definitionally equal to $n$. $0 + n$
is in normal form. I.e. there is no rule for $+$ whose left-hand side
matches this expression. We \emph{do}, however, have that they are
\nomen{propositionally}{propositional equality} equal, which we write
as $n + 0 \equiv n$. Propositional equality means that there is a
proof that exhibits this relation. Since equality is a transitive
relation we have that $n + 0 \equiv 0 + n$.
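%
A hedged sketch of a proof that $0 + n \equiv n$ (by induction on $n$; the
names $p$ and $\var{cong}$ are illustrative and need not coincide with those
used in the formalization):
%
\begin{align*}
p & \tp \prod_{n \tp \bN} 0 + n \equiv n \\
p\ 0 & \defeq \refl \\
p\ (\var{suc}\ n) & \defeq \var{cong}\ \var{suc}\ (p\ n)
\end{align*}
%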
Unfortunately we don't have $f \equiv g$.\footnote{Actually showing this is
@ -43,7 +46,8 @@ not true.} There is no way to construct a proof asserting the obvious
equivalence of $f$ and $g$ -- even though we can prove them equal for all
points. This is exactly the notion of equality of functions that we are
interested in; that they are equal for all inputs. We call this
\nomenindex{point-wise equality}, where the \emph{points} of a function refer
to its arguments.

In the context of category theory functional extensionality is e.g. needed to
@ -51,14 +55,14 @@ show that representable functors are indeed functors. The representable functor
for a category $\bC$ and a fixed object $A \in \bC$ is defined to be:
%
\begin{align*}
\fmap \defeq \lambda\ X \to \Hom_{\bC}(A, X)
\end{align*}
%
The proof obligation that this satisfies the identity law of functors
($\fmap\ \idFun \equiv \idFun$) thus becomes:
%
\begin{align*}
\Hom(A, \idFun_{\bX}) = (\lambda\ g \to \idFun \comp g) \equiv \idFun_{\Sets}
\end{align*}
%
One needs functional extensionality to ``go under'' the function arrow and apply
@ -67,20 +71,23 @@ the (left) identity law of the underlying category to prove $\idFun \comp g
%
\subsection{Equality of isomorphic types}
%
Let $\top$ denote the unit type -- a type with a single constructor.
In the propositions-as-types interpretation of type theory $\top$ is
the proposition that is always true. The types $A \x \top$ and $A$ both have
an element for each $a \tp A$. So in a sense they have the same shape
(Greek; \nomenindex{isomorphic}). The second element of the pair does not
add any ``interesting information''. It can be useful to identify such
types. In fact, it is quite commonplace in mathematics. Say we look at
a set $\{x \mid \phi\ x \land \psi\ x\}$ and somehow conclude that
$\psi\ x \equiv \top$ for all $x$. A mathematician would immediately
conclude $\{x \mid \phi\ x \land \psi\ x\} \equiv \{x \mid \phi\ x\}$
without thinking twice. Unfortunately such an identification cannot
be performed in ITT.

More specifically what we are interested in is a way of identifying
\nomenindex{equivalent} types. I will return to the definition of equivalence later
in \S\ref{sec:equiv}, but for now it is sufficient to think of an
equivalence as a one-to-one correspondence. We write $A \simeq B$ to assert that
$A$ and $B$ are equivalent types. The principle of univalence says that:
@ -88,7 +95,7 @@ $A$ and $B$ are equivalent types. The principle of univalence says that:
$$\mathit{univalence} \tp (A \simeq B) \simeq (A \equiv B)$$
%
In particular this allows us to construct an equality from an equivalence
($\mathit{ua} \tp (A \simeq B) \to (A \equiv B)$) and vice versa.
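%
As a sketch of how this could be used to finish the example from above (the
name $\var{lemma}$ is illustrative): given a proof $\var{lemma} \tp A \x \top
\simeq A$ of the evident one-to-one correspondence, univalence produces the
desired identification of the two types:
%
$$
\mathit{ua}\ \var{lemma} \tp A \x \top \equiv A
$$
%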
\section{Formalizing Category Theory}
%
@ -127,7 +134,7 @@ implementations of category theory in Agda:
\item
\url{https://github.com/mortberg/cubicaltt}
A formalization in CubicalTT -- a language designed for cubical type theory.
Formalizes many different things, but only a few concepts from category
theory.
@ -141,19 +148,27 @@ compare some aspects of this formalization with the existing ones.\TODO{How can
There are alternative approaches to working in a cubical setting where one can
still have univalence and functional extensionality. One option is to postulate
these as axioms. This approach, however, has other shortcomings; e.g. you lose
\nomenindex{canonicity} (\TODO{Pageno!} \cite{huber-2016}). Canonicity means that any
well-typed term evaluates to a \emph{canonical} form. For example for a closed
term $e \tp \bN$ it will be the case that $e$ reduces to $n$ applications of
$\mathit{suc}$ to $0$ for some $n$; $e = \mathit{suc}^n\ 0$. Without canonicity
terms in the language can get ``stuck'' -- meaning that they do not reduce to a
canonical form.
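As a hedged illustration (the term is schematic): if univalence is postulated
as an axiom $\mathit{ua}$, then a closed term such as
$\var{transport}\ (\mathit{ua}\ e)\ 0 \tp \bN$ is well-typed but does not
compute to a numeral, because there is no reduction rule for the postulate.
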
Another approach is to use the \emph{setoid interpretation} of type
theory (\cite{hofmann-1995,huber-2016}). With this approach one works
with
\nomenindex{extensional sets} $(X, \sim)$, that is a type $X \tp \MCU$
and an equivalence relation $\sim\ \tp X \to X \to \MCU$ on that type.
Under the setoid interpretation the equivalence relation serves as a
sort of ``local'' propositional equality. Since the developer gets to
pick this relation it is not guaranteed to be a congruence relation
a priori. So this must be verified manually by the developer.
Furthermore, functions between different setoids must be shown to be
setoid homomorphisms, that is, they must preserve the relation.
This approach has other drawbacks; it does not satisfy
all propositional equalities of type theory (\TODO{Citation needed}), is
cumbersome to work with in practice (\cite[p. 4]{huber-2016}) and makes
equational proofs less reusable since equational proofs $a \sim_{X} b$ are
@ -163,14 +178,21 @@ inherently `local' to the extensional set $(X , \sim)$.
\TODO{Talk a bit about terminology. Find a good place to stuff this little
section.}

In the remainder of this paper I will use the term
\nomenindex{Type} to describe --
well, types. Thereby diverging from the notation in Agda where the keyword
\texttt{Set} refers to types.
\nomenindex{Set} on the other hand shall refer to the
homotopical notion of a set. I will also leave all universe levels implicit.
And I use the term
\nomenindex{arrow} to refer to morphisms in a category,
whereas the terms
\nomenindex{morphism},
\nomenindex{map} or
\nomenindex{function}
shall be reserved for talking about type-theoretic functions; i.e.
functions in Agda.

$\defeq$ will be used for introducing definitions. $=$ will be used for
judgmental equality and $\equiv$ will be used for propositional equality.
@ -181,12 +203,16 @@ All this is summarized in the following table:
\begin{tabular}{ c c c }
Name & Agda & Notation \\
\hline
\varindex{Type} & \texttt{Set} & $\Type$ \\
\varindex{Set} & \texttt{Σ Set IsSet} & $\Set$ \\
Function, morphism, map & \texttt{A → B} & $A → B$ \\
Dependent- ditto & \texttt{(a : A) → B} & $\prod_{a \tp A} B$ \\
\varindex{Arrow} & \texttt{Arrow A B} & $\Arrow\ A\ B$ \\
\varindex{Object} & \texttt{C.Object} & $ℂ.Object$ \\
Definition & \texttt{=} & $\defeq$ \\
Judgmental equality & \null & $=$ \\
Propositional equality & \null & $\equiv$

View file

@ -17,7 +17,6 @@
\newcommand{\UU}{\ensuremath{\mathcal{U}}\xspace}
\let\type\UU
\newcommand{\MCU}{\UU}
\newcommand{\todo}[1]{\textit{#1}}
\newcommand{\comp}{\circ}
\newcommand{\x}{\times}
@ -33,7 +32,9 @@
}
\makeatother
\newcommand{\var}[1]{\ensuremath{\mathit{#1}}}
\newcommand{\varindex}[1]{\ensuremath{\var{#1}}\index{$\var{#1}$}}
\newcommand{\nomen}[2]{\emph{#1}\index{#2}}
\newcommand{\nomenindex}[1]{\nomen{#1}{#1}}
\newcommand{\Hom}{\varindex{Hom}}
\newcommand{\fmap}{\varindex{fmap}}
@ -93,3 +94,5 @@
\newcommand\funExt{\varindex{funExt}}
\newcommand{\suc}[1]{\varindex{suc}\ #1}
\newcommand{\trans}{\varindex{trans}}
\newcommand{\toKleisli}{\varindex{toKleisli}}
\newcommand{\toMonoidal}{\varindex{toMonoidal}}

View file

@ -113,3 +113,4 @@
}
\makeatother
\usepackage{xspace}