# User:Michiexile/MATH198/Lecture 1



I'm Mikael Vejdemo-Johansson. I can be reached in my office 383-BB, especially during my office hours; or by email to mik@math.stanford.edu.

I strongly encourage student interactions.

## 2 Introduction

### 2.1 Why this course?

An introduction to Haskell will usually come with pointers toward Category Theory as a useful tool, though not with much more than the mention of the subject. This course is intended to fill that gap, and provide an introduction to Category Theory that ties into Haskell and functional programming as a source of examples and applications.

### 2.2 What will we cover?

The definition of categories, special objects and morphisms, functors, natural transformation, (co-)limits and special cases of these, adjunctions, freeness and presentations as categorical constructs, monads and Kleisli arrows, recursion with categorical constructs.

Maybe, just maybe, if we have enough time, we'll finish by looking at the definition of a topos, and at how this encodes a logic internal to a category, with applications to fuzzy sets.

### 2.3 What do we require?

Our examples will be drawn from discrete mathematics, logic, Haskell programming and linear algebra. I expect the following concepts to be at least vaguely familiar to anyone taking this course:

• Sets
• Functions
• Permutations
• Groups
• Partially ordered sets
• Vector spaces
• Linear maps
• Matrices
• Homomorphisms

## 3 Category

### 3.1 Graphs

We recall the definition of a (directed) graph. A graph G is a collection of edges (arrows) and vertices (nodes). Each edge is assigned a source node and a target node.

$source \to target$

Given a graph G, we denote the collection of nodes by $G_0$ and the collection of arrows by $G_1$. These two collections are connected, and the graph is given its structure, by two functions: the source function $s:G_1\to G_0$ and the target function $t:G_1\to G_0$.
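To fix ideas, here is a minimal Haskell sketch of such a graph; the names Node, Edge, source and target are ours for illustration, not standard library names:

```haskell
-- A toy finite graph: three nodes, three edges.
data Node = A | B | C deriving (Eq, Show)
data Edge = E1 | E2 | E3 deriving (Eq, Show)

-- The two structure maps s, t : G1 -> G0 from the text.
source :: Edge -> Node
source E1 = A
source E2 = B
source E3 = A

target :: Edge -> Node
target E1 = B
target E2 = C
target E3 = C
```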

We shall not, in general, require either of the collections to be a set, but will happily accept larger collections; dealing with set-theoretical paradoxes as and when we have to. A graph where both nodes and arrows are sets shall be called small. A graph where either is a class shall be called large.

If both $G_0$ and $G_1$ are finite, the graph is called finite too.

The empty graph has $G_0 = G_1 = \emptyset$.

A discrete graph has $G_1=\emptyset$.

A complete graph has $G_1 = \{ (v,w) | v,w\in G_0\}$.

A simple graph has at most one arrow between each pair of nodes. Any relation on a set can be interpreted as a simple graph.

• Show some examples.

A homomorphism $f:G\to H$ of graphs is a pair of functions $f_0:G_0\to H_0$ and $f_1:G_1\to H_1$ such that sources map to sources and targets map to targets, or in other words:

• $s(f_1(e)) = f_0(s(e))$
• $t(f_1(e)) = f_0(t(e))$
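These two conditions are easy to check mechanically. Here is a sketch in Haskell, using a toy encoding where an edge is simply the pair of its source and target (all names here are hypothetical):

```haskell
-- Toy encoding: an edge carries its own source and target.
type Node = Int
type Edge = (Node, Node)

src, tgt :: Edge -> Node
src = fst
tgt = snd

-- f0 acts on nodes, f1 on edges; we check the two homomorphism
-- conditions s(f1 e) = f0(s e) and t(f1 e) = f0(t e) on a list of edges.
isHom :: (Node -> Node) -> (Edge -> Edge) -> [Edge] -> Bool
isHom f0 f1 = all (\e -> src (f1 e) == f0 (src e)
                      && tgt (f1 e) == f0 (tgt e))

-- Example: doubling every node, and mapping edges accordingly.
double0 :: Node -> Node
double0 = (2 *)

double1 :: Edge -> Edge
double1 (v, w) = (2 * v, 2 * w)
```

For instance, `isHom double0 double1` holds on any edge list, while pairing `double0` with the identity on edges fails the source condition.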

By a path in a graph G from the node x to the node y of length k, we mean a sequence of edges $(f_1,f_2,\dots,f_k)$ such that:

• $s(f_1) = x$
• $t(f_k) = y$
• $s(f_i) = t(f_{i-1})$ for all other $i$.

Paths with start and end point identical are called closed. For any node x, there is a unique closed path () starting and ending in x of length 0.

For any edge f, there is a unique path from s(f) to t(f) of length 1: (f).

We denote by $G_k$ the set of paths in G of length k.
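Paths of a given length can be enumerated directly from the definition. A sketch, again encoding edges as (source, target) pairs of hypothetical Int nodes:

```haskell
type Node = Int
type Edge = (Node, Node)

src, tgt :: Edge -> Node
src = fst
tgt = snd

-- pathsOfLength k edges: all sequences (f1,...,fk) of edges such that
-- each edge's target matches the next edge's source.
-- (The single empty path at k = 0 stands in for the length-0 path at
-- each node; we elide the choice of node here.)
pathsOfLength :: Int -> [Edge] -> [[Edge]]
pathsOfLength 0 _ = [[]]
pathsOfLength k edges =
  [ e : p | e <- edges
          , p <- pathsOfLength (k - 1) edges
          , null p || tgt e == src (head p) ]
```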

### 3.2 Categories

We are now ready to define a category. A category is a graph G equipped with an associative composition operation $\circ:G_2\to G_1$, and an identity element for composition $1_x$ for each node x of the graph.

Note that $G_2$ can be viewed as a subset of $G_1\times G_1$, the set of all pairs of arrows. It is intentional that we define the composition operator on only a subset of the set of all pairs of arrows - the composable pairs. Whenever you'd want to compose two arrows that don't line up to a path, you'll get nonsense, and so any statement about the composition operator has an implicit "whenever defined" attached to it.

The definition is not quite done yet - the composition operator and the identity arrows both have a few rules to fulfill, and before I state these rules, there is some notation we need to cover.

#### 3.2.1 Backwards!

If we have a path given by the arrows (f,g) in G2, we expect $f:A\to B$ and $g:B\to C$ to compose to something that goes $A\to C$. The origin of all these ideas lies in geometry and algebra, and so the abstract arrows in a category are supposed to behave like functions under function composition, even though we don't say it explicitly.

Now, we are used to writing function application as f(x) - and possibly, from Haskell, as `f x`. This way, the composition of two functions would read g(f(x)).

On the other hand, the way we write our paths, we'd read f then g. This juxtaposition makes one of the two ways we write things seem backwards. We can resolve it either by making our paths in the category go backwards, or by reversing how we write function application.

In the latter case, we'd write x.f, say, for the application of f to x, and then write x.f.g for the composition. It all ends up looking a lot like Reverse Polish Notation, and has its strengths, but feels unnatural to most. It does, however, have the benefit that we can write out function composition as $(f,g) \mapsto f.g$ and have everything still make sense in all notations.

In the former case, which is the most common in the field, we accept that paths as we read along the arrows and compositions look backwards, and so, if $f:A\to B$ and $g:B\to C$, we write $g\circ f:A\to C$, remembering that elements are introduced from the right, and the functions have to consume the elements in the right order.
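Haskell's composition operator `(.)` follows exactly this convention: elements enter from the right, so the composite `g . f` first applies f and then g. A small sketch with made-up functions:

```haskell
-- f : A -> B and g : B -> C, here both on Int for simplicity.
f :: Int -> Int
f = (+ 1)

g :: Int -> Int
g = (* 2)

-- The composite g . f : A -> C reads backwards along the path (f, g):
-- (g . f) x = g (f x).
h :: Int -> Int
h = g . f
```

Note that `g . f` and `f . g` differ: the former adds first and then doubles, the latter doubles first and then adds.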

The existence of the identity map can be captured in the language of functions as well: it is the existence of a function $u:G_0\to G_1$.

Now for the remaining rules for composition. Whenever defined, we expect associativity - so that $h\circ(g\circ f)=(h\circ g)\circ f$. Furthermore, we expect:

1. Composition respects sources and targets, so that:
• $s(g\circ f) = s(f)$
• $t(g\circ f) = t(g)$
2. $s(u(x)) = t(u(x)) = x$

• We denote by $\mathrm{Hom}_C(A,B)$, or if C is obvious from context, just $\mathrm{Hom}(A,B)$, the set of all arrows from A to B. This is the hom-set, and may also be denoted C(A,B).
• In a category, arrows are also called morphisms, and nodes are also called objects. This ties in with the algebraic roots of the field.
• Large/small categories. Finite categories. Concrete categories.
• Sub-category. Full subcategory. Wide subcategory. Products of categories. Duals/opposites of categories.
• Slice categories. Free categories generated by graphs.

### 3.3 Examples

• The empty category.
• The one object/one arrow category 1
• The categories 2 and 1 + 1
• The category Set of sets.
• The category FSet of finite sets.
• The category PFn of sets and partial functions.
• PFn(A,B) is a partially ordered set.
• Every partial order is a category. Each hom-set has at most one element.
• Every monoid is a category. Only one object.
• Kleene closure. Free monoids.
• The category of Sets and injective functions.
• The category of Sets and surjective functions.
• The category of k-vector spaces and linear maps.
• The category with objects the natural numbers and Hom(m,n) the set of $m\times n$-matrices.
• The category of Data Types with Computable Functions.
• Our ideal programming language has:
• Primitive data types.
• Constants of each primitive type.
• Operations, given as functions between types.
• Constructors, producing elements from data types, and producing derived data types and operations.
• We will assume that the language is equipped with
• A do-nothing operation for each data type. Haskell has `id`.
• A one-element (unit) type 1, with the property that each type has exactly one function to this type. Haskell has `()`. We will use this to define the constants of type t as functions $1\to t$. Thus, constants end up being 0-ary functions.
• A composition constructor, taking an operator $f:A\to B$ and another operator $g:B\to C$ and producing an operator $g\circ f:A\to C$. Haskell has `(.)`.
• This allows us to model a functional programming language with a category.
• The category with objects logical propositions and arrows proofs.
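The ingredients assumed of our ideal language above can be sketched directly in Haskell; the names `toUnit` and `constTrue` are hypothetical examples, not standard functions:

```haskell
-- The do-nothing operation for each type is id :: a -> a.

-- The unit type () plays the role of 1: every type has exactly one
-- total function to it.
toUnit :: a -> ()
toUnit _ = ()

-- A constant of type Bool, presented as a 0-ary function 1 -> Bool.
constTrue :: () -> Bool
constTrue () = True

-- Operators compose with (.); here a composite 1 -> String.
negated :: () -> String
negated = show . not . constTrue
```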

### 3.4 Homework

For a passing mark, a written, acceptable solution to at least 2 of the 4 questions should be given no later than midnight before the next lecture.

1. Prove the general associative law: that for any path, and any bracketing of that path, the same composition may be found.
2. Suppose $u:A\to A$ in some category C.
1. If $g\circ u=g$ for all $g:A\to B$ in the category, then $u = 1_A$.
2. If $u\circ h=h$ for all $h:B\to A$ in the category, then $u = 1_A$.
3. These two results completely characterize the objects in a category by the properties of their corresponding identity arrows.
3. For as many of the examples given as you can, prove that they really do form a category. Passing mark is at least 60% of the given examples.
4. For this question, all parts are required:
1. For which sets is the free monoid on that set commutative?
2. Prove that for any category C, the set Hom(A,A) is a monoid under composition for every object A.

### 3.5 Graphs and paths

A graph is a collection $G_0$ of vertices and a collection $G_1$ of arrows. The structure of the graph is captured in the existence of two functions, which we shall call source and target, both going from $G_1$ to $G_0$. In other words, each arrow has a source and a target.

We denote by [v,w] the collection of arrows with source v and target w.

We extend the notation, and denote by $G_i$ the collection of all paths of length i. Such a path is a sequence $f_1,\dots,f_i$ of arrows, such that for each j, $target(f_{j-1}) = source(f_j)$.

### 3.6 Definition of a category

A category is a graph with some special structure:

• Each [v,w] is a set and equipped with a composition operation $[u,v] \times [v,w] \to [u,w]$. In other words, any two arrows such that the target of one is the source of the other can be composed to give a new arrow, whose source is the source of the first and whose target is the target of the second.

We write $f:u\to v$ if $f\in[u,v]$.

$u \to v \to w \quad\Rightarrow\quad u \to w$

• The composition of arrows is associative.
• Each vertex v has a dedicated arrow $1_v$ with source and target v, called the identity arrow.
• Each identity arrow is a left- and right-identity for the composition operation.

The composition of $f:u\to v$ with $g:v\to w$ is denoted by $gf:u\to w$. A mnemonic here is that you write things so that associativity looks right. Hence, (gf)(x) = g(f(x)). This will make more sense once we get around to generalized elements later on.

### 3.7 Examples

• The empty category with no vertices and no arrows.
• The category 1 with a single vertex and only its identity arrow.
• The category 2 with two objects a and b, their identity arrows, and a single arrow $a\to b$.
• For vertices take vector spaces. For arrows, take linear maps. This is a category, the identity arrow is just the identity map f(x) = x and composition is just function composition.
• For vertices take finite sets. For arrows, take functions.
• For vertices take logical propositions. For arrows take proofs in propositional logic. The identity arrow is the empty proof: P proves P without an actual proof. And if you can prove P using Q and then R using P, then this composes to a proof of R using Q.
• For vertices, take data types. For arrows take (computable) functions. This forms a category, in which we can discuss an abstraction that mirrors most of Haskell. There are issues making Haskell not quite a category on its own, but we get close enough to draw helpful conclusions and analogies.
• Suppose P is a set equipped with a partial ordering relation <. Then we can form a category out of this set with elements for vertices and with a single element in [v,w] if and only if v<w. Then the transitivity and reflexivity of partial orderings show that this forms a category.

Some language we want settled:

A category is concrete if it is like the vector spaces and the sets among the examples - the collection of all sets-with-specific-additional-structure equipped with all functions-respecting-that-structure. We require already that [v,w] is always a set.

A category is small if the collection of all vertices, too, is a set.