#however this is obviously a functor.
as-if-and-only-if · 4 months ago
Consider now the Cainian functor C : Grp → Grp given by G ↦ [G, G], which annihilates all abelian groups,
max1461 · 3 years ago
If you don't mind (obviously feel free to ignore me if you do) can you give an explanation of what the point of universal properties is? My lecturer also mentions them all the time but they've never seemed really... useful for anything.
Well it's been a while, so this is sort of half-remembered, and if I make any mistakes hopefully someone can come along and correct me. I think I can pretty much give you the gist, though.
Universal properties are one way to characterize the behavior of objects in purely category-theoretic terms (that is, without reference to internal structure), which is something you can always do in category theory because of, IIRC, the Yoneda Lemma: an object is determined up to isomorphism by the morphisms into it. And the argument is that they're generally a lot "cleaner" than working directly with internal structure/a specific construction.
A good example of this in action is linear algebra, where we use universal properties (implicitly) all the time. We know that whenever we have two vector spaces V and W, we can define a linear map from V to W just by specifying how the map acts on the basis vectors. Any map (of sets) from a basis of V into W can be uniquely extended to a linear map from V to W. This is like, the fact that all of linear algebra is based on, and it's essentially a restatement of the universal property of free objects for vector spaces (all vector spaces are free objects).
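To make that concrete, here's a minimal NumPy sketch (the particular basis images are arbitrary choices of mine): specifying where e1 and e2 go completely determines a linear map, namely the matrix with those images as columns.

```python
import numpy as np

# Pick arbitrary images in R^3 for the standard basis e1, e2 of R^2.
f_e1 = np.array([1.0, 2.0, 3.0])
f_e2 = np.array([0.0, -1.0, 4.0])

# The unique linear extension is the matrix whose columns are those images.
T = np.column_stack([f_e1, f_e2])

# Linearity then forces the value on every other vector: v = 2*e1 + 5*e2.
v = np.array([2.0, 5.0])
assert np.allclose(T @ v, 2 * f_e1 + 5 * f_e2)
```

There is no freedom left once the basis images are fixed, and that rigidity is exactly the "unique" in the universal property.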
Basically the same thing is true of free groups: for any free group G and any other group H, and any map f from the generating set of G into H, the map f extends uniquely to a group homomorphism from G to H. In fact, all properties of free groups can be derived from this fact (once you've shown that free groups exist, that is). This is much nicer to work with than like, messing around with the actual construction of a free group, which is sort of awful. It also lets us see that something similar is going on with free groups and with vector spaces; that generating sets of a free group and bases of a vector space are in some sense "doing the same thing". In and of itself this is pretty cool.
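A runnable analogue one notch down the algebraic hierarchy (monoids rather than groups, to keep it to a few lines; the function names are mine): the free monoid on a set of generators is just words in those generators, and any map of the generators into a monoid extends uniquely, letterwise.

```python
from functools import reduce

# Free monoid on {'a', 'b'}: words, i.e. Python strings, under concatenation.
# Given a monoid (op, e) and a map f: generators -> M, the unique monoid
# homomorphism extending f applies f letterwise and multiplies the results.
def extend(f, op, e):
    return lambda word: reduce(op, (f(x) for x in word), e)

# Target monoid: (int, +, 0); send 'a' -> 1 and 'b' -> 10 (arbitrary choices).
h = extend({'a': 1, 'b': 10}.get, lambda x, y: x + y, 0)

assert h('aab') == 12                          # 1 + 1 + 10
assert h('aab' + 'ba') == h('aab') + h('ba')   # h is a homomorphism
assert h('') == 0                              # h sends the empty word to the unit
```

Note there was never a choice to make beyond the generator images: the homomorphism property pins down the value on every word.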
What's going on here, in category-theoretic terms, is this: we've got some objects A and B in a category C (in the free groups example, C is the category of groups; in the vector space example, C is the category of vector spaces). In this category, there is a certain kind of morphism (group homomorphism; linear maps). However, we can also think of our objects as being in some category D, with a different kind of morphism (in both cases, we can think about our objects as living in the category of sets, with set maps between them). And so we have a functor F: C -> D (here, it's just the forgetful functor).
Now, say that S is the basis, or the generating set, of A. It's an object in D. Since it's a subset of the underlying set of A, there is an inclusion map i: S -> F(A) in D. If we have a morphism f: S -> F(B) in D, that represents an assignment of basis vectors to elements of B, or of elements of the generating set to elements of B, respectively. And what the universal property of free objects says is that for any such f, there exists a unique morphism m: A -> B in the category C such that F(m) ∘ i = f. In other words, f extends uniquely to a morphism from A to B in C.
A bunch of properties can be phrased in basically this same schema. D doesn't need to be Set, F doesn't need to be the forgetful functor, and i doesn't need to be set inclusion. As long as the same diagrammatic structure is present, that's a universal property. Or you could have the same thing with all the arrows reversed and it would still count, because, uh, well, category theory is just like that. But anyway. It turns out that a ton of things are characterized by universal properties: products, disjoint unions (coproducts), tensor algebras, and categorical limits and colimits generally. So we have this framework for characterizing all these things purely in terms of the morphisms in and out of them, and that's really useful. And we implicitly reason with universal properties all the time anyway, so it's good to have a formalism for them.
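As one instance of the schema, here is the universal property of the product written out in bare Python for the category of sets (a sketch; the names are mine): given f: X -> A and g: X -> B, there is exactly one map m: X -> A × B whose composites with the projections are f and g, namely the pairing.

```python
# Product of A and B in Set: pairs, with the two projection maps.
fst = lambda p: p[0]
snd = lambda p: p[1]

# Universal property: for any f: X -> A and g: X -> B there is exactly one
# m: X -> A x B with fst . m = f and snd . m = g, and it has to be this:
def pair(f, g):
    return lambda x: (f(x), g(x))

f = lambda n: n * n     # X = int, A = int
g = lambda n: str(n)    # X = int, B = str
m = pair(f, g)

assert fst(m(7)) == f(7) and snd(m(7)) == g(7)  # both triangles commute
```

Uniqueness is visible by eye: any m satisfying the two projection equations must send x to (f(x), g(x)), component by component.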
At least, that's what I remember. Hopefully that's helpful!
jessecmckeown · 8 years ago
Funny Determinants
Let $A$ be a field (a perfectly ordinary, algebraist's, field) and $V$ an $A$-space of finite dimension $n$. You probably know that the top exterior power $\Lambda^n V$ is a 1-dimensional $A$-space, but you probably can't tell me a basis for it (unless $A=\mathbb{F}_2$). For any $k,l$ we have a natural map $\Lambda^k V \otimes \Lambda^l V \to \Lambda^{k+l} V$ (natural in $V$, anyways... there's some naturality to be considered w.r.t. $k,l$ as well... but...), and in the case $k+l = n = \dim(V)$, this is a perfect pairing: that is, the adjoint map $$\Lambda^k V \to \hom_A (\Lambda^l V , \Lambda^n V)$$ is an isomorphism. One wants to insinuate that because such isomorphisms are right-handed mates of natural transformations along the tensor-hom adjunction, they should be natural too; however, the naturality on the left-handed side of the adjunction depends on the left-handed thing sitting in the diagonal of a bifunctor; the thing on the right-hand side is sitting on the diagonal of, for want of any good terminology, a (contra, co)-functor. The variances of the two things are all wrong! Someone Else might (as an Exercise, say...) translate this into "extranaturality", but for Today I'll be happy with saying: naturality of $\Lambda^k V \otimes \Lambda^l V \to \Lambda^{k+l} V$ means that $\Psi : \Lambda^k V \to \hom (\Lambda^l V , \Lambda^{k+l} V)$ satisfies the equations $$ \Psi(\Lambda^k T w ) \circ \Lambda^l T = \Lambda^{k+l} T \circ \Psi(w) \tag{ntrl}$$

If perchance you did choose a basis for $V$ and settle on $l=1$ and $n = k+1$, you could work out that the relevant special case of $\mbox{(ntrl)}$ is, essentially, Cramer's Rule: $(\mathrm{adj}\, T)\, T = \det (T)\, I$; but the Thing Of The Day is that the Same Result holds (in some other dimension) for any partition $k+l=n$: $$(\Lambda^k T)^* \Lambda^l T = \det{}_{n\times n}(T)\, I_{\binom{n}{k}}$$ (for the right notion of "$()^*$")
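The $k+l=n$ identity can be checked numerically. Below is a NumPy sketch: `compound(T, l)` builds the matrix of $\Lambda^l T$ in the basis of $l$-element subsets, and `star_compound` builds the starred matrix from complementary minors with Laplace-expansion signs, which is my guess at "the right notion of $()^*$" (sign conventions vary); for $l = 1$ it reduces to the classical adjugate.

```python
from itertools import combinations
import numpy as np

def compound(T, l):
    """Matrix of Λ^l T: the l×l minors det T[I, J], indexed by l-subsets."""
    n = T.shape[0]
    subs = list(combinations(range(n), l))
    return np.array([[np.linalg.det(T[np.ix_(I, J)]) for J in subs] for I in subs])

def star_compound(T, l):
    """A candidate (Λ^k T)^* for k = n - l: complementary (n-l)×(n-l) minors,
    transposed, with the Laplace-expansion signs.  l = 1 gives the adjugate."""
    n = T.shape[0]
    subs = list(combinations(range(n), l))
    comp = lambda I: tuple(sorted(set(range(n)) - set(I)))
    sign = lambda I, J: (-1) ** (sum(I) + sum(J))
    return np.array([[sign(I, J) * np.linalg.det(T[np.ix_(comp(J), comp(I))])
                      for J in subs] for I in subs])

rng = np.random.default_rng(0)
n, l = 5, 2
T = rng.normal(size=(n, n))

# Generalized Cramer: (Λ^{n-l} T)^* Λ^l T = det(T) · I of size C(n, l).
N = len(list(combinations(range(n), l)))
assert np.allclose(star_compound(T, l) @ compound(T, l), np.linalg.det(T) * np.eye(N))
```

The diagonal entries of the product are exactly the generalized Laplace expansions of $\det T$ along the column set $I$, and the off-diagonal entries are determinants with a repeated column, hence zero.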
Now, the point of these assertions is to make sense of something I heard of on Friday from a couple of videos on the "Mathologer" channel on YouTube. In the Particular Special Case that $A=\mathbb{R}$, giving $\mathbb{R}^n$ its usual Inner Product (or what you will...) induces reasonable inner products also on $\Lambda^k V$, such that when $T$ is Orthogonal, $\Lambda^k T$ is also Orthogonal. Meaning, we have the implied equations $$ T^* T = I \implies \Lambda^{n-k} T = (\Lambda^k T)^*$$ That Equation in the Orthogonal Case has a strange consequence, articulated in 1985 by Peter McMullen, called the Cube Shadow Theorem: the two normal shadows of a Unit Cube on complementary orthogonal subspaces have equal normalized measure. Obviously, the Orthogonal Cramer's Rule isn't enough, on its own, to finish the Cube Shadow Theorem. So you might like to think about the proposition "the shadow of a cube is the Minkowski Sum of the edge shadows".
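That last proposition makes the $n=3$ case of the theorem checkable in a few lines: the shadow of the unit cube on the plane $u^\perp$ is the Minkowski sum of the three projected edges (a zonogon, whose area is the sum of the three pairwise parallelogram areas), while the shadow on the complementary line $\mathrm{span}(u)$ is an interval. A NumPy sketch, with the direction $u$ a random choice:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=3)
u /= np.linalg.norm(u)   # unit vector: shadow plane u-perp vs. shadow line span(u)

# Project the cube's edge vectors e1, e2, e3 onto the plane u-perp.
v = np.eye(3) - np.outer(u, u)   # row i is e_i minus its component along u

# Shadow on the plane = Minkowski sum of the projected edges (a zonogon);
# its area is the sum of the areas of the three parallelograms v_i, v_j.
area = sum(np.linalg.norm(np.cross(v[i], v[j]))
           for i in range(3) for j in range(i + 1, 3))

# Shadow on the line span(u) is an interval of length |u.e1| + |u.e2| + |u.e3|.
length = np.abs(u).sum()

assert np.isclose(area, length)   # Cube Shadow Theorem, n = 3, complementary dims 2 and 1
```

For $u = e_3$ both sides are $1$; for $u = (1,1,1)/\sqrt{3}$ both sides are $\sqrt{3}$, the regular-hexagon shadow.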