tensorboi
:)
14 posts
a vaguely confused mathematical physics student
tensorboi · 1 month ago
Text
my new cohomology theory is multiple cohomology. it's like singular cohomology, but. there's a lot of them
Invent yet another kind of cohomology.
34 notes · View notes
tensorboi · 4 months ago
Text
my math hot take is that mathematics is the study of the internal structure of the human mind. it's invented in the sense that it's in our heads, but it's discovered in the sense that we didn't make it up! (happy to defend this more if asked)
feel free to rb for reach
725 notes · View notes
tensorboi · 4 months ago
Note
i'm not sure whether or not you're aware of this, but here's something i think more people should know: einstein summation notation can almost always be thought of in a more abstract and cleaner way.
here's what i mean by that. instead of thinking of a tensor like T^i_j as a collection of n² components indexed by i and j, think about that symbol as telling you what the tensor can do. in this case, T can take in a vector (indexed by j), and output a vector (indexed by i, or perhaps input a covector indexed by i but whatever). this framework is called abstract index notation, because the indices no longer refer to unspecified components but rather the algebraic shape of your tensor; each index represents an abstract input/output.
immediately, many of the formulas from physics which look daunting become much easier to parse. given a vector v and a covector ω, the expression ω_i v^i doesn't necessarily mean summing over the components of ω and v (even though it can be interpreted that way); all it means is that ω has one vector input which is being filled by v. an equation like v_i = g_ij v^j is much less weird to look at, since the summation approach leaves a variable unsummed; instead, it just says that the metric tensor g has two vector inputs, and filling one of them with a vector v gives you a covector with very similar properties. the definition of the riemann curvature tensor as the unique tensor R for which R^i_jkl u^j v^k w^l = (D_u D_v - D_v D_u - D_[u, v])w^i is exceptionally complicated in terms of components, but reading the indices abstractly elucidates matters: R takes in two vectors u and v along which we directionally differentiate, and another vector w which gets parallel-transported in this differentiation. lorentz invariance also makes much more sense here: if all slots of a tensor are filled properly, the result is a scalar because that's how tensors work, and of course scalars are lorentz invariants!
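(a quick illustration, if you like code: numpy's einsum function works directly with the component picture, but you can read each call as slot-filling in exactly the way described above. the arrays here are made-up sample data, not anything physical.)

```python
import numpy as np

w = np.array([1.0, 2.0, 3.0])   # a covector ω (components in some basis)
v = np.array([0.5, -1.0, 2.0])  # a vector v

# ω_i v^i: ω's single vector slot gets filled by v, leaving a scalar
scalar = np.einsum('i,i->', w, v)

# v_i = g_ij v^j: the metric g has two slots; filling one with v
# leaves one free index, i.e. a covector
g = np.eye(3)                         # a stand-in (euclidean) metric
v_lower = np.einsum('ij,j->i', g, v)
```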
i should note that this reading doesn't work if the objects you're looking at aren't tensors, for instance the christoffel symbols Γ^i_jk and tuples of tensors T^α (and well that makes sense, these things are born of component considerations while tensors really aren't). but as long as you're aware of that, the notation becomes much friendlier than it was.
For the math ask game: 8, 23, 47
8. Least favorite notation you’ve ever seen?
As answered above, it's Einstein summation notation
23. Will P=NP? Why or why not?
From everything I understand about the problem, I don't think so. I also hope it isn't (though even if it is, that doesn't imply that finding the algorithms which solve NP problems in polynomial time is in any way easy).
47. Just how big is a big number?
As my friend would say: at least 3. More seriously, it'd be one that can be defined using a sequence that can't be computed in finite time. (There's a fancy word that I've forgotten now)
Thanks for the ask!
22 notes · View notes
tensorboi · 4 months ago
Text
excellent post! it's so good to see nuanced discussion of division by zero, especially when the topic has been uniquely plagued by dogma. i especially appreciated the push-back on "infinity is not a number," something i've gotten very tired of hearing lately.
just one thing to add: describing 0/0 as topologically not fitting is technically sort of true, but i think the reality of its topological meaning is even more interesting. if you take the usual topology on RP¹ and then add in 0/0 by taking the only new open set to be the entire wheel, you get a topology in which every operation is continuous, and this seems to be the only natural topology. the interesting thing is that the only neighbourhood of 0/0 is then the whole wheel, so 0/0 lies in the closure of every nonempty set and every sequence converges to it; in other words, 0/0 is "close" to every other point (in algebro-geometric language it's something like the dual of a generic point: instead of lying in every nonempty open set, it lies in every nonempty closed set). this actually makes precise a common informal claim about 0/0, namely that it can be whatever you want it to be by taking appropriate limits!
So what's up with dividing by zero anyways - a ramble on algebraic structures
Most everyone in the world (at least in theory) knows how to add, subtract, multiply, and divide numbers. You can always add two numbers, subtract two numbers, and multiply two numbers. But you must **never** divide by zero... or something along those lines. There's often a line of logic that ends with dividing by zero giving "infinity," whatever infinity means, unless you're doing 0/0, whatever that means either. Clearly this is a problem! We can't have such inconsistencies in our fundamental operations! Why aren't our top mathematicians working on this?
So, that might be a bit of an exaggeration: division by zero isn't really a problem at all and is, for all intents and purposes, fairly well understood. But to see why, we'll have to take a crash course through algebra (the field of math, not the grade school version). Sorry to those of y'all who have seen fields and projective space before; there's not much to gain out of this one.
Part I: In the beginning, we had a Set.
As is true with most things in math, the only structure we start with is a set. A set isn't useful for much; all we can do with a single set is say what elements are and aren't in the set. Once you have more than one set, you start getting interesting things like unions or intersections or functions or Cartesian products, but none of those are _really_ that useful (or at least necessary) for understanding algebraic structures at the level we need, so a single set is what we start with and a single set there will be. The story then goes as follows: on the first day the lord said "Let there be an operation!" and it was so. If you want to be a bit of a nerd, a (binary) operation on a set A is formally a map * : A x A -> A, but for our purposes we just need to know that it matches the standard operations most people know (i.e. addition, subtraction, multiplication, but not division) in that for any two numbers a and b, we can do a * b and get another number. Of course, once again this is not very helpful on its own, and so we need to impose some more conditions on this operation for it to be useful for us. Not to worry though, these conditions are almost always ones you know well, if not by name, and come rather intuitively.
The first structure we'll discuss is that of a monoid: a set with an operation that is associative and has an identity. Associativity simply means that (a * b) * c = a * (b * c), and an identity simply means that we have some special element e such that a * e = e * a = a. For two simple examples and one nonexample: the natural numbers (with 0) under addition form a monoid, since 0 + a = a + 0 = a and any two natural numbers add to another natural number; the integers under multiplication form a monoid, since 1 * a = a * 1 = a and any two integers multiply to another integer; and the integers under subtraction do not form a monoid, since subtraction is not associative (a - (b - c) =/= (a - b) - c). In both of the examples, the operation is commutative: in other words, a * b = b * a for every a and b. There are plenty of examples of operations that are not commutative, matrix multiplication or function composition probably being the most famous, but for the structures we're going to be interested in later, operations are almost always commutative, so we can just assume that from the start.
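If code helps, here's a minimal brute-force sketch of the monoid axioms. Since the natural numbers and integers are infinite, the sketch checks a finite stand-in, the integers mod 6 (the helper names here are mine, purely for illustration):

```python
from itertools import product

def is_monoid(elems, op, e):
    """Brute-force the monoid axioms on a finite set (a check, not a proof)."""
    closed = all(op(a, b) in elems for a, b in product(elems, repeat=2))
    associative = all(op(op(a, b), c) == op(a, op(b, c))
                      for a, b, c in product(elems, repeat=3))
    identity = all(op(a, e) == a and op(e, a) == a for a in elems)
    return closed and associative and identity

Z6 = set(range(6))
print(is_monoid(Z6, lambda a, b: (a * b) % 6, 1))  # True: Z/6 under multiplication
print(is_monoid(Z6, lambda a, b: (a - b) % 6, 0))  # False: subtraction isn't associative
```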
Of course, you might wonder where subtraction comes from, if it doesn't fit into a monoid structure (and in particular isn't associative). Not to worry! We can simply view subtraction as another type of addition, and our problems go away. In particular, we add the condition that for every a, we have an inverse element a⁻¹ (or -a if our operation is addition) such that a * a⁻¹ = a⁻¹ * a = e. For fans of universal algebra, just as a binary operation can be thought of as a function, the inverse can be thought of as a function i : A -> A that sends each element to its inverse. This forms a structure we know as a group. While neither of the above examples forms a group, one of them can be naturally extended to a group: if we simply add negative whole numbers to the natural numbers, we get the group of integers under addition, where every integer a has an inverse -a with a + -a = 0. In particular, the subtraction a - b is just a + -b = -b + a, where -b is the additive inverse of b. As we will soon see, division can also be thought of in a similar way, with a/b = a * /b = /b * a, where /b is the multiplicative inverse of b. As a side note, the examples above are very specific types of monoids and groups which turn out to be quite far from the general ideas that monoids and groups are trying to encapsulate. Monoids show up often in computer science as they're a good model for describing how a list of commands affects a computer, and groups are better thought of as encapsulating symmetries of an object (think of the ways you can rotate and reflect a square or a cube).
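The same sketch extends to groups by additionally demanding inverses (again a finite brute-force check, reusing is_monoid and Z6 from above):

```python
def is_group(elems, op, e):
    """A monoid in which every element has an inverse (finite brute-force check)."""
    return is_monoid(elems, op, e) and all(
        any(op(a, b) == e and op(b, a) == e for b in elems) for a in elems)

print(is_group(Z6, lambda a, b: (a + b) % 6, 0))  # True: the inverse of a is -a mod 6
print(is_group(Z6, lambda a, b: (a * b) % 6, 1))  # False: e.g. 2 has no inverse mod 6
```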
Part II: So imagine if instead of one operation, we have... two...
If you've ever taken introductory algebra, you've probably never heard of monoids and only done groups. This is partially because monoids are much less mathematically interesting than groups are and partially because monoids are just not as useful when thinking about other things. For the purposes of this post, however, the logical steps from Set -> Monoid -> Group are surprisingly similar to the steps Group -> Ring -> Field, so I've chosen to include it regardless.
Just as we started from a set and added an operation to make a monoid, here we start from an additive group (i.e. a group where the operation is addition) and add another operation, namely multiplication, that acts on the elements of the group. Just like in the monoid, we impose the condition that multiplication is associative and has an identity, namely 1, but we also impose the condition that multiplication meshes nicely with addition in what you probably know as the distributive properties. What we end up with is a ring, something like the integers, where you can add, subtract, and multiply, but not necessarily divide (for example, 2 doesn't have a multiplicative inverse in the integers, as a * 2 = 1 has no integer solutions). Similarly, when we add in multiplicative inverses for every nonzero element, we get a field, something like the rational numbers or the real numbers, where we can now divide by every nonzero number. In other words, a ring is an additive group with a multiplicative monoid, and a field is an additive group with a subset that is a multiplicative group (in particular, the subset that is everything except zero). For those who want to be pedantic, multiplication in a ring doesn't have to be commutative, but addition is, and both addition and multiplication are commutative in a field. A full list of the conditions we impose on the operations of a monoid, group, ring, and field can be found here.
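As a quick sanity check of the ring/field distinction, here's a sketch contrasting the integers with the rationals (Python's Fraction type models exact rational arithmetic):

```python
from fractions import Fraction

# in the ring of integers, a * 2 = 1 has no solution (a spot check, not a proof):
print(any(a * 2 == 1 for a in range(-1000, 1000)))   # False: 2 has no inverse

# in the field of rationals, every nonzero element has an inverse:
print(2 * Fraction(1, 2) == 1)                       # True
```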
So why can't we have a multiplicative inverse of 0 in a field? As it turns out, this is because 0 * a = 0 for every a, so nothing times 0 gets you to 1. There is technically a structure you can have if 0 = 1, but it turns out that structure contains only the single element 0 and nothing interesting happens, so fields specifically don't allow 0 = 1. Then, what if we instead relaxed the condition that 0 * a = 0? Well, it turns out that this isn't one of the fundamental conditions on multiplication either, but rather arises from the other properties (a simple proof: a * 0 = a * (0 + 0) = a * 0 + a * 0, and subtracting a * 0 from both sides gives a * 0 = 0). If we were to relax this condition, then we would lose some of the other nice properties that we built up. This will be a recurring theme throughout the rest of this post, so be wary.
Part III: We can't have everything we want in life.
While all the structures so far have been purely algebraic and purely algebraically motivated, the simplest way to start dividing by zero is actually "geometric," with several different ways of constructing the same space. The construction we'll use is as follows: take any field, say the real numbers or the complex numbers. We can always take the Cartesian product of a field K with itself to form what's called affine space K^2, the set of ordered pairs (a,b) for a, b in K. As a side note, the product of groups, rings, or fields has a natural definition of addition (or whatever the underlying group operation is) done componentwise, i.e. (a,b) * (c,d) = (a * c, b * d), but our operations will not coincide with this, as you'll see soon. This affine space is a plane - in fact, when we do this to the real numbers, we get the Cartesian plane - within which we can construct lines, some of which we get by considering the set of points (x, y) satisfying the familiar equation y = mx + b for some 'slope' m and 'intercept' b. In particular, we want to characterize all the lines through the origin. This gives us all the lines of the form y = mx, as well as one additional line, x = 0. This is the basic construction of what we call the projective line, a space characterizing all the lines through the origin of affine 2-space. The geometric picture of this space is actually a circle: the bottom point representing the number 0; the left and right halves representing negative and positive numbers, respectively; and the top point representing the number "infinity."
There are a few ways of describing points on the projective line. The formal way of doing so is by using what are called homogeneous coordinates. For any nonzero point (a,b) in affine space, we can surely find a line through the origin and (a,b). In particular, if a is not zero, then this line takes the form y = (b/a) x, with slope b/a. Furthermore, two points (a,b) and (c,d) can sit on the same line, namely whenever c = ka and d = kb for some number k. Thus, we define homogeneous coordinates as the set of points [a : b] for a, b in our field, where [a : b] = [ka : kb] by definition, and where the point [0 : 0] is not allowed as it doesn't specify any particular line (after all, every line passes through the origin). As alluded to above, this means that whenever a =/= 0, we can take k = 1/a to get [a : b] = [1 : b/a], in other words characterizing each line by its slope. Whenever a = 0, we can take k = 1/b to get [0 : b] = [0 : 1]. In other words, the projective line is, as we informally stated above, equivalent to the set of slopes of lines through the origin plus one other point representing the vertical line: the point at "infinity." Since slopes are just numbers in a field, we can add, subtract, multiply, and divide them as we normally do, with one exception: the slope of the line containing [a : b] for a =/= 0 is b/a, so the vertical line, consisting of the points [0 : b], suggests that b/0 should be infinity. Voila! We can divide by zero now, right? Well... there are two loose ends to tie down.

The first is what infinity actually means in this case, since it is among the most misunderstood concepts in mathematics. Normally, when people bandy about phrases such as "infinity isn't a number, just a concept" or "some infinities are different from others," they are usually wrong (but well meaning) and also talking about a different kind of infinity, the ones that arise from cardinalities. Everything in math depends on the context in which it lies, and infinity is no different. You may have heard of the cardinal infinity, the subject of Hilbert's Hotel, describing the size of sets and written primarily with aleph numbers. Similarly, you may also have heard of the ordinal infinity, describing the "place" in the number line greater than any natural number. Our infinity is neither of these: it is to some extent an infinity in name only, called such primarily to take advantage of the intuition behind dividing by zero. It's not "greater" than any other number (in fact, the normal ordering of an ordered field such as the real numbers breaks down on the projective line), and this is a consequence of the fact that if you make increasingly negative and increasingly positive slopes you end up near the same place: a vertical line. In other words, "negative infinity" and "positive infinity" are the same infinity.
The second loose end is that defining our operations this way is actually somewhat algebraically unsound, at least with respect to the way we think about operations in groups, rings, and fields. As mentioned above, the operation of addition can be lifted to affine space as (a,b) + (c,d) = (a+c, b+d). However, this same operation can't really be used for homogeneous coordinates: [1 : 0] = [2 : 0], as they lie on the same line (the line with slope 0), but [1 : 0] + [1 : 1] = [2 : 1] while [2 : 0] + [1 : 1] = [3 : 1], and [2 : 1] and [3 : 1] are not the same line, as they have slopes 1/2 and 1/3, respectively. Dividing by zero isn't even needed to get weirdness here. Luckily, we can simply define new operations by taking inspiration from fractions: b/a + d/c = (bc + ad)/ac, so we can let [a : b] + [c : d] equal [ac : bc + ad] (remembering that homogeneous coordinates do, to some extent, just represent slopes). Multiplication still works nicely, so we have [a : b] * [c : d] = [ac : bd]. Unluckily, with these definitions, we no longer get a field. In particular, we don't even have an additive group anymore: [a : b] + [0 : 1] = [0 : a] = [0 : 1], so anything plus infinity is still infinity. In other words, infinity doesn't have an additive inverse. Furthermore, despite ostensibly defining infinity as 1/0, the multiplicative inverse of 0, we have by our rules that [1 : 0] * [0 : 1] = [0 : 0], which isn't defined. Thus, 0 still doesn't have a multiplicative inverse, and 0/0 still doesn't exist. It seems like we still haven't really figured out how to divide by zero, after all this. (Once again, if you want to read up in more depth: the projective line is a special case of projective space, which is in turn a special case of the Grassmannian.)
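To make all of this concrete, here's a small sketch of the projective line with exactly the operations above (the class and its names are mine, purely for illustration; equality is tested by cross-multiplying, which works since [0 : 0] is excluded):

```python
class Proj:
    """[a : b] is the line of slope b/a; [0 : 1] plays the role of infinity."""
    def __init__(self, a, b):
        if a == 0 and b == 0:
            raise ValueError("[0 : 0] is not a point of the projective line")
        self.a, self.b = a, b
    def __eq__(self, other):   # [a : b] = [ka : kb]: compare by cross-multiplying
        return self.a * other.b == self.b * other.a
    def __add__(self, other):  # [ac : bc + ad], mirroring b/a + d/c = (bc+ad)/ac
        return Proj(self.a * other.a, self.b * other.a + self.a * other.b)
    def __mul__(self, other):  # [ac : bd]
        return Proj(self.a * other.a, self.b * other.b)

zero, one, inf = Proj(1, 0), Proj(1, 1), Proj(0, 1)
print(one + inf == inf)   # True: anything plus infinity is still infinity
try:
    zero * inf            # tries to build [0 : 0]...
except ValueError:
    print("0 * infinity is undefined")   # ...so 0/0 still doesn't exist
```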
Part IV: I would say wheels would solve all our problems, if not for the fact that they just make more problems.
At this point, to really divide by zero properly, we're going to need to bite the bullet and change what dividing really means. Just as we can think of subtraction as adding the additive inverse (i.e. a - b = a + -b, where -b is a number), we can start thinking of division as just multiplying by... something, i.e. a/b = a * /b, where /b is something vaguely related to the multiplicative inverse. We can already start doing this in the projective line, where we can define /[a : b] = [b : a], and it works nicely as [a : b] * [b : a] = [ab : ab] = [1 : 1] whenever neither a nor b is zero. This lets us rigorize the statements 1/infinity = 0, infinity/0 = infinity, and 0/infinity = 0, but doesn't really help us do 0/0 or infinity/infinity. Furthermore, note that because 0 * /0 =/= 1, the element /[a : b] isn't always a true multiplicative inverse of [a : b]; it's just the closest we can get.
Enter the wheel! If 0/0 is undefined, then we can simply... define it. It worked so nicely for adding in infinity, after all - the picture of the point we added for infinity is taking a line and curling it up into a circle, and I like circles! Surely adding another point for 0/0 would be able to provide a nice insight just as turning a line into the projective line did for us.
So here's how you make a wheel:
You take a circle.
You add a point in the middle.
Yeah that's it. The new point, usually denoted by ⊥, is specifically defined as 0/0, and really just doesn't do anything else. Just like for infinity, we still have that a + ⊥ = ⊥ and a * ⊥ = ⊥ for all a (including infinity and ⊥). It doesn't fit into an order, it doesn't fit in topologically, it is algebraically inert both with respect to addition and multiplication. It is the algebraic formalization of the structure that gives you NaN whenever you fuck up in a calculator and the one use of it both inside and outside mathematics is that it lets you be pedantic whenever your elementary school teacher says "you can't divide by zero" because you can go "yeah you can it's just ⊥ because i've been secretly embedding all my real numbers into a wheel this whole time" (supposing you can even pronounce that).
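Here's the earlier sketch upgraded to a wheel: identical formulas, except [0 : 0] is now a legal point. (This glosses over some of the finer points of the official wheel axioms; it's just meant to show ⊥ behaving like NaN.)

```python
class Wheel:
    """The projective line plus one extra point [0 : 0]: bottom, i.e. 0/0."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def is_bottom(self):
        return self.a == 0 and self.b == 0
    def __eq__(self, other):   # bottom equals only bottom; otherwise cross-multiply
        if self.is_bottom() or other.is_bottom():
            return self.is_bottom() and other.is_bottom()
        return self.a * other.b == self.b * other.a
    def __add__(self, other):  # same formula as before: [ac : bc + ad]
        return Wheel(self.a * other.a, self.b * other.a + self.a * other.b)
    def __mul__(self, other):  # [ac : bd]
        return Wheel(self.a * other.a, self.b * other.b)
    def inv(self):             # the involution /[a : b] = [b : a]
        return Wheel(self.b, self.a)

zero, inf, bot = Wheel(1, 0), Wheel(0, 1), Wheel(0, 0)
print(zero * inf == bot)         # 0 * infinity = bottom, no error this time
print(zero.inv() == inf)         # /0 = infinity: dividing by zero is now legal
print(Wheel(1, 7) + bot == bot)  # bottom swallows everything, just like NaN
print(bot.inv() == bot)          # /bottom = bottom: it really is inert
```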
Part V: So what was the point of all this anyways
The wheel is charming to me because it is one of the structures in mathematics where you can tell someone just asked "what if this were true," built some space where it was, and started toying with it to see what happens. It's a very human and very beautiful thing to see someone go against conventional knowledge and ask "what breaks when you allow 0/0," even if conventional knowledge does tend to be right most of the time. In this sense, perhaps the uselessness of the wheel is the point: even despite how little ⊥ does from a mathematical lens, some people still took the time to axiomatize this system, to find a list of conditions that were both consistent and sufficient to describe a wheel, and to genuinely do actual work seeing how it fits within the universe of algebraic structures it lives in.
While a wheel may not be used for much (it might be describable in universal algebra while a field isn't, though I'm not too well versed in universal algebra so I'm not actually entirely sure), every other structure discussed above is genuinely well studied and applicable within many fields inside and outside of math. For more viewpoints on what the projective line (and more generally projective space) is used for, some keywords to help you on your way are: compactification if you care about the topological lens; the real projective line or the Riemann sphere if you care more about the analysis side; or honestly the entirety of classical algebraic geometry if that's your thing.
Another structure that might be interesting to look at is the more general case of common meadows, algebraic structures (M, 0, 1, +, -, *, /) in which the condition that / be involutive is relaxed (i.e. /(/x) need not always be x), unlike in a wheel, where it always is. Note that these structures are called meadows because the base structure they're built on is a field (get it? not our best work, I promise mathematicians are funnier than this). These structures are at the very least probably more interesting than wheels, though I haven't checked them out in any amount of detail either, so who knows, perhaps there isn't much of substance there either.
132 notes · View notes
tensorboi · 6 months ago
Text
surprised nobody seems to have mentioned algebraic geometry! that was both the hardest and most interesting maths course i have ever taken, just because it's such a bizarre way of thinking about geometry (something i'm very comfortable with in the manifold setting).
least favourite would probably have to be tree theory; i hate discreteness and had successfully avoided it until i had to learn about geometric group theory lmao
Math enthusiasts of tumblr. What math subjects have you studied and which ones were your favorite? Which ones were your least favorite? Which ones were the hardest?
205 notes · View notes
tensorboi · 7 months ago
Text
the scary part is when random numbers start popping up where there definitely shouldn't be any, and now you have to deal with those. like ok i suppose 4 is a cosmically important number when discussing differentiable structures? and there are exactly five exceptional lie algebras?? awesome let's just try our best to sidestep that
personally I love when maths stops including any numbers at all, that bit's my favourite part
4 notes · View notes
tensorboi · 8 months ago
Text
complex numbers become much less mysterious when you frame things in terms of rotation from the start. like nah, i'm not coping with the fact that equations don't have solutions, i literally just want a system where addition corresponds to translation and multiplication corresponds to rotation and scaling. i mean are you going to tell me that rotations aren't real?
Patiently explain to someone annoying why imaginary numbers are in fact relevant and not actually imaginary.
188 notes · View notes
tensorboi · 8 months ago
Text
You know, I despise how primes are defined for most people. It's like the whole "a vector is something with direction and magnitude" thing, but worse.
"A prime is a number whose divisors are 1 and itself" because it's lead to this annoying yet understandable confusion about whether 1 is prime or not. Strictly speaking, this definition is part of the definition of an irreducible element (number) except that we also exclude units from this definition.
What's a unit? A unit is any number (element of a ring in general) which has a multiplicative inverse. 1 is a unit and hence we don't count it as irreducible.
So what's the actual definition of a prime? An element p of a ring is prime if it is nonzero, not a unit, and whenever p divides a·b, it divides at least one of a or b. I encourage you to think about this in the context of the primes we're used to. Notice again that we exclude units.
One can show that (in an integral domain) a prime is always irreducible; however, in general, irreducible does not imply prime. For the ring of integers it does, which is why the naive definition most people know is still okay on the surface.
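The classic example is the ring Z[√-5], where 2 is irreducible but not prime: 2 divides (1+√-5)(1-√-5) = 6 without dividing either factor. A brute-force sketch (the helper names are mine):

```python
from itertools import product

# elements of Z[sqrt(-5)] as pairs (x, y) meaning x + y*sqrt(-5)
def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c - 5 * b * d, a * d + b * c)

def divides(p, q, bound=20):
    """Does p divide q in Z[sqrt(-5)]? Brute-force search for a cofactor."""
    return any(mul(p, (x, y)) == q
               for x, y in product(range(-bound, bound + 1), repeat=2))

two, u, v = (2, 0), (1, 1), (1, -1)      # u * v = 6 = 2 * 3
print(mul(u, v))                          # (6, 0): so 2 divides u*v
print(divides(two, mul(u, v)))            # True
print(divides(two, u), divides(two, v))   # False False: 2 is not prime here!
```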
The solution really is to explain to people when they're being taught what prime numbers are why we exclude 1. One reason you may give is that it ruins unique factorisation by prime numbers. But also make it clear that prime and composite numbers aren't the only kinds. I think the binary is also part of the issue
97 notes · View notes
tensorboi · 8 months ago
Text
ok i actually have an answer here omg. my favourite construction is... the functorial description! essentially, to each manifold M you assign a vector bundle TM (which basically bundles up all your tangent spaces into one big space), and to each map f: M -> N you assign a bundle map Tf: TM -> TN which is basically the derivative. here are the technical details (duck if you hate category theory, because you're about to get blasted with it):
There is a functor T from the category of smooth manifolds to smooth vector bundles satisfying the following two properties:
1. On Euclidean spaces: the vector bundle TR^n over R^n is the trivial bundle R^n × R^n, and the output of a map f: R^n -> R^m is the derivative Tf = (f, df): R^n × R^n -> R^m × R^m.
2. Given a manifold M and an open submanifold U, the vector bundle TU is just the restriction of the vector bundle TM to U. Furthermore, if f: M -> N is a map between manifolds and f|U is the restriction of f to U, then the map T(f|U) is just the restriction of Tf to U.
This functor is unique up to natural equivalence. The vector bundle TM is called the tangent bundle of M, and the bundle map Tf is called the pushforward of f (sometimes denoted by df or f_*).
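(a little aside, if you like code: the JAX library's jvp transform computes exactly the pair (f(x), df_x(v)), which is the Tf of property 1 on euclidean spaces; the map f below is just a made-up example.)

```python
import jax
import jax.numpy as jnp

def f(x):                      # a smooth map f: R^2 -> R^3
    return jnp.array([x[0] * x[1], jnp.sin(x[0]), x[1] ** 2])

x = jnp.array([1.0, 2.0])      # a point of R^2, i.e. a basepoint in TR^2
v = jnp.array([0.5, -1.0])     # a tangent vector sitting over that point

# the pushforward of (x, v): jvp returns (f(x), df_x(v)), exactly Tf
fx, dfx_v = jax.jvp(f, (x,), (v,))
print(fx, dfx_v)
```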
i love this description for so many reasons:
firstly, all of the others slot into this description! for instance equivalence classes of curves make the tangent bundle, and if c is a curve on M and f: M -> N then f ○ c is a curve on N, so Tf([c]) = [f ○ c].
secondly, it demonstrates the generic properties of the tangent bundle so clearly. in the case of euclidean spaces, you really are just sticking a copy of R^n to each point of the space. moreover, locally, the pushforward is literally just the derivative.
thirdly, you immediately get a topology out of it! with all the other descriptions, you have to fiddle a bit to make a topology once you bundle them up. but here it practically falls out: your tangent bundle is locally just TR^n ≈ R^2n, which has a natural topology, and you patch them together via transition functions. it's so clean and elegant!
finally: it's coordinate free, but the coordinates are always there when you need them! not once in the definition do you even mention a coordinate chart, but the entire idea is so readily compatible.
if you want to learn the specifics of this definition, check out spivak vol 1!
50 notes · View notes
tensorboi · 8 months ago
Text
Why is the Pythagorean theorem true, really? (and a digression on p-adic vector spaces)
ok so if you've ever taken a math class in high school, you've probably seen the Pythagorean theorem at least a few times. It's a pretty useful formula, pretty much essential for calculating lengths of any kind. You may have even seen a proof of it, something to do with moving around triangles or something idk. If that's as far as you've gotten then you are probably unbothered by it.
Then, if you take a math class in university, you'll probably see the notion of an abstract vector space: it's a place where you can move things and scale them. We essentially use these spaces as models for the physical space we live in. A pretty important thing you can't do yet, though, is rotate things or say how long they are! We need to put more structure on our vector spaces to do that, called a norm.
Here's the problem, though: there are a *lot* of different choices of norm you can put on your vector space! You could use one which makes Pythagoras' theorem true; but you could also use one which makes a³ + b³ = c³ instead, or a whole host of other things! So all of a sudden, the legitimacy of the most well-known theorem is called into question: is it really true, or did we just choose for it to be true?
And if you were expecting me to say "then you learn the answer in grad school" or something, I am so sorry: almost nobody brings it up! So personally, I felt like I was going insane until very recently.
(Technical details: the few that do bring it up might say that the Pythagorean norm is induced from another thing called an inner product, so it's special in that way. But also, that doesn't really get us anywhere: you can get a norm where a⁴ + b⁴ = c⁴ if you are allowed to take products of 4 vectors instead!)
How is this resolved, then? It turns out the different norms are not created equal, and the Pythagorean norm has a very special property the others lack: it looks the same in every direction, and lengths don't change when you rotate them. (A mathematician would say that it is isotropic.) Now, all of a sudden, things start to make sense! We *could* choose any norm we like to model our own universe, but why are we going to choose one which has preferred directions? In the real world, there isn't anything special about up or down or left or right. So the Pythagorean norm isn't some cosmic law of the universe, nor is it some random decision we made at the beginning of time; it's just the most natural choice.
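If you want to see the isotropy claim numerically, here's a quick numpy sketch: rotate a vector by 45° and compare p-norms before and after. Only p = 2 leaves the length alone.

```python
import numpy as np

def p_norm(v, p):
    return np.sum(np.abs(v) ** p) ** (1 / p)

theta = np.pi / 4                           # rotate by 45 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
v = np.array([1.0, 0.0])

for p in (1, 2, 3):
    print(p, p_norm(v, p), p_norm(R @ v, p))
# only p = 2 gives the same length before and after the rotation
```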
But! That's not even the best part! If you've gone even further in your mathematical education, you'll know about something called the p-adic numbers. All of our vector spaces so far have been over the field of real numbers, but the p-adic numbers can support vector spaces just as well. So... are the Pythagorean norms also isotropic in p-adic spaces? Perhaps surprisingly, the answer is no! It turns out that the isotropic norms in p-adic linear algebra are the ∞-norms, where you take the maximum coordinate (rather than summing squares)! So the Pythagorean theorem looks very different in p-adic spaces; instead of a² + b² = c², it looks more like a^∞ + b^∞ = c^∞.
If you're burning to know more details on this, like I am right now as I'm learning it, this link and pregunton's linked questions go into more details about this correspondence: https://math.stackexchange.com/questions/4935985/nature-of-the-euclidean-norm
The interesting thing is that these questions don't have well-known answers, so there is probably even more detail that we have yet to explore!
tl;dr: the pythagorean theorem is kind of a fact of the universe, but not really, but it kinda makes sense for it to be true anyway. also we change the squares to powers of infinity in p-adic numbers and nobody really knows why
51 notes · View notes
tensorboi · 1 year ago
Text
holy shit i'm seeing into the fucking matrix
i just realised you don't even need the real numbers to do real analysis
you can just call a function f: Q -> Q "real-continuous" if it's continuous and sends cauchy sequences to cauchy sequences
what the fuck
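(a quick numerical sketch of the claim above, using python's Fraction type for exact rational arithmetic; the function and the newton iteration are just my made-up example.)

```python
from fractions import Fraction

def f(q):                      # a "real-continuous" function Q -> Q
    return q * q

# a rational cauchy sequence converging (in R) to sqrt(2), via newton's method
x = Fraction(1)
for _ in range(6):
    x = (x + 2 / x) / 2        # stays rational: Fraction arithmetic is exact
    print(float(x), float(f(x)))
# f sends this cauchy sequence to one converging to 2 = f(sqrt(2)),
# recovering the extension's value at sqrt(2) without ever leaving Q
```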
5 notes · View notes
tensorboi · 1 year ago
Text
random ramblings on tensors
okay so everyone who does differential geometry or physics has probably heard the following """definition""" of a tensor:
A tensor is something that transforms like a tensor.
people say this definition is well-defined and makes sense, but that feels like coping to me: the definition leaves a lot to be desired. not only does it make no attempt to define what "transforms like a tensor" means, but it's also not what most working mathematicians actually imagine tensors to be, even if you add in what "transforms" means. instead, you can think of tensors in this way:
A tensor on a vector space V is an R-valued multilinear map on V and V*;
or in this way:
A tensor on a vector space V is a pure-grade element of the tensor algebra over V.
there are some nice category-theoretic formulations (specifically the tensor product is left-adjoint to Hom), but otherwise that's pretty much the deal with tensors. these two definitions are really why they come up, and why we care about them.
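(to see the first definition in action: a quick numpy sketch treating an array of components as a multilinear map on V* and V, and checking linearity in each slot. the random data is obviously just for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3))          # a (1,1)-tensor on V = R^3, as an array

def as_map(w, v):                    # the same data, read as a multilinear map:
    return w @ T @ v                 # it eats a covector w and a vector v

w, v, u = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
lam = 2.5
# linearity in each slot separately, checked numerically:
print(np.isclose(as_map(w, lam * v + u), lam * as_map(w, v) + as_map(w, u)))
print(np.isclose(as_map(lam * w + u, v), lam * as_map(w, v) + as_map(u, v)))
```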
so that got me wondering: how did we end up in this position? if you're like me, you probably spent hours poring over wikipedia and such, desperately trying to understand what a tensor is so you could read some equation. so why is it that such a simple idea is presented in such a convoluted way?
i guess there's three answers to this, somewhat interrelated. the first is just history: tensors were found and discovered useful before the abstract vector space formalism was ironed out (it was mainly riemann, levi-civita and ricci who pioneered the use of tensors in the late 1800s, and i think modern linear algebra was being formalised in parallel). the people actually using tensors in their original application were not thinking about it in our simple and clean way, because they literally didn't have the tech yet.
the second answer is background knowledge. to understand the definition of a tensor in terms of transforming components, all you need is high-school algebra (and i guess multivariable calc if you're defining tensor fields). however, to define a tensor geometrically, you need to know about vector spaces and dual spaces and the canonical isomorphism from V to V**; and to define a tensor algebraically (in a satisfactory way imo), you need to have a feeling for abstract algebra and universal properties. so if someone asks what a tensor is, you'll probably be inclined to use the first because of its comparative simplicity (at first glance).
the third answer is very related to the second, and it's about who's answering the questions. think about it: the people who are curious about tensors probably want to understand some application of tensors; they might want to understand the einstein field equations, or stress and strain in continuum physics, or backpropagation from machine learning. so who are they going to ask? not a math professor, it's not even really a math question. they're going to ask someone applying the science, because that's where their interest is for now. and, well, if an applied scientist is measuring/predicting a tensor, are they thinking about it as a map between abstract spaces? or as an array of numbers that they can measure?
to be honest, none of this matters that much, since i'm pretty happy with my understanding of tensors for now. i guess the point of this post is just to vent my frustrations about a topic that turned out to just be "some tools from math were advertised really badly"
2 notes · View notes
tensorboi · 1 year ago
Text
this is common knowledge, but it was still quite a while before i saw this definition:
A vector space over a field K is an abelian group V with a left ring action by K (i.e. a ring homomorphism K -> End(V)).
the nice thing about this phrasing is that it really makes clear what addition and scalar multiplication are meant to do: we can think of the addition as "fundamental," and then the scalar multiplication as an "enrichment" of addition. it's also clear from this that modules are not generalisations of vector spaces, but rather that vector spaces are specialisations of modules; this makes the statement "all vector spaces are free" much more surprising!
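(a tiny numerical sketch of that phrasing, with V = R² and scalars acting through λ ↦ λ·id; the checks below are exactly the ring homomorphism conditions.)

```python
import numpy as np

# scalars act as endomorphisms of the abelian group V = R^2:
def act(lam):                 # the map K -> End(V), here lam |-> lam * identity
    return lam * np.eye(2)

a, b = 3.0, -1.5
print(np.allclose(act(a + b), act(a) + act(b)))   # K's + matches End(V)'s +
print(np.allclose(act(a * b), act(a) @ act(b)))   # K's * matches composition
print(np.allclose(act(1.0), np.eye(2)))           # the unit goes to the identity map
```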
Define a concept different from how the textbook does it.
78 notes · View notes
tensorboi · 1 year ago
Text
god, de rham's theorem might be the most beautiful thing i know so far. you spend so much time thinking of topology and analysis as different fields, and like, who can blame you? topology is weird open sets and analysis is very symbolic. but then there are connections here and there; most notably, certain differential equations stop having solutions once you delete points, which is essentially the content of the poincare lemma.
then BAM, de rham's theorem makes it all explicit. differential forms are really just duals to shapes, and that's the natural way to think about them. all of a sudden, the calculus you thought was so symbolic is completely geometric: d is the (co)boundary operator, d²=0 just means the boundary of the boundary is 0, the poincare lemma is just telling you that contractible spaces aren't interesting homologically. how can anyone not love this theorem omg
Math people, reblog with your fav theorem and why.
I'll start, the Wedderburn-Artin theorem is a beautiful structure theorem on semisimple rings which says they decompose uniquely as a product of matrix rings over division rings. This is a beautiful result but it also underlies a lot of very cool theory like Brauer Theory, Galois Cohomology and the theory of Galois and Étale Algebras.
What's yours?
187 notes · View notes