#reflexive symmetric transitive relations
Explore tagged Tumblr posts
Text
When I see "transitive", I can't help think of equivalence relations (a maths thing), which I know are defined by three properties: they are transitive (A~B and B~C => A~C), reflexive (A~B => B~A), and the other one (A~A is true), but I can't remember the name of the other one.
Then I look up equivalence relations and find the "other one" is actually "reflexive", what I think is called "reflexive" is actually called "symmetric", and it's fine.
if shes your girl then why have i slowly been replacing her parts until there’s nothing left of her original body? is she then still your girl?
#equivalence relations#transitive#reflexive#symmetric#the tilde is pronounced twiddles#because that's easier than always saying is equivalent to
160K notes
·
View notes
Text
Universal Algebraic Geometry, Part 1
In this post I recall a number of basic ideas from universal algebra in order to start setting up a framework for studying the algebraic geometry of arbitrary algebraic objects.
For our purposes, an algebra is just a set equipped with some operations. We assume that the set of operations has been fixed. Given an algebra A alongside an operation ♡ we write ♡ᴬ for the definition of that operation on the algebra A, or just ♡ when ambiguity does not result. Our notion of a homomorphism is as follows:
Definition: A homomorphism f from an algebra A to an algebra B is a function f : A -> B such that for each n-ary operation ♡ alongside elements x₁, ..., xₙ ∈ A it follows that f(♡ᴬ(x₁, ..., xₙ)) = ♡ᴮ(f(x₁), ..., f(xₙ))
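To make this concrete, here is a small Python sketch (an entirely illustrative setup of my own, not anything canonical): algebras are finite sets with named operations, and we brute-force the homomorphism condition over all argument tuples.

```python
from itertools import product

def is_homomorphism(f, A, B, ops_A, ops_B):
    """Check f(op_A(x1,...,xn)) == op_B(f(x1),...,f(xn)) for every
    operation (matched by name) and every tuple of arguments."""
    for name, op_A in ops_A.items():
        op_B = ops_B[name]
        n = op_A.__code__.co_argcount  # arity of the operation
        for args in product(A, repeat=n):
            if f(op_A(*args)) != op_B(*(f(x) for x in args)):
                return False
    return True

# Z/6 and Z/3 as algebras with a binary + and a unary negation
Z6, Z3 = range(6), range(3)
ops6 = {"add": lambda x, y: (x + y) % 6, "neg": lambda x: (-x) % 6}
ops3 = {"add": lambda x, y: (x + y) % 3, "neg": lambda x: (-x) % 3}

print(is_homomorphism(lambda x: x % 3, Z6, Z3, ops6, ops3))              # True
print(is_homomorphism(lambda x: x % 3 if x < 5 else 0, Z6, Z3, ops6, ops3))  # False
```

The second check fails because, e.g., 2 + 3 = 5 in Z/6 maps to 0 while f(2) + f(3) = 2 in Z/3.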
Kernels and Congruences
Definition: A congruence θ on an algebra A is an equivalence relation such that for any (x₁, y₁) ∈ θ, ..., (xₙ, yₙ) ∈ θ and n-ary operation ♡ it follows that (♡ᴬ(x₁, ..., xₙ), ♡ᴬ(y₁, ..., yₙ)) ∈ θ. We write Cgr(A) for the set of congruences on A.
Example: The maximal congruence ∇ on A is just A × A.
Example: The minimal congruence ∆ on A is just { (x, x) | x ∈ A }.
Example: Any arbitrary intersection of congruences on A is a congruence on A. In particular, we can form the intersection of all congruences on A containing any arbitrary subset R of A × A in order to obtain the minimum congruence containing R. In particular, choosing R to be the union of two congruences, we get a join of congruences (the intersection of two congruences is their meet).
In the case of a group, it follows that (x, y) ∈ θ if and only if (x - y, 0) ∈ θ hence a congruence is completely determined by the set { x | (x, 0) ∈ θ }. In other words, by a normal subgroup. So congruences are just the data of a normal subgroup. Similar remarks apply to ideals of rings.
Kernels of homomorphisms are a particularly important source of congruences on algebras:
Definition: The kernel of a homomorphism f : A -> B is denoted by ker(f) and defined by setting ker(f) := { (x, y) | f(x) = f(y) }.
Lemma: Given a homomorphism f : A -> B it follows that ker(f) is a congruence on A. Proof:
Given x ∈ A it follows that f(x) = f(x) hence (x, x) ∈ ker(f).
Given x, y ∈ A such that (x, y) ∈ ker(f) it follows that f(x) = f(y), which is to say f(y) = f(x) hence (y, x) ∈ ker(f).
Given x, y, z ∈ A such that (x, y) ∈ ker(f) and (y, z) ∈ ker(f) it follows that f(x) = f(y) = f(z) hence (x, z) ∈ ker(f).
Given (x₁, y₁) ∈ ker(f), ..., (xₙ, yₙ) ∈ ker(f) and n-ary operation ♡ it follows that f(xₖ) = f(yₖ) for each k, thus by the homomorphism property f(♡ᴬ(x₁, ..., xₙ)) = ♡ᴮ(f(x₁), ..., f(xₙ)) = ♡ᴮ(f(y₁), ..., f(yₙ)) = f(♡ᴬ(y₁, ..., yₙ)) showing that (♡ᴬ(x₁, ..., xₙ),♡ᴬ(y₁, ..., yₙ)) ∈ ker(f).
QED.
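The lemma can be checked by brute force on a small example. Below is a Python sketch (my own illustrative helpers, mirroring the definitions above): we compute the kernel of the reduction Z/6 → Z/2 and verify that it is a congruence for addition mod 6.

```python
from itertools import product

def kernel(f, A):
    """ker(f) = { (x, y) | f(x) = f(y) } as a set of pairs."""
    return {(x, y) for x in A for y in A if f(x) == f(y)}

def is_congruence(theta, A, ops):
    # equivalence relation
    if not all((x, x) in theta for x in A):
        return False
    if not all((y, x) in theta for (x, y) in theta):
        return False
    if not all((x, z) in theta for (x, y) in theta for (w, z) in theta if y == w):
        return False
    # compatibility with each operation
    for op in ops:
        n = op.__code__.co_argcount
        for xs in product(A, repeat=n):
            for ys in product(A, repeat=n):
                if all((x, y) in theta for x, y in zip(xs, ys)):
                    if (op(*xs), op(*ys)) not in theta:
                        return False
    return True

Z6 = range(6)
add6 = lambda x, y: (x + y) % 6
theta = kernel(lambda x: x % 2, Z6)   # kernel of Z/6 -> Z/2
print(is_congruence(theta, Z6, [add6]))  # True
```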
Theorem: Given a homomorphism f : A -> B alongside a congruence θ on B it follows that the preimage f*(θ) := { (x, y) ∈ A × A | (f(x), f(y)) ∈ θ } is a congruence on A. Proof:
Given x ∈ A, as θ is reflexive it follows that (f(x), f(x)) ∈ θ hence (x, x) ∈ f*(θ).
Given (x, y) ∈ f*(θ) it follows that (f(x), f(y)) ∈ θ. As θ is symmetric it follows that (f(y), f(x)) ∈ θ, thus (y, x) ∈ f*(θ).
Given (x, y) ∈ f*(θ) and (y, z) ∈ f*(θ) it follows that (f(x), f(y)) ∈ θ and (f(y), f(z)) ∈ θ. As θ is transitive it follows that (f(x), f(z)) ∈ θ hence (x, z) ∈ f*(θ).
Given (x₁, y₁) ∈ f*(θ), ..., (xₙ, yₙ) ∈ f*(θ) alongside an n-ary operation ♡ it follows that (f(x₁), f(y₁)) ∈ θ, ..., (f(xₙ), f(yₙ)) ∈ θ, hence as θ is a congruence (♡(f(x₁), ..., f(xₙ)), ♡(f(y₁), ..., f(yₙ))) ∈ θ. As f is a homomorphism, this means (f(♡(x₁, ..., xₙ)), f(♡(y₁, ..., yₙ))) ∈ θ, showing that (♡(x₁, ..., xₙ), ♡(y₁, ..., yₙ)) ∈ f*(θ).
QED.
Therefore, every homomorphism f : A -> B induces a map f* : Cgr(B) -> Cgr(A). In fact, congruences organise themselves into a lattice, and this map f* is a homomorphism of lattices. But for the moment, we will not worry about that.
Composite Kernel Lemma: Given homomorphisms f : A -> B and g : B -> C it follows that
ker(g ∘ f) = f*(ker(g)).
Proof:
Observe that (x, y) ∈ ker(g ∘ f) if and only if g(f(x)) = g(f(y)), which is the case if and only if (f(x), f(y)) ∈ ker(g), which is the case if and only if (x, y) ∈ f*(ker(g)).
QED. Corollary: ker(f) = f*(∆). Proof: ker(id ∘ f) = f*(ker(id)) = f*(∆). QED.
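Here is a quick computational sanity check of the Composite Kernel Lemma (the particular sets and maps are my own illustrative choices): with f : Z/12 → Z/6 and g : Z/6 → Z/3 the evident reductions, ker(g ∘ f) and f*(ker(g)) coincide.

```python
def kernel(f, A):
    """ker(f) as a set of pairs, following the definition above."""
    return {(x, y) for x in A for y in A if f(x) == f(y)}

A, B = range(12), range(6)
f = lambda x: x % 6   # homomorphism Z/12 -> Z/6
g = lambda x: x % 3   # homomorphism Z/6 -> Z/3 (well-defined since 3 divides 6)

ker_gf = kernel(lambda x: g(f(x)), A)                                          # ker(g ∘ f)
f_star_ker_g = {(x, y) for x in A for y in A if (f(x), f(y)) in kernel(g, B)}  # f*(ker(g))
print(ker_gf == f_star_ker_g)  # True
```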
Quotients of Algebras
Given a congruence θ on an algebra A it is possible to construct the quotient algebra A/θ whose elements are equivalence classes of elements of A under the congruence θ. Sending every element to its equivalence class under θ gives a homomorphism π : A -> A/θ which we refer to as the canonical projection of θ.
Theorem: Given a congruence θ on an algebra A it follows for every homomorphism f : A -> B with θ ⊆ ker(f) that there exists a unique morphism g : A/θ -> B such that f = g ∘ π. Moreover, g is injective if and only if θ = ker(f).
Proof: Observe that π(x) = π(y) iff (x, y) ∈ θ, which as θ ⊆ ker(f) implies f(x) = f(y). Therefore, it is well-defined to construct g by setting g(π(x)) = f(x), and g is clearly the unique morphism with this property. Moreover, g is injective iff f(x) = f(y) implies π(x) = π(y), which is precisely the condition ker(f) ⊆ θ, i.e. θ = ker(f). One can check that g is a homomorphism. QED.
Homomorphism Theorem: Given a surjective homomorphism f : A -> B it follows that there exists an isomorphism g : A/ker(f) -> B such that g ∘ π = f.
Proof:
Applying the prior Theorem with θ = ker(f), there exists a unique injective morphism g : A/ker(f) -> B such that g ∘ π = f. As f is surjective, g is surjective as well, hence g is an isomorphism.
QED.
Hereditary Classes of Algebras
Recall that the prime ideals 𝔭 of a commutative ring R are exactly those ideals 𝔭 such that R/𝔭 is an integral domain. Notice that being an integral domain amounts to satisfying a universal axiom, namely xy = 0 implies x = 0 or y = 0. Hence, integral domains are closed under passing to subalgebras, and the property of being an integral domain is preserved by isomorphisms.
Definition: A class K of algebras is hereditary iff it is closed under subalgebras and isomorphisms. Given a hereditary class K we say that a congruence θ on an algebra A is a K-congruence iff the quotient algebra A/θ belongs to the class K.
Theorem: Given a hereditary class K alongside a homomorphism f : A -> B it follows for every K-congruence θ on B that f*(θ) is a K-congruence on A.
Proof:
Consider a homomorphism f : A -> B alongside a K-congruence θ on B. It follows that f*(θ) is a congruence on A, so it suffices to demonstrate that it is a K-congruence. Recalling our canonical projection π : B -> B/θ and our Composite Kernel Lemma, it follows that ker(π ∘ f) = f*(ker(π)) = f*(θ). Therefore, the morphism π ∘ f : A -> B/θ factorises through a unique injective morphism g : A/f*(θ) -> B/θ. This means that A/f*(θ) is isomorphic to a subalgebra of B/θ. As θ is a K-congruence it follows that B/θ belongs to K, which as K is a hereditary class then implies that A/f*(θ) belongs to K, so that f*(θ) is then a K-congruence.
QED.
Corollary: Preimages of prime ideals are prime ideals.
38 notes
·
View notes
Text
Cardinal Numbers
Nyaa!~
I have too many drafts :c
Hi everyone! Today, we are going to learn about infinite cardinal numbers! I hope this'll be fun! ^^
I was also planning to explain something about the axiom of choice, but I decided that it's better if that becomes its own blog-post.
So, first: what is a cardinal number?
✦ Motivation ✦
Well, cardinal numbers are numbers that represent the size of a set of things. For example, {owo, uwu} is a set of cardinality 2: it has two elements, owo and uwu.
However, we can't simply count the elements of a set to figure out its cardinality. What if we come across a set like {0, 1, 2, 3, ...}? I mean... you can try counting the elements, but it's gonna take you a veryy long time.
So! What do we do?
Well,,, first, observe that cardinals can't exist in a vacuum. If we say that {owo, uwu} has cardinality 2, what does that actually mean? ‘2’ is just a symbol we use to describe the cardinality of {owo, uwu}, but has no meaning on its own. Where it gets interesting is when we compare the cardinality of {owo, uwu} to {10, 3}: they're the same! Now, that is interesting.
So, we use ‘2’ to describe the cardinality of sets like {owo, uwu} and {10, 3}, and this symbol lets us recognize at a glance when two sets have the same cardinality, without having to exhaustively chain together statements like “this set has the same size as this other set” via the transitivity of (cardinal) equality.
But what makes {owo, uwu} and {10, 3} so similar that they can be considered ‘equinumerous’?
Well,,, observe that the cardinality of a set doesn't change when we replace some of the elements with other elements: the cardinality of a set is completely independent from what the members of that set actually are. If we have a set like {catgirl lucy, my dearest friend, Semi}, and I replace ‘my dearest friend’ with ‘my worst enemy’ (so we get {catgirl lucy, my worst enemy, Semi}), the size of the set doesn't change!
I hope these two observations are enough to motivate the definition of a cardinal number:
~ Cardinal Equality ~
[Definition] Two sets, A and B, have the same cardinality, denoted |A| = |B|, iff there exists a bijection f: A → B.
There are three kinds of functions that will be important to us in this blog-post:
An injection is a function f: A → B from some set A to some set B so that, for all x,y ∈ A, if f(x) = f(y), then x = y.
A surjection is a function f: A → B from some set A to some set B so that, for all y ∈ B, there is some x ∈ A so that f(x) = y.
A bijection is a function f: A → B from some set A to some set B that is both an injection and a surjection.
A bijection thus pairs every element of A with a unique element of B, and vice versa.
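If you like, you can play with these three notions in Python for finite sets (a tiny sketch of my own, with made-up helper names):

```python
def is_injective(f, A):
    # distinct inputs give distinct outputs
    return len({f(x) for x in A}) == len(set(A))

def is_surjective(f, A, B):
    # every element of B is hit
    return {f(x) for x in A} == set(B)

def is_bijective(f, A, B):
    return is_injective(f, A) and is_surjective(f, A, B)

# {owo, uwu} and {10, 3} have the same cardinality:
pairing = {"owo": 10, "uwu": 3}
print(is_bijective(pairing.get, ["owo", "uwu"], [10, 3]))  # True
```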
So now, we need to show that this definition of cardinality actually makes sense. I.e., that having the same cardinality is an equivalence relation:
An equivalence relation is a relation ~ such that:
~ is reflexive: x ~ x for all x
~ is symmetric: if x ~ y then y ~ x
~ is transitive: if x ~ y and y ~ z, then x ~ z
...
What?
OK, fine... I'll give you the answer: reflexivity of cardinal equality is witnessed by the identity function on a set, symmetry comes from taking the function's inverse, and transitivity comes from composition. I do want you to start thinking about how to solve these problems in the future!
Given an equivalence relation ~ on some class C, we can define a function F with domain C so that, for all x,y ∈ C, F(x) = F(y) iff x ~ y. If ~ fails one of the conditions of an equivalence relation, such a function does not exist.
This time, I will leave you to ponder why~
So! We can choose some function |·| that maps sets X to objects |X| such that |X| = |Y| iff X and Y have the same cardinality; the choice of mapping doesn't really matter.
The cardinality of ℕ, the set of natural numbers, is denoted ℵ₀ (‘aleph-nought’); this is also the cardinality of ℤ, the set of integers, and of ℚ, the set of rational numbers. ℵ₀ is an infinite cardinal (not 0, 1, 2, 3, etc), but it isn't the only infinite cardinal!
|P(ℕ)| = 𝔠, the continuum, is another infinite cardinal. It is the cardinality of the set of subsets of ℕ, i.e. the set of sets of natural numbers.
Why are these cardinals different? Well, given some function f: ℕ → P(ℕ), try to find some subset of ℕ that is not in the range of f~ Thus, show that f cannot be surjective!~
Here is a hint for when you get stuck: if there is some x for which A and B disagree on whether they contain it (x is in A but not in B, or x is in B but not in A), then A and B are different.
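If you want to check your answer against a computer, here is a sketch of the diagonal trick in Python (truncated to 0..9 so we can print things; the particular f is just an example of mine):

```python
# pretend f: N -> P(N); here f(n) = {0, ..., n-1} as an example
f = lambda n: set(range(n))

# the diagonal set: n is in D exactly when n is NOT in f(n)
D = {n for n in range(10) if n not in f(n)}

# D disagrees with f(n) about n, for every n, so D is not in the range of f
print(all((n in D) != (n in f(n)) for n in range(10)))  # True
```

Whatever f you pick, D differs from f(n) at the element n, so f misses D and cannot be surjective.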
Now, that we have gone over the basics of cardinal equality, let's look at something more exciting~ cardinal comparison:
⊰ Cardinal Comparison ⊱
[Definition] For two sets A and B, |A| ≤ |B| iff there exists an injection f: A → B.
≤ is a partial order on cardinal numbers:
≤ is reflexive: x ≤ x for every cardinal x
≤ is transitive: if x ≤ y and y ≤ z, then x ≤ z
≤ is antisymmetric: if x ≤ y and y ≤ x, then x = y
Reflexivity and transitivity are easy exercises for the reader~ (the proof is literally just the same as refl and trans for cardinal equality :p). Antisymmetry, on the other hand, is a lot more difficult!
That cardinal comparison is antisymmetric is known as the Schröder–Bernstein theorem. The proof may be hard to follow, so good luck! :D
Let A and B be sets and suppose there are injections f: A → B and g: B → A. We want to define a bijection h: A → B.
We can see that f and g form chains a₁ ↦{f} b₁ ↦{g} a₂ ↦{f} b₂ ↦{g} ... alternating between elements of A and elements of B. Since g is an injection, for every a ∈ A, we either have that it comes from some b (i.e. g(b) = a), or from nothing. The same holds for f. So every element in A and B belongs to a unique chain ... ↦ * ↦ * ↦ ... defined by f and g. Some of these chains have a sudden start (i.e. some a ∈ A or some b ∈ B that doesn't have an inverse under g or f), and some of these chains are infinite.
For a ∈ A, we can define h(a) based on what kind of chain a belongs to. If the chain a belongs to starts in A, we can set h(a) = f(a). If the chain a belongs to starts with some element in B, we can set h(a) = g⁻¹(a) (i.e. we set h(a) to an element b in B such that g(b) = a). If a belongs to a chain that goes infinitely far left, we do whatever. I'm just going to choose h(a) = f(a).
I'll leave it to you to verify that h is indeed a bijection.
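The chain argument can be turned directly into code. Below is a Python sketch of my own (for finite sets, where both injections are automatically bijections, so only the cyclic-chain case actually occurs; still, the backward-tracing logic mirrors the proof):

```python
def schroeder_bernstein(A, B, f, g):
    """Build a bijection h: A -> B from injections f: A -> B and g: B -> A
    by tracing each a's chain backwards. If the trace dies in A (or the
    chain is cyclic), use f; if it dies in B, use g's inverse."""
    g_inv = {g(b): b for b in B}
    f_inv = {f(a): a for a in A}
    h = {}
    for a in A:
        x, in_A = a, True
        while True:
            if in_A:
                if x not in g_inv:        # backward trace dies in A
                    h[a] = f(a)
                    break
                x, in_A = g_inv[x], False
            else:
                if x not in f_inv:        # backward trace dies in B
                    h[a] = g_inv[a]
                    break
                x, in_A = f_inv[x], True
                if x == a:                # looped back to a: cyclic chain
                    h[a] = f(a)
                    break
    return h

A, B = [0, 1, 2], ["a", "b", "c"]
f = {0: "a", 1: "b", 2: "c"}.get
g = {"a": 1, "b": 2, "c": 0}.get
h = schroeder_bernstein(A, B, f, g)
print(sorted(h) == A and sorted(h.values()) == sorted(B))  # h is a bijection
```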
We can also define strict comparison:
[Definition] For cardinals x and y, x < y iff x ≤ y and x ≠ y.
Since ℵ₀ ≠ 𝔠, as shown in the previous chapter, and n ↦ {n} forms an injection from ℕ to P(ℕ), we have ℵ₀ < 𝔠. In fact, for every cardinal x, there is some cardinal y so that x < y.
Here is some basic terminology for partial orders:
[Definition] Let ≤ be a partial order on some class P and let A ⊂ P be a subclass of P. A maximal element of A is some x ∈ A so that, for all y ∈ A, x ≤ y → x = y. A minimal element of A is some x ∈ A so that, for all y ∈ A, y ≤ x → x = y. The greatest element of A, if it exists, is some x ∈ A so that, for all y ∈ A, y ≤ x. The least element of A, if it exists, is some x ∈ A so that, for all y ∈ A, x ≤ y. An upper bound of A is some x ∈ P so that, for all y ∈ A, y ≤ x. A lower bound of A is some x ∈ P so that, for all y ∈ A, x ≤ y. The infimum of A, denoted inf(A), if it exists, is the greatest lower bound of A. The supremum of A, denoted sup(A), if it exists, is the least upper bound of A.
Given x,y ∈ P, the meet of x and y, often denoted x ∧ y, is the infimum of {x,y}. The join of x and y, often denoted x ∨ y, is the supremum of {x,y}. If every pair of elements in a poset (partially ordered set) has a meet and a join, then that poset is called a lattice. x and y are comparable iff x ≤ y or y ≤ x; x ⊥ y is used to denote that x and y are incomparable. Examples of lattices are: the powerset lattice (P(X),⊂) of any set X, where x ∧ y = x ∩ y and x ∨ y = x ∪ y, linear orders like ℚ, where x ∧ y = min(x,y) and x ∨ y = max(x,y), etc.
Without assuming the axiom of choice, two cardinals needn't have a meet or join. Now you know random useless stuff about posets!!
Oh, and: ℵ₀ is a minimal infinite cardinal. Proof is left as an exercise~
❈ Cardinal Arithmetic ❈
Cardinal numbers are numbers, so it'd make sense if we could do arithmetic on them. And we can!
[Definition] For sets A and B, |A| + |B| is the cardinality of the set A ⊔ B, i.e. the disjoint union of A and B. Members of A ⊔ B are (0,a) and (1,b) for a ∈ A and b ∈ B.
[Definition] For sets A and B, |A| · |B| is the cardinality of the Cartesian product A × B of A and B. Members of A × B are ordered pairs (a,b) for a ∈ A and b ∈ B.
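For finite sets these definitions really do compute sums and products, which we can see directly in Python (the tagging by 0 and 1 is exactly the disjoint-union trick from the definition):

```python
A, B = {"owo", "uwu"}, {10, 3}

# A ⊔ B: tag elements so the two copies can't collide
disjoint_union = {(0, a) for a in A} | {(1, b) for b in B}

# A × B: all ordered pairs
cartesian = {(a, b) for a in A for b in B}

print(len(disjoint_union), len(cartesian))  # 4 4
```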
Arithmetic on finite cardinals works as you'd expect: 6 + 3 = 9 and 12 · 2 = 24. Addition and multiplication on infinite cardinals, however, is actually quite boring, as x+y and xy are simply max(x,y). Well, that is, if you assume AC. Without AC, cardinal arithmetic can actually get very interesting (and weird af).
I am not that familiar with cardinal arithmetic without the axiom of choice, so you'll have to do more research on that on your own if you're interested!
Cardinal addition and multiplication are commutative, x+y = y+x, xy = yx, associative, (x+y)+z = x+(y+z), (xy)z = x(yz), and multiplication distributes over addition, x(y+z) = xy+xz.
Personally, I think cardinal exponentiation is more interesting than addition or multiplication:
[Definition] For sets A and B, |A|^|B| is the cardinality of the set of functions from B to A.
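For finite sets you can enumerate the function set directly and watch the exponent appear (a small sketch of mine; each function B → A is a choice of a value in A for every element of B):

```python
from itertools import product

A, B = ["x", "y", "z"], [0, 1]   # |A| = 3, |B| = 2

# every function B -> A, encoded as a dict
functions = [dict(zip(B, values)) for values in product(A, repeat=len(B))]

print(len(functions), len(A) ** len(B))  # 9 9
```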
For cardinals x and y, if x ≥ 2, then we can prove that x^y > y. 𝔠, the cardinality of the continuum, is equal to 2^ℵ₀ and to ℵ₀^ℵ₀. Also, it's the cardinality of the set of real numbers ℝ. Here's a vid I found that explains why.
I think I might be getting too eepyy to write. I'll write more tomorrow.
[zzz]
Good morning!
Cardinal exponentiation has all the properties you'd think it has: x^y · x^z = x^(y+z) and (x^y)^z = x^(yz). Also, 0⁰ = 1.
There are also infinite sums and infinite products:
[Definition] Σ_(i ∈ I) A_i is the set of tuples (i,a) for i ∈ I and a ∈ A_i. Σ_(i ∈ I) |A_i| = |Σ_(i ∈ I) A_i| is the cardinality of this set.
[Definition] Π_(i ∈ I) A_i is the set of functions f with I as domain and f(i) ∈ A_i for all i. Π_(i ∈ I) |A_i| = |Π_(i ∈ I) A_i| is the cardinality of this set.
Σ_(i ∈ {1,...,k}) x_i = x₁ + ... + xₖ and Π_(i ∈ {1,...,k}) x_i = x₁ · ... · xₖ, so this is a valid extension of sums and products. The sum of y many copies of x, i.e. Σ_(i ∈ I) x for |I| = y, is the same as the product of x and y. The product of y many copies of x, Π_(i ∈ y) x, is equal to x^y. If X is a family of sets, I also sometimes write ΣX and ΠX for Σ_(A ∈ X) A and Π_(A ∈ X) A.
Here is a fun fact that idk where to place in this blog post: a Dedekind infinite cardinal is a cardinal x = |A| for which there exists an injection f: A → A that is not surjective. In other words, x + 1 = x. Without the axiom of choice, infinite Dedekind finite cardinal numbers (aka mediate/Dedekind cardinals; infinite cardinals x with x + 1 > x) can exist, and these cardinals are incomparable to ℵ₀.
Here is a fun question: is the product of any number of non-zero cardinals always non-zero?
Well?~
◈ Axiom of Choice ◈
The axiom of choice (AC) states that the product of any number of non-zero cardinals is always non-zero. I.e. for any family X of non-empty sets, ΠX is non-empty (a member of ΠX is called a choice function for X).
Eh... I didn't really plan what to write in this chapter besides what the axiom of choice is. Oh, well!
...
Wait-
Wait, WHAT THE F---
So.. apparently, you cannot take infinite products of cardinals without assuming the axiom of choice. ℵ₀ is the cardinality of ℕ, but ℕ is not the only set with cardinality ℵ₀. If A has cardinality ℵ₀, then in how many ways does it have cardinality ℵ₀? Well, if a bijection f: A → ℕ exists, then 𝔠 many of those bijections exist. If we take ℵ₀ different sets A₀, A₁, A₂, ... all with cardinality ℵ₀, we have, for each k, multiple bijections f: Aₖ → ℕ. We would expect |Π_(k ∈ ℕ) Aₖ| = ℵ₀^ℵ₀ = 𝔠, but how do we choose a bijection f: Aₖ → ℕ for each natural number k? If we only had a finite amount of Aₖ, this wouldn't be a problem. However, we have an infinite amount of Aₖ... so we need to make an infinite amount of choices. So, we can define the set X = {{f | f: Aₖ → ℕ is a bijection} | k ∈ ℕ}, and a choice function for X gives us a bijection for each Aₖ! But... we need AC to prove such a choice function exists.
...
HOW AM I SUPPOSED TO LEARN AD WHEN THIS F----D UP SH-- HAPPENS?
..sorry for screaming at you.
Luckily, infinite cardinal addition still works as normal without AC... probably.
Maybe.
Nope, it doesn't.
【 Aleph Cardinals 】
As you can see, we have run out of symbols to put on the sides of the chapter titles.
Anyways! So, a well order is an order (A,≤) where:
≤ is a partial order on A.
≤ is total/linear: for all x and y in A, x and y are comparable by ≤.
≤ is well-founded: all non-empty S ⊂ A have a least element.
Well, ig those random facts about posets weren't completely useless.
I might make a blog post about ordinals later, which'll go more in-depth on well-orders and linear orders.
[Definition] An aleph cardinal is an infinite cardinal number |A| for which there exists a well-order (A,≤).
An example of an aleph cardinal is ℵ₀: ℵ₀ = |ℕ| and the usual order on ℕ is a well-order. The n-th aleph cardinal is written as ℵₙ, starting at 0th of course :3
The statement ‘ℵ₁ = 𝔠’ is known as the continuum hypothesis (CH), which is both unprovable and undisprovable. I might make a blog post about forcing in the future! ^^
Although cardinals (without AC) don't need to be linearly ordered, aleph cardinals are linearly ordered and even well-ordered. A proof of this is left as an exercise~ Because, well, of course it is :P
The well-ordering principle states that every set A has a well-order (A,≤). Equivalently, every infinite cardinal is an aleph cardinal.
It turns out: AC and the well-ordering principle are equivalent! Although AC, when written in its original form, seems more obviously true (to me at least), I think the well-ordering principle is a lot more useful.
I might go more in depth on why AC and the well-ordering principle are equivalent in my post about ordinal numbers.
❖ Cofinality ❖
There are two definitions of cofinality: one for cardinals, and one for ordinals. Since this blog-post is about cardinals, I'll give the one for cardinals:
[Definition] The cofinality of a cardinal x = |A| is the minimum cardinality of a partition X of A into sets of cardinality <x.
I often use cf(x) to denote the cofinality of x and cof(x) to denote the class of cardinals with cofinality x, though people also often use cof(x) to denote the cofinality of x. A partition of a set A is a set X such that all members of X are subsets of A, none of the members of X intersect and the union of all members of X is A.
We have cf(0) = 0, cf(1) = 1, cf(2) = 2 and for every finite n > 2, cf(n) = 2. For example, 3 = |{a,b,c}| can be partitioned into {{c},{a,b}}; the partition {{a,b,c}} doesn't work as it has a set {a,b,c} of cardinality 3, which is not < 3.
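For finite cardinals you can brute-force the partition definition in Python (a sketch of my own; for n = 1 no partition into blocks of size < 1 exists, so the code falls back to the convention cf(1) = 1):

```python
def partitions(xs):
    """Yield all partitions of the list xs into non-empty blocks."""
    if not xs:
        yield []
        return
    first, rest = xs[0], xs[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]

def cf(n):
    """Cofinality of a finite cardinal n, via the partition definition."""
    if n == 0:
        return 0
    small = [p for p in partitions(list(range(n)))
             if all(len(block) < n for block in p)]
    # n = 1 has no partition into blocks of size < 1; cf(1) = 1 by convention
    return min(len(p) for p in small) if small else n

print([cf(n) for n in range(6)])  # [0, 1, 2, 2, 2, 2]
```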
[Definition] A cardinal κ is regular iff cf(κ) = κ.
Equivalently, every partition of κ either has cardinality κ or an element of cardinality κ.
We have cf(cf(x)) = cf(x) for every cardinal number x, so cf(x) is always regular.
0, 1 and 2 are examples of finite regular cardinals, though often, 2 is not regarded as regular. ℵ₀ also is a regular cardinal.
A cardinal that is not regular is called a singular cardinal. An example of a singular cardinal is ℵ_ω, which is the sum of ℵₙ for finite n. cf(ℵ_ω) = ℵ₀ as ℵ_ω = Σ_(n ∈ ω) ℵₙ can be partitioned into ℵ₀ many pieces of cardinalities ℵ₀, ℵ₁, ℵ₂, ..., each of which is <ℵ_ω.
Here is the ordinal definition of cofinality, which is (a lot) more often used if you have the axiom of choice:
[Definition] The cofinality of an order (A,≤) is the minimum cardinality of a cofinal subset S ⊂ A. S is cofinal if ∀x ∈ A ∃y ∈ S, x ≤ y.
Assuming the axiom of choice, the cofinality of a cardinal κ is often defined as the cofinality of the minimum ordinal α such that |α| = κ. An ordinal is the order-type of a well-order, and one ordinal is less than another if there exists an order preserving injection from one to the other. cf(κ) with the above definition agrees with the one I gave earlier for infinite κ, but if κ > 0 is finite, then cf(κ) is 1 instead of 2. I might talk more about this in my blog-post about ordinal numbers. For now, we'll just use the partition definition of cofinality.
If x is infinite Dedekind finite, then cf(x) is 2, which I think is kinda funni. We can also have cf(𝔠) = 2 if we omit choice :3 Though with choice, cf(𝔠) is uncountable (i.e. >ℵ₀).
I don't know how to end this blog, so here is some random terminology:
A cardinal x is finite iff x is 0, 1, 2, 3, etc.
A cardinal x is countable iff it's ≤ℵ₀. It is uncountable if it's not ≤ℵ₀.
An aleph cardinal is an infinite well-orderable cardinal.
A strong limit cardinal is a cardinal x for which, for all y < x, we have 2^y < x.
ℶ₀ = ℵ₀, ℶ_(α+1) = 2^(ℶ_α) and ℶ_λ = sup{ℶ_α | α < λ} for limit ordinal λ. A cardinal of the form ℶ_α is called a beth cardinal, and ℶ₁ = 𝔠.
Assuming AC, x⁺ is the next cardinal after x, called the successor cardinal. AC is needed to ensure there exists a cardinal directly after x.
The continuum hypothesis (CH) states that ℵ₁ = 𝔠. The generalized continuum hypothesis (GCH) states that 2^x = x⁺ for all infinite x. Both are unprovable and undisprovable from the usual axioms of ZFC.
Assuming AC, a weak limit cardinal (or simply limit cardinal) is a cardinal x for which, for all y < x, y⁺ < x. Assuming GCH, weak and strong limit cardinals are the same.
A strongly inaccessible cardinal (or simply, inaccessible cardinal) is a regular strong limit cardinal. Assuming AC, a weakly inaccessible cardinal is a regular weak limit cardinal. The existence of an inaccessible cardinal implies the consistency of ZFC, and thus, by Gödel's incompleteness theorems, ZFC cannot prove the existence of an inaccessible cardinal.
That's all I had to say about cardinals (for now), bye!~
18 notes
·
View notes
Text
A partial equivalence relation is a reflexive, symmetric, and transitive binary relation ≡, which allows you to compare some, but not all pairs of elements x, y of the domain. A total equivalence relation is similar, but we also require that for all x, y we have that x ≡ y or y ≡ x.
70 notes
·
View notes
Text
youtube
Class 12 Math | Basics of Relations and functions | JEE Mains, CUET Prep
Understanding the Basics of Relations and Functions is essential for building a strong mathematical foundation, especially for Class 12 students preparing for competitive exams like JEE Mains, CUET, and other entrance tests. In this comprehensive and engaging video tutorial by Sethi Swift Learn, we break down the key concepts of Relations and Functions in the most simplified way to ensure conceptual clarity and long-term retention.
This session introduces you to the core fundamentals including the definition of relations and functions, types of relations (reflexive, symmetric, transitive, equivalence), and various types of functions like one-one, onto, and many-one. We also cover domain, codomain, and range—which are crucial to understanding function mapping and their applications in higher-level math problems.
Whether you're just starting out or revising the topic, this video serves as a step-by-step guide. The concepts are explained with the help of visual examples and solved questions so that you can confidently tackle problems in your board exams as well as in JEE Mains and CUET.
Our expert-led teaching approach focuses on:
Conceptual clarity with real-time examples
NCERT and previous year-based questions
JEE-level problem-solving techniques
Tips and tricks to solve relation & function problems faster
By the end of this session, you'll be able to identify and differentiate various types of relations and functions and apply this knowledge in problem-solving efficiently.
This video is ideal for:
Class 12 Students (CBSE & State Boards)
JEE Mains Aspirants
CUET and Other Competitive Exam Candidates
Don't forget to like, share, and subscribe to our channel for more such subject-wise breakdowns and in-depth lessons. Start mastering Relations and Functions with confidence today!
📌 Subscribe for more: Sethi Swift Learn 📌 Watch the full video here: https://youtu.be/yaNC_1ZcwwE?si=JHLJ6RQRSv_fxw94
0 notes
Text
Homework 3 CSCI 301 solved
Use the method of proof by contradiction. 1. [4 points] Prove that √6 is irrational. 2. [4 points] If a, b ∈ ℤ, then a² − 4b − 2 ≠ 0. 3. [4 points] Suppose A ≠ Ø. Since Ø ⊆ A×A, the set R = Ø is a relation on A. Is R reflexive? Symmetric? Transitive? If a property does not hold, say why. 4. [4 points] Define a relation R on ℤ as xRy if and only if 4 | (x + 3y). Prove R is an equivalence…
0 notes
Text
Pat Ballew adds "Equal The mathematical use of equal means that two things are related in a transitive, symmetric, and reflexive way in relation to some specified properties. The meaning is rooted in the Latin word aequus for level. Chaucer's 1400 Treatise on the Astrolabe uses the term. Cajori's History of Mathematical Notation gives many early symbols used in the West, both before and after Recorde wrote the now ubiquitous "=" for the symbol in 1557, but even for another century, many mathematicians used no symbol at all, or used a collection of other symbols. In fact, after 1557, Recorde's symbol did not appear again in print until 1618. Viete, writing in 1571, used the same symbol for difference (the absolute value of a-b) and this symbol was widely repeated in this use across Europe. Another early symbol for equality was the script conjunction of ae."
When Was The Equals Sign (=) Invented?
The equals sign (=) was introduced by Robert Recorde in 1557 to denote equality between mathematical expressions.
More: https://riverainventions.com/who-invented-mathematical-symbols/
35 notes
·
View notes
Text
Q10 Exercise1.1 I Class 12 Maths NCERT Chapter 1 Relations and Functions | NCERT solutions
NCERT Class 12
Chapter: Relations and Functions
Exercise 1.1
Question 10: Give an example of a relation. Which is
(i) Symmetric but neither reflexive nor transitive.
(ii) Transitive but neither reflexive nor symmetric.
(iii) Reflexive and symmetric but not transitive.
(iv) Reflexive and transitive but not symmetric.
(v) Symmetric and transitive but not reflexive.
youtube
View On WordPress
#edusquaremaths#equivalence#equivalence classes#equivalence relation#equivalence relation class 12#equivalence relations#reflexive#reflexive relation#reflexive relation class 12#reflexive symmetric transitive relations#reflexive symmetric transitive relations class 12#reflexive symmetric transitive relations in hindi#symmetric#symmetric reflexive transitive relations#symmetric relation#transitive#transitive relation#Youtube
0 notes
Text
Equivalence relation
#equivalencerelations Definition 🙂 An equivalence relation on a set A, is a relation “R” in A which is #reflexive, #symmetric, and #transitive.
Reflexive relation: If R is a relation on a set A, then R is reflexive iff for all a ∈ A, (a, a) ∈ R. For example, if 1 ∈ A, then (1, 1) must belong to R. Symmetric relation (mirror) Let R be a relation in set…
View On WordPress
0 notes
Text
Word rhyming is an equivalence relation
Take the definition that two words rhyme if and only if they end with the same sound.
Reflexive: Every word rhymes with itself.
Well, if two words are the same, all their sounds have to match, including the final one, so this point holds.
Symmetric: If A rhymes with B, B rhymes with A.
This one’s really hard to prove, because it’s so obvious. If A rhymes with B, then the final sounds of A and B are the same. They will still be the same if we swap the words around. Please don’t make me explain it more, I’ll cry.
Transitive: If A rhymes with B and B rhymes with C, then A rhymes with C.
Call the sound at the end of word A ‘&’. If A rhymes with B, then B also has to end with ‘&’. If B rhymes with C, and B ends with ‘&’, then C also has to end with ‘&’. This means that both A and C end with ‘&’, and so A rhymes with C.
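The three checks above can also be sketched in code. Assuming the simplification that a word's "final sound" is just its last two letters (a crude stand-in for real phonetics, and my own assumption), rhyming becomes equality of a computed value, which makes all three properties automatic:

```python
from collections import defaultdict

def final_sound(word):
    # crude stand-in for the final sound: the last two letters
    return word[-2:]

def rhymes(a, b):
    return final_sound(a) == final_sound(b)

words = ["cat", "hat", "bat", "dog", "fog"]
assert all(rhymes(w, w) for w in words)                          # reflexive
assert all(rhymes(b, a) for a in words for b in words
           if rhymes(a, b))                                      # symmetric
assert all(rhymes(a, c) for a in words for b in words
           for c in words if rhymes(a, b) and rhymes(b, c))      # transitive

# the equivalence classes are exactly the fibers of final_sound
classes = defaultdict(set)
for w in words:
    classes[final_sound(w)].add(w)
assert classes["at"] == {"cat", "hat", "bat"}
assert classes["og"] == {"dog", "fog"}
```

More generally, any relation of the form "f(a) = f(b)" is automatically an equivalence relation, since equality itself is one.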
There we go. The argument no one cares about but me has been made. Rhyme is an equivalence relation. You can all go home.
405 notes
·
View notes
Text
If your relation(ship) is
Reflexive
Symmetric
Transitive
It's not romantic, it's an equivalence
105 notes
·
View notes
Text
Forcing II: Electric Boogaloo
In this part, I'll explain more about forcing. I'll explain things that I think are important for a good understanding of forcing. Here is a list of things that I'll discuss:
The κ-chain condition;
κ-closed forcing notions;
Forcing equivalence;
Smth smth boolean algebra smth;
More about Cohen forcing;
Lévy collapse;
Sum forcing & product forcing.
Maybe I'll explain more than what's listed above, maybe I'll explain less ¯\(˙˘˙)/¯ Idk, I just started writing this.
Anyways, I hope this will be fun!! In future parts, I might try to explain symmetric extensions, iterated forcing and Namba forcing, though I don't really understand them myself yet.
Future Lucy here. To keep this blog post "short", I decided I'd only explain what different properties forcing conditions can have and how those properties affect the forcing. I won't go into detail about specific forcing notions, meaning that I won't explain Cohen forcing more, Lévy collapse, sum forcing or product forcing. Those might get their own blog post.
A forcing poset is a structure (P,≤,1) with a set P, a binary relation ≤ and an element 1 ∈ P, such that:
≤ is reflexive ∀x ∈ P x ≤ x;
≤ is transitive ∀x,y,z ∈ P x ≤ y ∧ y ≤ z → x ≤ z;
≤ is antisymmetric ∀x,y ∈ P x ≤ y ∧ y ≤ x → x = y;
1 is the greatest element ∀x ∈ P x ≤ 1;
P enjoys the splitting condition ∀x ∈ P ∃y,z ∈ P y ≤ x ∧ z ≤ x ∧ ∄w ∈ P w ≤ y ∧ w ≤ z.
The first three axioms (reflexivity, transitivity & antisymmetry) describe a partially ordered set, which needn't have a greatest element nor enjoy the splitting condition.
Members of a forcing poset are called forcing conditions. x ≤ y should be read as x extends y. Two forcing conditions x and y are incompatible, denoted x ⊥ y, if ∄z z ≤ x ∧ z ≤ y. A set O ⊂ P is open if ∀x ∈ O ∀y ∈ P y ≤ x → y ∈ O. A set D ⊂ P is dense if ∀x ∈ P ∃y ∈ D y ≤ x. A set A ⊂ P is an antichain¹ if ∀x,y ∈ A x ≠ y → x ⊥ y. A set C ⊂ P is a chain if ∀x,y ∈ C x ≤ y ∨ y ≤ x. This terminology shouldn't be new to you if you've read my previous post on forcing (except for chains, those are new).
¹Usually, when P is a partially ordered set, an antichain is a set of pairwise incomparable elements. x and y are incomparable if neither x ≤ y nor y ≤ x. Incompatibility is a stronger condition which is more useful in forcing, so a subset of a forcing poset P is called an antichain if it is a set of pairwise incompatible elements. When the distinction is important, a set of pairwise incompatible elements is sometimes called a strong antichain, but I'll use antichain to refer to those sets.
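The poset axioms and the splitting condition can be checked on a finite truncation of the Cohen poset. A sketch (my own toy example): binary strings of length ≤ 3, ordered by "s ≤ t iff s end-extends t". The order axioms hold everywhere, while splitting necessarily fails at the leaves — one way to see that a genuine forcing poset must be infinite:

```python
from itertools import product

# finite truncation of 2^<ω: binary strings of length <= 3;
# s <= t iff t is an initial segment of s (s extends t)
P = [''.join(bits) for n in range(4) for bits in product('01', repeat=n)]

def leq(s, t):
    return s.startswith(t)

def incompatible(x, y):
    return not any(leq(w, x) and leq(w, y) for w in P)

def splits(x):
    return any(leq(y, x) and leq(z, x) and incompatible(y, z)
               for y in P for z in P)

assert all(leq(x, x) for x in P)                               # reflexive
assert all(leq(x, z) for x in P for y in P for z in P
           if leq(x, y) and leq(y, z))                         # transitive
assert all(x == y for x in P for y in P
           if leq(x, y) and leq(y, x))                         # antisymmetric
assert all(leq(x, '') for x in P)                              # '' is the greatest element
assert all(splits(x) for x in P if len(x) < 3)                 # splitting holds below the leaves
assert not any(splits(x) for x in P if len(x) == 3)            # ...but never at a leaf
```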
A set G ⊂ P is a generic filter if:
1 ∈ G;
∀x ∈ G ∀y ∈ P x ≤ y → y ∈ G;
∀x,y ∈ G ∃z ∈ G z ≤ x ∧ z ≤ y;
For all dense D ⊂ P in the ground model V, G ∩ D ≠ ∅.
The ground model is the model of set theory that we start from. The forcing poset is in the ground model, while the generic filter typically lies outside of it. The model V[G] is called the forcing extension, which results from adding the generic object G to V in a specific way. All objects, except for the generic filter, lie inside of the ground model unless otherwise specified.
A maximal antichain is an antichain A ⊂ P such that, for every x ∈ P, there is some y ∈ A so that x and y are compatible. In other words, A cannot be extended to a larger antichain B. It is called a maximal antichain as a maximal antichain is a maximal element in the poset of antichains of P ordered under inclusion. Maximal elements shouldn't be confused with greatest elements: x is maximal if ∄y x < y while x is the greatest element if ∀y y ≤ x. By Zorn's lemma, every antichain can be extended to a maximal antichain.
Maximal antichains can be seen as partitions of unity: all elements of a maximal antichain are pairwise incompatible, while they "add up" to the greatest element 1. If A is a maximal antichain, then every generic filter G on P intersects A at exactly one point; a proof of this is left as an exercise to the reader.
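In the finite toy model of binary strings of length ≤ 3 under end-extension (my own example — not a real forcing poset, since splitting fails at the leaves), the filter generated by a single branch meets every maximal antichain exactly once, which is the finite shadow of the claim above:

```python
from itertools import product, combinations

P = [''.join(bits) for n in range(4) for bits in product('01', repeat=n)]

def leq(s, t):
    return s.startswith(t)

def compatible(x, y):
    # in a tree, two conditions are compatible iff they are comparable
    return leq(x, y) or leq(y, x)

def is_maximal_antichain(A):
    pairwise = all(not compatible(x, y) for x, y in combinations(A, 2))
    covers = all(any(compatible(x, a) for a in A) for x in P)
    return pairwise and covers

branch = {'', '1', '10', '101'}   # the filter generated by the leaf '101'
for size in (1, 2, 3, 4):
    for A in combinations(P, size):
        if is_maximal_antichain(A):
            assert len(branch & set(A)) == 1
```

The key point: the leaf '101' must be comparable with some member of any maximal antichain, and two distinct members of an antichain can never both be prefixes of the same leaf.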
Given a forcing condition p, ↓p = {q ∈ P | q ≤ p} forms itself a forcing poset, with the order ≤ of P on its elements and with p as the greatest element. Since ↓p is a forcing poset, we can relativize the notion of a dense set and an antichain to ↓p: D ⊂ ↓p is dense below p if D is dense in (↓p,≤,p) and A ⊂ ↓p is a (maximal) antichain below p if A is a (maximal) antichain in (↓p,≤,p). If G is a generic filter on P and p ∈ P, then G must intersect every set in the ground model that is dense below p, and it intersects every set that is a maximal antichain below p exactly once.
The κ-c.c and κ-Closed Forcing
The κ-chain condition is as follows:
[Definition] For a regular cardinal κ, a forcing poset P has the κ-chain condition (κ-c.c) if every antichain in P has cardinality <κ. P has the countable chain condition (c.c.c) if every antichain in P is countable.
The c.c.c is equivalent to the ω₁-c.c. It can be verified easily that the ω-chain condition is impossible due to the splitting condition. I think of the κ-chain condition as restricting the width of the forcing poset. The reason why κ is chosen to be regular is because, for a singular cardinal λ, the λ-chain condition is equivalent to the λ⁺-chain condition, and λ⁺, the successor cardinal of λ, is regular. It's easy to verify that for κ ≤ λ, the κ-chain condition implies the λ-chain condition. And if |P| < κ, then P has the κ-chain condition. The κ-chain condition is useful as it limits what cardinals can be collapsed in the forcing extension.
[Theorem] Let κ be a regular cardinal, let P be a forcing poset, suppose P has the κ-c.c, let G be a generic filter on P, let A,B ∈ V be sets in the ground model and let f: A → B be a function in the forcing extension V[G]. Then, there is a function F: A → P(B) in the ground model such that for every a ∈ A, |F(a)| < κ and f(a) ∈ F(a).
Thus, functions in the forcing extension (with domain and codomain in the ground model) can be approximated with functions in the ground model given that P satisfies the κ-chain condition. A version of this theorem with the c.c.c was stated in my first post about forcing. A proof of the more general form (using the κ-c.c rather than the c.c.c) is left as an exercise to the reader. Assume proofs of any further theorems are left as an exercise unless otherwise specified.
cf(α) for an ordinal α denotes the cofinality of α, i.e. the smallest cardinality of a cofinal subset of α.
[Theorem] Let κ ≤ λ ≤ μ be uncountable cardinals with κ and λ regular and cf(μ) = λ. Let P be a forcing poset with the κ-c.c and let G be a generic filter on P. Then, cf^V[G](μ), the cofinality of μ in the forcing extension, is (still) λ.
This means that forcing posets with the κ-c.c do not collapse cofinalities ≥κ. This implies that any (regular) cardinal λ ≥ κ in the ground model stays a (regular) cardinal in the forcing extension. Forcing posets with the c.c.c do not collapse any cardinals or cofinalities.
Another desirable property a forcing poset can have is being κ-closed.
[Definition] Given a regular cardinal κ, a forcing poset P is κ-closed if for all chains C ⊂ P with |C| < κ, there is some lower-bound² of C in P.
²A lower-bound of a set X ⊂ P is some p ∈ P such that, for all q ∈ X, we have p ≤ q. An upper-bound is the same thing, but with q ≤ p instead. Least and greatest elements of X refer to lower-bounds and upper-bounds of X that are themselves elements of X.
It can be easily verified that every forcing poset is ω-closed. If κ ≤ λ and P is λ-closed, then it must be κ-closed as well. An example of an ω₁-closed forcing poset is the forcing poset of countable partial functions from ω₁ to 2 ordered under extension. I think of κ-closed forcing posets as forcing posets that are long.
[Theorem] Let κ be a regular cardinal and α < κ be some ordinal. Let P be a κ-closed forcing poset and let G be a generic filter on P. Let s = (a_i | i < α) be a length α sequence in the forcing extension V[G] with elements a_i ∈ V in the ground model. Then, s is already in the ground model.
The κ-chain condition restricts how much P can add above the point κ, while being κ-closed restricts how much P can add below κ. For example, if P is ω₁-closed then it cannot add any more reals (subsets of ω) as reals can be encoded as length ω sequences of 0s and 1s. κ-closed forcing posets cannot collapse any cofinalities below κ:
[Theorem] (i) Let κ ≤ λ be ordinals with κ a regular cardinal and cf(λ) ≥ κ. Let P be a κ-closed forcing poset and let G be a generic filter on P. Then, the cofinality of λ in the forcing extension, cf^V[G](λ), is (still) at least κ. (ii) Let κ, λ and μ be ordinals with κ ≤ λ regular cardinals and cf(μ) = κ. Let P be a λ-closed forcing poset and let G be a generic filter on P, then the cofinality of μ in the forcing extension, cf^V[G](μ), is (still) κ.
In (i), cf(λ) can still decrease, but only to a certain point (that being κ).
Forcing Equivalence and BA's
Forcing by two different forcing posets might result in the same forcing extension. For example, if P = fin(ω, 2) is the forcing poset of finite partial functions from ω to 2 ordered under extension and Q = 2^<ω is the forcing poset of finite sequences of 0s and 1s ordered under extension, then for G a generic filter over P, {p ∈ 2^<ω | p ∈ G} is a generic filter over Q, and for H a generic filter over Q, {p ∈ fin(ω,2) | ∃q ∈ H q↾dom(p) = p} is a generic filter over P. This observation gives rise to the notion of forcing equivalence:
[Definition] Two forcing posets P and Q are forcing equivalent, denoted P ≈ Q, iff there exists a P-name σ ∈ V^P and Q-name τ ∈ V^Q such that:
P ⊩ 'σ is a generic filter on &Q';
Q ⊩ 'τ is a generic filter over &P';
P ⊩ '&τ^σ = &G';
Q ⊩ '&σ^τ = &G'.
For x ∈ V in the ground model, I use &x = {(&y,1) | y ∈ x} to denote the name of x, and I use &G = {(&p,p) | p ∈ P} to denote the name of the generic filter. For a forcing condition p, a formula φ and names τ₁ .. τₙ, p ⊩ φ(τ₁ .. τₙ) means that, for any generic G that contains p, V[G] ⊧ φ(τ₁^G .. τₙ^G). For a name σ and some generic filter G, σ^G = {τ^G | ∃p ∈ G (τ,p) ∈ σ} is the evaluation of σ with respect to G. I sometimes write P ⊩ φ(τ₁ .. τₙ) for 1 ⊩ φ(τ₁ .. τₙ) where 1 is the greatest element of the forcing poset P. V^P ⊂ V denotes the class of P-names. The above four statements are thus equivalent to:
Given a generic filter G on P, σ^G is a generic filter on Q;
Given a generic filter H on Q, τ^H is a generic filter on P;
Given a generic filter G on P, τ^σ^G = G;
Given a generic filter H on Q, σ^τ^H = H.
Since we have an internal definition of forcing (as discussed in my previous post), we can verify if P and Q are forcing equivalent in V without having to look at the generic filters outside of V.
[Theorem] Let (P,≤,1) be a forcing poset and let D ⊂ P be a dense set. Then, (P,≤,1) and (D∪{1},≤,1) are forcing equivalent.
Of course, a proof of this theorem is left as an exercise to the reader.
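The fin(ω, 2) / 2^<ω example from before fits this theorem: the finite sequences are exactly the partial functions whose domain is an initial segment of ω, and they sit densely inside the partial functions. A finite-truncation sketch (restricting domains to {0, 1, 2}; my own toy check):

```python
from itertools import product, combinations

N = 3
# finite partial functions {0..N-1} ⇀ {0,1}, coded as frozensets of (arg, value) pairs
parts = [frozenset(zip(dom, vals))
         for r in range(N + 1)
         for dom in combinations(range(N), r)
         for vals in product((0, 1), repeat=r)]

# the "sequences": partial functions whose domain is an initial segment
seqs = [p for p in parts if {i for i, _ in p} == set(range(len(p)))]

def leq(p, q):
    # p extends q as a function iff q ⊆ p as a set of pairs
    return q <= p

# density: every condition can be extended to a sequence
assert all(any(leq(s, p) for s in seqs) for p in parts)
```

Any condition with gaps in its domain can be filled in arbitrarily, so the sequences are dense, and the theorem gives the forcing equivalence.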
[Definition] A boolean algebra is a structure (A,∧,∨,¬,0,1) with a set A, binary operators ∧ and ∨ called meet and join resp., a unary operator ¬ called complement and elements 0,1 ∈ A called bottom and top resp. that satisfies the following axioms:
Associativity, (a ∧ b) ∧ c = a ∧ (b ∧ c) and (a ∨ b) ∨ c = a ∨ (b ∨ c);
Commutativity, a ∧ b = b ∧ a and a ∨ b = b ∨ a;
Absorption, a ∨ (a ∧ b) = a and a ∧ (a ∨ b) = a;
Identity, a ∧ 1 = a and a ∨ 0 = a;
Distributivity, a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c) and a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c);
Complements, a ∧ ¬a = 0 and a ∨ ¬a = 1.
An example of a boolean algebra is (P(ω),∩,∪,c,∅,ω), with subsets of ω as elements, intersection as meet, union as join, complement as complement, the empty set as bottom element and ω as top element.
Given a boolean algebra A, we can define a partial order ≤ on A as follows: a ≤ b iff a = a ∧ b (iff a ∨ b = b). In this partial order, 0 is the smallest element and 1 is the greatest element. From the partial order ≤ on A, we can reconstruct the boolean algebra by taking a ∧ b to be the greatest lower-bound of a and b, a ∨ b to be the least upper-bound of a and b, ¬a to be the unique element such that a ∧ ¬a = 0 and a ∨ ¬a = 1, 0 to be the least element and 1 to be the greatest element. Given a set X in a poset A, we can define ⋀X to be the greatest lower-bound of X and ⋁X to be the least upper-bound of X. In a boolean algebra, ⋀{x,y} = x ∧ y, ⋁{x,y} = x ∨ y, ⋀∅ = 0 and ⋁∅ = 1. If for any subset X of A, ⋀X and ⋁X exists, then A is called a complete boolean algebra.
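The powerset example can be checked directly, including the fact that the derived order coincides with inclusion. A sketch using P({0,1,2}) (my own choice of carrier set):

```python
from itertools import combinations

S = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(4) for c in combinations(S, r)]

meet = lambda a, b: a & b
join = lambda a, b: a | b
comp = lambda a: S - a

# absorption: a ∨ (a ∧ b) = a and a ∧ (a ∨ b) = a
assert all(join(a, meet(a, b)) == a and meet(a, join(a, b)) == a
           for a in subsets for b in subsets)
# complements: a ∧ ¬a = 0 and a ∨ ¬a = 1
assert all(meet(a, comp(a)) == frozenset() and join(a, comp(a)) == S
           for a in subsets)
# the derived order (a ≤ b iff a = a ∧ b) is exactly inclusion ⊆
assert all((a == meet(a, b)) == (a <= b) for a in subsets for b in subsets)
```

Since every family of subsets has an intersection and a union inside P(S), this boolean algebra is complete.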
One theorem regarding forcing posets that I like is the following:
[Theorem] Every forcing poset (P,≤,1) is forcing equivalent to a complete boolean algebra (A,∧,∨,¬,0,1) with its bottom element (0) removed.
I encourage you to prove this theorem yourself (it might take ~20 minutes to an hour, idk ¯\(˙˘˙)/¯). I'll also give a proof as a different proof of this theorem might be insightful.
[Proof] For any set of forcing conditions X, we want a forcing condition ⋁X that forms a supremum of X and, if all elements of X are compatible, a forcing condition ⋀X that forms an infimum of X. We also want a way to negate forcing conditions that aren't 1. My first idea was to use sets of forcing conditions as the new forcing conditions, as sets have intersections, unions and complements. My plan was to use open sets, but that doesn't quite work: if {p,q} is a maximal antichain in P, then ↓p ∪ ↓q is an open set and 1 ∉ ↓p ∪ ↓q. ↓1 and ↓p ∪ ↓q are equivalent (a generic filter on P intersects one iff it intersects the other), yet they're not equal. However, we can see that the open set ↓p ∪ ↓q sort-of "surrounds" 1: every open set that intersects ↓1 must also intersect ↓p ∪ ↓q. The idea is now to use open sets that contain all points it surrounds, as this'd mean that you have a single set that is equivalent to any given set. So we allow ↓1 but not ↓p ∪ ↓q as ↓p ∪ ↓q surrounds 1 yet does not contain 1. The concept of "surrounding a point" gives rise to the definition of a regular open set:
[Definition] A set R is regular open iff R is equal to the interior of its closure. Equivalently, R is the interior of a closed set.
In a topological space (S,τ), a closed set is the complement of an open set. The interior of a set X, denoted int(X), is the largest open set that is included in X. The closure of X, denoted X̅ (X with a bar on top) or cl(X), is the smallest closed set that includes X.
We're working in the topological space P where open sets are downwards closed sets. A set C ⊂ P is closed iff C is upwards closed, i.e. ∀p ∈ C ∀q ∈ P p ≤ q → q ∈ C. The closure of a set X ⊂ P is the upwards closure ↑X = {p | ∃q ∈ X q ≤ p}. The interior of X ⊂ P is the set int(X) = {p | ∀q ≤ p q ∈ X}. Thus, a regular open in P is a set R such that R = int(R̅) = {p | ∀q ≤ p ∃r ∈ R r ≤ q}. For example, as ∀s ≤ 1 ∃r ∈ ↓p ∪ ↓q r ≤ s and 1 ∉ ↓p ∪ ↓q (where {p,q} is the maximal antichain from before), ↓p ∪ ↓q is not a regular open. int(cl(↓p ∪ ↓q)) = ↓1 and ↓1 is regular open.
Let RO be the set of all regular open subsets of P. (RO,⊂) forms a poset. For a family R of regular opens, ⋀R = ⋂R and ⋁R = {p | ∀q ≤ p ∃r ∈ ⋃R r ≤ q}, and for a regular open R, ¬R = {p | ∀q ≤ p q ∉ R}. R ∧ S = ⋀{R,S} = R ∩ S and R ∨ S = ⋁{R,S} = {p | ∀q ≤ p ∃r ∈ R ∪ S r ≤ q}. (RO,∧,∨,¬,∅,P) forms a complete boolean algebra.
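The regularization operator int(cl(·)) can be computed explicitly in a small truncation of the tree (binary strings of length ≤ 2; my own toy example). It confirms the two facts used above: ↓p is regular open, while ↓p ∪ ↓q for a two-element maximal antichain {p, q} regularizes to the whole poset ↓1:

```python
from itertools import product

P = [''.join(bits) for n in range(3) for bits in product('01', repeat=n)]

def leq(s, t):
    return s.startswith(t)

def down(p):
    return {q for q in P if leq(q, p)}

def regularize(X):
    # int(cl(X)) = { p : every q <= p has an extension inside X }
    return {p for p in P
            if all(any(leq(r, q) for r in X) for q in P if leq(q, p))}

U = down('0') | down('1')        # open, but 1 = '' is "surrounded" yet missing
assert '' not in U
assert regularize(U) == set(P)            # int(cl(U)) = ↓1, the whole poset
assert regularize(down('0')) == down('0') # ↓p itself is regular open
```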
So now we need to show that (P,≤,1) and (RO\{∅},⊂,P) are forcing equivalent. Given a generic filter G on P, we can define the generic filter H on RO\{∅} as the set of all regular opens that intersect G. We thus define the P-name σ = {(R,p) | p ⊩ "R ∈ &(RO\{∅}) and intersects &G = {(&p,p) | p ∈ P}"}. Conversely, given a generic filter H on RO\{∅}, we can define the generic filter G on P as the set of all forcing conditions p such that H contains a regular open R with ∀q ∈ R q ≤ p. We can thus define RO\{∅}-name τ = {(p,R) | R ⊩ "p ∈ &P and ∃S ∈ &H ∀q ∈ S q ≤ p"} where &H = {(&R,R) | R ∈ RO\{∅}}. We can see that τ^σ^G = G and σ^τ^H = H for all generic G on P and H on RO\{∅}. ∎
That took a long time to write -.- I'll take a break now, maybe I'll write about canonical names next. Or maybe mixing.
Mixing
You can mix things.
[Theorem] Let P be a forcing poset, let p ∈ P be a forcing condition, let φ be a formula and let τ₁ .. τₙ be P-names. Then, if p ⊩ ∃x φ(x,τ₁ .. τₙ), then ∃σ p ⊩ φ(σ,τ₁ .. τₙ).
This post is getting too long. Good night -.- Dream of fish <>< Yum
3 notes
·
View notes
Text
An innocuous little math thing that I find really neat for some reason: a partial equality relation (symmetric and transitive) almost implies a full equality relation (symmetric, transitive, and reflexive). If a=b, then by symmetry b=a, and by transitivity a=a, and then you've got reflexivity. The only way to have a partial equality relation that is not a full equality relation is if there exists some element that just doesn't appear in the relation at all
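A quick sketch of both halves of this observation (my own example): a symmetric, transitive relation on {1, 2, 3} that omits 3 entirely fails reflexivity on the ambient set, yet is reflexive on its own field:

```python
A = {1, 2, 3}
R = {(1, 1), (1, 2), (2, 1), (2, 2)}   # the element 3 never appears

assert all((b, a) in R for (a, b) in R)              # symmetric
assert all((a, c) in R for (a, b) in R               # transitive
           for (b2, c) in R if b == b2)
assert not all((a, a) in R for a in A)               # not reflexive on A

# on the field of R, reflexivity is forced by symmetry + transitivity
field = {x for pair in R for x in pair}
assert all((a, a) in R for a in field)
```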
50 notes
·
View notes
Text
Relations and Functions: Class 12 Maths Video Tutorials — Mathyug
Are you a Class 12 student gearing up for your Maths exams? Do the concepts of relations and functions seem daunting? Fret not! Ashish Sir from MathYug has designed an exceptional self-study course that covers everything you need to master these topics. This comprehensive course includes video tutorials, detailed notes, and assignments that ensure thorough understanding and practice.
What You’ll Learn
Ashish Sir’s course on MathYug is meticulously structured to provide a deep understanding of relations and functions, a crucial component of Class 12 Mathematics. The course is divided into two main parts:
1. Relations
In this segment, you will explore various types of relations. Here’s a brief overview of what you’ll learn:
Empty Relations: Understanding the relation in which no element of the set is related to any element of the set.
Universal Relations: Learning about the relation in which every element of the set is related to every element of the set.
Trivial Relations: Noting that the empty and universal relations are together called trivial relations.
Reflexive Relations: Understanding relations where every element is related to itself.
Symmetric Relations: Exploring relations where if one element is related to another, the second element is also related to the first.
Transitive Relations: Learning about relations where if one element is related to a second, and the second to a third, then the first is related to the third.
Equivalence Relations: Discovering relations that are reflexive, symmetric, and transitive.
Equivalence Classes: Understanding the partition of a set into disjoint subsets where each subset is an equivalence class.
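The relation types listed above are easy to exhibit concretely. A sketch (my own small examples, not course material): the empty and universal relations on a three-element set, and the equivalence classes of "same parity":

```python
A = {1, 2, 3}
empty = set()
universal = {(a, b) for a in A for b in A}

reflexive = lambda R: all((a, a) in R for a in A)
symmetric = lambda R: all((b, a) in R for (a, b) in R)
transitive = lambda R: all((a, c) in R for (a, b) in R
                           for (b2, c) in R if b == b2)

# the empty relation is (vacuously) symmetric and transitive, but not reflexive
assert not reflexive(empty) and symmetric(empty) and transitive(empty)
# the universal relation is an equivalence relation
assert reflexive(universal) and symmetric(universal) and transitive(universal)

# equivalence classes of "same parity" partition A into disjoint subsets
R = {(a, b) for a in A for b in A if a % 2 == b % 2}
classes = {frozenset(b for b in A if (a, b) in R) for a in A}
assert classes == {frozenset({1, 3}), frozenset({2})}
```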
To ensure comprehensive understanding, the course provides practice questions from diverse sources such as NCERT Textbook exercises, NCERT Examples, Board’s Question Bank, RD Sharma, and NCERT Exemplar. You can download the PDF of assignments within the course for additional practice.
2. Functions
The second part of the course focuses on functions, including:
One to One and Onto Functions: Understanding functions where each element of one set is paired with a unique element of another set, and functions that map elements from one set onto every element of another set.
Composite Functions: Learning how to combine two functions to form a new function.
Inverse of a Function: Exploring how to find the function that reverses the effect of the original function.
Similar to the relations segment, this part also includes practice questions from various reputable sources. The assignments are designed to reinforce your understanding and are available for download within the course.
Sample Videos
To give you a glimpse of the quality of instruction, Ashish Sir has shared sample videos covering some of these essential topics. Watching these will help you understand his teaching methodology and the depth of content provided.
youtube
youtube
youtube
youtube
If you’re aiming to excel in your Class 12 Maths exams, Ashish Sir’s course on MathYug is your go-to resource. With detailed video tutorials, comprehensive notes, and extensive assignments, you will gain a solid understanding of relations and functions. Start your journey towards mastering these crucial topics today!
For more information and to access the course, visit MathYug. Happy learning!
#relations and functions#relations and functions class 12#class 12 maths#class 12 maths ncert solutions#relations and functions class 12 tutorials#relations and functions class 12 videos#mathyug#Youtube
0 notes
Text
youtube
Class 12 Maths | Relations and functions | JEE Mains, CUET Prep
Understanding Relations and Functions is crucial for Class 12 students preparing for JEE Mains, CUET, and board exams. In this comprehensive session, we dive deep into the key concepts, definitions, types of relations, types of functions, domain, range, and codomain, along with practical problem-solving techniques.
This video is designed to help you grasp the fundamental principles and apply them effectively in competitive exams. With step-by-step explanations, solved examples, and important shortcut tricks, this session will strengthen your conceptual understanding and improve your problem-solving speed.
🔥 What You Will Learn in This Session?
✔️ Basics of Relations and Functions
✔️ Types of Relations – Reflexive, Symmetric, Transitive, and Equivalence Relations
✔️ Types of Functions – One-One, Onto, Into, and Many-One Functions
✔️ Domain, Codomain, and Range with Examples
✔️ Solved Problems with Step-by-Step Explanations
✔️ Important JEE Mains & CUET Questions with Tricks
This session is ideal for students preparing for competitive exams as it covers all the essential topics required to ace the JEE Mains and CUET with confidence. Whether you are revising for your boards or gearing up for entrance exams, this video provides clarity, accuracy, and effective strategies to solve problems efficiently.
📌 Watch Now: https://youtu.be/vXv8TspXPqw?si=qQUBnP3SWVKWzJfM
Don't forget to like, share, and subscribe for more expert guidance! 🚀 Stay ahead in your preparation with our high-quality content.
0 notes