#Peano arithmetic
Text
Thanks! Hmm... I'm not sure that dropping this (P4) allows quite the branching that you were looking for in the OP: omitting that axiom allows a number to have multiple predecessors, while I think what you were looking for is to allow a number to have multiple successors? This is why I suggested making S a relation symbol (with an extra condition so that there is always at least one successor) rather than a function symbol. Either change could be interesting though.
How might we define addition if numbers can have multiple successors? One question is whether we want addition to still be a function, or whether it too should be multi-valued. If addition is to be single-valued, then instead of x+S(y)=S(x+y), I suppose we would say something about some logical combination of the statements S(y,z), S(x+y,w), x+z=w? Maybe, "for all z such that S(y,z), there exists a w such that S(x+y,w) and x+z=w"? Or, a more neutral and weaker statement, "(for all x,y) there exist z,w such that x+z=w, S(x+y,w), and S(y,z)". Oh, wait: if we are assuming that addition is a function, then having x+z=w leaves no separate choice of which w to use... So I guess it would just be a combination of the statements S(y,z) and S(x+y,x+z). So, either, "for all x,y, there exists a z such that S(y,z) and S(x+y,x+z)", or, "for all x,y, for all z, if S(y,z), then S(x+y,x+z)". I'm not sure which of these would be more interesting.
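In symbols (just restating those two final candidates):
∀x,y, ∃z, (S(y,z) ∧ S(x+y, x+z))
∀x,y, ∀z, (S(y,z) → S(x+y, x+z))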
There should be divergent number lines. Sure, you have the good old-fashioned one, two, three… etc. but after three you can choose to go to four or you could go to grond and continue onwards from there, each line branching out on its own twisting way.
27 notes
Text
something i've spent years grappling with, a law of fiction writing i've tried over and over to skirt or exempt myself from, is that the craft of composing a novel is not a linear function.
there's this endlessly seductive notion, if you've failed to write a book or six, then you think alright, maybe i'm not cut out to produce a full length novel. but i've still gotten somewhere. so, if i can write 30% of a novel before stalling out, surely it must be possible to deliberately write only 30% of a novel.
(it must be possible - you've done it!)
but it's not going to be that satisfying to write only the beginning 30%, so what if you... evened it out a little bit? pretty easy to take a page and summarize it in a paragraph 30% as long, so why not write a summary of a novel? compress the pacing, cut out some connective tissue, but certainly creating this blurry, scaled-down blueprint should be easier than writing the full novel!
i'm starting to think this endeavor might be doomed by its very conception. if it took all of one's striving just to fail to write a novel, then what fruit comes from the pen that has given up before a single word has been written? is it a surprise if one fails once more to produce the three-tenths fraction, and only yields something more meager still?
#this has been sitting in my drafts for months#there was a tangent about peano arithmetic and the halting problem i never finished so i just deleted it#and i'm posting this because it's a neat thought#but it's funny to read back over this and remember#an opaque heart literally exists#aurora moonrise too but i flew too close to the sun there and turned a complete novel-summary into an incomplete series-summary#my thoughts
3 notes
Text
giuseppe peano voice: why was 0’’’’’’ afraid of 0’’’’’’’
3 notes
Text
Self-referencing functions
Hey mathblr, let me tell you about one of our favorite foundational systems for mathematics! It's designed to allow for unlimited self-reference, which is neat since self-reference is usually thought of as a big no-no in foundational systems. It turns out that it actually doesn't matter at all, because the power of self-reference is completely exhausted by the partial computable functions. The theory ends up being equivalent to Peano Arithmetic.
What are the axioms?
The theory is two-typed: the first type is for the natural numbers, and the second type is for functions between numbers. For convenience, numbers will be represented by lowercase variables, and uppercase variables represent functions. To prevent logical contradictions, we permit that some functions will fail to evaluate, so we include a non-number object ☒ called "null" for such cases. The axioms about numbers are basically what you'd expect, and we only need one axiom about functions.
The < relation is a strict total order between numbers.
Each nonempty class has a minimum: axiomatize the "min" operator with φ(n) ⇒ ∃m,(φ(m) ∧ min{k:φ(k)}=m≤n) for each predicate φ, and relatedly min{k:φ(k)}=☒ ⇔ ∀n, ¬φ(n).
Numbers exist: ∃n,n=n
There's no largest number: ∀n,∃k,n<k
There's no infinite number: ∀n,n=0 ∨ ∃k,n=S(k)
Every functional expression represents a function object that exists: ∃F, ∀(a,b,c), F(a,b,c)=Ψ for any function term Ψ. The term Ψ may mention F.
To clarify the fifth axiom, we define 0:=min{n : n=n}, and relatedly S(k):=min{n : k<n} is the successor function. The sixth axiom allows us to construct self-referencing functions using any "function term". Basically, a term is any expression which evaluates numerically. Formally, a "function term" is any well-formed formula generated from the following formation rules.
"n" is a term; any number variable.
"F(Θ,Φ,Ψ)" is a term, whenever Θ,Φ,Ψ are terms.
"Φ<Ψ" is a term, whenever Φ,Ψ are terms.
"min{n : Ψ}" is a term, whenever Ψ is a term.
In the third rule, we seem to be using the boolean relation < as if it were a numerical operator. To clarify this, we use the programmer convention that true=1 and false=0, hence (n<k)=1 whenever n<k is true, and otherwise it's zero. Similarly in the fourth rule, when we use the numerical function term Ψ as the argument to the "min" operator, we interpret Ψ as being false whenever it's 0, and true whenever it's positive. Formally, we can use the following definitions.
(n<k) = min{b : k=0 ∨ ((n<k ⇔ b=1) ∧ n≠☒≠k)}
min{n : Ψ(n)} = min{n : 0<Ψ(n) ∧ ∀(k<n),Ψ(k)=0}
Okay, what can it do?
The formation rules on functions actually give us a TON of versatility. For example, the "<" relation can be used to encode literally all boolean logic. Here's how you might do that.
¬x = (x<1)
(x≤y) = ¬(y<x)
x⇒y = (¬¬x ≤ ¬¬y)
x∨y = (¬x ⇒ y)
x∧y = ¬(¬x ∨ ¬y)
(x=y) = ((x≤y)∧(y≤x))
[p?x:y] = min{z : (p∧(z=x))∨(¬p∧(z=y))}
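To make this concrete, here's a small Python sketch (my own illustration, not part of the original post) interpreting numbers as ints and booleans as 0/1. The names lt, mu, NOT and so on are mine; the unbounded search in mu is where the partiality of function terms lives, and treating null arguments as plain falsehood in lt is a simplification of the definition above.

```python
# Numbers are ints, the null object ☒ is None, booleans are 0/1.

def lt(n, k):
    """The < relation as a numerical operator: 1 if n < k, else 0.
    Null arguments are treated as simply false here (a simplification
    of the post's definition, which would fail to evaluate)."""
    if n is None or k is None:
        return 0
    return 1 if n < k else 0

def mu(pred):
    """min{n : pred(n) > 0}: unbounded search upward from 0. May loop
    forever, which is exactly the partiality the axioms permit."""
    n = 0
    while True:
        if pred(n) > 0:
            return n
        n += 1

# Boolean logic encoded from < alone, following the post's definitions.
def NOT(x): return lt(x, 1)                         # ¬x = (x<1)
def LE(x, y): return NOT(lt(y, x))                  # (x≤y) = ¬(y<x)
def IMP(x, y): return LE(NOT(NOT(x)), NOT(NOT(y)))  # x⇒y = (¬¬x ≤ ¬¬y)
def OR(x, y): return IMP(NOT(x), y)                 # x∨y = (¬x ⇒ y)
def AND(x, y): return NOT(OR(NOT(x), NOT(y)))       # x∧y = ¬(¬x ∨ ¬y)
def EQ(x, y): return AND(LE(x, y), LE(y, x))        # (x=y)

def ternary(p, x, y):
    """[p ? x : y] = min{z : (p∧(z=x)) ∨ (¬p∧(z=y))}"""
    return mu(lambda z: OR(AND(p, EQ(z, x)), AND(NOT(p), EQ(z, y))))

assert ternary(1, 7, 3) == 7 and ternary(0, 7, 3) == 3
```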
That last one is the ternary conditional operator, which can be used to implement casewise definitions. If you wanna get really creative, you can implement bounded quantification as an operator, which can then be used to define the supremum/maximum operator!
∃[t<x, F(t)] = (min{t : t=x ∨ F(t)}<x)
∀[t<x, F(t)] = ¬∃[t<x, ¬F(t)]
sup{F(t) : t<x} = min{y : ∀[t<x, F(t)≤y]}
Of course, none of this is even taking advantage of the self-reference that our rules permit. For example, we could implement addition and multiplication using their recursive definitions, provided we define the predecessor operation first. Alternatively, we can use the supremum operator as a little shortcut.
x+y = [y ? sup{S(x+t) : t<y} : x]
x*y = sup{(x*t)+x : t<y}
x^y = [y ? sup{(x^t)*x : t<y} : 1]
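Continuing the same sketch (again mine, reusing lt, mu, NOT, LE and EQ from above): the bounded quantifiers and sup reduce to capped searches, and the self-referential definitions of addition, multiplication and exponentiation become ordinary recursion. Python evaluates function arguments eagerly, so a native if-expression stands in for the lazy ternary [p ? a : b], and the existence search checks t=x before touching F for the same reason.

```python
def exists_lt(x, F):
    """∃[t<x, F(t)] = (min{t : t=x ∨ F(t)} < x). Checking t=x first keeps
    F from ever being called outside the bound, so the search halts
    whenever F is total below x."""
    return lt(mu(lambda t: 1 if t == x else F(t)), x)

def forall_lt(x, F):
    """∀[t<x, F(t)] = ¬∃[t<x, ¬F(t)]"""
    return NOT(exists_lt(x, lambda t: NOT(F(t))))

def sup_lt(x, F):
    """sup{F(t) : t<x} = min{y : ∀[t<x, F(t)≤y]}; the empty sup is 0."""
    return mu(lambda y: forall_lt(x, lambda t: LE(F(t), y)))

def add(x, y):
    # x+y = [y ? sup{S(x+t) : t<y} : x]
    return sup_lt(y, lambda t: add(x, t) + 1) if y else x

def mul(x, y):
    # x*y = sup{(x*t)+x : t<y}; no base case needed, the empty sup is 0
    return sup_lt(y, lambda t: add(mul(x, t), x))

def pow_(x, y):
    # x^y = [y ? sup{(x^t)*x : t<y} : 1]
    return sup_lt(y, lambda t: mul(pow_(x, t), x)) if y else 1

# Hopelessly inefficient (everything reduces to linear search), but
# faithful, and correct on small inputs:
assert add(2, 3) == 5 and mul(2, 3) == 6 and pow_(2, 2) == 4
```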
Using the axioms we established, a simple induction proves that these operations are total and obey their ordinary recursive definitions. So, our theory is at least as strong as Peano Arithmetic. It's not hard to believe that our functions can represent any partial computable function, and it's only a little harder to prove it formally. Conversely, all our axioms are true when restricted to the domain of partial computable functions, so it's consistent that all our functions are computable. In particular, there's a straightforward way to interpret each function term as a computer program. Since PA can quantify over (codes for) computable functions, our theory is exactly as strong as PA. In fact, it's basically just a definitorial extension of PA. Pretty neat, right?
Set theory jumpscare
Hey didn't you think it was weird how we never asserted the axiom of induction? We asserted wellfoundedness with the minimization operator, which is basically equivalent, but we also had to deny infinite numbers for induction to work. What if we didn't do that? What if we did the opposite? Axiom of finity unfriended, our domain of discourse is now the ordinal numbers. New axioms just dropped.
There's an infinite number: ∃w, 0≠w ∧ ∀k, S(k)≠w
Supremums: (∀(x≤a),∃y,φ(x,y)) ⇒ ∃b,∀(x≤a),∃(y≤b),φ(x,y)
Unlimited Cardinals: ∀a, ∃b, #(a)<#(b), where #(n) denotes the cardinality operation.
Each of the above axioms basically just asserts the existence of larger and larger ordinal numbers, continuing the pattern set out by the third and fourth axioms from before. Similar to how the previous theory could represent all computable functions, this theory can represent all the ordinal recursive functions. These are the functions which are representable using an Ordinal Turing Machine (OTM). Conversely, it's consistent that all functions are ordinal recursive, since each function term can be interpreted as a program that's executable by an OTM. Moreover, just like how the previous theory was exactly as strong as PA, this theory is exactly as strong as ZFC.
It takes a lot of work to interpret ZFC, but basically, a set can be represented by its wellfounded and extensional membership graph. The membership graphs can, in turn, be encoded by our ordinal recursive functions. Using the Supremums axiom, it can be shown that the resulting universe of sets obeys a version of the Axiom of Replacement, which can be used to prove the Reflection Theorems, ultimately leading to the Specification Axiom. By adapting similar techniques relative to some regular cardinal, it can then be shown that every set admits a powerset. Lastly, since our functions are basically generated from infinitary computer code, they can be encoded by finite strings having ordinal numbers as symbols. Those finite strings are wellorderable, which induces a global choice function, proving the Axiom of Choice. Excluding a few loose ends, this covers all the ZFC axioms, giving the desired interpretation.
In the finitistic version of this theory, we made the observation that the theory was basically just a definitorial expansion of PA. In the infinitary case however, we unfortunately cannot say the same about ZFC. This ultimately comes down to the fact that our theory provides explicit and definable choice functions, which ZFC cannot. Although ZFC guarantees that choice functions exist, it cannot prove the existence of a definable choice function. This is because ZFC is an inferior theory that has no clue where its sets come from, or what they really look like. Our theory, built from unlimited self-reference, and interpreted under the banner of ordinal recursive functions, is instead equivalent to the theory ZFC+"V=L".
52 notes
Text
Yes, this follows from Peano arithmetic. However, Peano arithmetic - induction + conservative induction should be a weaker system than Peano arithmetic. In particular, you could imagine building a model where you glue a disjoint copy of \mathbb{N} where induction fails to the standard model of \mathbb{N}, and insist that those elements in the extra copy appear only after all the elements in the original model. Then my conservative induction principle should hold, but the general induction principle should fail. I guess a key point is that M has to be understood as part of the schema, just like P, rather than something being quantified over.
In an attempt to provide a consistent foundation for mathematics, I present to you my conservative induction principle:
Suppose P is a property and M is a natural number such that
1. P(0) holds
2. for all n < M, P(n) implies P(n + 1)
then for all n <= M, P(n) holds.
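For the record, here is one way to see the claim at the top that this follows from Peano arithmetic (my derivation, not part of the post): apply the ordinary induction schema to the property Q(n) defined as (n ≤ M → P(n)). Q(0) holds since P(0) does. If Q(n) holds and n+1 ≤ M, then n < M and n ≤ M, so P(n) holds, and hypothesis 2 gives P(n+1); hence Q(n+1). Ordinary induction then yields Q(n) for every n, which is exactly the conservative conclusion.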
27 notes
Text
If in the consideration of a simply infinite system N set in order by a transformation φ we entirely neglect the special character of the elements; simply retaining their distinguishability and taking into account only the relations to one another in which they are placed by the order-setting transformation φ, then are these elements called natural numbers or ordinal numbers or simply numbers, and the base-element 1 is called the base-number of the number-series N . With reference to this freeing the elements from every other content (abstraction) we are justified in calling numbers a free creation of the human mind. The relations or laws which are [...] always the same in all ordered simply infinite systems, whatever names may happen to be given to the individual elements [...], form the first object of the science of numbers or arithmetic.
Richard Dedekind, Was sind und was sollen die Zahlen? (1888), tr. Wooster Woodruff Beman as The Nature and Meaning of Numbers in Essays on the Theory of Numbers (1901). Peano derived his 1889 axiomatization (now standard) from Dedekind's.
19 notes
Text
I EXIST TO FIGHT THE ESTABLISHMENT!
I EXIST TO FIGHT THE ESTABLISHMENT!
I EXIST TO FIGHT THE ESTABLISHMENT!

OH YEAH.YEAH.YEAH
AGAIN! AGAIN! AGAIN!
AGAIN! AGAIN! AGAIN!
AGAIN! AGAIN! AGAIN!
PEANO ARITHMETIC NUMBER #1 YEEEEEEEAHHHHHH WOOOOOOOOOOO
RECURSION 4EVER YEEEEAAAAAAAAHHH
8 notes
Text
I believe that the following philosophical argument in favor of the second order Peano axioms as ultimately "correct" works:
We know from Gödel that no effectively definable formal system can capture the full behavior of the "true" natural numbers. That is, it's impossible, as finitistic beings, to give a formal definition which precisely characterizes the standard natural numbers. We will always "leave out some details" in the definition, among these the Gödel sentence in the given system and so on.
This makes the meaning of the phrase "the standard natural numbers" itself philosophically problematic. In the context of a given meta-theory (say ZFC), we can take the standard naturals to be some particular meta-theoretic construction (say, the von Neumann ordinals). In this context, the incompleteness theorems as internalized in the meta-theory say that no effectively definable formal system as internalized in the meta-theory can prove all the true facts about our chosen standard model. But of course this doesn't save us, because the incompleteness theorems "on the outside" of the meta-theory say that it can't prove everything there is to know about the "true" external standard model of the naturals, whatever it is.
Of course this last part is possibly bullshit and may rely on some kind of Platonism to make sense. So to be as conservative as possible one should stick to just asserting the meta-theory-internal version of the incompleteness theorems. After that you can, if you want, let them inspire by implication a sort of fog of uncertainty in the reader about what fucked up epistemic shit is going on "outside" the meta-theory, even though that perhaps does not make sense (or perhaps it does...). Of course you can make "outside the meta-theory" make sense by internalizing the meta-theory in a meta-meta-theory, but then you just get the same situation one level up.
So, ok, the point is that you are never going to be able to write down a formal system that unambiguously defines what you mean by "the true standard model of the naturals", such that the statements which can be derived from this system (=definition) are exactly the true ones. Which sucks! That's lame, because math is supposed to involve being precise about what we mean by shit.
There are a couple of ways out. One is to just take some effectively definable formal system like first order PA and say "this is what we mean by the naturals, we mean the shit that can be proved from this. Yeah that leaves a lot of stuff hanging, a lot of statements about arithmetic of-ambiguous-truth-value, but whatever". Because, you know, PA is not categorical, so it has many inequivalent models. Or you can say "I will take second order PA as internalized in ZFC (so basically, the von Neumann ordinals) as my definition of the naturals". Which I think is more powerful(?) but still suffers from the same problem when you look at it "from the outside" of ZFC. Actually, you can do that for any (expressive enough) meta-theory M, you can put second-order PA inside it and take that as your naturals.
With the stage set, a brief digression:
I think that, informally, we should all be able to agree on the following about the "true" set of natural numbers, if such a thing can be said to exist (and imo it sort of must, because it's implicitly invoked in a meta-way when we define formal systems to begin with, and so on):
1. The number 0 is a natural number
2. If n is a natural number, then the successor of n (that is, n+1) is also a natural number
3. If m and n are two natural numbers and they have the same successor (that is, n+1 = m+1), then m = n
4. There is no natural number whose successor is 0
5. If P is some property which might or might not hold of a natural number, and we know that P holds of 0, and we furthermore know that whenever P holds of one number it must hold for the next number, then we know that P must hold for every natural number
Some people are philosophically uncomfortable with the last one, but I think it's intuitively undeniable. Like imagine a fucking... guy hopping from one number to the next, and he never stops. Can you pick a number he never gets to? No you fucking can't. You believe in induction.
So, ok, back to models and shit: both first order and second order PA try to formalize this intuition, and the key way that they differ is in terms of what a "property" (mentioned in (5)) is. First order PA says that a "property" is a first order formula. This is very powerful because we can effectively define the set of first order formulas over a given language. They are finite objects and we can work with them directly. From this flow all the nice properties of first order logic, like completeness and so on. But this effective definability also makes it susceptible to the incompleteness theorems, and so first order PA ends up "leaving stuff out".
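To put the two readings of "property" in symbols (standard formulations, mine, not from the post): first order PA has an induction schema, with one axiom for each formula φ of the language of arithmetic,
(φ(0) ∧ ∀n (φ(n) → φ(n+1))) → ∀n φ(n)
while second order PA has a single induction axiom quantifying over all properties P, whatever the ambient theory takes those to be:
∀P ((P(0) ∧ ∀n (P(n) → P(n+1))) → ∀n P(n))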
Second order PA defers the notion of a "property" to the meta-theory. It basically says "a property is whatever you think it is, big guy ;)" to ZFC or whatever theory it's being formulated in. ZFC thinks a property is a ZFC-set. Meta theory M thinks a property is an M-set. And second order PA as formalized in M agrees. Mathematically this makes second order PA harder to study as an object in itself. But philosophically I think it's kind of desirable?
First of all because, at a basic level, "property" seems like a much more fundamental notion to me than "natural number", and one I am much more willing to accept an intuition based definition of. Like, I don't know what you mean if you say "the true natural numbers". That seems pretty wishy-washy! But if you say "the real-world, ordinary definition of 'a property'", I can kinda be like "yeah, properties of things. I know how to reason about those!". And then second order PA, because it's categorical, will tell me "great: since you know what a property is, here's what a natural number is". And that's something I can work with.
This was overly long-winded I think. But in other words, what I am basically advocating for is conceptualizing second order PA as a function from "notions of property" to "notions of the natural numbers". And because models of PA are unique up to isomorphism (in whatever (sufficiently powerful) meta-theory you formalize it in, not "from the outside" of course) this means you can take up SOPA as your definition of the natural numbers and then "lug it around with you" into whatever different foundational system or meta-theory you fancy. And when you lug it into the real world, where "properties" mean actual properties of things, you get the real, true natural numbers.
This is all purely philosophizing of course. But I think this is about the situation.
48 notes
Text
This is a little web app thingy that's meant to help you learn the syntax of Lean (the proof assistant) by working through some basic Peano arithmetic stuff.
I'm not really interested in Lean, but if you have my exact kind of autism (and haven't actually done this stuff yourself before) this is extremely addictive. I've spent the past ~4 hours in the Gamer Zone, russling my bertrands rather than getting anything useful done.
2 notes
Text
semi-daily math post since people asked—
you may have heard about historical arguments in mathematics— irrational numbers, imaginary numbers, even quaternions— but one of the more modern divides is over something called the axiom of choice. an axiom is one of the base assumptions of a system of logic��� things that we presume to be true so that we can rely on their logic to create new conclusions. our common system of logic is called zermelo-fraenkel set theory. (if you choose to accept the axiom of choice, it’s abbreviated ZFC to include that.) set theory is extremely foundational and has to do with how we group collections of abstract mathematical objects; one axiom in ZFC, for example, is ‘if we have two sets, there exists a union of the sets.’ for example, the union of {x,y} and {y,z} is {x,y,z}.
the axiom of choice essentially states that given an infinite collection of sets, you can make a new set by choosing one element from each of those sets. kinda abstract. kinda not as abstract as you’d think, too? but once you start thinking about choosing from infinite sets without a ‘rule’ to follow— infinite arbitrary choices— it can get dicey. it was originally controversial because some of its conclusions were kind of counterintuitive; for example, the banach-tarski paradox, which lets you divide an ideal sphere (so, infinitely divisible) into complex parts such that you can manipulate those parts into two identical spheres of the same volume as the original. there’s even a common math joke about it by jerry bona— “the axiom of choice is obviously true, the well-ordering principle obviously false, and who can tell about zorn’s lemma?” poking fun at the fact that… those three things are all equivalent to the same thing, the axiom of choice, just presented in different ways that make them seem either very intuitive or very counterintuitive!
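(in symbols, one standard way to state it, my phrasing: for every set X whose members are all nonempty, there is a function f with domain X such that f(A) ∈ A for every A ∈ X. such an f is called a choice function. the controversy is that nothing tells you how to actually define one.)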
these days the axiom of choice is widely used. i wouldn’t say ‘widely accepted,’ exactly, because axioms aren’t exactly ‘true’ or ‘false’; they’re a basis of logic we either decide to use or decide not to use based on whether it’s useful for us. (people study other systems of logic too! look up peano arithmetic). that being said, apparently it’s useful enough to have justified its existence to most mathematicians :-)
43 notes
Text
The Philosophy of Arithmetic
The philosophy of arithmetic examines the foundational, conceptual, and metaphysical aspects of arithmetic, which is the branch of mathematics concerned with numbers and the basic operations on them, such as addition, subtraction, multiplication, and division. Philosophers of arithmetic explore questions related to the nature of numbers, the existence of mathematical objects, the truth of arithmetic propositions, and how arithmetic relates to human cognition and the physical world.
Key Concepts:
The Nature of Numbers:
Platonism: Platonists argue that numbers exist as abstract, timeless entities in a separate realm of reality. According to this view, when we perform arithmetic, we are discovering truths about this independent mathematical world.
Nominalism: Nominalists deny the existence of abstract entities like numbers, suggesting that arithmetic is a human invention, with numbers serving as names or labels for collections of objects.
Constructivism: Constructivists hold that numbers and arithmetic truths are constructed by the mind or through social and linguistic practices. They emphasize the role of mental or practical activities in the creation of arithmetic systems.
Arithmetic and Logic:
Logicism: Logicism is the view that arithmetic is reducible to pure logic. This was famously defended by philosophers like Gottlob Frege and Bertrand Russell, who attempted to show that all arithmetic truths could be derived from logical principles.
Formalism: In formalism, arithmetic is seen as a formal system, a game with symbols governed by rules. Formalists argue that the truth of arithmetic propositions is based on internal consistency rather than any external reference to numbers or reality.
Intuitionism: Intuitionists, such as L.E.J. Brouwer, argue that arithmetic is based on human intuition and the mental construction of numbers. They reject the notion that arithmetic truths exist independently of the human mind.
Arithmetic Truths:
A Priori Knowledge: Many philosophers, including Immanuel Kant, have argued that arithmetic truths are known a priori, meaning they are knowable through reason alone and do not depend on experience.
Empiricism: Some philosophers, such as John Stuart Mill, have argued that arithmetic is based on empirical observation and abstraction from the physical world. According to this view, arithmetic truths are generalized from our experience with counting physical objects.
Frege's Criticism of Empiricism: Frege rejected the empiricist view, arguing that arithmetic truths are universal and necessary, which cannot be derived from contingent sensory experiences.
The Foundations of Arithmetic:
Frege's Foundations: In his work "The Foundations of Arithmetic," Frege sought to provide a rigorous logical foundation for arithmetic, arguing that numbers are objective and that arithmetic truths are analytic, meaning they are true by definition and based on logical principles.
Russell's Paradox: Bertrand Russell's discovery of a paradox in Frege's system led to questions about the logical consistency of arithmetic and spurred the development of set theory as a new foundation for mathematics.
Arithmetic and Set Theory:
Set-Theoretic Foundations: Modern arithmetic is often grounded in set theory, where numbers are defined as sets. For example, the number 1 can be defined as the set containing the empty set, and the number 2 as the set containing the set of the empty set. This approach raises philosophical questions about whether numbers are truly reducible to sets and what this means for the nature of arithmetic.
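For reference (standard definitions, not from the text above): the encoding just described is Zermelo's, where each number is the singleton of its predecessor, so 0 = ∅, 1 = {∅}, 2 = {{∅}}. The now-more-common von Neumann encoding instead takes each number to be the set of all smaller numbers, so 2 = {∅, {∅}} = {0, 1}.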
Infinity in Arithmetic:
The Infinite: Arithmetic raises questions about the nature of infinity, particularly in the context of number theory. Is infinity a real concept, or is it merely a useful abstraction? The introduction of infinite numbers and the concept of limits in calculus have expanded these questions to new mathematical areas.
Peano Arithmetic: Peano's axioms formalize the arithmetic of natural numbers, raising questions about the nature of induction and the extent to which the system can account for all arithmetic truths, particularly regarding the treatment of infinite sets or sequences.
The Ontology of Arithmetic:
Realism vs. Anti-Realism: Realists believe that numbers and arithmetic truths exist independently of human thought, while anti-realists, such as fictionalists, argue that numbers are useful fictions that help us describe patterns but do not exist independently.
Mathematical Structuralism: Structuralists argue that numbers do not exist as independent objects but only as positions within a structure. For example, the number 2 has no meaning outside of its relation to other numbers (like 1 and 3) within the system of natural numbers.
Cognitive Foundations of Arithmetic:
Psychological Approaches: Some philosophers and cognitive scientists explore how humans develop arithmetic abilities, considering whether arithmetic is innate or learned and how it relates to our cognitive faculties for counting and abstraction.
Embodied Arithmetic: Some theories propose that arithmetic concepts are grounded in physical and bodily experiences, such as counting on fingers or moving objects, challenging the purely abstract view of arithmetic.
Arithmetic in Other Cultures:
Cultural Variability: Different cultures have developed distinct systems of arithmetic, which raises philosophical questions about the universality of arithmetic truths. Is arithmetic a universal language, or are there culturally specific ways of understanding and manipulating numbers?
Historical and Philosophical Insights:
Aristotle and Number as Quantity: Aristotle considered numbers as abstract quantities and explored their relationship to other categories of being. His ideas laid the groundwork for later philosophical reflections on the nature of number and arithmetic.
Leibniz and Binary Arithmetic: Leibniz's work on binary arithmetic (the foundation of modern computing) reflected his belief that arithmetic is deeply tied to logic and that numerical operations can represent fundamental truths about reality.
Kant's Synthetic A Priori: Immanuel Kant argued that arithmetic propositions, such as "7 + 5 = 12," are synthetic a priori, meaning that they are both informative about the world and knowable through reason alone. This idea contrasts with the empiricist view that arithmetic is derived from experience.
Frege and the Logicization of Arithmetic: Frege’s attempt to reduce arithmetic to logic in his Grundgesetze der Arithmetik (Basic Laws of Arithmetic) was a foundational project for 20th-century philosophy of mathematics. Although his project was undermined by Russell’s paradox, it set the stage for later developments in the philosophy of mathematics, including set theory and formal systems.
The philosophy of arithmetic engages with fundamental questions about the nature of numbers, the existence of arithmetic truths, and the relationship between arithmetic and logic. It explores different perspectives on how we understand and apply arithmetic, whether it is an invention of the human mind, a discovery of abstract realities, or a formal system of rules. Through the works of philosophers like Frege, Kant, and Leibniz, arithmetic has become a rich field of philosophical inquiry, raising profound questions about the foundations of mathematics, knowledge, and cognition.
#philosophy#knowledge#epistemology#learning#education#chatgpt#ontology#metaphysics#Arithmetic#Philosophy of Mathematics#Number Theory#Logicism#Platonism vs. Nominalism#Formalism#Constructivism#Set Theory#Frege#Kant's Synthetic A Priori#Cognitive Arithmetic
2 notes
Note
I absolutely agree with you. The natural numbers start at 1. I like to use these functions to do arithmetic:
n +₁ 1 = n
n +₁ S(m) = S(n) +₁ m
n ×₁ 1 = 1
n ×₁ S(m) = n +₁ (n ×₁ m)
Hope that helps!
Huh oh sweet this was what introduced me to the Peano axioms. They’re cool!
Something sad about them though is that originally, when formulating them, Peano defined the axioms using 1 as the first natural number, but then for some reason he got them wrong when writing Formulario mathematico??? Hope someone corrected his mistake, I’d hate for people to get the wrong idea.
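(Worked out, my arithmetic rather than anything from the ask: these definitions make 1 play the role 0 usually does. For example, 2 +₁ 3 = S(2) +₁ 2 = S(S(2)) +₁ 1 = 4 +₁ 1 = 4, and in general x +₁ y = x + y − 1, while x ×₁ y = (x − 1)(y − 1) + 1. So 1 is the additive identity and the multiplicative absorber, and 2 takes over the usual job of 1, since n ×₁ 2 = n.)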
#text post#randyposting#maths#math#mathematics#ask#ask answered#shitpost#I am God’s strongest soldier#brought to this world to preach the good word of 0 not being natural
4 notes
Text
An epistemological gap in maths
There's something I find really philosophically interesting about mathematical formalism. The fact that the infinite variety of objects we talk about in maths, and all the facts we can discover about them, can all be attained by assuming some basic facts about these things we call 'sets'. Every enlightening concept, every stroke of genius that grants us some new proof can be boiled down to a sequence of symbols so mechanical that a computer can verify it - that a computer could discover on its own, eventually, given infinite runtime.
This game of manipulating these symbols on a page somehow does the same thing as the intuitive visualization of a polyhedron having a side removed and being squashed onto the plane like a slut. That these two modes of thought (can it even be called thought?) are so different, yet identical. One's rigid, mechanical, one so smooth, alive, yet both meaning the same thing. It's like fucking a machine.
Yet I can't help but feel there's some sleight of hand going on with this 'meaning' - what does it mean for something to mean something? Consider the theorem that a+b=b+a. You can prove this rigorously by induction, first defining the set N, then the successor function, going through the Peano axioms, etc. to produce a proof so watertight, infallible, that no conscious thought is needed to check that it is right. Yet if you showed this to a 6-year-old who was unconvinced of this fact, would that help them? Of course not. I think even if we imagine an infinitely intelligent 6-year-old, whose program of math has worked from the ground up, and has an expert knowledge of set theory but no arithmetic, this still wouldn't work.
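For concreteness, here's roughly what such a watertight mechanical object looks like in Lean 4 (my sketch of a standard exercise; the type N and the lemma names are illustrative, not anything from this post):

```lean
-- Peano naturals and addition, defined from scratch.
inductive N where
  | zero : N
  | succ : N → N
open N

def add : N → N → N
  | n, zero   => n
  | n, succ m => succ (add n m)

-- Two helper inductions...
theorem zero_add (n : N) : add zero n = n := by
  induction n with
  | zero => rfl
  | succ m ih => simp [add, ih]

theorem succ_add (m n : N) : add (succ m) n = succ (add m n) := by
  induction n with
  | zero => rfl
  | succ k ih => simp [add, ih]

-- ...and the theorem itself: a + b = b + a.
theorem add_comm (m n : N) : add m n = add n m := by
  induction n with
  | zero => simp [add, zero_add]
  | succ k ih => simp [add, succ_add, ih]
```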
The way you would convince them would be along the lines of 'suppose you have 3 things in one hand and 5 in the other, the number of things you have is 3+5. Now imagine you had the 5 and 3 things in the opposite hand. That would be 5+3, but it's the same things, so the quantity must be the same.' This appeals to intuitive facts about the world - that objects continue to exist when we move them around, that they can be swapped and still be the same, that this notion we have of 'quantity' isn't something that depends on where things are. And I think these intuitive notions are indispensable if you actually want to convince someone of this.
When you do the proof with sets, you are proving things about sets. We can prove that the Peano axioms are true of this set 'N' and intuitively reason that the Peano axioms make something a good model of what we call 'quantity', but you're still just proving something about the model. It takes an intuitive, unverifiable leap of intelligence to get from a truth about the model to a truth about the world. A leap that I don't think you could convince a complete skeptic to take.
This, on the face of it, is similar to science. You can model planets with points and mathematically predict their trajectories and where they should appear in the sky as a result. But these flecks of light we see in the sky will only actually show up there if the model is accurate enough. But I don't think it's the same, because addition - the intuitive version, in the real world, not "+: N^2->N" - is still an ideal object, not something in the real world - you can't have 10^1000 of something, but it's an intuitive object that we somehow all arrive at, and all seem to agree on how it works, even in the case of numbers that we can't possibly come across in the real world.
And we arrive at this consensus without any rigorous proof to show us what is correct - this is a necessary step before we can even apply rigour - but because we all have the same experience of the world, and somehow all extract the same concept of addition from that experience. But we don't even think about the fact that we're doing it. How does this epistemological leap get made?
We face the same problem, even worse, when we come to the real numbers, and how we take this as a model of e.g. 'things a thermometer can show', or how we construct more objects, like circles, from it, and take this set {(x,y) in R^2: x^2+y^2=1} as a model of our pre-existing ideal concept of a circle. I say this is worse, because R is even more epistemically fraught (you tell me there are these numbers which take an infinite amount of information to describe, so you can't possibly define them, that can be found in all the 'gaps' between the numbers we can describe, and these numbers are somehow more numerous than infinity? and you expect me to believe they exist?) and it takes a lot more to comprehend what is going on with them, set-theoretically, so it takes more intelligence to make the leap from our intuitive understanding of circles to the symbols on a page describing circles in ZFC.
Whenever we prove a theorem in ZFC, like 'the area of a disc of radius 1 is half its circumference' we are proving something about these ZFCircles, not our primordial conceptions of circles. We only accept that it's true of primordial circles because we accept ZFCircles are a good model of them. But when we do a pre-rigorous proof of this fact, like by cutting the circle into smaller and smaller triangles, we are proving something about primordial circles, directly. Maybe this relates to why proofs like that are more satisfying than pushing symbols around.
I think that the equivalence of the symbol-pushing proof and the intuitive proof contains a similar irreducible conceptual leap as the jump from primordial circles to ZFCircles. We have to accept that the rules for manipulation of symbols in ZFC accurately model valid truth-finding procedures in actual thought. And we have to accept that some complicated statement about measures involving the set {(x,y) in R^2: x^2+y^2=1} with about 20 layers of nested definitions is an accurate model of the statement 'the area of a disc of radius 1 is half its circumference'. And I think this conceptual leap is fascinating and terrifying. However much we convert maths to the mechanical, we need a leap of faith to be able to confirm that the mechanical result even gives us anything of meaning in the first place. For a machine to be genuine sex, we have to find it at least somehow similar to sex with humans - this epistemological gap terrifies me.
I think that this is often what is being talked about when people have concerns about the 'truth' of the axioms of ZFC, but I also think that notion is used to describe other things too, and that causes confusion.
This is a leap that is crucial for human society - we need to accept that some complicated set of equations are an accurate model for the laws of physics, and the shape of a planned suspension bridge, and we need to trust that computations with those equations, which give us an outcome we interpret as 'not falling down', correspond to the bridge in real life not falling down.
But something else concerning is happening here. The leap from addition to ZFC addition requires some intelligence. The leap from primordial circles to ZFCircles requires more intelligence than that, and the leap from the bridge to the equations that we say describe it requires a great deal of understanding, which is part of why engineers need to be trained.
It seems like the more complicated the things we want to do with maths, the bigger this gap of intelligence becomes, to the point that very few people can actually understand it well enough to be able to judge whether our model corresponds to reality. It seems like as society becomes more complex, we have to take more on faith, not just as individuals trusting other individuals, but as society as a whole.
4 notes
Text
just heard back from the studio that they won’t make incomplete unless i agree to have chris pratt voice the lead. yeah incomplete my animated children’s movie about the magical world of Peano Arithmetic whos inhabitants are the sentences of that language. yeah they want him to voice G, better known as Gödel’s sentence, which he constructed in his proof of the incompleteness of that system, our clever starry eyed protagonist who seems to be able to prove any sentence but himself. which is of course troublesome because everyone in that world wants to be allowed into the elite metropolis of “Tarski’s Truth Set” into which entry is only granted to those who have proven their own veracity. yeah so the inciting incident occurs when G discovers his own undecidability, which causes mass panic and chaos throughout Peano Arithmetic. they wanna put james corden in it too! yeah as G’s best friend and bumbling sidekick, the trivially refutable 1 = 2. no i actually dont have a problem with the names they threw out for the love interest, a reasonably complicated but ultimately provable sentence who draws a line over her equals sign every morning because she finds Tarski too stuffy and prefers to live in “The Complement”, their name for the areas outside of Tarski, they mentioned sarah silverman who wouldnt have been my first choice but yeah i think she could probably pull it off. they also wanna do sir ian mckellen for the axioms, yeah the antagonists of the film who rule over and maintain the strict hierarchies of their world and who upon learning about G’s undecidability seek to induct not G, G’s negation and rival who, yeah would also be voiced by chris pratt, yeah i know, among their ranks to make G trivially refutable. no i mean i love sir ian mckellen dont get me wrong its just like, its an awful lot of characters for one guy to play i just dont know if he has enough voices in his back pocket to make them all feel distinct. yeah especially the axiom of induction who gets a whole subplot about his inferiority complex over being in a distinct group from all the other arithmetic axioms. oh good question, yeah all the axioms have to be voiced by the same person so that said person can play Peano in the climax, a lego movie-esque talk to god scene in which G watches Peano scold Gödel for ruining his beautiful formalization of arithmetic, and Gödel explains that it really just enriches our understanding of mathematics to see the incompleteness of the system. this scene is also where G learns that he actually is true, even though that fact cannot be shown within the confines of his world. yeah that does mean that they’re gonna have chris pratt play kurt gödel. no man i know it fucking sucks but they wont make it otherwise. oh yeah and modus ponens is gonna be disney’s first openly gay character.
21 notes
Text
For anyone who doesn't know (which is everyone), we also write stuff on Math Stack Exchange sometimes, asking and answering questions. If you liked our post about the mathematics of unknowability, you might like some of the answers we post there. We tend to post a lot of stuff about logic, computability, set theory, and occasionally analysis/topology. Below is a list of some MSE answers we're proud of.
About computability and arithmetic
We categorized Busy Beaver on the arithmetical hierarchy
Sigma summation and Pi product notation are expressive enough to define most computer programs
Primitive Recursion can perform most instances of finitary wellfounded recursion
Functions with polynomial-time decidable graphs can dominate any computable function
An explicit demonstration of Gödel's Beta lemma, showing how Peano Arithmetic can perform recursive definitions
About logic and set theory
Showing how Hilbert's Epsilon doesn't fit within Intuitionist Logic
Finitary set theory can still define an infinite wellorder, without Specification
The proof power of finitary set theory is strictly weaker without the axiom of Specification
The proof power of ZFC is strictly weaker without the axiom of Specification
A classic proof establishing the existence of order isomorphisms between wellorders
A limitative result on ZF without Choice, giving exact conditions on when ZF is unable to establish an injection between exponents
A more complicated improvement of the previous result
About analysis, geometry, and topology
A functional definition of sine and cosine that doesn't appeal to arclength or calculus
The sequence sin(n)^n densely fills the interval [-1,1]
The sequence (sin(n)^n)/n is summable, and a generalization
Sets of concentric loops are topologically homeomorphic to concentric circles
Showing how non-Euclidean geometries can construct models of Euclidean geometry
18 notes
Text
The Peano Axioms: Building Blocks of Arithmetic
https://principlesofcryptography.com/number-theory-primer-an-axiomatic-study-of-natural-numbers-peano-axioms/
0 notes